Before we begin, a question: what are the typical traits of a piece of business code, especially Internet-facing business code?
Here is what comes to my mind:
- Internet products iterate fast under tight deadlines, so the code structure ends up messy, with almost no comments or documentation.
- Staff turnover is high, so you frequently inherit someone else's legacy project. Newcomers never get time to truly digest the code structure, and the pressing schedule only makes the big ball of mud grow bigger.
- Many people develop together, each with their own coding habits: everyone rolls their own utility classes, and business naming conflicts keep hurting efficiency.
Every time we start a new repository we are full of confidence and the structure is clean. But as time goes on the code decays and the technical debt keeps piling up.
Is there a cure? There is:
- Design a solid application architecture within the team, so the code rots more slowly. (Completely stopping the rot is, of course, nearly impossible.)
- Keep the design as simple as possible, so developers at every level can quickly understand it and start contributing, instead of piling more mud onto a heap of complex code nobody understands.
COLA, today's protagonist, exists precisely to provide a practical, ready-to-adopt structure standard for business code, so that your code rots as slowly as possible and your team moves as fast as possible:
https://github.com/alibaba/COLA
https://blog.csdn.net/significantfrank/article/details/110934799
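To give a rough picture of what COLA prescribes, here is the module layout its Maven archetype generates (a sketch based on COLA 4.x; module names are illustrative, see the repository above for the authoritative version):

demo
├── demo-adapter          adapter layer: controllers / message consumers, translates external requests
├── demo-app              application layer: command/query executors, orchestration only
├── demo-client           client layer: API interfaces and DTOs exposed to callers
├── demo-domain           domain layer: entities, domain services, gateway interfaces
├── demo-infrastructure   infrastructure layer: gateway implementations, DB/RPC details
└── start                 Spring Boot entry point and configuration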
A practical note: under the Spring framework (Spring Data MongoDB), deduplicating with distinct throws an error once the data volume gets large, because the distinct command must return all values in a single BSON document capped at 16 MB. Rewriting it as an aggregation with $group and allowDiskUse(true) avoids the limit:
private boolean needReorderCheck(String requestId) {
    boolean result = false;
    // The straightforward distinct() version below fails on large data sets,
    // because the distinct command returns all values in one 16 MB document:
    // try (MongoCursor<String> mongoCursor = mongoTemplate
    //         .getCollection(mongoTemplate.getCollectionName(AccountNumProductLineIndex.class))
    //         .distinct(KEY, Filters.eq(REQUEST_ID, requestId), String.class)
    //         .iterator())
    try (MongoCursor<Document> mongoCursor = mongoTemplate
            .getCollection(mongoTemplate.getCollectionName(AccountNumProductLineIndex.class))
            .aggregate(Arrays.asList(
                    // Project only the fields the pipeline needs
                    Aggregates.project(Projections.fields(
                            Projections.excludeId(),
                            Projections.include(KEY),
                            Projections.include(REQUEST_ID))),
                    Aggregates.match(Filters.eq(REQUEST_ID, requestId)),
                    // $group on the key yields the distinct keys
                    Aggregates.group("$" + KEY)))
            .allowDiskUse(true) // spill to disk instead of failing when memory runs out
            .iterator()) {
        String key;
        boolean breakMe = false;
        LOGGER.info("needReorderCheck.key --> start");
        while (mongoCursor.hasNext()) {
            if (breakMe) {
                break;
            }
            // After $group, the grouped value lands in the "_id" field
            key = mongoCursor.next().getString("_id");
            try (MongoCursor<Document> indexMongoCursor = mongoTemplate
                    .getCollection(AccountNumProductLineIndex.COLLECTION_NAME)
                    .find(Filters.and(Filters.eq(REQUEST_ID, requestId), Filters.eq(KEY, key)))
                    .iterator()) {
                int preIndex, currentIndex;
                Document preIndexDocument = null, currentIndexDocument;
                while (indexMongoCursor.hasNext()) {
                    currentIndexDocument = indexMongoCursor.next();
                    if (preIndexDocument != null) {
                        currentIndex = currentIndexDocument.getInteger(INDEX);
                        preIndex = preIndexDocument.getInteger(INDEX);
                        // A gap between consecutive index values means reordering is needed
                        if (currentIndex - preIndex > 1) {
                            breakMe = true;
                            result = true;
                            break;
                        }
                    }
                    preIndexDocument = currentIndexDocument;
                }
            }
        }
    }
    return result;
}
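A related Jackson note: to serialize a java.util.Date field as a fixed-timezone string (America/Phoenix stays at UTC-7 year-round, so the Z pattern renders -0700), annotate it with @JsonFormat: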
@JsonFormat(shape=JsonFormat.Shape.STRING, pattern="yyyy-MM-dd'T'HH:mm:ss.SSSZ", timezone="America/Phoenix")
private Date date;
Next topic: downloading large files with Spring WebClient. References:
https://www.amitph.com/spring-webclient-large-file-download/
https://github.com/amitrp/spring-examples/blob/main/spring-webflux-webclient/src/main/java/com/amitph/spring/webclients/service/FileDownloaderWebClientService.java

import lombok.RequiredArgsConstructor;
import org.springframework.core.io.buffer.DataBuffer;
import org.springframework.core.io.buffer.DataBufferUtils;
import org.springframework.stereotype.Service;
import org.springframework.web.reactive.function.client.WebClient;
import reactor.core.publisher.Flux;
import reactor.core.publisher.Mono;
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;
import java.util.Objects;
@Service
@RequiredArgsConstructor
public class FileDownloaderWebClientService {
private final WebClient webClient;
/**
 * Reads the complete file into memory, so it is only suitable for small files.
 */
public void downloadUsingByteArray(Path destination) throws IOException {
Mono<byte[]> monoContents = webClient
.get()
.uri("/largefiles/1")
.retrieve()
.bodyToMono(byte[].class);
Files.write(destination, Objects.requireNonNull(monoContents.share().block()),
StandardOpenOption.CREATE);
}
/**
 * Reading the file with Mono tries to fit the entire body into a single DataBuffer.
 * Throws an exception when the file is larger than the DataBuffer capacity.
 */
public void downloadUsingMono(Path destination) {
Mono<DataBuffer> dataBuffer = webClient
.get()
.uri("/largefiles/1")
.retrieve()
.bodyToMono(DataBuffer.class);
DataBufferUtils.write(dataBuffer, destination,
StandardOpenOption.CREATE)
.share().block();
}
/**
 * Using Flux, we can download files of any size safely.
 * Optionally, the DataBuffer capacity can be configured for better memory utilization.
 */
public void downloadUsingFlux(Path destination) {
Flux<DataBuffer> dataBuffer = webClient
.get()
.uri("/largefiles/1")
.retrieve()
.bodyToFlux(DataBuffer.class);
DataBufferUtils.write(dataBuffer, destination,
StandardOpenOption.CREATE)
.share().block();
}
}
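For completeness, here is a minimal sketch of wiring the WebClient this service depends on and invoking the Flux-based download. The base URL and the maxInMemorySize value are assumptions for illustration, not part of the original example:

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.web.reactive.function.client.ExchangeStrategies;
import org.springframework.web.reactive.function.client.WebClient;

@Configuration
public class WebClientConfig {

    @Bean
    public WebClient webClient() {
        // Cap the in-memory decode buffer at 1 MB (illustrative value; tune per workload).
        ExchangeStrategies strategies = ExchangeStrategies.builder()
                .codecs(configurer -> configurer.defaultCodecs().maxInMemorySize(1024 * 1024))
                .build();
        return WebClient.builder()
                .baseUrl("http://localhost:8080") // hypothetical file server
                .exchangeStrategies(strategies)
                .build();
    }
}

// Usage, e.g. from any component that has the service injected:
// fileDownloaderWebClientService.downloadUsingFlux(Path.of("/tmp/largefile.bin"));

With the Flux variant, DataBufferUtils.write streams each buffer to the file and releases it as it goes, so memory usage stays bounded regardless of file size.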