Dec 13 02:24:42.566901 kernel: Linux version 5.15.173-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Thu Dec 12 23:50:37 -00 2024
Dec 13 02:24:42.566913 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.first_boot=detected flatcar.oem.id=packet flatcar.autologin verity.usrhash=66bd2580285375a2ba5b0e34ba63606314bcd90aaed1de1996371bdcb032485c
Dec 13 02:24:42.566920 kernel: BIOS-provided physical RAM map:
Dec 13 02:24:42.566924 kernel: BIOS-e820: [mem 0x0000000000000000-0x00000000000997ff] usable
Dec 13 02:24:42.566928 kernel: BIOS-e820: [mem 0x0000000000099800-0x000000000009ffff] reserved
Dec 13 02:24:42.566931 kernel: BIOS-e820: [mem 0x00000000000e0000-0x00000000000fffff] reserved
Dec 13 02:24:42.566936 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000003fffffff] usable
Dec 13 02:24:42.566940 kernel: BIOS-e820: [mem 0x0000000040000000-0x00000000403fffff] reserved
Dec 13 02:24:42.566943 kernel: BIOS-e820: [mem 0x0000000040400000-0x0000000081b25fff] usable
Dec 13 02:24:42.566947 kernel: BIOS-e820: [mem 0x0000000081b26000-0x0000000081b26fff] ACPI NVS
Dec 13 02:24:42.566952 kernel: BIOS-e820: [mem 0x0000000081b27000-0x0000000081b27fff] reserved
Dec 13 02:24:42.566955 kernel: BIOS-e820: [mem 0x0000000081b28000-0x000000008afccfff] usable
Dec 13 02:24:42.566959 kernel: BIOS-e820: [mem 0x000000008afcd000-0x000000008c0b1fff] reserved
Dec 13 02:24:42.566963 kernel: BIOS-e820: [mem 0x000000008c0b2000-0x000000008c23afff] usable
Dec 13 02:24:42.566968 kernel: BIOS-e820: [mem 0x000000008c23b000-0x000000008c66cfff] ACPI NVS
Dec 13 02:24:42.566973 kernel: BIOS-e820: [mem 0x000000008c66d000-0x000000008eefefff] reserved
Dec 13 02:24:42.566977 kernel: BIOS-e820: [mem 0x000000008eeff000-0x000000008eefffff] usable
Dec 13 02:24:42.566981 kernel: BIOS-e820: [mem 0x000000008ef00000-0x000000008fffffff] reserved
Dec 13 02:24:42.566985 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Dec 13 02:24:42.566990 kernel: BIOS-e820: [mem 0x00000000fe000000-0x00000000fe010fff] reserved
Dec 13 02:24:42.566994 kernel: BIOS-e820: [mem 0x00000000fec00000-0x00000000fec00fff] reserved
Dec 13 02:24:42.566998 kernel: BIOS-e820: [mem 0x00000000fee00000-0x00000000fee00fff] reserved
Dec 13 02:24:42.567002 kernel: BIOS-e820: [mem 0x00000000ff000000-0x00000000ffffffff] reserved
Dec 13 02:24:42.567006 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000086effffff] usable
Dec 13 02:24:42.567010 kernel: NX (Execute Disable) protection: active
Dec 13 02:24:42.567014 kernel: SMBIOS 3.2.1 present.
Dec 13 02:24:42.567019 kernel: DMI: Supermicro SYS-5019C-MR-PH004/X11SCM-F, BIOS 1.9 09/16/2022
Dec 13 02:24:42.567023 kernel: tsc: Detected 3400.000 MHz processor
Dec 13 02:24:42.567027 kernel: tsc: Detected 3399.906 MHz TSC
Dec 13 02:24:42.567032 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Dec 13 02:24:42.567036 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Dec 13 02:24:42.567041 kernel: last_pfn = 0x86f000 max_arch_pfn = 0x400000000
Dec 13 02:24:42.567045 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Dec 13 02:24:42.567049 kernel: last_pfn = 0x8ef00 max_arch_pfn = 0x400000000
Dec 13 02:24:42.567054 kernel: Using GB pages for direct mapping
Dec 13 02:24:42.567058 kernel: ACPI: Early table checksum verification disabled
Dec 13 02:24:42.567063 kernel: ACPI: RSDP 0x00000000000F05B0 000024 (v02 SUPERM)
Dec 13 02:24:42.567067 kernel: ACPI: XSDT 0x000000008C54E0C8 00010C (v01 SUPERM SUPERM 01072009 AMI 00010013)
Dec 13 02:24:42.567072 kernel: ACPI: FACP 0x000000008C58A670 000114 (v06 01072009 AMI 00010013)
Dec 13 02:24:42.567076 kernel: ACPI: DSDT 0x000000008C54E268 03C404 (v02 SUPERM SMCI--MB 01072009 INTL 20160527)
Dec 13 02:24:42.567082 kernel: ACPI: FACS 0x000000008C66CF80 000040
Dec 13 02:24:42.567087 kernel: ACPI: APIC 0x000000008C58A788 00012C (v04 01072009 AMI 00010013)
Dec 13 02:24:42.567092 kernel: ACPI: FPDT 0x000000008C58A8B8 000044 (v01 01072009 AMI 00010013)
Dec 13 02:24:42.567097 kernel: ACPI: FIDT 0x000000008C58A900 00009C (v01 SUPERM SMCI--MB 01072009 AMI 00010013)
Dec 13 02:24:42.567101 kernel: ACPI: MCFG 0x000000008C58A9A0 00003C (v01 SUPERM SMCI--MB 01072009 MSFT 00000097)
Dec 13 02:24:42.567106 kernel: ACPI: SPMI 0x000000008C58A9E0 000041 (v05 SUPERM SMCI--MB 00000000 AMI. 00000000)
Dec 13 02:24:42.567111 kernel: ACPI: SSDT 0x000000008C58AA28 001B1C (v02 CpuRef CpuSsdt 00003000 INTL 20160527)
Dec 13 02:24:42.567115 kernel: ACPI: SSDT 0x000000008C58C548 0031C6 (v02 SaSsdt SaSsdt 00003000 INTL 20160527)
Dec 13 02:24:42.567120 kernel: ACPI: SSDT 0x000000008C58F710 00232B (v02 PegSsd PegSsdt 00001000 INTL 20160527)
Dec 13 02:24:42.567124 kernel: ACPI: HPET 0x000000008C591A40 000038 (v01 SUPERM SMCI--MB 00000002 01000013)
Dec 13 02:24:42.567130 kernel: ACPI: SSDT 0x000000008C591A78 000FAE (v02 SUPERM Ther_Rvp 00001000 INTL 20160527)
Dec 13 02:24:42.567134 kernel: ACPI: SSDT 0x000000008C592A28 0008F4 (v02 INTEL xh_mossb 00000000 INTL 20160527)
Dec 13 02:24:42.567139 kernel: ACPI: UEFI 0x000000008C593320 000042 (v01 SUPERM SMCI--MB 00000002 01000013)
Dec 13 02:24:42.567144 kernel: ACPI: LPIT 0x000000008C593368 000094 (v01 SUPERM SMCI--MB 00000002 01000013)
Dec 13 02:24:42.567148 kernel: ACPI: SSDT 0x000000008C593400 0027DE (v02 SUPERM PtidDevc 00001000 INTL 20160527)
Dec 13 02:24:42.567153 kernel: ACPI: SSDT 0x000000008C595BE0 0014E2 (v02 SUPERM TbtTypeC 00000000 INTL 20160527)
Dec 13 02:24:42.567158 kernel: ACPI: DBGP 0x000000008C5970C8 000034 (v01 SUPERM SMCI--MB 00000002 01000013)
Dec 13 02:24:42.567162 kernel: ACPI: DBG2 0x000000008C597100 000054 (v00 SUPERM SMCI--MB 00000002 01000013)
Dec 13 02:24:42.567167 kernel: ACPI: SSDT 0x000000008C597158 001B67 (v02 SUPERM UsbCTabl 00001000 INTL 20160527)
Dec 13 02:24:42.567172 kernel: ACPI: DMAR 0x000000008C598CC0 000070 (v01 INTEL EDK2 00000002 01000013)
Dec 13 02:24:42.567177 kernel: ACPI: SSDT 0x000000008C598D30 000144 (v02 Intel ADebTabl 00001000 INTL 20160527)
Dec 13 02:24:42.567181 kernel: ACPI: TPM2 0x000000008C598E78 000034 (v04 SUPERM SMCI--MB 00000001 AMI 00000000)
Dec 13 02:24:42.567186 kernel: ACPI: SSDT 0x000000008C598EB0 000D8F (v02 INTEL SpsNm 00000002 INTL 20160527)
Dec 13 02:24:42.567191 kernel: ACPI: WSMT 0x000000008C599C40 000028 (v01 SUPERM 01072009 AMI 00010013)
Dec 13 02:24:42.567195 kernel: ACPI: EINJ 0x000000008C599C68 000130 (v01 AMI AMI.EINJ 00000000 AMI. 00000000)
Dec 13 02:24:42.567200 kernel: ACPI: ERST 0x000000008C599D98 000230 (v01 AMIER AMI.ERST 00000000 AMI. 00000000)
Dec 13 02:24:42.567204 kernel: ACPI: BERT 0x000000008C599FC8 000030 (v01 AMI AMI.BERT 00000000 AMI. 00000000)
Dec 13 02:24:42.567210 kernel: ACPI: HEST 0x000000008C599FF8 00027C (v01 AMI AMI.HEST 00000000 AMI. 00000000)
Dec 13 02:24:42.567214 kernel: ACPI: SSDT 0x000000008C59A278 000162 (v01 SUPERM SMCCDN 00000000 INTL 20181221)
Dec 13 02:24:42.567219 kernel: ACPI: Reserving FACP table memory at [mem 0x8c58a670-0x8c58a783]
Dec 13 02:24:42.567224 kernel: ACPI: Reserving DSDT table memory at [mem 0x8c54e268-0x8c58a66b]
Dec 13 02:24:42.567228 kernel: ACPI: Reserving FACS table memory at [mem 0x8c66cf80-0x8c66cfbf]
Dec 13 02:24:42.567233 kernel: ACPI: Reserving APIC table memory at [mem 0x8c58a788-0x8c58a8b3]
Dec 13 02:24:42.567237 kernel: ACPI: Reserving FPDT table memory at [mem 0x8c58a8b8-0x8c58a8fb]
Dec 13 02:24:42.567242 kernel: ACPI: Reserving FIDT table memory at [mem 0x8c58a900-0x8c58a99b]
Dec 13 02:24:42.567247 kernel: ACPI: Reserving MCFG table memory at [mem 0x8c58a9a0-0x8c58a9db]
Dec 13 02:24:42.567252 kernel: ACPI: Reserving SPMI table memory at [mem 0x8c58a9e0-0x8c58aa20]
Dec 13 02:24:42.567257 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c58aa28-0x8c58c543]
Dec 13 02:24:42.567261 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c58c548-0x8c58f70d]
Dec 13 02:24:42.567266 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c58f710-0x8c591a3a]
Dec 13 02:24:42.567270 kernel: ACPI: Reserving HPET table memory at [mem 0x8c591a40-0x8c591a77]
Dec 13 02:24:42.567275 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c591a78-0x8c592a25]
Dec 13 02:24:42.567279 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c592a28-0x8c59331b]
Dec 13 02:24:42.567284 kernel: ACPI: Reserving UEFI table memory at [mem 0x8c593320-0x8c593361]
Dec 13 02:24:42.567292 kernel: ACPI: Reserving LPIT table memory at [mem 0x8c593368-0x8c5933fb]
Dec 13 02:24:42.567297 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c593400-0x8c595bdd]
Dec 13 02:24:42.567301 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c595be0-0x8c5970c1]
Dec 13 02:24:42.567306 kernel: ACPI: Reserving DBGP table memory at [mem 0x8c5970c8-0x8c5970fb]
Dec 13 02:24:42.567311 kernel: ACPI: Reserving DBG2 table memory at [mem 0x8c597100-0x8c597153]
Dec 13 02:24:42.567315 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c597158-0x8c598cbe]
Dec 13 02:24:42.567320 kernel: ACPI: Reserving DMAR table memory at [mem 0x8c598cc0-0x8c598d2f]
Dec 13 02:24:42.567324 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c598d30-0x8c598e73]
Dec 13 02:24:42.567329 kernel: ACPI: Reserving TPM2 table memory at [mem 0x8c598e78-0x8c598eab]
Dec 13 02:24:42.567334 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c598eb0-0x8c599c3e]
Dec 13 02:24:42.567339 kernel: ACPI: Reserving WSMT table memory at [mem 0x8c599c40-0x8c599c67]
Dec 13 02:24:42.567343 kernel: ACPI: Reserving EINJ table memory at [mem 0x8c599c68-0x8c599d97]
Dec 13 02:24:42.567348 kernel: ACPI: Reserving ERST table memory at [mem 0x8c599d98-0x8c599fc7]
Dec 13 02:24:42.567353 kernel: ACPI: Reserving BERT table memory at [mem 0x8c599fc8-0x8c599ff7]
Dec 13 02:24:42.567357 kernel: ACPI: Reserving HEST table memory at [mem 0x8c599ff8-0x8c59a273]
Dec 13 02:24:42.567362 kernel: ACPI: Reserving SSDT table memory at [mem 0x8c59a278-0x8c59a3d9]
Dec 13 02:24:42.567366 kernel: No NUMA configuration found
Dec 13 02:24:42.567371 kernel: Faking a node at [mem 0x0000000000000000-0x000000086effffff]
Dec 13 02:24:42.567376 kernel: NODE_DATA(0) allocated [mem 0x86effa000-0x86effffff]
Dec 13 02:24:42.567381 kernel: Zone ranges:
Dec 13 02:24:42.567386 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Dec 13 02:24:42.567391 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Dec 13 02:24:42.567395 kernel: Normal [mem 0x0000000100000000-0x000000086effffff]
Dec 13 02:24:42.567400 kernel: Movable zone start for each node
Dec 13 02:24:42.567404 kernel: Early memory node ranges
Dec 13 02:24:42.567409 kernel: node 0: [mem 0x0000000000001000-0x0000000000098fff]
Dec 13 02:24:42.567414 kernel: node 0: [mem 0x0000000000100000-0x000000003fffffff]
Dec 13 02:24:42.567418 kernel: node 0: [mem 0x0000000040400000-0x0000000081b25fff]
Dec 13 02:24:42.567424 kernel: node 0: [mem 0x0000000081b28000-0x000000008afccfff]
Dec 13 02:24:42.567428 kernel: node 0: [mem 0x000000008c0b2000-0x000000008c23afff]
Dec 13 02:24:42.567433 kernel: node 0: [mem 0x000000008eeff000-0x000000008eefffff]
Dec 13 02:24:42.567437 kernel: node 0: [mem 0x0000000100000000-0x000000086effffff]
Dec 13 02:24:42.567442 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000086effffff]
Dec 13 02:24:42.567447 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Dec 13 02:24:42.567454 kernel: On node 0, zone DMA: 103 pages in unavailable ranges
Dec 13 02:24:42.567460 kernel: On node 0, zone DMA32: 1024 pages in unavailable ranges
Dec 13 02:24:42.567465 kernel: On node 0, zone DMA32: 2 pages in unavailable ranges
Dec 13 02:24:42.567470 kernel: On node 0, zone DMA32: 4325 pages in unavailable ranges
Dec 13 02:24:42.567476 kernel: On node 0, zone DMA32: 11460 pages in unavailable ranges
Dec 13 02:24:42.567481 kernel: On node 0, zone Normal: 4352 pages in unavailable ranges
Dec 13 02:24:42.567486 kernel: On node 0, zone Normal: 4096 pages in unavailable ranges
Dec 13 02:24:42.567491 kernel: ACPI: PM-Timer IO Port: 0x1808
Dec 13 02:24:42.567496 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] high edge lint[0x1])
Dec 13 02:24:42.567501 kernel: ACPI: LAPIC_NMI (acpi_id[0x02] high edge lint[0x1])
Dec 13 02:24:42.567505 kernel: ACPI: LAPIC_NMI (acpi_id[0x03] high edge lint[0x1])
Dec 13 02:24:42.567511 kernel: ACPI: LAPIC_NMI (acpi_id[0x04] high edge lint[0x1])
Dec 13 02:24:42.567516 kernel: ACPI: LAPIC_NMI (acpi_id[0x05] high edge lint[0x1])
Dec 13 02:24:42.567521 kernel: ACPI: LAPIC_NMI (acpi_id[0x06] high edge lint[0x1])
Dec 13 02:24:42.567526 kernel: ACPI: LAPIC_NMI (acpi_id[0x07] high edge lint[0x1])
Dec 13 02:24:42.567531 kernel: ACPI: LAPIC_NMI (acpi_id[0x08] high edge lint[0x1])
Dec 13 02:24:42.567536 kernel: ACPI: LAPIC_NMI (acpi_id[0x09] high edge lint[0x1])
Dec 13 02:24:42.567541 kernel: ACPI: LAPIC_NMI (acpi_id[0x0a] high edge lint[0x1])
Dec 13 02:24:42.567546 kernel: ACPI: LAPIC_NMI (acpi_id[0x0b] high edge lint[0x1])
Dec 13 02:24:42.567550 kernel: ACPI: LAPIC_NMI (acpi_id[0x0c] high edge lint[0x1])
Dec 13 02:24:42.567556 kernel: ACPI: LAPIC_NMI (acpi_id[0x0d] high edge lint[0x1])
Dec 13 02:24:42.567561 kernel: ACPI: LAPIC_NMI (acpi_id[0x0e] high edge lint[0x1])
Dec 13 02:24:42.567566 kernel: ACPI: LAPIC_NMI (acpi_id[0x0f] high edge lint[0x1])
Dec 13 02:24:42.567571 kernel: ACPI: LAPIC_NMI (acpi_id[0x10] high edge lint[0x1])
Dec 13 02:24:42.567576 kernel: IOAPIC[0]: apic_id 2, version 32, address 0xfec00000, GSI 0-119
Dec 13 02:24:42.567581 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Dec 13 02:24:42.567586 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Dec 13 02:24:42.567591 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Dec 13 02:24:42.567596 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Dec 13 02:24:42.567601 kernel: TSC deadline timer available
Dec 13 02:24:42.567606 kernel: smpboot: Allowing 16 CPUs, 0 hotplug CPUs
Dec 13 02:24:42.567611 kernel: [mem 0x90000000-0xdfffffff] available for PCI devices
Dec 13 02:24:42.567616 kernel: Booting paravirtualized kernel on bare hardware
Dec 13 02:24:42.567621 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Dec 13 02:24:42.567626 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:16 nr_node_ids:1
Dec 13 02:24:42.567631 kernel: percpu: Embedded 56 pages/cpu s188696 r8192 d32488 u262144
Dec 13 02:24:42.567636 kernel: pcpu-alloc: s188696 r8192 d32488 u262144 alloc=1*2097152
Dec 13 02:24:42.567641 kernel: pcpu-alloc: [0] 00 01 02 03 04 05 06 07 [0] 08 09 10 11 12 13 14 15
Dec 13 02:24:42.567646 kernel: Built 1 zonelists, mobility grouping on. Total pages: 8232415
Dec 13 02:24:42.567651 kernel: Policy zone: Normal
Dec 13 02:24:42.567657 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.first_boot=detected flatcar.oem.id=packet flatcar.autologin verity.usrhash=66bd2580285375a2ba5b0e34ba63606314bcd90aaed1de1996371bdcb032485c
Dec 13 02:24:42.567662 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Dec 13 02:24:42.567667 kernel: Dentry cache hash table entries: 4194304 (order: 13, 33554432 bytes, linear)
Dec 13 02:24:42.567672 kernel: Inode-cache hash table entries: 2097152 (order: 12, 16777216 bytes, linear)
Dec 13 02:24:42.567677 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Dec 13 02:24:42.567682 kernel: Memory: 32722604K/33452980K available (12294K kernel code, 2275K rwdata, 13716K rodata, 47476K init, 4108K bss, 730116K reserved, 0K cma-reserved)
Dec 13 02:24:42.567688 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=16, Nodes=1
Dec 13 02:24:42.567693 kernel: ftrace: allocating 34549 entries in 135 pages
Dec 13 02:24:42.567698 kernel: ftrace: allocated 135 pages with 4 groups
Dec 13 02:24:42.567703 kernel: rcu: Hierarchical RCU implementation.
Dec 13 02:24:42.567708 kernel: rcu: RCU event tracing is enabled.
Dec 13 02:24:42.567713 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=16.
Dec 13 02:24:42.567718 kernel: Rude variant of Tasks RCU enabled.
Dec 13 02:24:42.567723 kernel: Tracing variant of Tasks RCU enabled.
Dec 13 02:24:42.567729 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Dec 13 02:24:42.567734 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=16
Dec 13 02:24:42.567739 kernel: NR_IRQS: 33024, nr_irqs: 2184, preallocated irqs: 16
Dec 13 02:24:42.567744 kernel: random: crng init done
Dec 13 02:24:42.567749 kernel: Console: colour dummy device 80x25
Dec 13 02:24:42.567754 kernel: printk: console [tty0] enabled
Dec 13 02:24:42.567759 kernel: printk: console [ttyS1] enabled
Dec 13 02:24:42.567764 kernel: ACPI: Core revision 20210730
Dec 13 02:24:42.567769 kernel: hpet: HPET dysfunctional in PC10. Force disabled.
Dec 13 02:24:42.567774 kernel: APIC: Switch to symmetric I/O mode setup
Dec 13 02:24:42.567779 kernel: DMAR: Host address width 39
Dec 13 02:24:42.567784 kernel: DMAR: DRHD base: 0x000000fed91000 flags: 0x1
Dec 13 02:24:42.567789 kernel: DMAR: dmar0: reg_base_addr fed91000 ver 1:0 cap d2008c40660462 ecap f050da
Dec 13 02:24:42.567794 kernel: DMAR: RMRR base: 0x0000008cf18000 end: 0x0000008d161fff
Dec 13 02:24:42.567799 kernel: DMAR-IR: IOAPIC id 2 under DRHD base 0xfed91000 IOMMU 0
Dec 13 02:24:42.567804 kernel: DMAR-IR: HPET id 0 under DRHD base 0xfed91000
Dec 13 02:24:42.567809 kernel: DMAR-IR: Queued invalidation will be enabled to support x2apic and Intr-remapping.
Dec 13 02:24:42.567814 kernel: DMAR-IR: Enabled IRQ remapping in x2apic mode
Dec 13 02:24:42.567819 kernel: x2apic enabled
Dec 13 02:24:42.567825 kernel: Switched APIC routing to cluster x2apic.
Dec 13 02:24:42.567830 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x3101f59f5e6, max_idle_ns: 440795259996 ns
Dec 13 02:24:42.567835 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 6799.81 BogoMIPS (lpj=3399906)
Dec 13 02:24:42.567840 kernel: CPU0: Thermal monitoring enabled (TM1)
Dec 13 02:24:42.567845 kernel: process: using mwait in idle threads
Dec 13 02:24:42.567850 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8
Dec 13 02:24:42.567854 kernel: Last level dTLB entries: 4KB 64, 2MB 0, 4MB 0, 1GB 4
Dec 13 02:24:42.567859 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Dec 13 02:24:42.567864 kernel: Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
Dec 13 02:24:42.567870 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on vm exit
Dec 13 02:24:42.567875 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall
Dec 13 02:24:42.567880 kernel: Spectre V2 : Mitigation: Enhanced / Automatic IBRS
Dec 13 02:24:42.567885 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Dec 13 02:24:42.567889 kernel: Spectre V2 : Spectre v2 / PBRSB-eIBRS: Retire a single CALL on VMEXIT
Dec 13 02:24:42.567894 kernel: RETBleed: Mitigation: Enhanced IBRS
Dec 13 02:24:42.567899 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Dec 13 02:24:42.567904 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl and seccomp
Dec 13 02:24:42.567909 kernel: TAA: Mitigation: TSX disabled
Dec 13 02:24:42.567914 kernel: MMIO Stale Data: Mitigation: Clear CPU buffers
Dec 13 02:24:42.567919 kernel: SRBDS: Mitigation: Microcode
Dec 13 02:24:42.567924 kernel: GDS: Vulnerable: No microcode
Dec 13 02:24:42.567929 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Dec 13 02:24:42.567934 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Dec 13 02:24:42.567939 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Dec 13 02:24:42.567944 kernel: x86/fpu: Supporting XSAVE feature 0x008: 'MPX bounds registers'
Dec 13 02:24:42.567949 kernel: x86/fpu: Supporting XSAVE feature 0x010: 'MPX CSR'
Dec 13 02:24:42.567954 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Dec 13 02:24:42.567958 kernel: x86/fpu: xstate_offset[3]: 832, xstate_sizes[3]: 64
Dec 13 02:24:42.567963 kernel: x86/fpu: xstate_offset[4]: 896, xstate_sizes[4]: 64
Dec 13 02:24:42.567968 kernel: x86/fpu: Enabled xstate features 0x1f, context size is 960 bytes, using 'compacted' format.
Dec 13 02:24:42.567973 kernel: Freeing SMP alternatives memory: 32K
Dec 13 02:24:42.567979 kernel: pid_max: default: 32768 minimum: 301
Dec 13 02:24:42.567983 kernel: LSM: Security Framework initializing
Dec 13 02:24:42.567988 kernel: SELinux: Initializing.
Dec 13 02:24:42.567993 kernel: Mount-cache hash table entries: 65536 (order: 7, 524288 bytes, linear)
Dec 13 02:24:42.567998 kernel: Mountpoint-cache hash table entries: 65536 (order: 7, 524288 bytes, linear)
Dec 13 02:24:42.568003 kernel: smpboot: Estimated ratio of average max frequency by base frequency (times 1024): 1445
Dec 13 02:24:42.568008 kernel: smpboot: CPU0: Intel(R) Xeon(R) E-2278G CPU @ 3.40GHz (family: 0x6, model: 0x9e, stepping: 0xd)
Dec 13 02:24:42.568013 kernel: Performance Events: PEBS fmt3+, Skylake events, 32-deep LBR, full-width counters, Intel PMU driver.
Dec 13 02:24:42.568018 kernel: ... version: 4
Dec 13 02:24:42.568023 kernel: ... bit width: 48
Dec 13 02:24:42.568028 kernel: ... generic registers: 4
Dec 13 02:24:42.568033 kernel: ... value mask: 0000ffffffffffff
Dec 13 02:24:42.568038 kernel: ... max period: 00007fffffffffff
Dec 13 02:24:42.568043 kernel: ... fixed-purpose events: 3
Dec 13 02:24:42.568048 kernel: ... event mask: 000000070000000f
Dec 13 02:24:42.568053 kernel: signal: max sigframe size: 2032
Dec 13 02:24:42.568058 kernel: rcu: Hierarchical SRCU implementation.
Dec 13 02:24:42.568063 kernel: NMI watchdog: Enabled. Permanently consumes one hw-PMU counter.
Dec 13 02:24:42.568068 kernel: smp: Bringing up secondary CPUs ...
Dec 13 02:24:42.568073 kernel: x86: Booting SMP configuration:
Dec 13 02:24:42.568078 kernel: .... node #0, CPUs: #1 #2 #3 #4 #5 #6 #7 #8
Dec 13 02:24:42.568083 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Dec 13 02:24:42.568089 kernel: #9 #10 #11 #12 #13 #14 #15
Dec 13 02:24:42.568093 kernel: smp: Brought up 1 node, 16 CPUs
Dec 13 02:24:42.568098 kernel: smpboot: Max logical packages: 1
Dec 13 02:24:42.568103 kernel: smpboot: Total of 16 processors activated (108796.99 BogoMIPS)
Dec 13 02:24:42.568108 kernel: devtmpfs: initialized
Dec 13 02:24:42.568113 kernel: x86/mm: Memory block size: 128MB
Dec 13 02:24:42.568118 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x81b26000-0x81b26fff] (4096 bytes)
Dec 13 02:24:42.568124 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x8c23b000-0x8c66cfff] (4399104 bytes)
Dec 13 02:24:42.568129 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Dec 13 02:24:42.568134 kernel: futex hash table entries: 4096 (order: 6, 262144 bytes, linear)
Dec 13 02:24:42.568139 kernel: pinctrl core: initialized pinctrl subsystem
Dec 13 02:24:42.568144 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Dec 13 02:24:42.568149 kernel: audit: initializing netlink subsys (disabled)
Dec 13 02:24:42.568154 kernel: audit: type=2000 audit(1734056676.041:1): state=initialized audit_enabled=0 res=1
Dec 13 02:24:42.568159 kernel: thermal_sys: Registered thermal governor 'step_wise'
Dec 13 02:24:42.568164 kernel: thermal_sys: Registered thermal governor 'user_space'
Dec 13 02:24:42.568169 kernel: cpuidle: using governor menu
Dec 13 02:24:42.568174 kernel: ACPI: bus type PCI registered
Dec 13 02:24:42.568179 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Dec 13 02:24:42.568184 kernel: dca service started, version 1.12.1
Dec 13 02:24:42.568189 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xe0000000-0xefffffff] (base 0xe0000000)
Dec 13 02:24:42.568194 kernel: PCI: MMCONFIG at [mem 0xe0000000-0xefffffff] reserved in E820
Dec 13 02:24:42.568199 kernel: PCI: Using configuration type 1 for base access
Dec 13 02:24:42.568204 kernel: ENERGY_PERF_BIAS: Set to 'normal', was 'performance'
Dec 13 02:24:42.568209 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Dec 13 02:24:42.568214 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
Dec 13 02:24:42.568219 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
Dec 13 02:24:42.568224 kernel: ACPI: Added _OSI(Module Device)
Dec 13 02:24:42.568229 kernel: ACPI: Added _OSI(Processor Device)
Dec 13 02:24:42.568234 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Dec 13 02:24:42.568239 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Dec 13 02:24:42.568244 kernel: ACPI: Added _OSI(Linux-Dell-Video)
Dec 13 02:24:42.568249 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
Dec 13 02:24:42.568254 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
Dec 13 02:24:42.568259 kernel: ACPI: 12 ACPI AML tables successfully acquired and loaded
Dec 13 02:24:42.568264 kernel: ACPI: Dynamic OEM Table Load:
Dec 13 02:24:42.568269 kernel: ACPI: SSDT 0xFFFF9B0D80218300 0000F4 (v02 PmRef Cpu0Psd 00003000 INTL 20160527)
Dec 13 02:24:42.568274 kernel: ACPI: \_SB_.PR00: _OSC native thermal LVT Acked
Dec 13 02:24:42.568279 kernel: ACPI: Dynamic OEM Table Load:
Dec 13 02:24:42.568284 kernel: ACPI: SSDT 0xFFFF9B0D81AE3000 000400 (v02 PmRef Cpu0Cst 00003001 INTL 20160527)
Dec 13 02:24:42.568291 kernel: ACPI: Dynamic OEM Table Load:
Dec 13 02:24:42.568296 kernel: ACPI: SSDT 0xFFFF9B0D81A5C000 000683 (v02 PmRef Cpu0Ist 00003000 INTL 20160527)
Dec 13 02:24:42.568301 kernel: ACPI: Dynamic OEM Table Load:
Dec 13 02:24:42.568307 kernel: ACPI: SSDT 0xFFFF9B0D81B4E800 0005FC (v02 PmRef ApIst 00003000 INTL 20160527)
Dec 13 02:24:42.568312 kernel: ACPI: Dynamic OEM Table Load:
Dec 13 02:24:42.568330 kernel: ACPI: SSDT 0xFFFF9B0D80148000 000AB0 (v02 PmRef ApPsd 00003000 INTL 20160527)
Dec 13 02:24:42.568334 kernel: ACPI: Dynamic OEM Table Load:
Dec 13 02:24:42.568340 kernel: ACPI: SSDT 0xFFFF9B0D81AE6000 00030A (v02 PmRef ApCst 00003000 INTL 20160527)
Dec 13 02:24:42.568344 kernel: ACPI: Interpreter enabled
Dec 13 02:24:42.568349 kernel: ACPI: PM: (supports S0 S5)
Dec 13 02:24:42.568354 kernel: ACPI: Using IOAPIC for interrupt routing
Dec 13 02:24:42.568359 kernel: HEST: Enabling Firmware First mode for corrected errors.
Dec 13 02:24:42.568364 kernel: mce: [Firmware Bug]: Ignoring request to disable invalid MCA bank 14.
Dec 13 02:24:42.568369 kernel: HEST: Table parsing has been initialized.
Dec 13 02:24:42.568374 kernel: GHES: APEI firmware first mode is enabled by APEI bit and WHEA _OSC.
Dec 13 02:24:42.568379 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Dec 13 02:24:42.568384 kernel: ACPI: Enabled 9 GPEs in block 00 to 7F
Dec 13 02:24:42.568389 kernel: ACPI: PM: Power Resource [USBC]
Dec 13 02:24:42.568394 kernel: ACPI: PM: Power Resource [V0PR]
Dec 13 02:24:42.568398 kernel: ACPI: PM: Power Resource [V1PR]
Dec 13 02:24:42.568403 kernel: ACPI: PM: Power Resource [V2PR]
Dec 13 02:24:42.568408 kernel: ACPI: PM: Power Resource [WRST]
Dec 13 02:24:42.568413 kernel: ACPI: PM: Power Resource [FN00]
Dec 13 02:24:42.568418 kernel: ACPI: PM: Power Resource [FN01]
Dec 13 02:24:42.568423 kernel: ACPI: PM: Power Resource [FN02]
Dec 13 02:24:42.568428 kernel: ACPI: PM: Power Resource [FN03]
Dec 13 02:24:42.568433 kernel: ACPI: PM: Power Resource [FN04]
Dec 13 02:24:42.568437 kernel: ACPI: PM: Power Resource [PIN]
Dec 13 02:24:42.568442 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-fe])
Dec 13 02:24:42.568505 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Dec 13 02:24:42.568553 kernel: acpi PNP0A08:00: _OSC: platform does not support [AER]
Dec 13 02:24:42.568594 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME PCIeCapability LTR]
Dec 13 02:24:42.568601 kernel: PCI host bridge to bus 0000:00
Dec 13 02:24:42.568646 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Dec 13 02:24:42.568685 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Dec 13 02:24:42.568722 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Dec 13 02:24:42.568759 kernel: pci_bus 0000:00: root bus resource [mem 0x90000000-0xdfffffff window]
Dec 13 02:24:42.568797 kernel: pci_bus 0000:00: root bus resource [mem 0xfc800000-0xfe7fffff window]
Dec 13 02:24:42.568834 kernel: pci_bus 0000:00: root bus resource [bus 00-fe]
Dec 13 02:24:42.568885 kernel: pci 0000:00:00.0: [8086:3e31] type 00 class 0x060000
Dec 13 02:24:42.568932 kernel: pci 0000:00:01.0: [8086:1901] type 01 class 0x060400
Dec 13 02:24:42.568976 kernel: pci 0000:00:01.0: PME# supported from D0 D3hot D3cold
Dec 13 02:24:42.569023 kernel: pci 0000:00:08.0: [8086:1911] type 00 class 0x088000
Dec 13 02:24:42.569067 kernel: pci 0000:00:08.0: reg 0x10: [mem 0x9551f000-0x9551ffff 64bit]
Dec 13 02:24:42.569112 kernel: pci 0000:00:12.0: [8086:a379] type 00 class 0x118000
Dec 13 02:24:42.569155 kernel: pci 0000:00:12.0: reg 0x10: [mem 0x9551e000-0x9551efff 64bit]
Dec 13 02:24:42.569203 kernel: pci 0000:00:14.0: [8086:a36d] type 00 class 0x0c0330
Dec 13 02:24:42.569245 kernel: pci 0000:00:14.0: reg 0x10: [mem 0x95500000-0x9550ffff 64bit]
Dec 13 02:24:42.569291 kernel: pci 0000:00:14.0: PME# supported from D3hot D3cold
Dec 13 02:24:42.569339 kernel: pci 0000:00:14.2: [8086:a36f] type 00 class 0x050000
Dec 13 02:24:42.569383 kernel: pci 0000:00:14.2: reg 0x10: [mem 0x95512000-0x95513fff 64bit]
Dec 13 02:24:42.569424 kernel: pci 0000:00:14.2: reg 0x18: [mem 0x9551d000-0x9551dfff 64bit]
Dec 13 02:24:42.569470 kernel: pci 0000:00:15.0: [8086:a368] type 00 class 0x0c8000
Dec 13 02:24:42.569511 kernel: pci 0000:00:15.0: reg 0x10: [mem 0x00000000-0x00000fff 64bit]
Dec 13 02:24:42.569557 kernel: pci 0000:00:15.1: [8086:a369] type 00 class 0x0c8000
Dec 13 02:24:42.569601 kernel: pci 0000:00:15.1: reg 0x10: [mem 0x00000000-0x00000fff 64bit]
Dec 13 02:24:42.569645 kernel: pci 0000:00:16.0: [8086:a360] type 00 class 0x078000
Dec 13 02:24:42.569687 kernel: pci 0000:00:16.0: reg 0x10: [mem 0x9551a000-0x9551afff 64bit]
Dec 13 02:24:42.569728 kernel: pci 0000:00:16.0: PME# supported from D3hot
Dec 13 02:24:42.569773 kernel: pci 0000:00:16.1: [8086:a361] type 00 class 0x078000
Dec 13 02:24:42.569814 kernel: pci 0000:00:16.1: reg 0x10: [mem 0x95519000-0x95519fff 64bit]
Dec 13 02:24:42.569856 kernel: pci 0000:00:16.1: PME# supported from D3hot
Dec 13 02:24:42.569904 kernel: pci 0000:00:16.4: [8086:a364] type 00 class 0x078000
Dec 13 02:24:42.569946 kernel: pci 0000:00:16.4: reg 0x10: [mem 0x95518000-0x95518fff 64bit]
Dec 13 02:24:42.569988 kernel: pci 0000:00:16.4: PME# supported from D3hot
Dec 13 02:24:42.570031 kernel: pci 0000:00:17.0: [8086:a352] type 00 class 0x010601
Dec 13 02:24:42.570073 kernel: pci 0000:00:17.0: reg 0x10: [mem 0x95510000-0x95511fff]
Dec 13 02:24:42.570116 kernel: pci 0000:00:17.0: reg 0x14: [mem 0x95517000-0x955170ff]
Dec 13 02:24:42.570163 kernel: pci 0000:00:17.0: reg 0x18: [io 0x6050-0x6057]
Dec 13 02:24:42.570206 kernel: pci 0000:00:17.0: reg 0x1c: [io 0x6040-0x6043]
Dec 13 02:24:42.570247 kernel: pci 0000:00:17.0: reg 0x20: [io 0x6020-0x603f]
Dec 13 02:24:42.570291 kernel: pci 0000:00:17.0: reg 0x24: [mem 0x95516000-0x955167ff]
Dec 13 02:24:42.570333 kernel: pci 0000:00:17.0: PME# supported from D3hot
Dec 13 02:24:42.570380 kernel: pci 0000:00:1b.0: [8086:a340] type 01 class 0x060400
Dec 13 02:24:42.570423 kernel: pci 0000:00:1b.0: PME# supported from D0 D3hot D3cold
Dec 13 02:24:42.570470 kernel: pci 0000:00:1b.4: [8086:a32c] type 01 class 0x060400
Dec 13 02:24:42.570513 kernel: pci 0000:00:1b.4: PME# supported from D0 D3hot D3cold
Dec 13 02:24:42.570560 kernel: pci 0000:00:1b.5: [8086:a32d] type 01 class 0x060400
Dec 13 02:24:42.570602 kernel: pci 0000:00:1b.5: PME# supported from D0 D3hot D3cold
Dec 13 02:24:42.570647 kernel: pci 0000:00:1c.0: [8086:a338] type 01 class 0x060400
Dec 13 02:24:42.570690 kernel: pci 0000:00:1c.0: PME# supported from D0 D3hot D3cold
Dec 13 02:24:42.570740 kernel: pci 0000:00:1c.3: [8086:a33b] type 01 class 0x060400
Dec 13 02:24:42.570785 kernel: pci 0000:00:1c.3: PME# supported from D0 D3hot D3cold
Dec 13 02:24:42.570831 kernel: pci 0000:00:1e.0: [8086:a328] type 00 class 0x078000
Dec 13 02:24:42.570875 kernel: pci 0000:00:1e.0: reg 0x10: [mem 0x00000000-0x00000fff 64bit]
Dec 13 02:24:42.570922 kernel: pci 0000:00:1f.0: [8086:a309] type 00 class 0x060100
Dec 13 02:24:42.570969 kernel: pci 0000:00:1f.4: [8086:a323] type 00 class 0x0c0500
Dec 13 02:24:42.571012 kernel: pci 0000:00:1f.4: reg 0x10: [mem 0x95514000-0x955140ff 64bit]
Dec 13 02:24:42.571054 kernel: pci 0000:00:1f.4: reg 0x20: [io 0xefa0-0xefbf]
Dec 13 02:24:42.571100 kernel: pci 0000:00:1f.5: [8086:a324] type 00 class 0x0c8000
Dec 13 02:24:42.571143 kernel: pci 0000:00:1f.5: reg 0x10: [mem 0xfe010000-0xfe010fff]
Dec 13 02:24:42.571193 kernel: pci 0000:01:00.0: [15b3:1015] type 00 class 0x020000
Dec 13 02:24:42.571238 kernel: pci 0000:01:00.0: reg 0x10: [mem 0x92000000-0x93ffffff 64bit pref]
Dec 13 02:24:42.571282 kernel: pci 0000:01:00.0: reg 0x30: [mem 0x95200000-0x952fffff pref]
Dec 13 02:24:42.571327 kernel: pci 0000:01:00.0: PME# supported from D3cold
Dec 13 02:24:42.571371 kernel: pci 0000:01:00.0: reg 0x1a4: [mem 0x00000000-0x000fffff 64bit pref]
Dec 13 02:24:42.571413 kernel: pci 0000:01:00.0: VF(n) BAR0 space: [mem 0x00000000-0x007fffff 64bit pref] (contains BAR0 for 8 VFs)
Dec 13 02:24:42.571462 kernel: pci 0000:01:00.1: [15b3:1015] type 00 class 0x020000
Dec 13 02:24:42.571507 kernel: pci 0000:01:00.1: reg 0x10: [mem 0x90000000-0x91ffffff 64bit pref]
Dec 13 02:24:42.571551 kernel: pci 0000:01:00.1: reg 0x30: [mem 0x95100000-0x951fffff pref]
Dec 13 02:24:42.571594 kernel: pci 0000:01:00.1: PME# supported from D3cold
Dec 13 02:24:42.571637 kernel: pci 0000:01:00.1: reg 0x1a4: [mem 0x00000000-0x000fffff 64bit pref]
Dec 13 02:24:42.571680 kernel: pci 0000:01:00.1: VF(n) BAR0 space: [mem 0x00000000-0x007fffff 64bit pref] (contains BAR0 for 8 VFs)
Dec 13 02:24:42.571722 kernel: pci 0000:00:01.0: PCI bridge to [bus 01]
Dec 13 02:24:42.571765 kernel: pci 0000:00:01.0: bridge window [mem 0x95100000-0x952fffff]
Dec 13 02:24:42.571808 kernel: pci 0000:00:01.0: bridge window [mem 0x90000000-0x93ffffff 64bit pref]
Dec 13 02:24:42.571851 kernel: pci 0000:00:1b.0: PCI bridge to [bus 02]
Dec 13 02:24:42.571900 kernel: pci 0000:03:00.0: working around ROM BAR overlap defect
Dec 13 02:24:42.571945 kernel: pci 0000:03:00.0: [8086:1533] type 00 class 0x020000
Dec 13 02:24:42.571988 kernel: pci 0000:03:00.0: reg 0x10: [mem 0x95400000-0x9547ffff]
Dec 13 02:24:42.572031 kernel: pci 0000:03:00.0: reg 0x18: [io 0x5000-0x501f]
Dec 13 02:24:42.572074 kernel: pci 0000:03:00.0: reg 0x1c: [mem 0x95480000-0x95483fff]
Dec 13 02:24:42.572118 kernel: pci 0000:03:00.0: PME# supported from D0 D3hot D3cold
Dec 13 02:24:42.572163 kernel: pci 0000:00:1b.4: PCI bridge to [bus 03]
Dec 13 02:24:42.572204 kernel: pci 0000:00:1b.4: bridge window [io 0x5000-0x5fff]
Dec 13 02:24:42.572247 kernel: pci 0000:00:1b.4: bridge window [mem 0x95400000-0x954fffff]
Dec 13 02:24:42.572296 kernel: pci 0000:04:00.0: working around ROM BAR overlap defect
Dec 13 02:24:42.572341 kernel: pci 0000:04:00.0: [8086:1533] type 00 class 0x020000
Dec 13 02:24:42.572383 kernel: pci 0000:04:00.0: reg 0x10: [mem 0x95300000-0x9537ffff]
Dec 13 02:24:42.572426 kernel: pci 0000:04:00.0: reg 0x18: [io 0x4000-0x401f]
Dec 13 02:24:42.572472 kernel: pci 0000:04:00.0: reg 0x1c: [mem 0x95380000-0x95383fff]
Dec 13 02:24:42.572516 kernel: pci 0000:04:00.0: PME# supported from D0 D3hot D3cold
Dec 13 02:24:42.572559 kernel: pci 0000:00:1b.5: PCI bridge to [bus 04]
Dec 13 02:24:42.572600 kernel: pci 0000:00:1b.5: bridge window [io 0x4000-0x4fff]
Dec 13 02:24:42.572643 kernel: pci 0000:00:1b.5: bridge window [mem 0x95300000-0x953fffff]
Dec 13 02:24:42.572685 kernel: pci 0000:00:1c.0: PCI bridge to [bus 05]
Dec 13 02:24:42.572732 kernel: pci 0000:06:00.0: [1a03:1150] type 01 class 0x060400
Dec 13 02:24:42.572776 kernel: pci 0000:06:00.0: enabling Extended Tags
Dec 13 02:24:42.572821 kernel: pci 0000:06:00.0: supports D1 D2
Dec 13 02:24:42.572864 kernel: pci 0000:06:00.0: PME# supported from D0 D1 D2 D3hot D3cold
Dec 13 02:24:42.572939 kernel: pci 0000:00:1c.3: PCI bridge to [bus 06-07]
Dec 13 02:24:42.573001 kernel: pci 0000:00:1c.3: bridge window [io 0x3000-0x3fff]
Dec 13 02:24:42.573043 kernel: pci 0000:00:1c.3: bridge window [mem 0x94000000-0x950fffff]
Dec 13 02:24:42.573090 kernel: pci_bus 0000:07: extended config space not accessible
Dec 13 02:24:42.573139 kernel: pci 0000:07:00.0: [1a03:2000] type 00 class 0x030000
Dec 13 02:24:42.573189 kernel: pci 0000:07:00.0: reg 0x10: [mem 0x94000000-0x94ffffff]
Dec 13 02:24:42.573235 kernel: pci 0000:07:00.0: reg 0x14: [mem 0x95000000-0x9501ffff]
Dec 13 02:24:42.573281 kernel: pci 0000:07:00.0: reg 0x18: [io 0x3000-0x307f]
Dec 13 02:24:42.573369 kernel: pci 0000:07:00.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Dec 13 02:24:42.573415 kernel: pci 0000:07:00.0: supports D1 D2
Dec 13 02:24:42.573461 kernel: pci 0000:07:00.0: PME# supported from D0 D1 D2 D3hot D3cold
Dec 13 02:24:42.573505 kernel: pci 0000:06:00.0: PCI bridge to [bus 07]
Dec 13 02:24:42.573549 kernel: pci 0000:06:00.0: bridge window [io 0x3000-0x3fff]
Dec 13 02:24:42.573593 kernel: pci 0000:06:00.0: bridge window [mem 0x94000000-0x950fffff]
Dec 13 02:24:42.573601 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 0
Dec 13 02:24:42.573607 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 1
Dec 13 02:24:42.573613 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 0
Dec 13 02:24:42.573618 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 0
Dec 13 02:24:42.573623 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 0
Dec 13 02:24:42.573629 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 0
Dec 13 02:24:42.573634 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 0
Dec 13 02:24:42.573640 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 0
Dec 13 02:24:42.573645 kernel: iommu: Default domain type: Translated
Dec 13 02:24:42.573650 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Dec 13 02:24:42.573694 kernel: pci 0000:07:00.0: vgaarb: setting as boot VGA device
Dec 13 02:24:42.573740 kernel: pci 0000:07:00.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Dec 13 02:24:42.573785 kernel: pci 0000:07:00.0: vgaarb: bridge control possible
Dec 13 02:24:42.573792 kernel: vgaarb: loaded
Dec 13 02:24:42.573798 kernel: pps_core: LinuxPPS API ver. 1 registered
Dec 13 02:24:42.573803 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Dec 13 02:24:42.573810 kernel: PTP clock support registered
Dec 13 02:24:42.573815 kernel: PCI: Using ACPI for IRQ routing
Dec 13 02:24:42.573820 kernel: PCI: pci_cache_line_size set to 64 bytes
Dec 13 02:24:42.573826 kernel: e820: reserve RAM buffer [mem 0x00099800-0x0009ffff]
Dec 13 02:24:42.573831 kernel: e820: reserve RAM buffer [mem 0x81b26000-0x83ffffff]
Dec 13 02:24:42.573836 kernel: e820: reserve RAM buffer [mem 0x8afcd000-0x8bffffff]
Dec 13 02:24:42.573841 kernel: e820: reserve RAM buffer [mem 0x8c23b000-0x8fffffff]
Dec 13 02:24:42.573846 kernel: e820: reserve RAM buffer [mem 0x8ef00000-0x8fffffff]
Dec 13 02:24:42.573851 kernel: e820: reserve RAM buffer [mem 0x86f000000-0x86fffffff]
Dec 13 02:24:42.573857 kernel: clocksource: Switched to clocksource tsc-early
Dec 13 02:24:42.573862 kernel: VFS: Disk quotas dquot_6.6.0
Dec 13 02:24:42.573868 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Dec 13 02:24:42.573873 kernel: pnp: PnP ACPI init
Dec 13 02:24:42.573919 kernel: system 00:00: [mem 0x40000000-0x403fffff] has been reserved
Dec 13 02:24:42.573961 kernel: pnp 00:02: [dma 0 disabled]
Dec 13 02:24:42.574003 kernel: pnp 00:03: [dma 0 disabled]
Dec 13 02:24:42.574046 kernel: system 00:04: [io 0x0680-0x069f] has been reserved
Dec 13 02:24:42.574085 kernel: system 00:04: [io 0x164e-0x164f] has been reserved
Dec 13 02:24:42.574125 kernel: system 00:05: [io 0x1854-0x1857] has been reserved
Dec 13 02:24:42.574165 kernel: system 00:06: [mem 0xfed10000-0xfed17fff] has been reserved
Dec 13 02:24:42.574204 kernel: system 00:06: [mem 0xfed18000-0xfed18fff] has been reserved
Dec 13 02:24:42.574241 kernel: system 00:06: [mem 0xfed19000-0xfed19fff] has been reserved
Dec 13 02:24:42.574281 kernel: system 00:06: [mem 0xe0000000-0xefffffff] has been reserved
Dec 13 02:24:42.574342 kernel: system 00:06: [mem 0xfed20000-0xfed3ffff] has been reserved
Dec 13 02:24:42.574399 kernel: system 00:06: [mem 0xfed90000-0xfed93fff] could not be reserved
Dec 13 02:24:42.574436 kernel: system 00:06: [mem 0xfed45000-0xfed8ffff] has been reserved
Dec 13 02:24:42.574473 kernel: system 00:06: [mem 0xfee00000-0xfeefffff] could not be reserved
Dec 13 02:24:42.574516 kernel: system 00:07: [io 0x1800-0x18fe] could not be reserved
Dec 13 02:24:42.574554 kernel: system 00:07: [mem 0xfd000000-0xfd69ffff] has been reserved
Dec 13 02:24:42.574594 kernel: system 00:07: [mem 0xfd6c0000-0xfd6cffff] has been reserved
Dec 13 02:24:42.574631 kernel: system 00:07: [mem 0xfd6f0000-0xfdffffff] has been reserved
Dec 13 02:24:42.574668 kernel: system 00:07: [mem 0xfe000000-0xfe01ffff] could not be reserved
Dec 13 02:24:42.574705 kernel: system 00:07: [mem 0xfe200000-0xfe7fffff] has been reserved
Dec 13 02:24:42.574743 kernel: system 00:07: [mem 0xff000000-0xffffffff] has been reserved
Dec 13 02:24:42.574783 kernel: system 00:08: [io 0x2000-0x20fe] has been reserved
Dec 13 02:24:42.574790 kernel: pnp: PnP ACPI: found 10 devices
Dec 13 02:24:42.574797 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Dec 13 02:24:42.574802 kernel: NET: Registered PF_INET protocol family
Dec 13 02:24:42.574808 kernel: IP idents hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Dec 13 02:24:42.574813 kernel: tcp_listen_portaddr_hash hash table entries: 16384 (order: 6, 262144 bytes, linear)
Dec 13 02:24:42.574818 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Dec 13 02:24:42.574824 kernel: TCP established hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Dec 13 02:24:42.574829 kernel: TCP bind hash table entries: 65536 (order: 8, 1048576 bytes, linear)
Dec 13 02:24:42.574834 kernel: TCP: Hash tables configured (established 262144 bind 65536)
Dec 13 02:24:42.574839 kernel: UDP hash table entries: 16384 (order: 7, 524288 bytes, linear)
Dec 13 02:24:42.574845 kernel: UDP-Lite hash table entries: 16384 (order: 7, 524288 bytes, linear)
Dec 13 02:24:42.574850 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Dec 13 02:24:42.574855 kernel: NET: Registered PF_XDP protocol family
Dec 13 02:24:42.574897 kernel: pci 0000:00:15.0: BAR 0: assigned [mem 0x95515000-0x95515fff 64bit]
Dec 13 02:24:42.574940 kernel: pci 0000:00:15.1: BAR 0: assigned [mem 0x9551b000-0x9551bfff 64bit]
Dec 13 02:24:42.574981 kernel: pci 0000:00:1e.0: BAR 0: assigned [mem 0x9551c000-0x9551cfff 64bit]
Dec 13 02:24:42.575027 kernel: pci 0000:01:00.0: BAR 7: no space for [mem size 0x00800000 64bit pref]
Dec 13 02:24:42.575070 kernel: pci 0000:01:00.0: BAR 7: failed to assign [mem size 0x00800000 64bit pref]
Dec 13 02:24:42.575117 kernel: pci 0000:01:00.1: BAR 7: no space for [mem size 0x00800000 64bit pref]
Dec 13 02:24:42.575161 kernel: pci 0000:01:00.1: BAR 7: failed to assign [mem size 0x00800000 64bit pref]
Dec 13 02:24:42.575204 kernel: pci 0000:00:01.0: PCI bridge to [bus 01]
Dec 13 02:24:42.575247 kernel: pci 0000:00:01.0: bridge window [mem 0x95100000-0x952fffff]
Dec 13 02:24:42.575291 kernel: pci 0000:00:01.0: bridge window [mem 0x90000000-0x93ffffff 64bit pref]
Dec 13 02:24:42.575376 kernel: pci 0000:00:1b.0: PCI bridge to [bus 02]
Dec 13 02:24:42.575420 kernel: pci 0000:00:1b.4: PCI bridge to [bus 03]
Dec 13 02:24:42.575464 kernel: pci 0000:00:1b.4: bridge window [io 0x5000-0x5fff]
Dec 13 02:24:42.575505 kernel: pci 0000:00:1b.4: bridge window [mem 0x95400000-0x954fffff]
Dec 13 02:24:42.575548 kernel: pci 0000:00:1b.5: PCI bridge to [bus 04]
Dec 13 02:24:42.575590 kernel: pci 0000:00:1b.5: bridge window [io 0x4000-0x4fff]
Dec 13 02:24:42.575633 kernel: pci 0000:00:1b.5: bridge window [mem 0x95300000-0x953fffff]
Dec 13 02:24:42.575674 kernel: pci 0000:00:1c.0: PCI bridge to [bus 05]
Dec 13 02:24:42.575720 kernel: pci 0000:06:00.0: PCI bridge to [bus 07]
Dec 13 02:24:42.575764 kernel: pci 0000:06:00.0: bridge window [io 0x3000-0x3fff]
Dec 13 02:24:42.575807 kernel: pci 0000:06:00.0: bridge window [mem 0x94000000-0x950fffff]
Dec 13 02:24:42.575850 kernel: pci 0000:00:1c.3: PCI bridge to [bus 06-07]
Dec 13 02:24:42.575892 kernel: pci 0000:00:1c.3: bridge window [io 0x3000-0x3fff]
Dec 13 02:24:42.575935 kernel: pci 0000:00:1c.3: bridge window [mem 0x94000000-0x950fffff]
Dec 13 02:24:42.575973 kernel: pci_bus 0000:00: Some PCI device resources are unassigned, try booting with pci=realloc
Dec 13 02:24:42.576011 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Dec 13 02:24:42.576048 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Dec 13 02:24:42.576086 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Dec 13 02:24:42.576123 kernel: pci_bus 0000:00: resource 7 [mem 0x90000000-0xdfffffff window]
Dec 13 02:24:42.576161 kernel: pci_bus 0000:00: resource 8 [mem 0xfc800000-0xfe7fffff window]
Dec 13 02:24:42.576206 kernel: pci_bus 0000:01: resource 1 [mem 0x95100000-0x952fffff]
Dec 13 02:24:42.576246 kernel: pci_bus 0000:01: resource 2 [mem 0x90000000-0x93ffffff 64bit pref]
Dec 13 02:24:42.576288 kernel: pci_bus 0000:03: resource 0 [io 0x5000-0x5fff]
Dec 13 02:24:42.576372 kernel: pci_bus 0000:03: resource 1 [mem 0x95400000-0x954fffff]
Dec 13 02:24:42.576415 kernel: pci_bus 0000:04: resource 0 [io 0x4000-0x4fff]
Dec 13 02:24:42.576454 kernel: pci_bus 0000:04: resource 1 [mem 0x95300000-0x953fffff]
Dec 13 02:24:42.576499 kernel: pci_bus 0000:06: resource 0 [io 0x3000-0x3fff]
Dec 13 02:24:42.576539 kernel: pci_bus 0000:06: resource 1 [mem 0x94000000-0x950fffff]
Dec 13 02:24:42.576579 kernel: pci_bus 0000:07: resource 0 [io 0x3000-0x3fff]
Dec 13 02:24:42.576621 kernel: pci_bus 0000:07: resource 1 [mem 0x94000000-0x950fffff]
Dec 13 02:24:42.576630 kernel: PCI: CLS 64 bytes, default 64
Dec 13 02:24:42.576635 kernel: DMAR: No ATSR found
Dec 13 02:24:42.576640 kernel: DMAR: No SATC found
Dec 13 02:24:42.576646 kernel: DMAR: dmar0: Using Queued invalidation
Dec 13 02:24:42.576689 kernel: pci 0000:00:00.0: Adding to iommu group 0
Dec 13 02:24:42.576731 kernel: pci 0000:00:01.0: Adding to iommu group 1
Dec 13 02:24:42.576774 kernel: pci 0000:00:08.0: Adding to iommu group 2
Dec 13 02:24:42.576817 kernel: pci 0000:00:12.0: Adding to iommu group 3
Dec 13 02:24:42.576862 kernel: pci 0000:00:14.0: Adding to iommu group 4
Dec 13 02:24:42.576903 kernel: pci 0000:00:14.2: Adding to iommu group 4
Dec 13 02:24:42.576946 kernel: pci 0000:00:15.0: Adding to iommu group 5
Dec 13 02:24:42.576987 kernel: pci 0000:00:15.1: Adding to iommu group 5
Dec 13 02:24:42.577028 kernel: pci 0000:00:16.0: Adding to iommu group 6
Dec 13 02:24:42.577069 kernel: pci 0000:00:16.1: Adding to iommu group 6
Dec 13 02:24:42.577110 kernel: pci 0000:00:16.4: Adding to iommu group 6
Dec 13 02:24:42.577154 kernel: pci 0000:00:17.0: Adding to iommu group 7
Dec 13 02:24:42.577196 kernel: pci 0000:00:1b.0: Adding to iommu group 8
Dec 13 02:24:42.577240 kernel: pci 0000:00:1b.4: Adding to iommu group 9
Dec 13 02:24:42.577283 kernel: pci 0000:00:1b.5: Adding to iommu group 10
Dec 13 02:24:42.577351 kernel: pci 0000:00:1c.0: Adding to iommu group 11
Dec 13 02:24:42.577394 kernel: pci 0000:00:1c.3: Adding to iommu group 12
Dec 13 02:24:42.577436 kernel: pci 0000:00:1e.0: Adding to iommu group 13
Dec 13 02:24:42.577479 kernel: pci 0000:00:1f.0: Adding to iommu group 14
Dec 13 02:24:42.577521 kernel: pci 0000:00:1f.4: Adding to iommu group 14
Dec 13 02:24:42.577564 kernel: pci 0000:00:1f.5: Adding to iommu group 14
Dec 13 02:24:42.577610 kernel: pci 0000:01:00.0: Adding to iommu group 1
Dec 13 02:24:42.577654 kernel: pci 0000:01:00.1: Adding to iommu group 1
Dec 13 02:24:42.577699 kernel: pci 0000:03:00.0: Adding to iommu group 15
Dec 13 02:24:42.577743 kernel: pci 0000:04:00.0: Adding to iommu group 16
Dec 13 02:24:42.577787 kernel: pci 0000:06:00.0: Adding to iommu group 17
Dec 13 02:24:42.577833 kernel: pci 0000:07:00.0: Adding to iommu group 17
Dec 13 02:24:42.577841 kernel: DMAR: Intel(R) Virtualization Technology for Directed I/O
Dec 13 02:24:42.577847 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Dec 13 02:24:42.577854 kernel: software IO TLB: mapped [mem 0x0000000086fcd000-0x000000008afcd000] (64MB)
Dec 13 02:24:42.577859 kernel: RAPL PMU: API unit is 2^-32 Joules, 3 fixed counters, 655360 ms ovfl timer
Dec 13 02:24:42.577864 kernel: RAPL PMU: hw unit of domain pp0-core 2^-14 Joules
Dec 13 02:24:42.577870 kernel: RAPL PMU: hw unit of domain package 2^-14 Joules
Dec 13 02:24:42.577875 kernel: RAPL PMU: hw unit of domain dram 2^-14 Joules
Dec 13 02:24:42.577921 kernel: platform rtc_cmos: registered platform RTC device (no PNP device found)
Dec 13 02:24:42.577930 kernel: Initialise system trusted keyrings
Dec 13 02:24:42.577935 kernel: workingset: timestamp_bits=39 max_order=23 bucket_order=0
Dec 13 02:24:42.577941 kernel: Key type asymmetric registered
Dec 13 02:24:42.577947 kernel: Asymmetric key parser 'x509' registered
Dec 13 02:24:42.577952 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Dec 13 02:24:42.577957 kernel: io scheduler mq-deadline registered
Dec 13 02:24:42.577963 kernel: io scheduler kyber registered
Dec 13 02:24:42.577968 kernel: io scheduler bfq registered
Dec 13 02:24:42.578011 kernel: pcieport 0000:00:01.0: PME: Signaling with IRQ 121
Dec 13 02:24:42.578054 kernel: pcieport 0000:00:1b.0: PME: Signaling with IRQ 122
Dec 13 02:24:42.578097 kernel: pcieport 0000:00:1b.4: PME: Signaling with IRQ 123
Dec 13 02:24:42.578143 kernel: pcieport 0000:00:1b.5: PME: Signaling with IRQ 124
Dec 13 02:24:42.578186 kernel: pcieport 0000:00:1c.0: PME: Signaling with IRQ 125
Dec 13 02:24:42.578229 kernel: pcieport 0000:00:1c.3: PME: Signaling with IRQ 126
Dec 13 02:24:42.578275 kernel: thermal LNXTHERM:00: registered as thermal_zone0
Dec 13 02:24:42.578284 kernel: ACPI: thermal: Thermal Zone [TZ00] (28 C)
Dec 13 02:24:42.578291 kernel: ERST: Error Record Serialization Table (ERST) support is initialized.
Dec 13 02:24:42.578297 kernel: pstore: Registered erst as persistent store backend
Dec 13 02:24:42.578303 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Dec 13 02:24:42.578309 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Dec 13 02:24:42.578314 kernel: 00:02: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Dec 13 02:24:42.578319 kernel: 00:03: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A
Dec 13 02:24:42.578324 kernel: hpet_acpi_add: no address or irqs in _CRS
Dec 13 02:24:42.578368 kernel: tpm_tis MSFT0101:00: 2.0 TPM (device-id 0x1B, rev-id 16)
Dec 13 02:24:42.578376 kernel: i8042: PNP: No PS/2 controller found.
Dec 13 02:24:42.578415 kernel: rtc_cmos rtc_cmos: RTC can wake from S4 Dec 13 02:24:42.578455 kernel: rtc_cmos rtc_cmos: registered as rtc0 Dec 13 02:24:42.578497 kernel: rtc_cmos rtc_cmos: setting system clock to 2024-12-13T02:24:41 UTC (1734056681) Dec 13 02:24:42.578536 kernel: rtc_cmos rtc_cmos: alarms up to one month, y3k, 114 bytes nvram Dec 13 02:24:42.578543 kernel: fail to initialize ptp_kvm Dec 13 02:24:42.578549 kernel: intel_pstate: Intel P-state driver initializing Dec 13 02:24:42.578554 kernel: intel_pstate: Disabling energy efficiency optimization Dec 13 02:24:42.578559 kernel: intel_pstate: HWP enabled Dec 13 02:24:42.578565 kernel: vesafb: mode is 1024x768x8, linelength=1024, pages=0 Dec 13 02:24:42.578570 kernel: vesafb: scrolling: redraw Dec 13 02:24:42.578576 kernel: vesafb: Pseudocolor: size=0:8:8:8, shift=0:0:0:0 Dec 13 02:24:42.578582 kernel: vesafb: framebuffer at 0x94000000, mapped to 0x0000000016d03639, using 768k, total 768k Dec 13 02:24:42.578587 kernel: Console: switching to colour frame buffer device 128x48 Dec 13 02:24:42.578592 kernel: fb0: VESA VGA frame buffer device Dec 13 02:24:42.578598 kernel: NET: Registered PF_INET6 protocol family Dec 13 02:24:42.578603 kernel: Segment Routing with IPv6 Dec 13 02:24:42.578608 kernel: In-situ OAM (IOAM) with IPv6 Dec 13 02:24:42.578613 kernel: NET: Registered PF_PACKET protocol family Dec 13 02:24:42.578619 kernel: Key type dns_resolver registered Dec 13 02:24:42.578625 kernel: microcode: sig=0x906ed, pf=0x2, revision=0xf4 Dec 13 02:24:42.578630 kernel: microcode: Microcode Update Driver: v2.2. Dec 13 02:24:42.578635 kernel: IPI shorthand broadcast: enabled Dec 13 02:24:42.578641 kernel: sched_clock: Marking stable (1680561675, 1339825273)->(4464827119, -1444440171) Dec 13 02:24:42.578646 kernel: registered taskstats version 1 Dec 13 02:24:42.578651 kernel: Loading compiled-in X.509 certificates Dec 13 02:24:42.578657 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.173-flatcar: d9defb0205602bee9bb670636cbe5c74194fdb5e' Dec 13 02:24:42.578662 kernel: Key type .fscrypt registered Dec 13 02:24:42.578667 kernel: Key type fscrypt-provisioning registered Dec 13 02:24:42.578673 kernel: pstore: Using crash dump compression: deflate Dec 13 02:24:42.578678 kernel: ima: Allocated hash algorithm: sha1 Dec 13 02:24:42.578683 kernel: ima: No architecture policies found Dec 13 02:24:42.578689 kernel: clk: Disabling unused clocks Dec 13 02:24:42.578694 kernel: Freeing unused kernel image (initmem) memory: 47476K Dec 13 02:24:42.578699 kernel: Write protecting the kernel read-only data: 28672k Dec 13 02:24:42.578705 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K Dec 13 02:24:42.578710 kernel: Freeing unused kernel image (rodata/data gap) memory: 620K Dec 13 02:24:42.578715 kernel: Run /init as init process Dec 13 02:24:42.578721 kernel: with arguments: Dec 13 02:24:42.578727 kernel: /init Dec 13 02:24:42.578732 kernel: with environment: Dec 13 02:24:42.578737 kernel: HOME=/ Dec 13 02:24:42.578742 kernel: TERM=linux Dec 13 02:24:42.578747 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Dec 13 02:24:42.578754 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Dec 13 02:24:42.578760 systemd[1]: Detected 
architecture x86-64. Dec 13 02:24:42.578767 systemd[1]: Running in initrd. Dec 13 02:24:42.578772 systemd[1]: No hostname configured, using default hostname. Dec 13 02:24:42.578777 systemd[1]: Hostname set to . Dec 13 02:24:42.578783 systemd[1]: Initializing machine ID from random generator. Dec 13 02:24:42.578788 systemd[1]: Queued start job for default target initrd.target. Dec 13 02:24:42.578794 systemd[1]: Started systemd-ask-password-console.path. Dec 13 02:24:42.578799 systemd[1]: Reached target cryptsetup.target. Dec 13 02:24:42.578805 systemd[1]: Reached target paths.target. Dec 13 02:24:42.578811 systemd[1]: Reached target slices.target. Dec 13 02:24:42.578816 systemd[1]: Reached target swap.target. Dec 13 02:24:42.578821 systemd[1]: Reached target timers.target. Dec 13 02:24:42.578827 systemd[1]: Listening on iscsid.socket. Dec 13 02:24:42.578832 systemd[1]: Listening on iscsiuio.socket. Dec 13 02:24:42.578838 systemd[1]: Listening on systemd-journald-audit.socket. Dec 13 02:24:42.578843 systemd[1]: Listening on systemd-journald-dev-log.socket. Dec 13 02:24:42.578850 systemd[1]: Listening on systemd-journald.socket. Dec 13 02:24:42.578855 kernel: tsc: Refined TSC clocksource calibration: 3407.999 MHz Dec 13 02:24:42.578860 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x311fd336761, max_idle_ns: 440795243819 ns Dec 13 02:24:42.578866 systemd[1]: Listening on systemd-networkd.socket. Dec 13 02:24:42.578871 kernel: clocksource: Switched to clocksource tsc Dec 13 02:24:42.578877 systemd[1]: Listening on systemd-udevd-control.socket. Dec 13 02:24:42.578882 systemd[1]: Listening on systemd-udevd-kernel.socket. Dec 13 02:24:42.578888 systemd[1]: Reached target sockets.target. Dec 13 02:24:42.578893 systemd[1]: Starting kmod-static-nodes.service... Dec 13 02:24:42.578900 systemd[1]: Finished network-cleanup.service. Dec 13 02:24:42.578905 systemd[1]: Starting systemd-fsck-usr.service... Dec 13 02:24:42.578911 systemd[1]: Starting systemd-journald.service... Dec 13 02:24:42.578916 systemd[1]: Starting systemd-modules-load.service... Dec 13 02:24:42.578924 systemd-journald[266]: Journal started Dec 13 02:24:42.578950 systemd-journald[266]: Runtime Journal (/run/log/journal/c9da6d612f444fdd970437bdaa3cffa5) is 8.0M, max 640.1M, 632.1M free. Dec 13 02:24:42.581710 systemd-modules-load[267]: Inserted module 'overlay' Dec 13 02:24:42.587000 audit: BPF prog-id=6 op=LOAD Dec 13 02:24:42.606330 kernel: audit: type=1334 audit(1734056682.587:2): prog-id=6 op=LOAD Dec 13 02:24:42.606380 systemd[1]: Starting systemd-resolved.service... Dec 13 02:24:42.656334 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Dec 13 02:24:42.656350 systemd[1]: Starting systemd-vconsole-setup.service... Dec 13 02:24:42.689326 kernel: Bridge firewalling registered Dec 13 02:24:42.689341 systemd[1]: Started systemd-journald.service. Dec 13 02:24:42.703698 systemd-modules-load[267]: Inserted module 'br_netfilter' Dec 13 02:24:42.751622 kernel: audit: type=1130 audit(1734056682.710:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:24:42.710000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 02:24:42.706238 systemd-resolved[269]: Positive Trust Anchors: Dec 13 02:24:42.808592 kernel: SCSI subsystem initialized Dec 13 02:24:42.808613 kernel: audit: type=1130 audit(1734056682.762:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:24:42.762000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:24:42.706243 systemd-resolved[269]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 13 02:24:42.932377 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Dec 13 02:24:42.932390 kernel: audit: type=1130 audit(1734056682.833:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:24:42.932398 kernel: device-mapper: uevent: version 1.0.3 Dec 13 02:24:42.932405 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com Dec 13 02:24:42.833000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:24:42.706268 systemd-resolved[269]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Dec 13 02:24:43.005559 kernel: audit: type=1130 audit(1734056682.939:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:24:42.939000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:24:42.707834 systemd-resolved[269]: Defaulting to hostname 'linux'. Dec 13 02:24:43.013000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:24:42.711536 systemd[1]: Started systemd-resolved.service. Dec 13 02:24:43.114806 kernel: audit: type=1130 audit(1734056683.013:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:24:43.114830 kernel: audit: type=1130 audit(1734056683.067:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:24:43.067000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Dec 13 02:24:42.763479 systemd[1]: Finished kmod-static-nodes.service. Dec 13 02:24:42.834458 systemd[1]: Finished systemd-fsck-usr.service. Dec 13 02:24:42.933206 systemd-modules-load[267]: Inserted module 'dm_multipath' Dec 13 02:24:42.940616 systemd[1]: Finished systemd-modules-load.service. Dec 13 02:24:43.014688 systemd[1]: Finished systemd-vconsole-setup.service. Dec 13 02:24:43.068607 systemd[1]: Reached target nss-lookup.target. Dec 13 02:24:43.124014 systemd[1]: Starting dracut-cmdline-ask.service... Dec 13 02:24:43.144176 systemd[1]: Starting systemd-sysctl.service... Dec 13 02:24:43.144598 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Dec 13 02:24:43.147534 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Dec 13 02:24:43.146000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:24:43.148190 systemd[1]: Finished systemd-sysctl.service. Dec 13 02:24:43.197523 kernel: audit: type=1130 audit(1734056683.146:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:24:43.208000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:24:43.209660 systemd[1]: Finished dracut-cmdline-ask.service. Dec 13 02:24:43.276326 kernel: audit: type=1130 audit(1734056683.208:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:24:43.266000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:24:43.267911 systemd[1]: Starting dracut-cmdline.service... Dec 13 02:24:43.290402 dracut-cmdline[293]: dracut-dracut-053 Dec 13 02:24:43.290402 dracut-cmdline[293]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LA Dec 13 02:24:43.290402 dracut-cmdline[293]: BEL=ROOT console=tty0 console=ttyS1,115200n8 flatcar.first_boot=detected flatcar.oem.id=packet flatcar.autologin verity.usrhash=66bd2580285375a2ba5b0e34ba63606314bcd90aaed1de1996371bdcb032485c Dec 13 02:24:43.361333 kernel: Loading iSCSI transport class v2.0-870. Dec 13 02:24:43.361347 kernel: iscsi: registered transport (tcp) Dec 13 02:24:43.415155 kernel: iscsi: registered transport (qla4xxx) Dec 13 02:24:43.415201 kernel: QLogic iSCSI HBA Driver Dec 13 02:24:43.431287 systemd[1]: Finished dracut-cmdline.service. Dec 13 02:24:43.430000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:24:43.431815 systemd[1]: Starting dracut-pre-udev.service... 
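
The dracut-cmdline[293] lines above echo the kernel command line that the initrd acts on: a mix of bare flags (flatcar.autologin) and key=value settings (mount.usr=, verity.usrhash=, root=LABEL=ROOT). As a minimal illustration of how such a string decomposes, here is a Python sketch; the whitespace-splitting and last-value-wins rules are assumptions for illustration, not dracut's actual parser, which also handles quoting and repeated keys.

    # Minimal sketch: split a kernel command line into bare flags and
    # key=value pairs. Illustrative only; dracut's real parser differs.
    def parse_cmdline(cmdline: str) -> dict:
        params = {}
        for token in cmdline.split():
            key, sep, value = token.partition("=")
            # Bare tokens such as "flatcar.autologin" become boolean flags.
            params[key] = value if sep else True
        return params

    line = ("BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr "
            "mount.usrflags=ro rootflags=rw root=LABEL=ROOT flatcar.autologin")
    parsed = parse_cmdline(line)
    print(parsed["root"])               # LABEL=ROOT
    print(parsed["flatcar.autologin"])  # True
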
Dec 13 02:24:43.487295 kernel: raid6: avx2x4 gen() 49964 MB/s Dec 13 02:24:43.522295 kernel: raid6: avx2x4 xor() 21769 MB/s Dec 13 02:24:43.557295 kernel: raid6: avx2x2 gen() 54880 MB/s Dec 13 02:24:43.592357 kernel: raid6: avx2x2 xor() 32815 MB/s Dec 13 02:24:43.627357 kernel: raid6: avx2x1 gen() 46212 MB/s Dec 13 02:24:43.662354 kernel: raid6: avx2x1 xor() 28522 MB/s Dec 13 02:24:43.695294 kernel: raid6: sse2x4 gen() 21829 MB/s Dec 13 02:24:43.729322 kernel: raid6: sse2x4 xor() 11988 MB/s Dec 13 02:24:43.763325 kernel: raid6: sse2x2 gen() 22105 MB/s Dec 13 02:24:43.797357 kernel: raid6: sse2x2 xor() 13737 MB/s Dec 13 02:24:43.831359 kernel: raid6: sse2x1 gen() 18704 MB/s Dec 13 02:24:43.883269 kernel: raid6: sse2x1 xor() 9120 MB/s Dec 13 02:24:43.883284 kernel: raid6: using algorithm avx2x2 gen() 54880 MB/s Dec 13 02:24:43.883295 kernel: raid6: .... xor() 32815 MB/s, rmw enabled Dec 13 02:24:43.901499 kernel: raid6: using avx2x2 recovery algorithm Dec 13 02:24:43.948296 kernel: xor: automatically using best checksumming function avx Dec 13 02:24:44.026324 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no Dec 13 02:24:44.031303 systemd[1]: Finished dracut-pre-udev.service. Dec 13 02:24:44.038000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:24:44.038000 audit: BPF prog-id=7 op=LOAD Dec 13 02:24:44.038000 audit: BPF prog-id=8 op=LOAD Dec 13 02:24:44.040354 systemd[1]: Starting systemd-udevd.service... Dec 13 02:24:44.049028 systemd-udevd[471]: Using default interface naming scheme 'v252'. Dec 13 02:24:44.067000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:24:44.053603 systemd[1]: Started systemd-udevd.service. Dec 13 02:24:44.093414 dracut-pre-trigger[482]: rd.md=0: removing MD RAID activation Dec 13 02:24:44.069005 systemd[1]: Starting dracut-pre-trigger.service... Dec 13 02:24:44.109000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:24:44.095561 systemd[1]: Finished dracut-pre-trigger.service. Dec 13 02:24:44.111447 systemd[1]: Starting systemd-udev-trigger.service... Dec 13 02:24:44.191209 systemd[1]: Finished systemd-udev-trigger.service. Dec 13 02:24:44.189000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:24:44.218302 kernel: cryptd: max_cpu_qlen set to 1000 Dec 13 02:24:44.224302 kernel: libata version 3.00 loaded. Dec 13 02:24:44.244304 kernel: AVX2 version of gcm_enc/dec engaged. Dec 13 02:24:44.244348 kernel: AES CTR mode by8 optimization enabled Dec 13 02:24:44.244365 kernel: ACPI: bus type USB registered Dec 13 02:24:44.278303 kernel: igb: Intel(R) Gigabit Ethernet Network Driver Dec 13 02:24:44.313699 kernel: usbcore: registered new interface driver usbfs Dec 13 02:24:44.313720 kernel: igb: Copyright (c) 2007-2014 Intel Corporation. 
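
The raid6 benchmark block above is the kernel timing every available gen()/xor() implementation and keeping the fastest, which is why it settles on "avx2x2 gen() 54880 MB/s". The selection reduces to a max over measured throughput; a small sketch over the numbers printed in this log (the dict form is purely illustrative):

    # Pick the fastest raid6 gen() implementation, as the kernel reports
    # above. Throughput numbers are copied from this log.
    gen_mb_s = {
        "avx2x4": 49964, "avx2x2": 54880, "avx2x1": 46212,
        "sse2x4": 21829, "sse2x2": 22105, "sse2x1": 18704,
    }
    best = max(gen_mb_s, key=gen_mb_s.get)
    print(f"raid6: using algorithm {best} gen() {gen_mb_s[best]} MB/s")
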
Dec 13 02:24:44.313730 kernel: usbcore: registered new interface driver hub Dec 13 02:24:44.365352 kernel: usbcore: registered new device driver usb Dec 13 02:24:44.368296 kernel: ahci 0000:00:17.0: version 3.0 Dec 13 02:24:44.397548 kernel: mlx5_core 0000:01:00.0: firmware version: 14.27.1016 Dec 13 02:24:45.078487 kernel: pps pps0: new PPS source ptp0 Dec 13 02:24:45.078567 kernel: ahci 0000:00:17.0: AHCI 0001.0301 32 slots 7 ports 6 Gbps 0x7f impl SATA mode Dec 13 02:24:45.078698 kernel: ahci 0000:00:17.0: flags: 64bit ncq sntf clo only pio slum part ems deso sadm sds apst Dec 13 02:24:45.078919 kernel: mlx5_core 0000:01:00.0: 63.008 Gb/s available PCIe bandwidth (8.0 GT/s PCIe x8 link) Dec 13 02:24:45.078981 kernel: scsi host0: ahci Dec 13 02:24:45.079052 kernel: scsi host1: ahci Dec 13 02:24:45.079116 kernel: scsi host2: ahci Dec 13 02:24:45.079178 kernel: scsi host3: ahci Dec 13 02:24:45.079236 kernel: scsi host4: ahci Dec 13 02:24:45.079299 kernel: scsi host5: ahci Dec 13 02:24:45.079366 kernel: scsi host6: ahci Dec 13 02:24:45.079428 kernel: ata1: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516100 irq 132 Dec 13 02:24:45.079438 kernel: ata2: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516180 irq 132 Dec 13 02:24:45.079446 kernel: ata3: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516200 irq 132 Dec 13 02:24:45.079454 kernel: ata4: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516280 irq 132 Dec 13 02:24:45.079462 kernel: ata5: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516300 irq 132 Dec 13 02:24:45.079469 kernel: ata6: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516380 irq 132 Dec 13 02:24:45.079477 kernel: ata7: SATA max UDMA/133 abar m2048@0x95516000 port 0x95516400 irq 132 Dec 13 02:24:45.079485 kernel: igb 0000:03:00.0: added PHC on eth0 Dec 13 02:24:45.079549 kernel: igb 0000:03:00.0: Intel(R) Gigabit Ethernet Network Connection Dec 13 02:24:45.079610 kernel: igb 0000:03:00.0: eth0: (PCIe:2.5Gb/s:Width x1) 3c:ec:ef:6a:f0:5e Dec 13 02:24:45.079670 kernel: mlx5_core 0000:01:00.0: E-Switch: Total vports 10, per vport: max uc(1024) max mc(16384) Dec 13 02:24:45.079730 kernel: igb 0000:03:00.0: eth0: PBA No: 010000-000 Dec 13 02:24:45.079789 kernel: ata4: SATA link down (SStatus 0 SControl 300) Dec 13 02:24:45.079798 kernel: igb 0000:03:00.0: Using MSI-X interrupts. 4 rx queue(s), 4 tx queue(s) Dec 13 02:24:45.079858 kernel: mlx5_core 0000:01:00.0: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0) Dec 13 02:24:45.079917 kernel: ata1: SATA link up 6.0 Gbps (SStatus 133 SControl 300) Dec 13 02:24:45.079926 kernel: pps pps1: new PPS source ptp2 Dec 13 02:24:45.079987 kernel: ata6: SATA link down (SStatus 0 SControl 300) Dec 13 02:24:45.079996 kernel: igb 0000:04:00.0: added PHC on eth1 Dec 13 02:24:45.080058 kernel: ata3: SATA link down (SStatus 0 SControl 300) Dec 13 02:24:45.080066 kernel: igb 0000:04:00.0: Intel(R) Gigabit Ethernet Network Connection Dec 13 02:24:45.080125 kernel: ata7: SATA link down (SStatus 0 SControl 300) Dec 13 02:24:45.080134 kernel: igb 0000:04:00.0: eth1: (PCIe:2.5Gb/s:Width x1) 3c:ec:ef:6a:f0:5f Dec 13 02:24:45.080192 kernel: ata1.00: ATA-11: Micron_5300_MTFDDAK480TDT, D3MU001, max UDMA/133 Dec 13 02:24:45.080202 kernel: igb 0000:04:00.0: eth1: PBA No: 010000-000 Dec 13 02:24:45.080261 kernel: ata2: SATA link up 6.0 Gbps (SStatus 133 SControl 300) Dec 13 02:24:45.080270 kernel: igb 0000:04:00.0: Using MSI-X interrupts. 
4 rx queue(s), 4 tx queue(s) Dec 13 02:24:45.080333 kernel: ata5: SATA link down (SStatus 0 SControl 300) Dec 13 02:24:45.080341 kernel: ata2.00: ATA-11: Micron_5300_MTFDDAK480TDT, D3MU001, max UDMA/133 Dec 13 02:24:45.080349 kernel: ata1.00: 937703088 sectors, multi 16: LBA48 NCQ (depth 32), AA Dec 13 02:24:45.080357 kernel: ata1.00: Features: NCQ-prio Dec 13 02:24:45.080365 kernel: ata2.00: 937703088 sectors, multi 16: LBA48 NCQ (depth 32), AA Dec 13 02:24:45.080372 kernel: ata2.00: Features: NCQ-prio Dec 13 02:24:45.080382 kernel: ata1.00: configured for UDMA/133 Dec 13 02:24:45.080390 kernel: ata2.00: configured for UDMA/133 Dec 13 02:24:45.080398 kernel: scsi 0:0:0:0: Direct-Access ATA Micron_5300_MTFD U001 PQ: 0 ANSI: 5 Dec 13 02:24:45.080467 kernel: mlx5_core 0000:01:00.0: Supported tc offload range - chains: 4294967294, prios: 4294967295 Dec 13 02:24:45.080528 kernel: scsi 1:0:0:0: Direct-Access ATA Micron_5300_MTFD U001 PQ: 0 ANSI: 5 Dec 13 02:24:45.080545 kernel: mlx5_core 0000:01:00.1: firmware version: 14.27.1016 Dec 13 02:24:45.750753 kernel: mlx5_core 0000:01:00.1: 63.008 Gb/s available PCIe bandwidth (8.0 GT/s PCIe x8 link) Dec 13 02:24:45.750824 kernel: xhci_hcd 0000:00:14.0: xHCI Host Controller Dec 13 02:24:45.750879 kernel: igb 0000:03:00.0 eno1: renamed from eth0 Dec 13 02:24:45.750933 kernel: xhci_hcd 0000:00:14.0: new USB bus registered, assigned bus number 1 Dec 13 02:24:45.750984 kernel: xhci_hcd 0000:00:14.0: hcc params 0x200077c1 hci version 0x110 quirks 0x0000000000009810 Dec 13 02:24:45.751033 kernel: xhci_hcd 0000:00:14.0: xHCI Host Controller Dec 13 02:24:45.751081 kernel: xhci_hcd 0000:00:14.0: new USB bus registered, assigned bus number 2 Dec 13 02:24:45.751129 kernel: xhci_hcd 0000:00:14.0: Host supports USB 3.1 Enhanced SuperSpeed Dec 13 02:24:45.751177 kernel: hub 1-0:1.0: USB hub found Dec 13 02:24:45.751241 kernel: hub 1-0:1.0: 16 ports detected Dec 13 02:24:45.751298 kernel: hub 2-0:1.0: USB hub found Dec 13 02:24:45.751359 kernel: hub 2-0:1.0: 10 ports detected Dec 13 02:24:45.751412 kernel: ata1.00: Enabling discard_zeroes_data Dec 13 02:24:45.751420 kernel: ata2.00: Enabling discard_zeroes_data Dec 13 02:24:45.751426 kernel: sd 0:0:0:0: [sda] 937703088 512-byte logical blocks: (480 GB/447 GiB) Dec 13 02:24:45.783785 kernel: sd 1:0:0:0: [sdb] 937703088 512-byte logical blocks: (480 GB/447 GiB) Dec 13 02:24:45.783875 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks Dec 13 02:24:45.783963 kernel: sd 0:0:0:0: [sda] Write Protect is off Dec 13 02:24:45.784052 kernel: igb 0000:04:00.0 eno2: renamed from eth1 Dec 13 02:24:45.784135 kernel: sd 1:0:0:0: [sdb] 4096-byte physical blocks Dec 13 02:24:45.784227 kernel: sd 0:0:0:0: [sda] Mode Sense: 00 3a 00 00 Dec 13 02:24:45.784287 kernel: sd 1:0:0:0: [sdb] Write Protect is off Dec 13 02:24:45.784353 kernel: mlx5_core 0000:01:00.1: E-Switch: Total vports 10, per vport: max uc(1024) max mc(16384) Dec 13 02:24:45.784413 kernel: port_module: 9 callbacks suppressed Dec 13 02:24:45.784421 kernel: mlx5_core 0000:01:00.1: Port module event: module 1, Cable plugged Dec 13 02:24:45.784475 kernel: sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Dec 13 02:24:45.784555 kernel: sd 1:0:0:0: [sdb] Mode Sense: 00 3a 00 00 Dec 13 02:24:45.784628 kernel: ata1.00: Enabling discard_zeroes_data Dec 13 02:24:45.784637 kernel: sd 1:0:0:0: [sdb] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Dec 13 02:24:45.784713 kernel: GPT:Primary header thinks Alt. 
header is not at the end of the disk. Dec 13 02:24:45.784721 kernel: ata2.00: Enabling discard_zeroes_data Dec 13 02:24:45.784729 kernel: mlx5_core 0000:01:00.1: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0) Dec 13 02:24:45.784784 kernel: GPT:9289727 != 937703087 Dec 13 02:24:45.784792 kernel: usb 1-14: new high-speed USB device number 2 using xhci_hcd Dec 13 02:24:46.794275 kernel: GPT:Alternate GPT header not at the end of the disk. Dec 13 02:24:46.794355 kernel: ata2.00: Enabling discard_zeroes_data Dec 13 02:24:46.794393 kernel: sd 1:0:0:0: [sdb] Attached SCSI disk Dec 13 02:24:46.794764 kernel: GPT:9289727 != 937703087 Dec 13 02:24:46.794817 kernel: GPT: Use GNU Parted to correct GPT errors. Dec 13 02:24:46.794851 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Dec 13 02:24:46.794885 kernel: ata1.00: Enabling discard_zeroes_data Dec 13 02:24:46.794918 kernel: mlx5_core 0000:01:00.1: Supported tc offload range - chains: 4294967294, prios: 4294967295 Dec 13 02:24:46.795218 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Dec 13 02:24:46.795533 kernel: hub 1-14:1.0: USB hub found Dec 13 02:24:46.795933 kernel: hub 1-14:1.0: 4 ports detected Dec 13 02:24:46.796238 kernel: mlx5_core 0000:01:00.1 enp1s0f1np1: renamed from eth0 Dec 13 02:24:46.796547 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by (udev-worker) (524) Dec 13 02:24:46.796589 kernel: mlx5_core 0000:01:00.0 enp1s0f0np0: renamed from eth2 Dec 13 02:24:46.796852 kernel: ata1.00: Enabling discard_zeroes_data Dec 13 02:24:46.796890 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Dec 13 02:24:46.796926 kernel: ata1.00: Enabling discard_zeroes_data Dec 13 02:24:46.796975 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Dec 13 02:24:46.797033 kernel: ata1.00: Enabling discard_zeroes_data Dec 13 02:24:46.797086 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Dec 13 02:24:45.812666 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. Dec 13 02:24:45.840616 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. Dec 13 02:24:45.866002 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. Dec 13 02:24:46.830507 disk-uuid[687]: Primary Header is updated. Dec 13 02:24:46.830507 disk-uuid[687]: Secondary Entries is updated. Dec 13 02:24:46.830507 disk-uuid[687]: Secondary Header is updated. Dec 13 02:24:45.878265 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. Dec 13 02:24:45.891394 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Dec 13 02:24:45.902406 systemd[1]: Starting disk-uuid.service... Dec 13 02:24:46.985774 kernel: ata1.00: Enabling discard_zeroes_data Dec 13 02:24:47.002839 disk-uuid[688]: The operation has completed successfully. Dec 13 02:24:47.011395 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Dec 13 02:24:47.036983 systemd[1]: disk-uuid.service: Deactivated successfully. Dec 13 02:24:47.037047 systemd[1]: Finished disk-uuid.service. Dec 13 02:24:47.152847 kernel: audit: type=1130 audit(1734056687.051:19): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:24:47.152864 kernel: usb 1-14.1: new low-speed USB device number 3 using xhci_hcd Dec 13 02:24:47.152892 kernel: audit: type=1131 audit(1734056687.051:20): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 02:24:47.051000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:24:47.051000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:24:47.060112 systemd[1]: Starting verity-setup.service... Dec 13 02:24:47.180401 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Dec 13 02:24:47.210418 systemd[1]: Found device dev-mapper-usr.device. Dec 13 02:24:47.211353 kernel: hid: raw HID events driver (C) Jiri Kosina Dec 13 02:24:47.231139 systemd[1]: Mounting sysusr-usr.mount... Dec 13 02:24:47.247252 kernel: usbcore: registered new interface driver usbhid Dec 13 02:24:47.247264 kernel: usbhid: USB HID core driver Dec 13 02:24:47.270539 systemd[1]: Finished verity-setup.service. Dec 13 02:24:47.317408 kernel: input: HID 0557:2419 as /devices/pci0000:00/0000:00:14.0/usb1/1-14/1-14.1/1-14.1:1.0/0003:0557:2419.0001/input/input0 Dec 13 02:24:47.317423 kernel: audit: type=1130 audit(1734056687.284:21): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:24:47.284000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:24:47.414048 kernel: hid-generic 0003:0557:2419.0001: input,hidraw0: USB HID v1.00 Keyboard [HID 0557:2419] on usb-0000:00:14.0-14.1/input0 Dec 13 02:24:47.414227 kernel: input: HID 0557:2419 as /devices/pci0000:00/0000:00:14.0/usb1/1-14/1-14.1/1-14.1:1.1/0003:0557:2419.0002/input/input1 Dec 13 02:24:47.414236 kernel: hid-generic 0003:0557:2419.0002: input,hidraw1: USB HID v1.00 Mouse [HID 0557:2419] on usb-0000:00:14.0-14.1/input1 Dec 13 02:24:47.482295 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. Dec 13 02:24:47.482469 systemd[1]: Mounted sysusr-usr.mount. Dec 13 02:24:47.482637 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. Dec 13 02:24:47.585355 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Dec 13 02:24:47.585370 kernel: BTRFS info (device sda6): using free space tree Dec 13 02:24:47.585377 kernel: BTRFS info (device sda6): has skinny extents Dec 13 02:24:47.585384 kernel: BTRFS info (device sda6): enabling ssd optimizations Dec 13 02:24:47.483036 systemd[1]: Starting ignition-setup.service... Dec 13 02:24:47.573999 systemd[1]: Starting parse-ip-for-networkd.service... Dec 13 02:24:47.607000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:24:47.658320 kernel: audit: type=1130 audit(1734056687.607:22): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:24:47.593659 systemd[1]: Finished ignition-setup.service. Dec 13 02:24:47.608886 systemd[1]: Starting ignition-fetch-offline.service... 
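
The verity-setup step above binds the read-only /usr partition to the root hash passed as verity.usrhash= on the kernel command line; device-mapper then verifies every block read against a precomputed hash tree (here using the sha256-avx2 implementation). Below is a one-level sketch of the per-block check, with an assumed 4 KiB block; real dm-verity verifies a full Merkle tree up to the root hash:

    # Conceptual sketch of the per-block dm-verity check: hash the block
    # that was read and compare against the expected value. Illustration
    # only; dm-verity walks a Merkle tree rooted at verity.usrhash=.
    import hashlib

    def verify_block(block: bytes, expected_hex: str) -> bool:
        return hashlib.sha256(block).hexdigest() == expected_hex

    block = b"\x00" * 4096  # one 4 KiB data block (contents assumed)
    expected = hashlib.sha256(block).hexdigest()
    assert verify_block(block, expected)
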
Dec 13 02:24:47.681000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:24:47.666616 systemd[1]: Finished parse-ip-for-networkd.service. Dec 13 02:24:47.754728 kernel: audit: type=1130 audit(1734056687.681:23): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:24:47.754746 kernel: audit: type=1334 audit(1734056687.730:24): prog-id=9 op=LOAD Dec 13 02:24:47.730000 audit: BPF prog-id=9 op=LOAD Dec 13 02:24:47.734528 ignition[865]: Ignition 2.14.0 Dec 13 02:24:47.732266 systemd[1]: Starting systemd-networkd.service... Dec 13 02:24:47.834514 kernel: audit: type=1130 audit(1734056687.773:25): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:24:47.773000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:24:47.734533 ignition[865]: Stage: fetch-offline Dec 13 02:24:47.842000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:24:47.747888 unknown[865]: fetched base config from "system" Dec 13 02:24:47.910469 kernel: audit: type=1130 audit(1734056687.842:26): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:24:47.734557 ignition[865]: reading system config file "/usr/lib/ignition/base.d/base.ign" Dec 13 02:24:47.747892 unknown[865]: fetched user config from "system" Dec 13 02:24:47.971242 kernel: mlx5_core 0000:01:00.1 enp1s0f1np1: Link up Dec 13 02:24:47.971330 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): enp1s0f1np1: link becomes ready Dec 13 02:24:47.734571 ignition[865]: parsing config with SHA512: 0131bd505bfe1b1215ca4ec9809701a3323bf448114294874f7249d8d300440bd742a7532f60673bfa0746c04de0bd5ca68d0fe9a8ecd59464b13a6401323cb4 Dec 13 02:24:47.761493 systemd[1]: Finished ignition-fetch-offline.service. Dec 13 02:24:47.982000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:24:47.737120 ignition[865]: no config dir at "/usr/lib/ignition/base.platform.d/packet" Dec 13 02:24:48.054384 kernel: audit: type=1130 audit(1734056687.982:27): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:24:47.768421 systemd-networkd[881]: lo: Link UP Dec 13 02:24:48.060000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 02:24:47.737184 ignition[865]: parsed url from cmdline: "" Dec 13 02:24:48.068674 iscsid[901]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Dec 13 02:24:48.068674 iscsid[901]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log Dec 13 02:24:48.068674 iscsid[901]: into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a string with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier]. Dec 13 02:24:48.068674 iscsid[901]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Dec 13 02:24:48.068674 iscsid[901]: If using hardware iscsi like qla4xxx this message can be ignored. Dec 13 02:24:48.068674 iscsid[901]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Dec 13 02:24:48.068674 iscsid[901]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Dec 13 02:24:48.254485 kernel: mlx5_core 0000:01:00.0 enp1s0f0np0: Link up Dec 13 02:24:48.093000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:24:48.242000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:24:47.768423 systemd-networkd[881]: lo: Gained carrier Dec 13 02:24:47.737186 ignition[865]: no config URL provided Dec 13 02:24:47.768706 systemd-networkd[881]: Enumeration completed Dec 13 02:24:47.737189 ignition[865]: reading system config file "/usr/lib/ignition/user.ign" Dec 13 02:24:47.769490 systemd-networkd[881]: enp1s0f1np1: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 13 02:24:47.737210 ignition[865]: parsing config with SHA512: 44436f1daf06e6651e122320232f26eaed6382a5158b2e65b0f1eb421e7fad911ae932c62aca1abdda0f38b11e493c8ee830873d20dd60da960efca423ab7830 Dec 13 02:24:47.774554 systemd[1]: Started systemd-networkd.service. Dec 13 02:24:47.748194 ignition[865]: fetch-offline: fetch-offline passed Dec 13 02:24:47.843542 systemd[1]: Reached target network.target. Dec 13 02:24:47.748197 ignition[865]: POST message to Packet Timeline Dec 13 02:24:47.903506 systemd[1]: ignition-fetch.service was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Dec 13 02:24:47.748202 ignition[865]: POST Status error: resource requires networking Dec 13 02:24:47.903972 systemd[1]: Starting ignition-kargs.service... Dec 13 02:24:47.748236 ignition[865]: Ignition finished successfully Dec 13 02:24:47.917862 systemd[1]: Starting iscsiuio.service... Dec 13 02:24:47.908569 ignition[887]: Ignition 2.14.0 Dec 13 02:24:47.947761 systemd-networkd[881]: enp1s0f0np0: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 13 02:24:47.908573 ignition[887]: Stage: kargs Dec 13 02:24:47.960508 systemd[1]: Started iscsiuio.service. Dec 13 02:24:47.908630 ignition[887]: reading system config file "/usr/lib/ignition/base.d/base.ign" Dec 13 02:24:47.984086 systemd[1]: Starting iscsid.service...
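
The iscsid warnings above are benign on this box (no software iSCSI is in use), but the fix iscsid asks for is a one-line file holding an IQN. Here is a sketch that generates one; the date, reversed domain, and identifier are placeholder assumptions, and writing to /etc requires root:

    # Sketch: create the /etc/iscsi/initiatorname.iscsi file iscsid asks
    # for above. IQN fields are placeholders; run as root.
    import uuid
    from pathlib import Path

    iqn = f"iqn.2024-12.net.example:{uuid.uuid4().hex[:12]}"
    Path("/etc/iscsi").mkdir(parents=True, exist_ok=True)
    Path("/etc/iscsi/initiatorname.iscsi").write_text(f"InitiatorName={iqn}\n")
    # As the log notes, only software iSCSI (iscsi_tcp, ib_iser) or partial
    # offload needs this; hardware HBAs such as qla4xxx can ignore it.
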
Dec 13 02:24:47.908640 ignition[887]: parsing config with SHA512: 0131bd505bfe1b1215ca4ec9809701a3323bf448114294874f7249d8d300440bd742a7532f60673bfa0746c04de0bd5ca68d0fe9a8ecd59464b13a6401323cb4 Dec 13 02:24:48.044487 systemd[1]: Started iscsid.service. Dec 13 02:24:47.909941 ignition[887]: no config dir at "/usr/lib/ignition/base.platform.d/packet" Dec 13 02:24:48.062036 systemd[1]: Starting dracut-initqueue.service... Dec 13 02:24:47.911342 ignition[887]: kargs: kargs passed Dec 13 02:24:48.075679 systemd[1]: Finished dracut-initqueue.service. Dec 13 02:24:47.911345 ignition[887]: POST message to Packet Timeline Dec 13 02:24:48.094625 systemd[1]: Reached target remote-fs-pre.target. Dec 13 02:24:47.911359 ignition[887]: GET https://metadata.packet.net/metadata: attempt #1 Dec 13 02:24:48.120576 systemd[1]: Reached target remote-cryptsetup.target. Dec 13 02:24:47.914229 ignition[887]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:42125->[::1]:53: read: connection refused Dec 13 02:24:48.159534 systemd[1]: Reached target remote-fs.target. Dec 13 02:24:48.114748 ignition[887]: GET https://metadata.packet.net/metadata: attempt #2 Dec 13 02:24:48.191177 systemd-networkd[881]: eno2: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 13 02:24:48.115436 ignition[887]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:58332->[::1]:53: read: connection refused Dec 13 02:24:48.205236 systemd[1]: Starting dracut-pre-mount.service... Dec 13 02:24:48.516221 ignition[887]: GET https://metadata.packet.net/metadata: attempt #3 Dec 13 02:24:48.219628 systemd-networkd[881]: eno1: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 13 02:24:48.517655 ignition[887]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:38558->[::1]:53: read: connection refused Dec 13 02:24:48.222664 systemd[1]: Finished dracut-pre-mount.service. 
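
The kargs stage above keeps re-requesting https://metadata.packet.net/metadata while DNS still points at the stub resolver on [::1]:53, backing off between attempts until networkd brings a link up. The same retry shape in Python follows; the attempt count and backoff constants are assumptions, not Ignition's actual schedule:

    # Sketch of the retry loop visible above: re-fetch the Packet metadata
    # endpoint until the network is up. Backoff constants are assumed.
    import time
    import urllib.request

    def fetch_metadata(url="https://metadata.packet.net/metadata",
                       max_attempts=6, base_delay=1.0):
        for attempt in range(1, max_attempts + 1):
            try:
                with urllib.request.urlopen(url, timeout=10) as resp:
                    return resp.read()
            except OSError as err:  # DNS and socket errors land here
                print(f"GET attempt #{attempt} failed: {err}")
                time.sleep(min(base_delay * 2 ** attempt, 30.0))
        raise RuntimeError("metadata endpoint unreachable")
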
Dec 13 02:24:48.247944 systemd-networkd[881]: enp1s0f1np1: Link UP Dec 13 02:24:48.248120 systemd-networkd[881]: enp1s0f1np1: Gained carrier Dec 13 02:24:48.260709 systemd-networkd[881]: enp1s0f0np0: Link UP Dec 13 02:24:48.260972 systemd-networkd[881]: eno2: Link UP Dec 13 02:24:48.261213 systemd-networkd[881]: eno1: Link UP Dec 13 02:24:48.993047 systemd-networkd[881]: enp1s0f0np0: Gained carrier Dec 13 02:24:49.001546 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): enp1s0f0np0: link becomes ready Dec 13 02:24:49.016544 systemd-networkd[881]: enp1s0f0np0: DHCPv4 address 139.178.70.53/31, gateway 139.178.70.52 acquired from 145.40.83.140 Dec 13 02:24:49.317912 ignition[887]: GET https://metadata.packet.net/metadata: attempt #4 Dec 13 02:24:49.319054 ignition[887]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:47613->[::1]:53: read: connection refused Dec 13 02:24:49.929890 systemd-networkd[881]: enp1s0f1np1: Gained IPv6LL Dec 13 02:24:50.505616 systemd-networkd[881]: enp1s0f0np0: Gained IPv6LL Dec 13 02:24:50.920740 ignition[887]: GET https://metadata.packet.net/metadata: attempt #5 Dec 13 02:24:50.922064 ignition[887]: GET error: Get "https://metadata.packet.net/metadata": dial tcp: lookup metadata.packet.net on [::1]:53: read udp [::1]:34443->[::1]:53: read: connection refused Dec 13 02:24:54.125280 ignition[887]: GET https://metadata.packet.net/metadata: attempt #6 Dec 13 02:24:55.770191 ignition[887]: GET result: OK Dec 13 02:24:56.114453 ignition[887]: Ignition finished successfully Dec 13 02:24:56.117010 systemd[1]: Finished ignition-kargs.service. Dec 13 02:24:56.208464 kernel: kauditd_printk_skb: 3 callbacks suppressed Dec 13 02:24:56.208495 kernel: audit: type=1130 audit(1734056696.129:31): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:24:56.129000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:24:56.139332 ignition[918]: Ignition 2.14.0 Dec 13 02:24:56.132682 systemd[1]: Starting ignition-disks.service... Dec 13 02:24:56.139336 ignition[918]: Stage: disks Dec 13 02:24:56.139394 ignition[918]: reading system config file "/usr/lib/ignition/base.d/base.ign" Dec 13 02:24:56.139404 ignition[918]: parsing config with SHA512: 0131bd505bfe1b1215ca4ec9809701a3323bf448114294874f7249d8d300440bd742a7532f60673bfa0746c04de0bd5ca68d0fe9a8ecd59464b13a6401323cb4 Dec 13 02:24:56.141687 ignition[918]: no config dir at "/usr/lib/ignition/base.platform.d/packet" Dec 13 02:24:56.142308 ignition[918]: disks: disks passed Dec 13 02:24:56.142311 ignition[918]: POST message to Packet Timeline Dec 13 02:24:56.142321 ignition[918]: GET https://metadata.packet.net/metadata: attempt #1 Dec 13 02:24:56.829186 ignition[918]: GET result: OK Dec 13 02:24:57.147351 ignition[918]: Ignition finished successfully Dec 13 02:24:57.148480 systemd[1]: Finished ignition-disks.service. Dec 13 02:24:57.161000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:24:57.162759 systemd[1]: Reached target initrd-root-device.target. 
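
The DHCPv4 lease above, 139.178.70.53/31 with gateway 139.178.70.52, is an RFC 3021 point-to-point /31: both addresses are usable hosts and there is no separate network or broadcast address. Python's ipaddress module shows the arithmetic directly:

    # The /31 lease above: exactly two usable addresses, no broadcast.
    import ipaddress

    iface = ipaddress.ip_interface("139.178.70.53/31")
    net = iface.network
    print(list(net.hosts()))  # [IPv4Address('139.178.70.52'), IPv4Address('139.178.70.53')]
    print(net.num_addresses)  # 2
    # The other half of the pair, 139.178.70.52, is the gateway, exactly
    # as the lease in the log reports.
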
Dec 13 02:24:57.228589 kernel: audit: type=1130 audit(1734056697.161:32): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:24:57.228548 systemd[1]: Reached target local-fs-pre.target. Dec 13 02:24:57.242519 systemd[1]: Reached target local-fs.target. Dec 13 02:24:57.242627 systemd[1]: Reached target sysinit.target. Dec 13 02:24:57.269511 systemd[1]: Reached target basic.target. Dec 13 02:24:57.283325 systemd[1]: Starting systemd-fsck-root.service... Dec 13 02:24:57.312545 systemd-fsck[932]: ROOT: clean, 621/553520 files, 56021/553472 blocks Dec 13 02:24:57.324911 systemd[1]: Finished systemd-fsck-root.service. Dec 13 02:24:57.420312 kernel: audit: type=1130 audit(1734056697.332:33): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:24:57.420327 kernel: EXT4-fs (sda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. Dec 13 02:24:57.332000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:24:57.339413 systemd[1]: Mounting sysroot.mount... Dec 13 02:24:57.428021 systemd[1]: Mounted sysroot.mount. Dec 13 02:24:57.441652 systemd[1]: Reached target initrd-root-fs.target. Dec 13 02:24:57.463279 systemd[1]: Mounting sysroot-usr.mount... Dec 13 02:24:57.471227 systemd[1]: Starting flatcar-metadata-hostname.service... Dec 13 02:24:57.477946 systemd[1]: Starting flatcar-static-network.service... Dec 13 02:24:57.499419 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Dec 13 02:24:57.499534 systemd[1]: Reached target ignition-diskful.target. Dec 13 02:24:57.518705 systemd[1]: Mounted sysroot-usr.mount. Dec 13 02:24:57.541555 systemd[1]: Mounting sysroot-usr-share-oem.mount... Dec 13 02:24:57.609402 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/sda6 scanned by mount (945) Dec 13 02:24:57.609424 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Dec 13 02:24:57.554140 systemd[1]: Starting initrd-setup-root.service... Dec 13 02:24:57.690120 kernel: BTRFS info (device sda6): using free space tree Dec 13 02:24:57.690137 kernel: BTRFS info (device sda6): has skinny extents Dec 13 02:24:57.690145 kernel: BTRFS info (device sda6): enabling ssd optimizations Dec 13 02:24:57.690156 initrd-setup-root[950]: cut: /sysroot/etc/passwd: No such file or directory Dec 13 02:24:57.753579 kernel: audit: type=1130 audit(1734056697.697:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:24:57.697000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:24:57.753620 coreos-metadata[940]: Dec 13 02:24:57.619 INFO Fetching https://metadata.packet.net/metadata: Attempt #1 Dec 13 02:24:57.622345 systemd[1]: Finished initrd-setup-root.service. 
Dec 13 02:24:57.775616 coreos-metadata[939]: Dec 13 02:24:57.619 INFO Fetching https://metadata.packet.net/metadata: Attempt #1 Dec 13 02:24:57.800403 initrd-setup-root[958]: cut: /sysroot/etc/group: No such file or directory Dec 13 02:24:57.699600 systemd[1]: Mounted sysroot-usr-share-oem.mount. Dec 13 02:24:57.816536 initrd-setup-root[966]: cut: /sysroot/etc/shadow: No such file or directory Dec 13 02:24:57.829000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:24:57.762964 systemd[1]: Starting ignition-mount.service... Dec 13 02:24:57.902520 kernel: audit: type=1130 audit(1734056697.829:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:24:57.902536 initrd-setup-root[974]: cut: /sysroot/etc/gshadow: No such file or directory Dec 13 02:24:57.788950 systemd[1]: Starting sysroot-boot.service... Dec 13 02:24:57.920530 ignition[1016]: INFO : Ignition 2.14.0 Dec 13 02:24:57.920530 ignition[1016]: INFO : Stage: mount Dec 13 02:24:57.920530 ignition[1016]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Dec 13 02:24:57.920530 ignition[1016]: DEBUG : parsing config with SHA512: 0131bd505bfe1b1215ca4ec9809701a3323bf448114294874f7249d8d300440bd742a7532f60673bfa0746c04de0bd5ca68d0fe9a8ecd59464b13a6401323cb4 Dec 13 02:24:57.920530 ignition[1016]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet" Dec 13 02:24:57.920530 ignition[1016]: INFO : mount: mount passed Dec 13 02:24:57.920530 ignition[1016]: INFO : POST message to Packet Timeline Dec 13 02:24:57.920530 ignition[1016]: INFO : GET https://metadata.packet.net/metadata: attempt #1 Dec 13 02:24:57.808416 systemd[1]: sysusr-usr-share-oem.mount: Deactivated successfully. Dec 13 02:24:57.808472 systemd[1]: sysroot-usr-share-oem.mount: Deactivated successfully. Dec 13 02:24:57.809354 systemd[1]: Finished sysroot-boot.service. Dec 13 02:24:58.059796 coreos-metadata[940]: Dec 13 02:24:58.059 INFO Fetch successful Dec 13 02:24:58.138064 systemd[1]: flatcar-static-network.service: Deactivated successfully. Dec 13 02:24:58.138121 systemd[1]: Finished flatcar-static-network.service. Dec 13 02:24:58.155000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-static-network comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:24:58.155000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-static-network comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:24:58.271644 kernel: audit: type=1130 audit(1734056698.155:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-static-network comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:24:58.271667 kernel: audit: type=1131 audit(1734056698.155:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-static-network comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 02:24:58.593713 coreos-metadata[939]: Dec 13 02:24:58.593 INFO Fetch successful Dec 13 02:24:58.633749 coreos-metadata[939]: Dec 13 02:24:58.633 INFO wrote hostname ci-3510.3.6-a-cefcb26589 to /sysroot/etc/hostname Dec 13 02:24:58.634456 systemd[1]: Finished flatcar-metadata-hostname.service. Dec 13 02:24:58.654000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:24:58.714498 kernel: audit: type=1130 audit(1734056698.654:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:24:58.742185 ignition[1016]: INFO : GET result: OK Dec 13 02:24:59.135082 ignition[1016]: INFO : Ignition finished successfully Dec 13 02:24:59.137794 systemd[1]: Finished ignition-mount.service. Dec 13 02:24:59.151000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:24:59.154542 systemd[1]: Starting ignition-files.service... Dec 13 02:24:59.226537 kernel: audit: type=1130 audit(1734056699.151:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:24:59.220239 systemd[1]: Mounting sysroot-usr-share-oem.mount... Dec 13 02:24:59.275370 kernel: BTRFS: device label OEM devid 1 transid 17 /dev/sda6 scanned by mount (1031) Dec 13 02:24:59.275395 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Dec 13 02:24:59.308578 kernel: BTRFS info (device sda6): using free space tree Dec 13 02:24:59.308611 kernel: BTRFS info (device sda6): has skinny extents Dec 13 02:24:59.359294 kernel: BTRFS info (device sda6): enabling ssd optimizations Dec 13 02:24:59.360449 systemd[1]: Mounted sysroot-usr-share-oem.mount. 
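
Every Ignition stage in this log prints "parsing config with SHA512: ..."; that value is simply the digest of the raw config bytes the stage read, which is why base.ign (0131bd50...) and user.ign (44436f1d...) show different digests. Reproducing the digest for a config on disk is one hashlib call; the path below comes from the log and assumes a Flatcar host:

    # Recompute the "parsing config with SHA512: ..." value for a config
    # file. Run on a Flatcar host; path taken from the log above.
    import hashlib
    from pathlib import Path

    def config_digest(path="/usr/lib/ignition/base.d/base.ign"):
        return hashlib.sha512(Path(path).read_bytes()).hexdigest()

    # A matching digest confirms the stage parsed exactly these bytes.
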
Dec 13 02:24:59.376438 ignition[1050]: INFO : Ignition 2.14.0 Dec 13 02:24:59.376438 ignition[1050]: INFO : Stage: files Dec 13 02:24:59.376438 ignition[1050]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Dec 13 02:24:59.376438 ignition[1050]: DEBUG : parsing config with SHA512: 0131bd505bfe1b1215ca4ec9809701a3323bf448114294874f7249d8d300440bd742a7532f60673bfa0746c04de0bd5ca68d0fe9a8ecd59464b13a6401323cb4 Dec 13 02:24:59.376438 ignition[1050]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet" Dec 13 02:24:59.380300 unknown[1050]: wrote ssh authorized keys file for user: core Dec 13 02:24:59.440410 ignition[1050]: DEBUG : files: compiled without relabeling support, skipping Dec 13 02:24:59.440410 ignition[1050]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Dec 13 02:24:59.440410 ignition[1050]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Dec 13 02:24:59.440410 ignition[1050]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Dec 13 02:24:59.440410 ignition[1050]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Dec 13 02:24:59.440410 ignition[1050]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Dec 13 02:24:59.440410 ignition[1050]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1" Dec 13 02:24:59.440410 ignition[1050]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1" Dec 13 02:24:59.440410 ignition[1050]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Dec 13 02:24:59.440410 ignition[1050]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Dec 13 02:24:59.440410 ignition[1050]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Dec 13 02:24:59.584594 ignition[1050]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Dec 13 02:24:59.584594 ignition[1050]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Dec 13 02:24:59.584594 ignition[1050]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Dec 13 02:24:59.970191 ignition[1050]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET result: OK Dec 13 02:25:00.027900 ignition[1050]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Dec 13 02:25:00.027900 ignition[1050]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/install.sh" Dec 13 02:25:00.077601 kernel: BTRFS info: devid 1 device path /dev/sda6 changed to /dev/disk/by-label/OEM scanned by ignition (1058) Dec 13 02:25:00.077676 ignition[1050]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/install.sh" Dec 13 02:25:00.077676 ignition[1050]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nginx.yaml" Dec 13 02:25:00.077676 ignition[1050]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] 
writing file "/sysroot/home/core/nginx.yaml" Dec 13 02:25:00.077676 ignition[1050]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Dec 13 02:25:00.077676 ignition[1050]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Dec 13 02:25:00.077676 ignition[1050]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Dec 13 02:25:00.077676 ignition[1050]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Dec 13 02:25:00.077676 ignition[1050]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/etc/flatcar/update.conf" Dec 13 02:25:00.077676 ignition[1050]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/etc/flatcar/update.conf" Dec 13 02:25:00.077676 ignition[1050]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Dec 13 02:25:00.077676 ignition[1050]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Dec 13 02:25:00.077676 ignition[1050]: INFO : files: createFilesystemsFiles: createFiles: op(c): [started] writing file "/sysroot/etc/systemd/system/packet-phone-home.service" Dec 13 02:25:00.077676 ignition[1050]: INFO : files: createFilesystemsFiles: createFiles: op(c): oem config not found in "/usr/share/oem", looking on oem partition Dec 13 02:25:00.077676 ignition[1050]: INFO : files: createFilesystemsFiles: createFiles: op(c): op(d): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem13782105" Dec 13 02:25:00.077676 ignition[1050]: CRITICAL : files: createFilesystemsFiles: createFiles: op(c): op(d): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem13782105": device or resource busy Dec 13 02:25:00.339637 ignition[1050]: ERROR : files: createFilesystemsFiles: createFiles: op(c): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem13782105", trying btrfs: device or resource busy Dec 13 02:25:00.339637 ignition[1050]: INFO : files: createFilesystemsFiles: createFiles: op(c): op(e): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem13782105" Dec 13 02:25:00.339637 ignition[1050]: INFO : files: createFilesystemsFiles: createFiles: op(c): op(e): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem13782105" Dec 13 02:25:00.339637 ignition[1050]: INFO : files: createFilesystemsFiles: createFiles: op(c): op(f): [started] unmounting "/mnt/oem13782105" Dec 13 02:25:00.339637 ignition[1050]: INFO : files: createFilesystemsFiles: createFiles: op(c): op(f): [finished] unmounting "/mnt/oem13782105" Dec 13 02:25:00.339637 ignition[1050]: INFO : files: createFilesystemsFiles: createFiles: op(c): [finished] writing file "/sysroot/etc/systemd/system/packet-phone-home.service" Dec 13 02:25:00.339637 ignition[1050]: INFO : files: createFilesystemsFiles: createFiles: op(10): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Dec 13 02:25:00.339637 ignition[1050]: INFO : files: createFilesystemsFiles: createFiles: op(10): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.29.2-x86-64.raw: attempt #1 Dec 13 
02:25:00.498730 ignition[1050]: INFO : files: createFilesystemsFiles: createFiles: op(10): GET result: OK Dec 13 02:25:00.755675 ignition[1050]: INFO : files: createFilesystemsFiles: createFiles: op(10): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Dec 13 02:25:00.755675 ignition[1050]: INFO : files: op(11): [started] processing unit "coreos-metadata-sshkeys@.service" Dec 13 02:25:00.755675 ignition[1050]: INFO : files: op(11): [finished] processing unit "coreos-metadata-sshkeys@.service" Dec 13 02:25:00.755675 ignition[1050]: INFO : files: op(12): [started] processing unit "packet-phone-home.service" Dec 13 02:25:00.755675 ignition[1050]: INFO : files: op(12): [finished] processing unit "packet-phone-home.service" Dec 13 02:25:00.755675 ignition[1050]: INFO : files: op(13): [started] processing unit "containerd.service" Dec 13 02:25:00.838593 ignition[1050]: INFO : files: op(13): op(14): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Dec 13 02:25:00.838593 ignition[1050]: INFO : files: op(13): op(14): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Dec 13 02:25:00.838593 ignition[1050]: INFO : files: op(13): [finished] processing unit "containerd.service" Dec 13 02:25:00.838593 ignition[1050]: INFO : files: op(15): [started] processing unit "prepare-helm.service" Dec 13 02:25:00.838593 ignition[1050]: INFO : files: op(15): op(16): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Dec 13 02:25:00.838593 ignition[1050]: INFO : files: op(15): op(16): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Dec 13 02:25:00.838593 ignition[1050]: INFO : files: op(15): [finished] processing unit "prepare-helm.service" Dec 13 02:25:00.838593 ignition[1050]: INFO : files: op(17): [started] setting preset to enabled for "coreos-metadata-sshkeys@.service " Dec 13 02:25:00.838593 ignition[1050]: INFO : files: op(17): [finished] setting preset to enabled for "coreos-metadata-sshkeys@.service " Dec 13 02:25:00.838593 ignition[1050]: INFO : files: op(18): [started] setting preset to enabled for "packet-phone-home.service" Dec 13 02:25:00.838593 ignition[1050]: INFO : files: op(18): [finished] setting preset to enabled for "packet-phone-home.service" Dec 13 02:25:00.838593 ignition[1050]: INFO : files: op(19): [started] setting preset to enabled for "prepare-helm.service" Dec 13 02:25:00.838593 ignition[1050]: INFO : files: op(19): [finished] setting preset to enabled for "prepare-helm.service" Dec 13 02:25:00.838593 ignition[1050]: INFO : files: createResultFile: createFiles: op(1a): [started] writing file "/sysroot/etc/.ignition-result.json" Dec 13 02:25:00.838593 ignition[1050]: INFO : files: createResultFile: createFiles: op(1a): [finished] writing file "/sysroot/etc/.ignition-result.json" Dec 13 02:25:00.838593 ignition[1050]: INFO : files: files passed Dec 13 02:25:00.838593 ignition[1050]: INFO : POST message to Packet Timeline Dec 13 02:25:00.838593 ignition[1050]: INFO : GET https://metadata.packet.net/metadata: attempt #1 Dec 13 02:25:01.640634 ignition[1050]: INFO : GET result: OK Dec 13 02:25:01.983521 ignition[1050]: INFO : Ignition finished successfully Dec 13 02:25:01.986532 systemd[1]: Finished ignition-files.service. 
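
The kernel audit records interleaved throughout (type=1130 is SERVICE_START, type=1131 SERVICE_STOP) carry their own clock in the form audit(epoch.millis:serial). The epoch seconds map straight back to the wall-clock prefixes, e.g. 1734056702 is 2024-12-13 02:25:02 UTC. A small parsing sketch; the regex matches only the fields visible in this log, not the full audit grammar:

    # Pull type, timestamp, and serial out of an audit record like the
    # ones above. Regex covers only the fields shown in this log.
    import re
    from datetime import datetime, timezone

    record = "audit: type=1130 audit(1734056702.000:40): pid=1 uid=0 ..."
    m = re.search(r"type=(\d+) audit\((\d+)\.(\d+):(\d+)\)", record)
    if m:
        rec_type, secs, _ms, serial = m.groups()
        when = datetime.fromtimestamp(int(secs), tz=timezone.utc)
        print(rec_type, serial, when)  # 1130 40 2024-12-13 02:25:02+00:00
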
Dec 13 02:25:02.000000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:25:02.007829 systemd[1]: Starting initrd-setup-root-after-ignition.service... Dec 13 02:25:02.078573 kernel: audit: type=1130 audit(1734056702.000:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:25:02.068571 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). Dec 13 02:25:02.103512 initrd-setup-root-after-ignition[1085]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Dec 13 02:25:02.112000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:25:02.068948 systemd[1]: Starting ignition-quench.service... Dec 13 02:25:02.294649 kernel: audit: type=1130 audit(1734056702.112:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:25:02.294665 kernel: audit: type=1130 audit(1734056702.180:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:25:02.294672 kernel: audit: type=1131 audit(1734056702.180:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:25:02.180000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:25:02.180000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:25:02.085741 systemd[1]: Finished initrd-setup-root-after-ignition.service. Dec 13 02:25:02.113777 systemd[1]: ignition-quench.service: Deactivated successfully. Dec 13 02:25:02.113840 systemd[1]: Finished ignition-quench.service. Dec 13 02:25:02.450068 kernel: audit: type=1130 audit(1734056702.334:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:25:02.450080 kernel: audit: type=1131 audit(1734056702.334:45): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:25:02.334000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:25:02.334000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 02:25:02.181578 systemd[1]: Reached target ignition-complete.target. Dec 13 02:25:02.303978 systemd[1]: Starting initrd-parse-etc.service... Dec 13 02:25:02.317711 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Dec 13 02:25:02.555304 kernel: audit: type=1130 audit(1734056702.496:46): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:25:02.496000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:25:02.317752 systemd[1]: Finished initrd-parse-etc.service. Dec 13 02:25:02.335620 systemd[1]: Reached target initrd-fs.target. Dec 13 02:25:02.458521 systemd[1]: Reached target initrd.target. Dec 13 02:25:02.458656 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. Dec 13 02:25:02.459005 systemd[1]: Starting dracut-pre-pivot.service... Dec 13 02:25:02.626000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:25:02.479727 systemd[1]: Finished dracut-pre-pivot.service. Dec 13 02:25:02.702519 kernel: audit: type=1131 audit(1734056702.626:47): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:25:02.497891 systemd[1]: Starting initrd-cleanup.service... Dec 13 02:25:02.565504 systemd[1]: Stopped target nss-lookup.target. Dec 13 02:25:02.578649 systemd[1]: Stopped target remote-cryptsetup.target. Dec 13 02:25:02.594691 systemd[1]: Stopped target timers.target. Dec 13 02:25:02.608611 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Dec 13 02:25:02.608710 systemd[1]: Stopped dracut-pre-pivot.service. Dec 13 02:25:02.627790 systemd[1]: Stopped target initrd.target. Dec 13 02:25:02.695660 systemd[1]: Stopped target basic.target. Dec 13 02:25:02.709474 systemd[1]: Stopped target ignition-complete.target. Dec 13 02:25:02.724625 systemd[1]: Stopped target ignition-diskful.target. Dec 13 02:25:02.739625 systemd[1]: Stopped target initrd-root-device.target. Dec 13 02:25:02.862000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:25:02.755645 systemd[1]: Stopped target remote-fs.target. Dec 13 02:25:02.948530 kernel: audit: type=1131 audit(1734056702.862:48): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:25:02.770624 systemd[1]: Stopped target remote-fs-pre.target. Dec 13 02:25:02.956000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:25:02.786887 systemd[1]: Stopped target sysinit.target. Dec 13 02:25:03.033539 kernel: audit: type=1131 audit(1734056702.956:49): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 02:25:03.025000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:25:02.801906 systemd[1]: Stopped target local-fs.target. Dec 13 02:25:02.816993 systemd[1]: Stopped target local-fs-pre.target. Dec 13 02:25:02.832982 systemd[1]: Stopped target swap.target. Dec 13 02:25:02.847862 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Dec 13 02:25:02.848223 systemd[1]: Stopped dracut-pre-mount.service. Dec 13 02:25:02.864192 systemd[1]: Stopped target cryptsetup.target. Dec 13 02:25:02.941655 systemd[1]: dracut-initqueue.service: Deactivated successfully. Dec 13 02:25:03.124000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:25:02.941738 systemd[1]: Stopped dracut-initqueue.service. Dec 13 02:25:03.140000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:25:02.957677 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Dec 13 02:25:03.155000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:25:02.957738 systemd[1]: Stopped ignition-fetch-offline.service. Dec 13 02:25:03.181534 ignition[1100]: INFO : Ignition 2.14.0 Dec 13 02:25:03.181534 ignition[1100]: INFO : Stage: umount Dec 13 02:25:03.181534 ignition[1100]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Dec 13 02:25:03.181534 ignition[1100]: DEBUG : parsing config with SHA512: 0131bd505bfe1b1215ca4ec9809701a3323bf448114294874f7249d8d300440bd742a7532f60673bfa0746c04de0bd5ca68d0fe9a8ecd59464b13a6401323cb4 Dec 13 02:25:03.181534 ignition[1100]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/packet" Dec 13 02:25:03.181534 ignition[1100]: INFO : umount: umount passed Dec 13 02:25:03.181534 ignition[1100]: INFO : POST message to Packet Timeline Dec 13 02:25:03.181534 ignition[1100]: INFO : GET https://metadata.packet.net/metadata: attempt #1 Dec 13 02:25:03.227000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:25:03.255000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:25:03.273000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:25:03.289000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:25:03.026748 systemd[1]: Stopped target paths.target. Dec 13 02:25:03.322000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Dec 13 02:25:03.322000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:25:03.040542 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Dec 13 02:25:03.044543 systemd[1]: Stopped systemd-ask-password-console.path. Dec 13 02:25:03.062680 systemd[1]: Stopped target slices.target. Dec 13 02:25:03.076669 systemd[1]: Stopped target sockets.target. Dec 13 02:25:03.093701 systemd[1]: iscsid.socket: Deactivated successfully. Dec 13 02:25:03.093800 systemd[1]: Closed iscsid.socket. Dec 13 02:25:03.108885 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Dec 13 02:25:03.109124 systemd[1]: Stopped initrd-setup-root-after-ignition.service. Dec 13 02:25:03.126057 systemd[1]: ignition-files.service: Deactivated successfully. Dec 13 02:25:03.126430 systemd[1]: Stopped ignition-files.service. Dec 13 02:25:03.141993 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Dec 13 02:25:03.142352 systemd[1]: Stopped flatcar-metadata-hostname.service. Dec 13 02:25:03.159332 systemd[1]: Stopping ignition-mount.service... Dec 13 02:25:03.173593 systemd[1]: Stopping iscsiuio.service... Dec 13 02:25:03.189060 systemd[1]: Stopping sysroot-boot.service... Dec 13 02:25:03.208512 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Dec 13 02:25:03.208787 systemd[1]: Stopped systemd-udev-trigger.service. Dec 13 02:25:03.229013 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Dec 13 02:25:03.229319 systemd[1]: Stopped dracut-pre-trigger.service. Dec 13 02:25:03.265196 systemd[1]: sysroot-boot.mount: Deactivated successfully. Dec 13 02:25:03.266972 systemd[1]: iscsiuio.service: Deactivated successfully. Dec 13 02:25:03.267211 systemd[1]: Stopped iscsiuio.service. Dec 13 02:25:03.275722 systemd[1]: sysroot-boot.service: Deactivated successfully. Dec 13 02:25:03.276018 systemd[1]: Stopped sysroot-boot.service. Dec 13 02:25:03.291693 systemd[1]: iscsiuio.socket: Deactivated successfully. Dec 13 02:25:03.292028 systemd[1]: Closed iscsiuio.socket. Dec 13 02:25:03.306232 systemd[1]: initrd-cleanup.service: Deactivated successfully. Dec 13 02:25:03.306464 systemd[1]: Finished initrd-cleanup.service. Dec 13 02:25:04.153478 ignition[1100]: INFO : GET result: OK Dec 13 02:25:04.496618 ignition[1100]: INFO : Ignition finished successfully Dec 13 02:25:04.498418 systemd[1]: ignition-mount.service: Deactivated successfully. Dec 13 02:25:04.512000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:25:04.498566 systemd[1]: Stopped ignition-mount.service. Dec 13 02:25:04.514022 systemd[1]: Stopped target network.target. Dec 13 02:25:04.543000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:25:04.529521 systemd[1]: ignition-disks.service: Deactivated successfully. Dec 13 02:25:04.559000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:25:04.529763 systemd[1]: Stopped ignition-disks.service. 
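
Note how the umount stage (ignition[1100] above) re-reads the distro's base config and logs a digest of it before posting its status to the Packet Timeline. Assuming the logged value is simply SHA512 over the raw file bytes, it can be reproduced like this (the path is from the log; the digest will differ per image):

import hashlib

# Path appears in the log above; digest over raw bytes is an assumption.
with open("/usr/lib/ignition/base.d/base.ign", "rb") as f:
    print("parsing config with SHA512:", hashlib.sha512(f.read()).hexdigest())
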
Dec 13 02:25:04.575000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:25:04.544707 systemd[1]: ignition-kargs.service: Deactivated successfully. Dec 13 02:25:04.591000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:25:04.544833 systemd[1]: Stopped ignition-kargs.service. Dec 13 02:25:04.560797 systemd[1]: ignition-setup.service: Deactivated successfully. Dec 13 02:25:04.560948 systemd[1]: Stopped ignition-setup.service. Dec 13 02:25:04.641000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:25:04.576797 systemd[1]: initrd-setup-root.service: Deactivated successfully. Dec 13 02:25:04.658000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:25:04.660000 audit: BPF prog-id=6 op=UNLOAD Dec 13 02:25:04.576940 systemd[1]: Stopped initrd-setup-root.service. Dec 13 02:25:04.593075 systemd[1]: Stopping systemd-networkd.service... Dec 13 02:25:04.604425 systemd-networkd[881]: enp1s0f1np1: DHCPv6 lease lost Dec 13 02:25:04.710000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:25:04.609837 systemd[1]: Stopping systemd-resolved.service... Dec 13 02:25:04.726000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:25:04.612468 systemd-networkd[881]: enp1s0f0np0: DHCPv6 lease lost Dec 13 02:25:04.734000 audit: BPF prog-id=9 op=UNLOAD Dec 13 02:25:04.743000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:25:04.626239 systemd[1]: systemd-resolved.service: Deactivated successfully. Dec 13 02:25:04.626501 systemd[1]: Stopped systemd-resolved.service. Dec 13 02:25:04.777000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:25:04.643400 systemd[1]: systemd-networkd.service: Deactivated successfully. Dec 13 02:25:04.643653 systemd[1]: Stopped systemd-networkd.service. Dec 13 02:25:04.660020 systemd[1]: systemd-networkd.socket: Deactivated successfully. Dec 13 02:25:04.823000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:25:04.660114 systemd[1]: Closed systemd-networkd.socket. Dec 13 02:25:04.839000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 02:25:04.681166 systemd[1]: Stopping network-cleanup.service... Dec 13 02:25:04.854000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:25:04.694522 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Dec 13 02:25:04.694673 systemd[1]: Stopped parse-ip-for-networkd.service. Dec 13 02:25:04.887000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:25:04.711720 systemd[1]: systemd-sysctl.service: Deactivated successfully. Dec 13 02:25:04.903000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:25:04.711873 systemd[1]: Stopped systemd-sysctl.service. Dec 13 02:25:04.919000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:25:04.728029 systemd[1]: systemd-modules-load.service: Deactivated successfully. Dec 13 02:25:04.934000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:25:04.934000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:25:04.728174 systemd[1]: Stopped systemd-modules-load.service. Dec 13 02:25:04.744934 systemd[1]: Stopping systemd-udevd.service... Dec 13 02:25:04.765389 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Dec 13 02:25:04.766863 systemd[1]: systemd-udevd.service: Deactivated successfully. Dec 13 02:25:04.767190 systemd[1]: Stopped systemd-udevd.service. Dec 13 02:25:04.781489 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Dec 13 02:25:04.781622 systemd[1]: Closed systemd-udevd-control.socket. Dec 13 02:25:04.793670 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Dec 13 02:25:04.793776 systemd[1]: Closed systemd-udevd-kernel.socket. Dec 13 02:25:04.809548 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Dec 13 02:25:04.809744 systemd[1]: Stopped dracut-pre-udev.service. Dec 13 02:25:04.824743 systemd[1]: dracut-cmdline.service: Deactivated successfully. Dec 13 02:25:05.046000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:25:04.824886 systemd[1]: Stopped dracut-cmdline.service. Dec 13 02:25:04.840673 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Dec 13 02:25:05.081000 audit: BPF prog-id=8 op=UNLOAD Dec 13 02:25:05.081000 audit: BPF prog-id=7 op=UNLOAD Dec 13 02:25:05.081000 audit: BPF prog-id=5 op=UNLOAD Dec 13 02:25:05.081000 audit: BPF prog-id=4 op=UNLOAD Dec 13 02:25:05.081000 audit: BPF prog-id=3 op=UNLOAD Dec 13 02:25:04.840808 systemd[1]: Stopped dracut-cmdline-ask.service. 
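
Nearly every unit transition in this stretch is mirrored by a kernel audit record (SERVICE_START/SERVICE_STOP, audit types 1130/1131), with a quoted msg= payload nested inside the outer key=value fields. A small parser sketch for that layout, using one record quoted from the log above:

import re

# One SERVICE_STOP record quoted from the log above.
line = ("audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 "
        "subj=kernel msg='unit=network-cleanup comm=\"systemd\" "
        "exe=\"/usr/lib/systemd/systemd\" hostname=? addr=? terminal=? res=success'")

# Outer fields come before msg=; the payload is its own key=value list.
outer = dict(kv.split("=", 1) for kv in line.split("msg=")[0].split()[2:])
inner = dict(kv.split("=", 1)
             for kv in re.search(r"msg='([^']*)'", line).group(1).split())
print(outer["pid"], inner["unit"], inner["res"])  # 1 network-cleanup success
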
Dec 13 02:25:05.151095 systemd-journald[266]: Received SIGTERM from PID 1 (n/a). Dec 13 02:25:05.151120 systemd-journald[266]: Failed to send stream file descriptor to service manager: Connection refused Dec 13 02:25:04.855922 systemd[1]: Starting initrd-udevadm-cleanup-db.service... Dec 13 02:25:05.151161 iscsid[901]: iscsid shutting down. Dec 13 02:25:04.870383 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Dec 13 02:25:04.870425 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service. Dec 13 02:25:04.888557 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Dec 13 02:25:04.888607 systemd[1]: Stopped kmod-static-nodes.service. Dec 13 02:25:04.904546 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Dec 13 02:25:04.904665 systemd[1]: Stopped systemd-vconsole-setup.service. Dec 13 02:25:04.923268 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Dec 13 02:25:04.924710 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Dec 13 02:25:04.924926 systemd[1]: Finished initrd-udevadm-cleanup-db.service. Dec 13 02:25:05.036313 systemd[1]: network-cleanup.service: Deactivated successfully. Dec 13 02:25:05.036559 systemd[1]: Stopped network-cleanup.service. Dec 13 02:25:05.047877 systemd[1]: Reached target initrd-switch-root.target. Dec 13 02:25:05.066310 systemd[1]: Starting initrd-switch-root.service... Dec 13 02:25:05.080688 systemd[1]: Switching root. Dec 13 02:25:05.151500 systemd-journald[266]: Journal stopped Dec 13 02:25:08.970057 kernel: SELinux: Class mctp_socket not defined in policy. Dec 13 02:25:08.970071 kernel: SELinux: Class anon_inode not defined in policy. Dec 13 02:25:08.970080 kernel: SELinux: the above unknown classes and permissions will be allowed Dec 13 02:25:08.970086 kernel: SELinux: policy capability network_peer_controls=1 Dec 13 02:25:08.970091 kernel: SELinux: policy capability open_perms=1 Dec 13 02:25:08.970096 kernel: SELinux: policy capability extended_socket_class=1 Dec 13 02:25:08.970102 kernel: SELinux: policy capability always_check_network=0 Dec 13 02:25:08.970108 kernel: SELinux: policy capability cgroup_seclabel=1 Dec 13 02:25:08.970114 kernel: SELinux: policy capability nnp_nosuid_transition=1 Dec 13 02:25:08.970120 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Dec 13 02:25:08.970126 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Dec 13 02:25:08.970132 systemd[1]: Successfully loaded SELinux policy in 322.335ms. Dec 13 02:25:08.970141 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 6.076ms. Dec 13 02:25:08.970153 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Dec 13 02:25:08.970163 systemd[1]: Detected architecture x86-64. Dec 13 02:25:08.970171 systemd[1]: Detected first boot. Dec 13 02:25:08.970178 systemd[1]: Hostname set to . Dec 13 02:25:08.970184 systemd[1]: Initializing machine ID from random generator. Dec 13 02:25:08.970190 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped). Dec 13 02:25:08.970196 systemd[1]: Populated /etc with preset unit settings. 
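
The feature string systemd prints in its banner above is worth reading closely: for instance, -BPF_FRAMEWORK is consistent with the warning a few lines below that systemd-journald's IP firewall directives cannot be enforced. A sketch that splits the banner into enabled and disabled build options:

# Feature string copied from the systemd 252 banner above.
features = ("+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS "
            "+OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD "
            "+LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 "
            "+BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP "
            "+SYSVINIT").split()
enabled = sorted(f[1:] for f in features if f[0] == "+")
disabled = sorted(f[1:] for f in features if f[0] == "-")
print("disabled:", ", ".join(disabled))
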
Dec 13 02:25:08.970202 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Dec 13 02:25:08.970210 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 13 02:25:08.970217 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 02:25:08.970223 systemd[1]: Queued start job for default target multi-user.target. Dec 13 02:25:08.970230 systemd[1]: Created slice system-addon\x2dconfig.slice. Dec 13 02:25:08.970236 systemd[1]: Created slice system-addon\x2drun.slice. Dec 13 02:25:08.970242 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice. Dec 13 02:25:08.970250 systemd[1]: Created slice system-getty.slice. Dec 13 02:25:08.970256 systemd[1]: Created slice system-modprobe.slice. Dec 13 02:25:08.970262 systemd[1]: Created slice system-serial\x2dgetty.slice. Dec 13 02:25:08.970269 systemd[1]: Created slice system-system\x2dcloudinit.slice. Dec 13 02:25:08.970275 systemd[1]: Created slice system-systemd\x2dfsck.slice. Dec 13 02:25:08.970281 systemd[1]: Created slice user.slice. Dec 13 02:25:08.970287 systemd[1]: Started systemd-ask-password-console.path. Dec 13 02:25:08.970297 systemd[1]: Started systemd-ask-password-wall.path. Dec 13 02:25:08.970304 systemd[1]: Set up automount boot.automount. Dec 13 02:25:08.970311 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. Dec 13 02:25:08.970318 systemd[1]: Reached target integritysetup.target. Dec 13 02:25:08.970324 systemd[1]: Reached target remote-cryptsetup.target. Dec 13 02:25:08.970330 systemd[1]: Reached target remote-fs.target. Dec 13 02:25:08.970338 systemd[1]: Reached target slices.target. Dec 13 02:25:08.970344 systemd[1]: Reached target swap.target. Dec 13 02:25:08.970350 systemd[1]: Reached target torcx.target. Dec 13 02:25:08.970357 systemd[1]: Reached target veritysetup.target. Dec 13 02:25:08.970364 systemd[1]: Listening on systemd-coredump.socket. Dec 13 02:25:08.970371 systemd[1]: Listening on systemd-initctl.socket. Dec 13 02:25:08.970382 systemd[1]: Listening on systemd-journald-audit.socket. Dec 13 02:25:08.970393 kernel: kauditd_printk_skb: 49 callbacks suppressed Dec 13 02:25:08.970403 kernel: audit: type=1400 audit(1734056708.221:92): avc: denied { audit_read } for pid=1 comm="systemd" capability=37 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Dec 13 02:25:08.970411 kernel: audit: type=1335 audit(1734056708.221:93): pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1 Dec 13 02:25:08.970418 systemd[1]: Listening on systemd-journald-dev-log.socket. Dec 13 02:25:08.970425 systemd[1]: Listening on systemd-journald.socket. Dec 13 02:25:08.970432 systemd[1]: Listening on systemd-networkd.socket. Dec 13 02:25:08.970439 systemd[1]: Listening on systemd-udevd-control.socket. Dec 13 02:25:08.970445 systemd[1]: Listening on systemd-udevd-kernel.socket. Dec 13 02:25:08.970452 systemd[1]: Listening on systemd-userdbd.socket. Dec 13 02:25:08.970460 systemd[1]: Mounting dev-hugepages.mount... Dec 13 02:25:08.970466 systemd[1]: Mounting dev-mqueue.mount... 
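
The escaped slice names above (system-coreos\x2dmetadata\x2dsshkeys.slice and friends) follow systemd's unit-name escaping: '-' inside a name component becomes \x2d because '-' is the slice hierarchy separator. A simplified sketch of the transformation (systemd-escape(1) is the real tool, and its rules are slightly richer):

# Simplified unit-name escaping: alphanumerics, ':' and '_' pass through,
# '.' passes except at the start, everything else becomes \xXX.
def unit_escape(name: str) -> str:
    out = []
    for i, ch in enumerate(name):
        if ch.isalnum() or ch in ":_" or (ch == "." and i > 0):
            out.append(ch)
        else:
            out.append("".join("\\x%02x" % b for b in ch.encode()))
    return "".join(out)

print(unit_escape("coreos-metadata-sshkeys"))  # coreos\x2dmetadata\x2dsshkeys
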
Dec 13 02:25:08.970473 systemd[1]: Mounting media.mount... Dec 13 02:25:08.970480 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 02:25:08.970486 systemd[1]: Mounting sys-kernel-debug.mount... Dec 13 02:25:08.970493 systemd[1]: Mounting sys-kernel-tracing.mount... Dec 13 02:25:08.970499 systemd[1]: Mounting tmp.mount... Dec 13 02:25:08.970506 systemd[1]: Starting flatcar-tmpfiles.service... Dec 13 02:25:08.970512 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Dec 13 02:25:08.970519 systemd[1]: Starting kmod-static-nodes.service... Dec 13 02:25:08.970526 systemd[1]: Starting modprobe@configfs.service... Dec 13 02:25:08.970532 systemd[1]: Starting modprobe@dm_mod.service... Dec 13 02:25:08.970539 systemd[1]: Starting modprobe@drm.service... Dec 13 02:25:08.970546 systemd[1]: Starting modprobe@efi_pstore.service... Dec 13 02:25:08.970552 systemd[1]: Starting modprobe@fuse.service... Dec 13 02:25:08.970559 kernel: fuse: init (API version 7.34) Dec 13 02:25:08.970565 systemd[1]: Starting modprobe@loop.service... Dec 13 02:25:08.970573 kernel: loop: module loaded Dec 13 02:25:08.970585 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Dec 13 02:25:08.970593 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. Dec 13 02:25:08.970600 systemd[1]: (This warning is only shown for the first unit using IP firewalling.) Dec 13 02:25:08.970607 systemd[1]: Starting systemd-journald.service... Dec 13 02:25:08.970613 systemd[1]: Starting systemd-modules-load.service... Dec 13 02:25:08.970620 kernel: audit: type=1305 audit(1734056708.966:94): op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Dec 13 02:25:08.970628 systemd-journald[1292]: Journal started Dec 13 02:25:08.970655 systemd-journald[1292]: Runtime Journal (/run/log/journal/36721a13522b47159cbb9fa25859def4) is 8.0M, max 640.1M, 632.1M free. Dec 13 02:25:08.221000 audit[1]: AVC avc: denied { audit_read } for pid=1 comm="systemd" capability=37 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Dec 13 02:25:08.221000 audit[1]: EVENT_LISTENER pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1 Dec 13 02:25:08.966000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Dec 13 02:25:08.966000 audit[1292]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=6 a1=7ffdecd22ff0 a2=4000 a3=7ffdecd2308c items=0 ppid=1 pid=1292 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:25:09.084206 kernel: audit: type=1300 audit(1734056708.966:94): arch=c000003e syscall=46 success=yes exit=60 a0=6 a1=7ffdecd22ff0 a2=4000 a3=7ffdecd2308c items=0 ppid=1 pid=1292 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:25:09.084226 systemd[1]: Starting systemd-network-generator.service... 
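
The modprobe@configfs, modprobe@dm_mod, modprobe@drm, modprobe@efi_pstore, modprobe@fuse and modprobe@loop starts above are six instances of a single template unit; the instance name after '@' is substituted for the %i specifier in the template body. A sketch of that expansion (the template text is an assumption modeled on systemd's stock modprobe@.service, not read from this image):

# Template body is an assumption; the log records only the instance starts.
TEMPLATE = (
    "[Unit]\n"
    "Description=Load Kernel Module %i\n"
    "\n"
    "[Service]\n"
    "Type=oneshot\n"
    "ExecStart=-/sbin/modprobe -abq %i\n"
)

for instance in ("configfs", "dm_mod", "drm", "efi_pstore", "fuse", "loop"):
    print(f"modprobe@{instance}.service ->",
          TEMPLATE.replace("%i", instance).splitlines()[-1])
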
Dec 13 02:25:09.084239 kernel: audit: type=1327 audit(1734056708.966:94): proctitle="/usr/lib/systemd/systemd-journald" Dec 13 02:25:08.966000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Dec 13 02:25:09.152349 systemd[1]: Starting systemd-remount-fs.service... Dec 13 02:25:09.180347 systemd[1]: Starting systemd-udev-trigger.service... Dec 13 02:25:09.224349 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 02:25:09.243347 systemd[1]: Started systemd-journald.service. Dec 13 02:25:09.250000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:25:09.252072 systemd[1]: Mounted dev-hugepages.mount. Dec 13 02:25:09.300513 kernel: audit: type=1130 audit(1734056709.250:95): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:25:09.306576 systemd[1]: Mounted dev-mqueue.mount. Dec 13 02:25:09.313558 systemd[1]: Mounted media.mount. Dec 13 02:25:09.320552 systemd[1]: Mounted sys-kernel-debug.mount. Dec 13 02:25:09.329566 systemd[1]: Mounted sys-kernel-tracing.mount. Dec 13 02:25:09.338646 systemd[1]: Mounted tmp.mount. Dec 13 02:25:09.345684 systemd[1]: Finished flatcar-tmpfiles.service. Dec 13 02:25:09.353000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:25:09.354691 systemd[1]: Finished kmod-static-nodes.service. Dec 13 02:25:09.403477 kernel: audit: type=1130 audit(1734056709.353:96): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:25:09.410000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:25:09.411649 systemd[1]: modprobe@configfs.service: Deactivated successfully. Dec 13 02:25:09.411731 systemd[1]: Finished modprobe@configfs.service. Dec 13 02:25:09.461500 kernel: audit: type=1130 audit(1734056709.410:97): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:25:09.468000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:25:09.469649 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 02:25:09.469722 systemd[1]: Finished modprobe@dm_mod.service. Dec 13 02:25:09.468000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 02:25:09.521344 kernel: audit: type=1130 audit(1734056709.468:98): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:25:09.521378 kernel: audit: type=1131 audit(1734056709.468:99): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:25:09.580000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:25:09.580000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:25:09.581656 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 13 02:25:09.581732 systemd[1]: Finished modprobe@drm.service. Dec 13 02:25:09.589000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:25:09.589000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:25:09.590671 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 02:25:09.590745 systemd[1]: Finished modprobe@efi_pstore.service. Dec 13 02:25:09.598000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:25:09.598000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:25:09.599645 systemd[1]: modprobe@fuse.service: Deactivated successfully. Dec 13 02:25:09.599719 systemd[1]: Finished modprobe@fuse.service. Dec 13 02:25:09.607000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:25:09.607000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:25:09.608663 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 02:25:09.608749 systemd[1]: Finished modprobe@loop.service. Dec 13 02:25:09.616000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:25:09.616000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 02:25:09.617688 systemd[1]: Finished systemd-modules-load.service. Dec 13 02:25:09.625000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:25:09.626712 systemd[1]: Finished systemd-network-generator.service. Dec 13 02:25:09.634000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:25:09.635681 systemd[1]: Finished systemd-remount-fs.service. Dec 13 02:25:09.643000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:25:09.644745 systemd[1]: Finished systemd-udev-trigger.service. Dec 13 02:25:09.652000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:25:09.653910 systemd[1]: Reached target network-pre.target. Dec 13 02:25:09.664243 systemd[1]: Mounting sys-fs-fuse-connections.mount... Dec 13 02:25:09.674635 systemd[1]: Mounting sys-kernel-config.mount... Dec 13 02:25:09.681533 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Dec 13 02:25:09.682551 systemd[1]: Starting systemd-hwdb-update.service... Dec 13 02:25:09.689973 systemd[1]: Starting systemd-journal-flush.service... Dec 13 02:25:09.693404 systemd-journald[1292]: Time spent on flushing to /var/log/journal/36721a13522b47159cbb9fa25859def4 is 14.523ms for 1531 entries. Dec 13 02:25:09.693404 systemd-journald[1292]: System Journal (/var/log/journal/36721a13522b47159cbb9fa25859def4) is 8.0M, max 195.6M, 187.6M free. Dec 13 02:25:09.734689 systemd-journald[1292]: Received client request to flush runtime journal. Dec 13 02:25:09.706439 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 02:25:09.706964 systemd[1]: Starting systemd-random-seed.service... Dec 13 02:25:09.724428 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Dec 13 02:25:09.725022 systemd[1]: Starting systemd-sysctl.service... Dec 13 02:25:09.731958 systemd[1]: Starting systemd-sysusers.service... Dec 13 02:25:09.739013 systemd[1]: Starting systemd-udev-settle.service... Dec 13 02:25:09.746615 systemd[1]: Mounted sys-fs-fuse-connections.mount. Dec 13 02:25:09.754487 systemd[1]: Mounted sys-kernel-config.mount. Dec 13 02:25:09.762565 systemd[1]: Finished systemd-journal-flush.service. Dec 13 02:25:09.769000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:25:09.770563 systemd[1]: Finished systemd-random-seed.service. Dec 13 02:25:09.777000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 02:25:09.778515 systemd[1]: Finished systemd-sysctl.service. Dec 13 02:25:09.785000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:25:09.786515 systemd[1]: Finished systemd-sysusers.service. Dec 13 02:25:09.793000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:25:09.795448 systemd[1]: Reached target first-boot-complete.target. Dec 13 02:25:09.804074 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Dec 13 02:25:09.813499 udevadm[1320]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Dec 13 02:25:09.822145 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Dec 13 02:25:09.829000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:25:09.985300 systemd[1]: Finished systemd-hwdb-update.service. Dec 13 02:25:09.992000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:25:09.994284 systemd[1]: Starting systemd-udevd.service... Dec 13 02:25:10.006420 systemd-udevd[1327]: Using default interface naming scheme 'v252'. Dec 13 02:25:10.023380 systemd[1]: Started systemd-udevd.service. Dec 13 02:25:10.030000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:25:10.034577 systemd[1]: Found device dev-ttyS1.device. Dec 13 02:25:10.066054 systemd[1]: Starting systemd-networkd.service... Dec 13 02:25:10.089816 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0E:00/input/input2 Dec 13 02:25:10.089884 kernel: BTRFS info: devid 1 device path /dev/disk/by-label/OEM changed to /dev/sda6 scanned by (udev-worker) (1389) Dec 13 02:25:10.089904 kernel: ACPI: button: Sleep Button [SLPB] Dec 13 02:25:10.129560 systemd[1]: dev-disk-by\x2dlabel-OEM.device was skipped because of an unmet condition check (ConditionPathExists=!/usr/.noupdate). Dec 13 02:25:10.135320 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Dec 13 02:25:10.135368 kernel: IPMI message handler: version 39.2 Dec 13 02:25:10.150000 systemd[1]: Starting systemd-userdbd.service... 
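
udevd above announces naming scheme 'v252', which is what produces predictable interface names like enp1s0f0np0 and enp1s0f1np1 seen below: the name encodes the PCI address plus the device's physical port, matching the mlx5_core 0000:01:00.0/0000:01:00.1 devices that appear further down. A decoding sketch along the lines of systemd.net-naming-scheme(7):

import re

# en = Ethernet; p<bus>s<slot>f<function> is the PCI address and n<...> the
# physical port name (here p0/p1).
PAT = re.compile(r"enp(?P<bus>\d+)s(?P<slot>\d+)f(?P<fn>\d+)n(?P<port>\w+)")

for name in ("enp1s0f0np0", "enp1s0f1np1"):
    print(name, "->", PAT.fullmatch(name).groupdict())
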
Dec 13 02:25:10.155297 kernel: ACPI: button: Power Button [PWRF] Dec 13 02:25:10.176299 kernel: mousedev: PS/2 mouse device common for all mice Dec 13 02:25:10.091000 audit[1343]: AVC avc: denied { confidentiality } for pid=1343 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 Dec 13 02:25:10.199298 kernel: ipmi device interface Dec 13 02:25:10.091000 audit[1343]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=7fae7cb95010 a1=4d98c a2=7fae7e848bc5 a3=5 items=42 ppid=1327 pid=1343 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:25:10.091000 audit: CWD cwd="/" Dec 13 02:25:10.091000 audit: PATH item=0 name=(null) inode=45 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:25:10.091000 audit: PATH item=1 name=(null) inode=26992 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:25:10.091000 audit: PATH item=2 name=(null) inode=26992 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:25:10.091000 audit: PATH item=3 name=(null) inode=26993 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:25:10.091000 audit: PATH item=4 name=(null) inode=26992 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:25:10.091000 audit: PATH item=5 name=(null) inode=26994 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:25:10.091000 audit: PATH item=6 name=(null) inode=26992 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:25:10.091000 audit: PATH item=7 name=(null) inode=26995 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:25:10.091000 audit: PATH item=8 name=(null) inode=26995 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:25:10.091000 audit: PATH item=9 name=(null) inode=26996 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:25:10.091000 audit: PATH item=10 name=(null) inode=26995 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:25:10.091000 audit: PATH item=11 name=(null) inode=26997 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:25:10.091000 audit: PATH item=12 name=(null) inode=26995 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:25:10.091000 audit: PATH item=13 name=(null) inode=26998 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:25:10.091000 audit: PATH item=14 name=(null) inode=26995 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:25:10.091000 audit: PATH item=15 name=(null) inode=26999 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:25:10.091000 audit: PATH item=16 name=(null) inode=26995 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:25:10.091000 audit: PATH item=17 name=(null) inode=27000 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:25:10.091000 audit: PATH item=18 name=(null) inode=26992 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:25:10.091000 audit: PATH item=19 name=(null) inode=27001 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:25:10.091000 audit: PATH item=20 name=(null) inode=27001 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:25:10.091000 audit: PATH item=21 name=(null) inode=27002 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:25:10.091000 audit: PATH item=22 name=(null) inode=27001 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:25:10.091000 audit: PATH item=23 name=(null) inode=27003 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:25:10.091000 audit: PATH item=24 name=(null) inode=27001 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:25:10.091000 audit: PATH item=25 name=(null) inode=27004 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:25:10.091000 audit: PATH item=26 name=(null) inode=27001 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:25:10.091000 audit: PATH item=27 name=(null) inode=27005 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:25:10.091000 audit: PATH item=28 name=(null) inode=27001 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 
cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:25:10.091000 audit: PATH item=29 name=(null) inode=27006 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:25:10.091000 audit: PATH item=30 name=(null) inode=26992 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:25:10.091000 audit: PATH item=31 name=(null) inode=27007 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:25:10.091000 audit: PATH item=32 name=(null) inode=27007 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:25:10.091000 audit: PATH item=33 name=(null) inode=27008 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:25:10.091000 audit: PATH item=34 name=(null) inode=27007 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:25:10.091000 audit: PATH item=35 name=(null) inode=27009 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:25:10.091000 audit: PATH item=36 name=(null) inode=27007 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:25:10.091000 audit: PATH item=37 name=(null) inode=27010 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:25:10.091000 audit: PATH item=38 name=(null) inode=27007 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:25:10.091000 audit: PATH item=39 name=(null) inode=27011 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:25:10.091000 audit: PATH item=40 name=(null) inode=27007 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:25:10.091000 audit: PATH item=41 name=(null) inode=27012 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Dec 13 02:25:10.091000 audit: PROCTITLE proctitle="(udev-worker)" Dec 13 02:25:10.222305 kernel: mei_me 0000:00:16.0: Device doesn't have valid ME Interface Dec 13 02:25:10.222457 kernel: mei_me 0000:00:16.4: Device doesn't have valid ME Interface Dec 13 02:25:10.243851 kernel: i801_smbus 0000:00:1f.4: SPD Write Disable is set Dec 13 02:25:10.354576 kernel: i801_smbus 0000:00:1f.4: SMBus using PCI interrupt Dec 13 02:25:10.354674 kernel: ipmi_si: IPMI System Interface driver Dec 13 02:25:10.354688 kernel: i2c i2c-0: 2/4 memory slots populated (from DMI) Dec 13 02:25:10.354768 kernel: ipmi_si dmi-ipmi-si.0: ipmi_platform: probing via SMBIOS Dec 13 02:25:10.354837 kernel: ipmi_platform: 
ipmi_si: SMBIOS: io 0xca2 regsize 1 spacing 1 irq 0 Dec 13 02:25:10.354850 kernel: ipmi_si: Adding SMBIOS-specified kcs state machine Dec 13 02:25:10.354874 kernel: ipmi_si IPI0001:00: ipmi_platform: probing via ACPI Dec 13 02:25:10.566413 kernel: ipmi_si IPI0001:00: ipmi_platform: [io 0x0ca2] regsize 1 spacing 1 irq 0 Dec 13 02:25:10.566494 kernel: iTCO_vendor_support: vendor-support=0 Dec 13 02:25:10.566507 kernel: ipmi_si dmi-ipmi-si.0: Removing SMBIOS-specified kcs state machine in favor of ACPI Dec 13 02:25:10.566567 kernel: ipmi_si: Adding ACPI-specified kcs state machine Dec 13 02:25:10.566578 kernel: ipmi_si: Trying ACPI-specified kcs state machine at i/o address 0xca2, slave address 0x20, irq 0 Dec 13 02:25:10.483000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:25:10.475208 systemd[1]: Started systemd-userdbd.service. Dec 13 02:25:10.636769 systemd-networkd[1405]: bond0: netdev ready Dec 13 02:25:10.638903 systemd-networkd[1405]: lo: Link UP Dec 13 02:25:10.638906 systemd-networkd[1405]: lo: Gained carrier Dec 13 02:25:10.639383 systemd-networkd[1405]: Enumeration completed Dec 13 02:25:10.639475 systemd[1]: Started systemd-networkd.service. Dec 13 02:25:10.639803 systemd-networkd[1405]: bond0: Configuring with /etc/systemd/network/05-bond0.network. Dec 13 02:25:10.642652 systemd-networkd[1405]: enp1s0f1np1: Configuring with /etc/systemd/network/10-1c:34:da:5c:28:81.network. Dec 13 02:25:10.661983 kernel: iTCO_wdt iTCO_wdt: Found a Intel PCH TCO device (Version=6, TCOBASE=0x0400) Dec 13 02:25:10.684070 kernel: ipmi_si IPI0001:00: The BMC does not support clearing the recv irq bit, compensating, but the BMC needs to be fixed. Dec 13 02:25:10.684164 kernel: iTCO_wdt iTCO_wdt: initialized. heartbeat=30 sec (nowayout=0) Dec 13 02:25:10.684222 kernel: ipmi_si IPI0001:00: IPMI message handler: Found new BMC (man_id: 0x002a7c, prod_id: 0x1b0f, dev_id: 0x20) Dec 13 02:25:10.718000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:25:10.759699 kernel: intel_rapl_common: Found RAPL domain package Dec 13 02:25:10.759732 kernel: intel_rapl_common: Found RAPL domain core Dec 13 02:25:10.779123 kernel: intel_rapl_common: Found RAPL domain dram Dec 13 02:25:10.895297 kernel: ipmi_si IPI0001:00: IPMI kcs interface initialized Dec 13 02:25:10.916350 kernel: ipmi_ssif: IPMI SSIF Interface driver Dec 13 02:25:10.919600 systemd[1]: Finished systemd-udev-settle.service. Dec 13 02:25:10.926000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:25:10.928119 systemd[1]: Starting lvm2-activation-early.service... Dec 13 02:25:10.944442 lvm[1431]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Dec 13 02:25:10.975701 systemd[1]: Finished lvm2-activation-early.service. Dec 13 02:25:10.984000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 02:25:10.985423 systemd[1]: Reached target cryptsetup.target. Dec 13 02:25:10.994992 systemd[1]: Starting lvm2-activation.service... Dec 13 02:25:10.997173 lvm[1433]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Dec 13 02:25:11.030748 systemd[1]: Finished lvm2-activation.service. Dec 13 02:25:11.038000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:25:11.039531 systemd[1]: Reached target local-fs-pre.target. Dec 13 02:25:11.048361 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Dec 13 02:25:11.048374 systemd[1]: Reached target local-fs.target. Dec 13 02:25:11.056328 systemd[1]: Reached target machines.target. Dec 13 02:25:11.066005 systemd[1]: Starting ldconfig.service... Dec 13 02:25:11.074180 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Dec 13 02:25:11.074200 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 02:25:11.074807 systemd[1]: Starting systemd-boot-update.service... Dec 13 02:25:11.082910 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Dec 13 02:25:11.093918 systemd[1]: Starting systemd-machine-id-commit.service... Dec 13 02:25:11.094748 systemd[1]: Starting systemd-sysext.service... Dec 13 02:25:11.094933 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1437 (bootctl) Dec 13 02:25:11.095661 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Dec 13 02:25:11.103447 systemd[1]: Unmounting usr-share-oem.mount... Dec 13 02:25:11.116426 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. Dec 13 02:25:11.115000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:25:11.116604 systemd[1]: usr-share-oem.mount: Deactivated successfully. Dec 13 02:25:11.116736 systemd[1]: Unmounted usr-share-oem.mount. Dec 13 02:25:11.166334 kernel: loop0: detected capacity change from 0 to 211296 Dec 13 02:25:11.233394 kernel: mlx5_core 0000:01:00.1 enp1s0f1np1: Link up Dec 13 02:25:11.259119 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Dec 13 02:25:11.259295 kernel: bond0: (slave enp1s0f1np1): Enslaving as a backup interface with an up link Dec 13 02:25:11.259318 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Dec 13 02:25:11.259510 systemd[1]: Finished systemd-machine-id-commit.service. Dec 13 02:25:11.259969 systemd-networkd[1405]: enp1s0f0np0: Configuring with /etc/systemd/network/10-1c:34:da:5c:28:80.network. Dec 13 02:25:11.279295 kernel: bond0: Warning: No 802.3ad response from the link partner for any adapters in the bond Dec 13 02:25:11.300533 systemd-fsck[1451]: fsck.fat 4.2 (2021-01-31) Dec 13 02:25:11.300533 systemd-fsck[1451]: /dev/sda1: 789 files, 119291/258078 clusters Dec 13 02:25:11.308000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? 
addr=? terminal=? res=success' Dec 13 02:25:11.309598 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. Dec 13 02:25:11.318000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:25:11.321356 systemd[1]: Mounting boot.mount... Dec 13 02:25:11.332440 systemd[1]: Mounted boot.mount. Dec 13 02:25:11.350295 kernel: loop1: detected capacity change from 0 to 211296 Dec 13 02:25:11.366443 (sd-sysext)[1456]: Using extensions 'kubernetes'. Dec 13 02:25:11.366621 (sd-sysext)[1456]: Merged extensions into '/usr'. Dec 13 02:25:11.367932 systemd[1]: Finished systemd-boot-update.service. Dec 13 02:25:11.375000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:25:11.382265 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 02:25:11.385404 systemd[1]: Mounting usr-share-oem.mount... Dec 13 02:25:11.392505 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Dec 13 02:25:11.393249 systemd[1]: Starting modprobe@dm_mod.service... Dec 13 02:25:11.400985 systemd[1]: Starting modprobe@efi_pstore.service... Dec 13 02:25:11.418035 systemd[1]: Starting modprobe@loop.service... Dec 13 02:25:11.426353 kernel: bond0: Warning: No 802.3ad response from the link partner for any adapters in the bond Dec 13 02:25:11.426391 kernel: mlx5_core 0000:01:00.0 enp1s0f0np0: Link up Dec 13 02:25:11.459362 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Dec 13 02:25:11.459434 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 02:25:11.459510 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 02:25:11.461486 systemd[1]: Mounted usr-share-oem.mount. Dec 13 02:25:11.470350 kernel: bond0: (slave enp1s0f0np0): Enslaving as a backup interface with an up link Dec 13 02:25:11.470374 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): bond0: link becomes ready Dec 13 02:25:11.482503 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 02:25:11.482612 systemd[1]: Finished modprobe@dm_mod.service. Dec 13 02:25:11.489629 systemd-networkd[1405]: bond0: Link UP Dec 13 02:25:11.489833 systemd-networkd[1405]: enp1s0f1np1: Link UP Dec 13 02:25:11.489970 systemd-networkd[1405]: enp1s0f1np1: Gained carrier Dec 13 02:25:11.490966 systemd-networkd[1405]: enp1s0f1np1: Reconfiguring with /etc/systemd/network/10-1c:34:da:5c:28:80.network. Dec 13 02:25:11.500231 ldconfig[1436]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Dec 13 02:25:11.503000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Dec 13 02:25:11.503000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:25:11.504676 systemd[1]: Finished ldconfig.service. Dec 13 02:25:11.513334 kernel: bond0: (slave enp1s0f1np1): link status definitely up, 10000 Mbps full duplex Dec 13 02:25:11.513364 kernel: bond0: active interface up! Dec 13 02:25:11.543000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:25:11.544589 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 02:25:11.544668 systemd[1]: Finished modprobe@efi_pstore.service. Dec 13 02:25:11.550339 kernel: bond0: (slave enp1s0f0np0): link status definitely up, 10000 Mbps full duplex Dec 13 02:25:11.556000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:25:11.556000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:25:11.557570 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 02:25:11.557655 systemd[1]: Finished modprobe@loop.service. Dec 13 02:25:11.564000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:25:11.564000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:25:11.565574 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 02:25:11.565636 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Dec 13 02:25:11.566159 systemd[1]: Finished systemd-sysext.service. Dec 13 02:25:11.574000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:25:11.576113 systemd[1]: Starting ensure-sysext.service... Dec 13 02:25:11.583913 systemd[1]: Starting systemd-tmpfiles-setup.service... Dec 13 02:25:11.589608 systemd-tmpfiles[1474]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Dec 13 02:25:11.590998 systemd-tmpfiles[1474]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Dec 13 02:25:11.592069 systemd-tmpfiles[1474]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Dec 13 02:25:11.594435 systemd[1]: Reloading. 
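Note: the "(sd-sysext)" entries above show systemd-sysext merging an extension image named 'kubernetes' into /usr. A minimal sketch of what such an extension tree has to contain for the merge to be accepted (paths and release fields below are illustrative; only the extension name 'kubernetes' comes from the log):

    /var/lib/extensions/kubernetes/
        usr/bin/kubelet
        usr/lib/extension-release.d/extension-release.kubernetes
            # must carry fields matching the host's os-release, e.g.:
            # ID=flatcar
            # SYSEXT_LEVEL=1.0

    systemd-sysext status    # list currently merged extensions
    systemd-sysext refresh   # re-merge after adding or removing images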
Dec 13 02:25:11.613900 /usr/lib/systemd/system-generators/torcx-generator[1494]: time="2024-12-13T02:25:11Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.6 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.6 /var/lib/torcx/store]" Dec 13 02:25:11.613925 /usr/lib/systemd/system-generators/torcx-generator[1494]: time="2024-12-13T02:25:11Z" level=info msg="torcx already run" Dec 13 02:25:11.632260 systemd-networkd[1405]: enp1s0f0np0: Link UP Dec 13 02:25:11.632438 systemd-networkd[1405]: bond0: Gained carrier Dec 13 02:25:11.632526 systemd-networkd[1405]: enp1s0f0np0: Gained carrier Dec 13 02:25:11.649700 systemd-networkd[1405]: enp1s0f1np1: Link DOWN Dec 13 02:25:11.649703 systemd-networkd[1405]: enp1s0f1np1: Lost carrier Dec 13 02:25:11.687885 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Dec 13 02:25:11.687893 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 13 02:25:11.695662 kernel: bond0: (slave enp1s0f1np1): link status down for interface, disabling it in 200 ms Dec 13 02:25:11.695688 kernel: bond0: (slave enp1s0f1np1): invalid new link 1 on slave Dec 13 02:25:11.700150 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 02:25:11.740729 systemd[1]: Finished systemd-tmpfiles-setup.service. Dec 13 02:25:11.748000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Dec 13 02:25:11.750972 systemd[1]: Starting audit-rules.service... Dec 13 02:25:11.758001 systemd[1]: Starting clean-ca-certificates.service... Dec 13 02:25:11.765000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Dec 13 02:25:11.765000 audit[1578]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffc443e9670 a2=420 a3=0 items=0 ppid=1561 pid=1578 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Dec 13 02:25:11.765000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Dec 13 02:25:11.766920 augenrules[1578]: No rules Dec 13 02:25:11.767091 systemd[1]: Starting systemd-journal-catalog-update.service... Dec 13 02:25:11.776247 systemd[1]: Starting systemd-resolved.service... Dec 13 02:25:11.784206 systemd[1]: Starting systemd-timesyncd.service... Dec 13 02:25:11.791952 systemd[1]: Starting systemd-update-utmp.service... Dec 13 02:25:11.798717 systemd[1]: Finished audit-rules.service. Dec 13 02:25:11.807590 systemd[1]: Finished clean-ca-certificates.service. Dec 13 02:25:11.816478 systemd[1]: Finished systemd-journal-catalog-update.service. Dec 13 02:25:11.829306 systemd[1]: Starting systemd-update-done.service... 
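Note: the audit PROCTITLE record a few entries back stores the command line of the process that loaded the rules, hex-encoded with NUL separators between arguments. Decoding it with standard tools (a sketch):

    echo 2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 \
        | xxd -r -p | tr '\0' ' ' && echo
    # -> /sbin/auditctl -R /etc/audit/audit.rules

which matches the auditctl/augenrules activity logged around audit-rules.service.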
Dec 13 02:25:11.844296 kernel: mlx5_core 0000:01:00.1 enp1s0f1np1: Link up Dec 13 02:25:11.857385 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Dec 13 02:25:11.857902 systemd[1]: Finished systemd-update-done.service. Dec 13 02:25:11.864294 kernel: bond0: (slave enp1s0f1np1): speed changed to 0 on port 1 Dec 13 02:25:11.865575 systemd-networkd[1405]: enp1s0f1np1: Link UP Dec 13 02:25:11.865729 systemd-networkd[1405]: enp1s0f1np1: Gained carrier Dec 13 02:25:11.873849 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Dec 13 02:25:11.874506 systemd[1]: Starting modprobe@dm_mod.service... Dec 13 02:25:11.881959 systemd[1]: Starting modprobe@efi_pstore.service... Dec 13 02:25:11.888891 systemd[1]: Starting modprobe@loop.service... Dec 13 02:25:11.894620 systemd-resolved[1585]: Positive Trust Anchors: Dec 13 02:25:11.894626 systemd-resolved[1585]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 13 02:25:11.894645 systemd-resolved[1585]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Dec 13 02:25:11.895409 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Dec 13 02:25:11.895484 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 02:25:11.895542 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Dec 13 02:25:11.896053 systemd[1]: Started systemd-timesyncd.service. Dec 13 02:25:11.898392 systemd-resolved[1585]: Using system hostname 'ci-3510.3.6-a-cefcb26589'. Dec 13 02:25:11.904630 systemd[1]: Started systemd-resolved.service. Dec 13 02:25:11.919640 systemd[1]: Finished systemd-update-utmp.service. Dec 13 02:25:11.924339 kernel: bond0: (slave enp1s0f1np1): link status up again after 200 ms Dec 13 02:25:11.940586 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 02:25:11.940667 systemd[1]: Finished modprobe@dm_mod.service. Dec 13 02:25:11.945349 kernel: bond0: (slave enp1s0f1np1): link status definitely up, 10000 Mbps full duplex Dec 13 02:25:11.952586 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 02:25:11.952664 systemd[1]: Finished modprobe@efi_pstore.service. Dec 13 02:25:11.960566 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 02:25:11.960650 systemd[1]: Finished modprobe@loop.service. Dec 13 02:25:11.969586 systemd[1]: Reached target network.target. Dec 13 02:25:11.977420 systemd[1]: Reached target nss-lookup.target. Dec 13 02:25:11.985420 systemd[1]: Reached target time-set.target. Dec 13 02:25:11.993516 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Dec 13 02:25:11.994181 systemd[1]: Starting modprobe@dm_mod.service... 
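Note: the "Positive Trust Anchors" entry is systemd-resolved's built-in copy of the DNS root zone KSK (key tag 20326, algorithm 8/RSASHA256, digest type 2/SHA-256), while the negative anchors disable DNSSEC validation for private-use and locally served zones. One way to cross-check the anchor against IANA's published copy (sketch; requires network access):

    curl -s https://data.iana.org/root-anchors/root-anchors.xml | grep -i -A2 '20326'
    # the <Digest> element should match the e06d44b8... digest logged above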
Dec 13 02:25:12.001937 systemd[1]: Starting modprobe@efi_pstore.service... Dec 13 02:25:12.008906 systemd[1]: Starting modprobe@loop.service... Dec 13 02:25:12.015386 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Dec 13 02:25:12.015453 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 02:25:12.015513 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Dec 13 02:25:12.016050 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 02:25:12.016130 systemd[1]: Finished modprobe@dm_mod.service. Dec 13 02:25:12.024556 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 02:25:12.024630 systemd[1]: Finished modprobe@efi_pstore.service. Dec 13 02:25:12.032554 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 02:25:12.032626 systemd[1]: Finished modprobe@loop.service. Dec 13 02:25:12.040499 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 02:25:12.040571 systemd[1]: Reached target sysinit.target. Dec 13 02:25:12.048407 systemd[1]: Started motdgen.path. Dec 13 02:25:12.055383 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. Dec 13 02:25:12.065447 systemd[1]: Started logrotate.timer. Dec 13 02:25:12.072407 systemd[1]: Started mdadm.timer. Dec 13 02:25:12.079364 systemd[1]: Started systemd-tmpfiles-clean.timer. Dec 13 02:25:12.087344 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Dec 13 02:25:12.087401 systemd[1]: Reached target paths.target. Dec 13 02:25:12.094359 systemd[1]: Reached target timers.target. Dec 13 02:25:12.101500 systemd[1]: Listening on dbus.socket. Dec 13 02:25:12.108898 systemd[1]: Starting docker.socket... Dec 13 02:25:12.116119 systemd[1]: Listening on sshd.socket. Dec 13 02:25:12.122393 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 02:25:12.122452 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Dec 13 02:25:12.124215 systemd[1]: Listening on docker.socket. Dec 13 02:25:12.132190 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Dec 13 02:25:12.132248 systemd[1]: Reached target sockets.target. Dec 13 02:25:12.140516 systemd[1]: Reached target basic.target. Dec 13 02:25:12.147448 systemd[1]: System is tainted: cgroupsv1 Dec 13 02:25:12.147465 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 02:25:12.147517 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. Dec 13 02:25:12.147566 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. Dec 13 02:25:12.148153 systemd[1]: Starting containerd.service... Dec 13 02:25:12.155819 systemd[1]: Starting coreos-metadata-sshkeys@core.service... Dec 13 02:25:12.164933 systemd[1]: Starting coreos-metadata.service... Dec 13 02:25:12.171985 systemd[1]: Starting dbus.service... 
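Note: "System is tainted: cgroupsv1" means PID 1 is running on the legacy cgroup hierarchy. A quick check that distinguishes v1 from the unified hierarchy (sketch):

    stat -fc %T /sys/fs/cgroup
    # 'tmpfs'     -> legacy or hybrid hierarchy (cgroup v1, as on this host)
    # 'cgroup2fs' -> unified hierarchy (cgroup v2)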
Dec 13 02:25:12.178284 systemd[1]: Starting enable-oem-cloudinit.service... Dec 13 02:25:12.182560 jq[1618]: false Dec 13 02:25:12.186370 systemd[1]: Starting extend-filesystems.service... Dec 13 02:25:12.186579 coreos-metadata[1611]: Dec 13 02:25:12.186 INFO Fetching https://metadata.packet.net/metadata: Attempt #1 Dec 13 02:25:12.190933 dbus-daemon[1617]: [system] SELinux support is enabled Dec 13 02:25:12.193416 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). Dec 13 02:25:12.194189 extend-filesystems[1620]: Found loop1 Dec 13 02:25:12.215414 extend-filesystems[1620]: Found sda Dec 13 02:25:12.215414 extend-filesystems[1620]: Found sda1 Dec 13 02:25:12.215414 extend-filesystems[1620]: Found sda2 Dec 13 02:25:12.215414 extend-filesystems[1620]: Found sda3 Dec 13 02:25:12.215414 extend-filesystems[1620]: Found usr Dec 13 02:25:12.215414 extend-filesystems[1620]: Found sda4 Dec 13 02:25:12.215414 extend-filesystems[1620]: Found sda6 Dec 13 02:25:12.215414 extend-filesystems[1620]: Found sda7 Dec 13 02:25:12.215414 extend-filesystems[1620]: Found sda9 Dec 13 02:25:12.215414 extend-filesystems[1620]: Checking size of /dev/sda9 Dec 13 02:25:12.215414 extend-filesystems[1620]: Resized partition /dev/sda9 Dec 13 02:25:12.339341 kernel: EXT4-fs (sda9): resizing filesystem from 553472 to 116605649 blocks Dec 13 02:25:12.194255 systemd[1]: Starting modprobe@drm.service... Dec 13 02:25:12.339423 coreos-metadata[1614]: Dec 13 02:25:12.194 INFO Fetching https://metadata.packet.net/metadata: Attempt #1 Dec 13 02:25:12.339557 extend-filesystems[1631]: resize2fs 1.46.5 (30-Dec-2021) Dec 13 02:25:12.202189 systemd[1]: Starting motdgen.service... Dec 13 02:25:12.231410 systemd[1]: Starting prepare-helm.service... Dec 13 02:25:12.247276 systemd[1]: Starting ssh-key-proc-cmdline.service... Dec 13 02:25:12.255103 systemd[1]: Starting sshd-keygen.service... Dec 13 02:25:12.278151 systemd[1]: Starting systemd-networkd-wait-online.service... Dec 13 02:25:12.284515 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Dec 13 02:25:12.285223 systemd[1]: Starting tcsd.service... Dec 13 02:25:12.312124 systemd[1]: Starting update-engine.service... Dec 13 02:25:12.356861 jq[1655]: true Dec 13 02:25:12.331079 systemd[1]: Starting update-ssh-keys-after-ignition.service... Dec 13 02:25:12.347365 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 02:25:12.348967 systemd[1]: Started dbus.service. Dec 13 02:25:12.358898 update_engine[1654]: I1213 02:25:12.358505 1654 main.cc:92] Flatcar Update Engine starting Dec 13 02:25:12.361578 update_engine[1654]: I1213 02:25:12.361541 1654 update_check_scheduler.cc:74] Next update check in 8m29s Dec 13 02:25:12.365284 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Dec 13 02:25:12.365410 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. Dec 13 02:25:12.365684 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 13 02:25:12.365764 systemd[1]: Finished modprobe@drm.service. Dec 13 02:25:12.374558 systemd[1]: motdgen.service: Deactivated successfully. Dec 13 02:25:12.374676 systemd[1]: Finished motdgen.service. 
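Note: extend-filesystems.service above grows the root filesystem to fill its partition; the kernel line shows an online resize of sda9 from 553472 to 116605649 4k blocks. The equivalent manual steps would look roughly like this (sketch; growpart is only needed when the partition itself must grow first, which this log does not show):

    # growpart /dev/sda 9      # hypothetical: extend partition 9 first (cloud-utils)
    resize2fs /dev/sda9        # ext4 supports growing online while mounted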
Dec 13 02:25:12.381957 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Dec 13 02:25:12.382074 systemd[1]: Finished ssh-key-proc-cmdline.service. Dec 13 02:25:12.393452 jq[1662]: true Dec 13 02:25:12.393687 systemd[1]: Finished ensure-sysext.service. Dec 13 02:25:12.401619 systemd[1]: tcsd.service: Skipped due to 'exec-condition'. Dec 13 02:25:12.401748 systemd[1]: Condition check resulted in tcsd.service being skipped. Dec 13 02:25:12.402500 tar[1660]: linux-amd64/helm Dec 13 02:25:12.404761 env[1663]: time="2024-12-13T02:25:12.404735933Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 Dec 13 02:25:12.409455 systemd[1]: Started update-engine.service. Dec 13 02:25:12.413587 env[1663]: time="2024-12-13T02:25:12.413568365Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Dec 13 02:25:12.419448 env[1663]: time="2024-12-13T02:25:12.419408682Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Dec 13 02:25:12.420048 systemd[1]: Started locksmithd.service. Dec 13 02:25:12.420104 env[1663]: time="2024-12-13T02:25:12.420048138Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.173-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Dec 13 02:25:12.420104 env[1663]: time="2024-12-13T02:25:12.420066876Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Dec 13 02:25:12.421955 env[1663]: time="2024-12-13T02:25:12.421938069Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 02:25:12.421983 env[1663]: time="2024-12-13T02:25:12.421955753Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Dec 13 02:25:12.421983 env[1663]: time="2024-12-13T02:25:12.421968568Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Dec 13 02:25:12.421983 env[1663]: time="2024-12-13T02:25:12.421978693Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Dec 13 02:25:12.422060 env[1663]: time="2024-12-13T02:25:12.422049135Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Dec 13 02:25:12.424104 env[1663]: time="2024-12-13T02:25:12.424091527Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Dec 13 02:25:12.424240 env[1663]: time="2024-12-13T02:25:12.424224770Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 02:25:12.424264 env[1663]: time="2024-12-13T02:25:12.424241683Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." 
type=io.containerd.metadata.v1 Dec 13 02:25:12.424299 env[1663]: time="2024-12-13T02:25:12.424283740Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Dec 13 02:25:12.424334 env[1663]: time="2024-12-13T02:25:12.424300899Z" level=info msg="metadata content store policy set" policy=shared Dec 13 02:25:12.426370 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Dec 13 02:25:12.426387 systemd[1]: Reached target system-config.target. Dec 13 02:25:12.430067 bash[1698]: Updated "/home/core/.ssh/authorized_keys" Dec 13 02:25:12.433388 env[1663]: time="2024-12-13T02:25:12.433372753Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Dec 13 02:25:12.433425 env[1663]: time="2024-12-13T02:25:12.433394320Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Dec 13 02:25:12.433425 env[1663]: time="2024-12-13T02:25:12.433407297Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Dec 13 02:25:12.433473 env[1663]: time="2024-12-13T02:25:12.433430800Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Dec 13 02:25:12.433473 env[1663]: time="2024-12-13T02:25:12.433439695Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Dec 13 02:25:12.433473 env[1663]: time="2024-12-13T02:25:12.433450051Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Dec 13 02:25:12.433473 env[1663]: time="2024-12-13T02:25:12.433457640Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Dec 13 02:25:12.433473 env[1663]: time="2024-12-13T02:25:12.433466180Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Dec 13 02:25:12.433473 env[1663]: time="2024-12-13T02:25:12.433473172Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 Dec 13 02:25:12.433565 env[1663]: time="2024-12-13T02:25:12.433480649Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Dec 13 02:25:12.433565 env[1663]: time="2024-12-13T02:25:12.433487086Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Dec 13 02:25:12.433565 env[1663]: time="2024-12-13T02:25:12.433495165Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Dec 13 02:25:12.433565 env[1663]: time="2024-12-13T02:25:12.433544011Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Dec 13 02:25:12.433639 env[1663]: time="2024-12-13T02:25:12.433589066Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Dec 13 02:25:12.433789 env[1663]: time="2024-12-13T02:25:12.433772712Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Dec 13 02:25:12.433811 env[1663]: time="2024-12-13T02:25:12.433801208Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." 
type=io.containerd.grpc.v1 Dec 13 02:25:12.433834 env[1663]: time="2024-12-13T02:25:12.433814408Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Dec 13 02:25:12.433864 env[1663]: time="2024-12-13T02:25:12.433855526Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Dec 13 02:25:12.433884 env[1663]: time="2024-12-13T02:25:12.433867777Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Dec 13 02:25:12.433884 env[1663]: time="2024-12-13T02:25:12.433880019Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Dec 13 02:25:12.433917 env[1663]: time="2024-12-13T02:25:12.433890461Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Dec 13 02:25:12.433917 env[1663]: time="2024-12-13T02:25:12.433901083Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Dec 13 02:25:12.433917 env[1663]: time="2024-12-13T02:25:12.433912235Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Dec 13 02:25:12.433960 env[1663]: time="2024-12-13T02:25:12.433922673Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Dec 13 02:25:12.433960 env[1663]: time="2024-12-13T02:25:12.433933170Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Dec 13 02:25:12.433960 env[1663]: time="2024-12-13T02:25:12.433945903Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Dec 13 02:25:12.434046 env[1663]: time="2024-12-13T02:25:12.434036734Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Dec 13 02:25:12.434075 env[1663]: time="2024-12-13T02:25:12.434050391Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Dec 13 02:25:12.434075 env[1663]: time="2024-12-13T02:25:12.434062352Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Dec 13 02:25:12.434111 env[1663]: time="2024-12-13T02:25:12.434073088Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Dec 13 02:25:12.434111 env[1663]: time="2024-12-13T02:25:12.434085322Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Dec 13 02:25:12.434111 env[1663]: time="2024-12-13T02:25:12.434096320Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Dec 13 02:25:12.434165 env[1663]: time="2024-12-13T02:25:12.434110971Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" Dec 13 02:25:12.434165 env[1663]: time="2024-12-13T02:25:12.434139098Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Dec 13 02:25:12.434347 env[1663]: time="2024-12-13T02:25:12.434303840Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Dec 13 02:25:12.436467 env[1663]: time="2024-12-13T02:25:12.434358079Z" level=info msg="Connect containerd service" Dec 13 02:25:12.436467 env[1663]: time="2024-12-13T02:25:12.434384454Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Dec 13 02:25:12.436467 env[1663]: time="2024-12-13T02:25:12.434755223Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 13 02:25:12.436467 env[1663]: time="2024-12-13T02:25:12.434842056Z" level=info msg="Start subscribing containerd event" Dec 13 02:25:12.436467 env[1663]: time="2024-12-13T02:25:12.434863245Z" level=info msg="Start recovering state" Dec 13 02:25:12.436467 env[1663]: time="2024-12-13T02:25:12.434906723Z" level=info msg="Start event monitor" Dec 13 02:25:12.436467 env[1663]: time="2024-12-13T02:25:12.434909996Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Dec 13 02:25:12.436467 env[1663]: time="2024-12-13T02:25:12.435107079Z" level=info msg="Start snapshots syncer" Dec 13 02:25:12.436467 env[1663]: time="2024-12-13T02:25:12.435118011Z" level=info msg=serving... 
address=/run/containerd/containerd.sock Dec 13 02:25:12.436467 env[1663]: time="2024-12-13T02:25:12.435131087Z" level=info msg="Start cni network conf syncer for default" Dec 13 02:25:12.436467 env[1663]: time="2024-12-13T02:25:12.435147099Z" level=info msg="Start streaming server" Dec 13 02:25:12.436467 env[1663]: time="2024-12-13T02:25:12.435165436Z" level=info msg="containerd successfully booted in 0.030850s" Dec 13 02:25:12.435602 systemd[1]: Starting systemd-logind.service... Dec 13 02:25:12.442375 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Dec 13 02:25:12.442397 systemd[1]: Reached target user-config.target. Dec 13 02:25:12.447154 sshd_keygen[1651]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Dec 13 02:25:12.450507 systemd[1]: Started containerd.service. Dec 13 02:25:12.457568 systemd[1]: Finished update-ssh-keys-after-ignition.service. Dec 13 02:25:12.461335 systemd-logind[1706]: Watching system buttons on /dev/input/event3 (Power Button) Dec 13 02:25:12.461350 systemd-logind[1706]: Watching system buttons on /dev/input/event2 (Sleep Button) Dec 13 02:25:12.461366 systemd-logind[1706]: Watching system buttons on /dev/input/event0 (HID 0557:2419) Dec 13 02:25:12.461500 systemd-logind[1706]: New seat seat0. Dec 13 02:25:12.467645 systemd[1]: Finished sshd-keygen.service. Dec 13 02:25:12.475571 systemd[1]: Started systemd-logind.service. Dec 13 02:25:12.481674 locksmithd[1701]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Dec 13 02:25:12.484301 systemd[1]: Starting issuegen.service... Dec 13 02:25:12.491600 systemd[1]: issuegen.service: Deactivated successfully. Dec 13 02:25:12.491713 systemd[1]: Finished issuegen.service. Dec 13 02:25:12.499260 systemd[1]: Starting systemd-user-sessions.service... Dec 13 02:25:12.507655 systemd[1]: Finished systemd-user-sessions.service. Dec 13 02:25:12.516216 systemd[1]: Started getty@tty1.service. Dec 13 02:25:12.524152 systemd[1]: Started serial-getty@ttyS1.service. Dec 13 02:25:12.532492 systemd[1]: Reached target getty.target. Dec 13 02:25:12.649442 systemd-networkd[1405]: bond0: Gained IPv6LL Dec 13 02:25:12.652478 tar[1660]: linux-amd64/LICENSE Dec 13 02:25:12.652534 tar[1660]: linux-amd64/README.md Dec 13 02:25:12.655295 systemd[1]: Finished prepare-helm.service. Dec 13 02:25:12.711322 kernel: EXT4-fs (sda9): resized filesystem to 116605649 Dec 13 02:25:12.739400 extend-filesystems[1631]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required Dec 13 02:25:12.739400 extend-filesystems[1631]: old_desc_blocks = 1, new_desc_blocks = 56 Dec 13 02:25:12.739400 extend-filesystems[1631]: The filesystem on /dev/sda9 is now 116605649 (4k) blocks long. Dec 13 02:25:12.776486 extend-filesystems[1620]: Resized filesystem in /dev/sda9 Dec 13 02:25:12.776486 extend-filesystems[1620]: Found sdb Dec 13 02:25:12.739821 systemd[1]: extend-filesystems.service: Deactivated successfully. Dec 13 02:25:12.739940 systemd[1]: Finished extend-filesystems.service. Dec 13 02:25:13.482840 systemd[1]: Finished systemd-networkd-wait-online.service. Dec 13 02:25:13.492672 systemd[1]: Reached target network-online.target. Dec 13 02:25:13.501450 systemd[1]: Starting kubelet.service... Dec 13 02:25:14.095378 kernel: mlx5_core 0000:01:00.0: lag map port 1:1 port 2:2 shared_fdb:0 Dec 13 02:25:14.252499 systemd[1]: Started kubelet.service. 
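Note: the containerd startup dump above reports several snapshotters skipped (aufs, btrfs, zfs) with overlayfs selected, and shows SystemdCgroup:false in the runc runtime options. Plugin status can be inspected directly, and on hosts that use the systemd cgroup driver the option is commonly flipped in the CRI config (sketch; the TOML path follows the containerd 1.6 layout named in the log):

    ctr plugins ls               # columns: TYPE / ID / PLATFORMS / STATUS (ok, skip, error)

    # /etc/containerd/config.toml
    [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
      SystemdCgroup = true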
Dec 13 02:25:14.972533 kubelet[1750]: E1213 02:25:14.972463 1750 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 02:25:14.973956 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 02:25:14.974057 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 02:25:17.546601 login[1730]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Dec 13 02:25:17.553042 login[1729]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Dec 13 02:25:17.568856 systemd-logind[1706]: New session 1 of user core. Dec 13 02:25:17.569447 systemd[1]: Created slice user-500.slice. Dec 13 02:25:17.569966 systemd[1]: Starting user-runtime-dir@500.service... Dec 13 02:25:17.571329 systemd-logind[1706]: New session 2 of user core. Dec 13 02:25:17.576022 systemd[1]: Finished user-runtime-dir@500.service. Dec 13 02:25:17.576651 systemd[1]: Starting user@500.service... Dec 13 02:25:17.578696 (systemd)[1771]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Dec 13 02:25:17.647738 systemd[1771]: Queued start job for default target default.target. Dec 13 02:25:17.647840 systemd[1771]: Reached target paths.target. Dec 13 02:25:17.647851 systemd[1771]: Reached target sockets.target. Dec 13 02:25:17.647859 systemd[1771]: Reached target timers.target. Dec 13 02:25:17.647866 systemd[1771]: Reached target basic.target. Dec 13 02:25:17.647885 systemd[1771]: Reached target default.target. Dec 13 02:25:17.647898 systemd[1771]: Startup finished in 66ms. Dec 13 02:25:17.647959 systemd[1]: Started user@500.service. Dec 13 02:25:17.648472 systemd[1]: Started session-1.scope. Dec 13 02:25:17.648763 systemd[1]: Started session-2.scope. Dec 13 02:25:18.026705 coreos-metadata[1611]: Dec 13 02:25:18.026 INFO Failed to fetch: error sending request for url (https://metadata.packet.net/metadata): error trying to connect: dns error: failed to lookup address information: Name or service not known Dec 13 02:25:18.027619 coreos-metadata[1614]: Dec 13 02:25:18.026 INFO Failed to fetch: error sending request for url (https://metadata.packet.net/metadata): error trying to connect: dns error: failed to lookup address information: Name or service not known Dec 13 02:25:19.026924 coreos-metadata[1611]: Dec 13 02:25:19.026 INFO Fetching https://metadata.packet.net/metadata: Attempt #2 Dec 13 02:25:19.027854 coreos-metadata[1614]: Dec 13 02:25:19.026 INFO Fetching https://metadata.packet.net/metadata: Attempt #2 Dec 13 02:25:19.575604 kernel: mlx5_core 0000:01:00.0: modify lag map port 1:2 port 2:2 Dec 13 02:25:19.575763 kernel: mlx5_core 0000:01:00.0: modify lag map port 1:1 port 2:2 Dec 13 02:25:20.321983 systemd[1]: Created slice system-sshd.slice. Dec 13 02:25:20.325117 systemd[1]: Started sshd@0-139.178.70.53:22-139.178.68.195:57796.service. Dec 13 02:25:19.620500 systemd-resolved[1585]: Clock change detected. Flushing caches. Dec 13 02:25:19.640980 systemd-journald[1292]: Time jumped backwards, rotating. Dec 13 02:25:19.620813 systemd-timesyncd[1587]: Contacted time server 135.148.100.14:123 (0.flatcar.pool.ntp.org). Dec 13 02:25:19.620981 systemd-timesyncd[1587]: Initial clock synchronization to Fri 2024-12-13 02:25:19.620384 UTC. 
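Note: the kubelet failure above ("/var/lib/kubelet/config.yaml: no such file or directory") is the expected state of a node that has not yet been initialized or joined; kubeadm writes that file during 'kubeadm init' or 'kubeadm join'. For reference, a minimal hand-written file of the expected kind would look like this (sketch; the single field shown is illustrative, not taken from this host):

    # /var/lib/kubelet/config.yaml  (normally generated by kubeadm)
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    cgroupDriver: systemd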
Dec 13 02:25:19.657517 sshd[1794]: Accepted publickey for core from 139.178.68.195 port 57796 ssh2: RSA SHA256:zlnHIdneqLCn2LAFHuCmziN2krffEws9kYgisk+y46U Dec 13 02:25:19.660829 sshd[1794]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 02:25:19.671789 systemd-logind[1706]: New session 3 of user core. Dec 13 02:25:19.674189 systemd[1]: Started session-3.scope. Dec 13 02:25:19.727683 systemd[1]: Started sshd@1-139.178.70.53:22-139.178.68.195:57806.service. Dec 13 02:25:19.762322 sshd[1800]: Accepted publickey for core from 139.178.68.195 port 57806 ssh2: RSA SHA256:zlnHIdneqLCn2LAFHuCmziN2krffEws9kYgisk+y46U Dec 13 02:25:19.763020 sshd[1800]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 02:25:19.765397 systemd-logind[1706]: New session 4 of user core. Dec 13 02:25:19.765905 systemd[1]: Started session-4.scope. Dec 13 02:25:19.816404 sshd[1800]: pam_unix(sshd:session): session closed for user core Dec 13 02:25:19.819010 systemd[1]: Started sshd@2-139.178.70.53:22-139.178.68.195:57820.service. Dec 13 02:25:19.819611 systemd[1]: sshd@1-139.178.70.53:22-139.178.68.195:57806.service: Deactivated successfully. Dec 13 02:25:19.820536 systemd-logind[1706]: Session 4 logged out. Waiting for processes to exit. Dec 13 02:25:19.820586 systemd[1]: session-4.scope: Deactivated successfully. Dec 13 02:25:19.821725 systemd-logind[1706]: Removed session 4. Dec 13 02:25:19.854386 sshd[1806]: Accepted publickey for core from 139.178.68.195 port 57820 ssh2: RSA SHA256:zlnHIdneqLCn2LAFHuCmziN2krffEws9kYgisk+y46U Dec 13 02:25:19.855223 sshd[1806]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 02:25:19.858248 systemd-logind[1706]: New session 5 of user core. Dec 13 02:25:19.858864 systemd[1]: Started session-5.scope. Dec 13 02:25:19.913747 sshd[1806]: pam_unix(sshd:session): session closed for user core Dec 13 02:25:19.915034 systemd[1]: sshd@2-139.178.70.53:22-139.178.68.195:57820.service: Deactivated successfully. Dec 13 02:25:19.915572 systemd-logind[1706]: Session 5 logged out. Waiting for processes to exit. Dec 13 02:25:19.915581 systemd[1]: session-5.scope: Deactivated successfully. Dec 13 02:25:19.916215 systemd-logind[1706]: Removed session 5. Dec 13 02:25:20.214278 coreos-metadata[1614]: Dec 13 02:25:20.214 INFO Fetch successful Dec 13 02:25:20.291150 systemd[1]: Finished coreos-metadata.service. Dec 13 02:25:20.292026 systemd[1]: Started packet-phone-home.service. Dec 13 02:25:20.296992 curl[1819]: % Total % Received % Xferd Average Speed Time Time Time Current Dec 13 02:25:20.297105 curl[1819]: Dload Upload Total Spent Left Speed Dec 13 02:25:20.391146 coreos-metadata[1611]: Dec 13 02:25:20.391 INFO Fetch successful Dec 13 02:25:20.471400 unknown[1611]: wrote ssh authorized keys file for user: core Dec 13 02:25:20.483737 update-ssh-keys[1821]: Updated "/home/core/.ssh/authorized_keys" Dec 13 02:25:20.483993 systemd[1]: Finished coreos-metadata-sshkeys@core.service. Dec 13 02:25:20.484182 systemd[1]: Reached target multi-user.target. Dec 13 02:25:20.484969 systemd[1]: Starting systemd-update-utmp-runlevel.service... Dec 13 02:25:20.488907 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Dec 13 02:25:20.489015 systemd[1]: Finished systemd-update-utmp-runlevel.service. Dec 13 02:25:20.489150 systemd[1]: Startup finished in 25.363s (kernel) + 15.869s (userspace) = 41.233s. 
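Note: the "Startup finished in 25.363s (kernel) + 15.869s (userspace) = 41.233s" line is the same split that systemd-analyze reports after boot, and a per-unit breakdown is available alongside it (sketch):

    systemd-analyze                  # kernel / userspace / total split
    systemd-analyze blame            # units sorted by time spent initializing
    systemd-analyze critical-chain   # the dependency chain that gated boot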
Dec 13 02:25:20.613263 curl[1819]: \u000d 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0\u000d 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 Dec 13 02:25:20.615853 systemd[1]: packet-phone-home.service: Deactivated successfully. Dec 13 02:25:24.395245 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Dec 13 02:25:24.395813 systemd[1]: Stopped kubelet.service. Dec 13 02:25:24.399081 systemd[1]: Starting kubelet.service... Dec 13 02:25:24.586333 systemd[1]: Started kubelet.service. Dec 13 02:25:24.631981 kubelet[1836]: E1213 02:25:24.631956 1836 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 02:25:24.634308 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 02:25:24.634388 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 02:25:29.920988 systemd[1]: Started sshd@3-139.178.70.53:22-139.178.68.195:50024.service. Dec 13 02:25:29.953843 sshd[1858]: Accepted publickey for core from 139.178.68.195 port 50024 ssh2: RSA SHA256:zlnHIdneqLCn2LAFHuCmziN2krffEws9kYgisk+y46U Dec 13 02:25:29.954525 sshd[1858]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 02:25:29.956798 systemd-logind[1706]: New session 6 of user core. Dec 13 02:25:29.957203 systemd[1]: Started session-6.scope. Dec 13 02:25:30.009396 sshd[1858]: pam_unix(sshd:session): session closed for user core Dec 13 02:25:30.010833 systemd[1]: Started sshd@4-139.178.70.53:22-139.178.68.195:50026.service. Dec 13 02:25:30.011191 systemd[1]: sshd@3-139.178.70.53:22-139.178.68.195:50024.service: Deactivated successfully. Dec 13 02:25:30.011793 systemd-logind[1706]: Session 6 logged out. Waiting for processes to exit. Dec 13 02:25:30.011812 systemd[1]: session-6.scope: Deactivated successfully. Dec 13 02:25:30.012260 systemd-logind[1706]: Removed session 6. Dec 13 02:25:30.044514 sshd[1863]: Accepted publickey for core from 139.178.68.195 port 50026 ssh2: RSA SHA256:zlnHIdneqLCn2LAFHuCmziN2krffEws9kYgisk+y46U Dec 13 02:25:30.045195 sshd[1863]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 02:25:30.047775 systemd-logind[1706]: New session 7 of user core. Dec 13 02:25:30.048140 systemd[1]: Started session-7.scope. Dec 13 02:25:30.096387 sshd[1863]: pam_unix(sshd:session): session closed for user core Dec 13 02:25:30.099179 systemd[1]: Started sshd@5-139.178.70.53:22-139.178.68.195:50030.service. Dec 13 02:25:30.099943 systemd[1]: sshd@4-139.178.70.53:22-139.178.68.195:50026.service: Deactivated successfully. Dec 13 02:25:30.101182 systemd-logind[1706]: Session 7 logged out. Waiting for processes to exit. Dec 13 02:25:30.101228 systemd[1]: session-7.scope: Deactivated successfully. Dec 13 02:25:30.102406 systemd-logind[1706]: Removed session 7. Dec 13 02:25:30.177520 sshd[1870]: Accepted publickey for core from 139.178.68.195 port 50030 ssh2: RSA SHA256:zlnHIdneqLCn2LAFHuCmziN2krffEws9kYgisk+y46U Dec 13 02:25:30.179483 sshd[1870]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 02:25:30.186073 systemd-logind[1706]: New session 8 of user core. Dec 13 02:25:30.187347 systemd[1]: Started session-8.scope. 
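Note: "kubelet.service: Scheduled restart job, restart counter is at 1" means the unit carries Restart= logic, so systemd relaunches it after each config-file failure. The effective directives can be read from the unit and its drop-ins (sketch; the Restart values shown are illustrative, not read from this host):

    systemctl cat kubelet.service
    # [Service]
    # Restart=always          # hypothetical
    # RestartSec=10           # hypothetical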
Dec 13 02:25:30.249127 sshd[1870]: pam_unix(sshd:session): session closed for user core Dec 13 02:25:30.250575 systemd[1]: Started sshd@6-139.178.70.53:22-139.178.68.195:50038.service. Dec 13 02:25:30.250876 systemd[1]: sshd@5-139.178.70.53:22-139.178.68.195:50030.service: Deactivated successfully. Dec 13 02:25:30.251350 systemd-logind[1706]: Session 8 logged out. Waiting for processes to exit. Dec 13 02:25:30.251387 systemd[1]: session-8.scope: Deactivated successfully. Dec 13 02:25:30.251840 systemd-logind[1706]: Removed session 8. Dec 13 02:25:30.284373 sshd[1877]: Accepted publickey for core from 139.178.68.195 port 50038 ssh2: RSA SHA256:zlnHIdneqLCn2LAFHuCmziN2krffEws9kYgisk+y46U Dec 13 02:25:30.285272 sshd[1877]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 02:25:30.288374 systemd-logind[1706]: New session 9 of user core. Dec 13 02:25:30.289022 systemd[1]: Started session-9.scope. Dec 13 02:25:30.372983 sudo[1883]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Dec 13 02:25:30.373692 sudo[1883]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Dec 13 02:25:30.398852 systemd[1]: Starting docker.service... Dec 13 02:25:30.415933 env[1897]: time="2024-12-13T02:25:30.415906469Z" level=info msg="Starting up" Dec 13 02:25:30.416606 env[1897]: time="2024-12-13T02:25:30.416593725Z" level=info msg="parsed scheme: \"unix\"" module=grpc Dec 13 02:25:30.416606 env[1897]: time="2024-12-13T02:25:30.416604820Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Dec 13 02:25:30.416650 env[1897]: time="2024-12-13T02:25:30.416618028Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Dec 13 02:25:30.416650 env[1897]: time="2024-12-13T02:25:30.416624962Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Dec 13 02:25:30.417503 env[1897]: time="2024-12-13T02:25:30.417493509Z" level=info msg="parsed scheme: \"unix\"" module=grpc Dec 13 02:25:30.417503 env[1897]: time="2024-12-13T02:25:30.417502191Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Dec 13 02:25:30.417550 env[1897]: time="2024-12-13T02:25:30.417513039Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Dec 13 02:25:30.417550 env[1897]: time="2024-12-13T02:25:30.417519087Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Dec 13 02:25:30.564381 env[1897]: time="2024-12-13T02:25:30.564177256Z" level=warning msg="Your kernel does not support cgroup blkio weight" Dec 13 02:25:30.564381 env[1897]: time="2024-12-13T02:25:30.564228087Z" level=warning msg="Your kernel does not support cgroup blkio weight_device" Dec 13 02:25:30.564799 env[1897]: time="2024-12-13T02:25:30.564609797Z" level=info msg="Loading containers: start." Dec 13 02:25:30.827462 kernel: Initializing XFRM netlink socket Dec 13 02:25:30.866895 env[1897]: time="2024-12-13T02:25:30.866850416Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address" Dec 13 02:25:30.926957 systemd-networkd[1405]: docker0: Link UP Dec 13 02:25:30.952935 env[1897]: time="2024-12-13T02:25:30.952841242Z" level=info msg="Loading containers: done." 
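Note: Docker's message that the default bridge (docker0) gets 172.17.0.0/16 points at the --bip option for changing it. The usual persistent form is the daemon config file (sketch; the replacement subnet is arbitrary):

    # /etc/docker/daemon.json
    { "bip": "172.18.0.1/16" }

    systemctl restart docker     # docker0 is re-created with the new address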
Dec 13 02:25:30.972902 env[1897]: time="2024-12-13T02:25:30.972783603Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Dec 13 02:25:30.973227 env[1897]: time="2024-12-13T02:25:30.973178621Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23 Dec 13 02:25:30.973484 env[1897]: time="2024-12-13T02:25:30.973407718Z" level=info msg="Daemon has completed initialization" Dec 13 02:25:30.999596 systemd[1]: Started docker.service. Dec 13 02:25:31.015775 env[1897]: time="2024-12-13T02:25:31.015632242Z" level=info msg="API listen on /run/docker.sock" Dec 13 02:25:32.223528 env[1663]: time="2024-12-13T02:25:32.223394650Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.12\"" Dec 13 02:25:32.823884 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4160387892.mount: Deactivated successfully. Dec 13 02:25:34.093905 env[1663]: time="2024-12-13T02:25:34.093849709Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.29.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:25:34.094682 env[1663]: time="2024-12-13T02:25:34.094628543Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:92fbbe8caf9c923e0406b93c082b9e7af30032ace2d836c785633f90514bfefa,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:25:34.095819 env[1663]: time="2024-12-13T02:25:34.095778384Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.29.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:25:34.096879 env[1663]: time="2024-12-13T02:25:34.096837125Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:2804b1e7b9e08f3a3468f8fd2f6487c55968b9293ee51b9efb865b3298acfa26,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:25:34.097322 env[1663]: time="2024-12-13T02:25:34.097275207Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.12\" returns image reference \"sha256:92fbbe8caf9c923e0406b93c082b9e7af30032ace2d836c785633f90514bfefa\"" Dec 13 02:25:34.102785 env[1663]: time="2024-12-13T02:25:34.102714174Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.12\"" Dec 13 02:25:34.643933 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Dec 13 02:25:34.644060 systemd[1]: Stopped kubelet.service. Dec 13 02:25:34.644996 systemd[1]: Starting kubelet.service... Dec 13 02:25:34.830247 systemd[1]: Started kubelet.service. Dec 13 02:25:34.854979 kubelet[2068]: E1213 02:25:34.854950 2068 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 02:25:34.856221 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 02:25:34.856306 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
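Note: the PullImage/ImageCreate pairs above are containerd's CRI plugin fetching control-plane images by tag and recording their digests. The same pull can be reproduced against the CRI namespace from the shell (sketch; the image ref is taken from the log):

    ctr -n k8s.io images pull registry.k8s.io/kube-apiserver:v1.29.12
    ctr -n k8s.io images ls | grep kube-apiserver   # shows the sha256 digest recorded above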
Dec 13 02:25:35.695741 env[1663]: time="2024-12-13T02:25:35.695680129Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.29.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:25:35.696365 env[1663]: time="2024-12-13T02:25:35.696330031Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:f3b58a53109c96b6bf82adb5973fefa4baec46e2e9ee200be5cc03f3afbf127d,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:25:35.698163 env[1663]: time="2024-12-13T02:25:35.698144108Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.29.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:25:35.699210 env[1663]: time="2024-12-13T02:25:35.699195813Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:e2f26a3f5ef3fd01f6330cab8b078cf303cfb6d36911a210d0915d535910e412,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:25:35.700145 env[1663]: time="2024-12-13T02:25:35.700130392Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.12\" returns image reference \"sha256:f3b58a53109c96b6bf82adb5973fefa4baec46e2e9ee200be5cc03f3afbf127d\"" Dec 13 02:25:35.707595 env[1663]: time="2024-12-13T02:25:35.707538333Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.12\"" Dec 13 02:25:36.771297 env[1663]: time="2024-12-13T02:25:36.771239844Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.29.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:25:36.772366 env[1663]: time="2024-12-13T02:25:36.772345558Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:e6d3373aa79026111619cc6cc1ffff8b27006c56422e7c95724b03a61b530eaf,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:25:36.774198 env[1663]: time="2024-12-13T02:25:36.774185415Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.29.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:25:36.775286 env[1663]: time="2024-12-13T02:25:36.775236506Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:ed66e2102f4705d45de7513decf3ac61879704984409323779d19e98b970568c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:25:36.775867 env[1663]: time="2024-12-13T02:25:36.775808626Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.12\" returns image reference \"sha256:e6d3373aa79026111619cc6cc1ffff8b27006c56422e7c95724b03a61b530eaf\"" Dec 13 02:25:36.784259 env[1663]: time="2024-12-13T02:25:36.784191154Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.12\"" Dec 13 02:25:37.680242 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount541465423.mount: Deactivated successfully. 
Dec 13 02:25:38.042213 env[1663]: time="2024-12-13T02:25:38.042129210Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.29.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:25:38.042847 env[1663]: time="2024-12-13T02:25:38.042779361Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:d699d5830022f9e67c3271d1c2af58eaede81e3567df82728b7d2a8bf12ed153,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:25:38.043396 env[1663]: time="2024-12-13T02:25:38.043354888Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.29.12,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:25:38.044053 env[1663]: time="2024-12-13T02:25:38.044012027Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:bc761494b78fa152a759457f42bc9b86ee9d18f5929bb127bd5f72f8e2112c39,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:25:38.044839 env[1663]: time="2024-12-13T02:25:38.044816432Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.12\" returns image reference \"sha256:d699d5830022f9e67c3271d1c2af58eaede81e3567df82728b7d2a8bf12ed153\"" Dec 13 02:25:38.050390 env[1663]: time="2024-12-13T02:25:38.050374417Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Dec 13 02:25:38.637804 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2092176441.mount: Deactivated successfully. Dec 13 02:25:39.331810 env[1663]: time="2024-12-13T02:25:39.331756545Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.11.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:25:39.332438 env[1663]: time="2024-12-13T02:25:39.332378628Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:25:39.333573 env[1663]: time="2024-12-13T02:25:39.333524648Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.11.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:25:39.334457 env[1663]: time="2024-12-13T02:25:39.334411477Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:25:39.334904 env[1663]: time="2024-12-13T02:25:39.334863554Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" Dec 13 02:25:39.341191 env[1663]: time="2024-12-13T02:25:39.341161619Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Dec 13 02:25:39.858997 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount217361389.mount: Deactivated successfully. 
Dec 13 02:25:39.859927 env[1663]: time="2024-12-13T02:25:39.859886752Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:25:39.860551 env[1663]: time="2024-12-13T02:25:39.860518658Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:25:39.861226 env[1663]: time="2024-12-13T02:25:39.861194007Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:25:39.861914 env[1663]: time="2024-12-13T02:25:39.861880365Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:25:39.862264 env[1663]: time="2024-12-13T02:25:39.862214304Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" Dec 13 02:25:39.868508 env[1663]: time="2024-12-13T02:25:39.868490913Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\"" Dec 13 02:25:40.437119 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount238278179.mount: Deactivated successfully. Dec 13 02:25:42.071643 env[1663]: time="2024-12-13T02:25:42.071586794Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.10-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:25:42.072267 env[1663]: time="2024-12-13T02:25:42.072220290Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:25:42.073584 env[1663]: time="2024-12-13T02:25:42.073548341Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.10-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:25:42.074464 env[1663]: time="2024-12-13T02:25:42.074430644Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:25:42.074929 env[1663]: time="2024-12-13T02:25:42.074883430Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\" returns image reference \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\"" Dec 13 02:25:44.197359 systemd[1]: Stopped kubelet.service. Dec 13 02:25:44.198699 systemd[1]: Starting kubelet.service... Dec 13 02:25:44.210388 systemd[1]: Reloading. 
Dec 13 02:25:44.242328 /usr/lib/systemd/system-generators/torcx-generator[2269]: time="2024-12-13T02:25:44Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.6 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.6 /var/lib/torcx/store]" Dec 13 02:25:44.242352 /usr/lib/systemd/system-generators/torcx-generator[2269]: time="2024-12-13T02:25:44Z" level=info msg="torcx already run" Dec 13 02:25:44.304764 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Dec 13 02:25:44.304773 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 13 02:25:44.318014 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 02:25:44.370592 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Dec 13 02:25:44.370632 systemd[1]: kubelet.service: Failed with result 'signal'. Dec 13 02:25:44.370759 systemd[1]: Stopped kubelet.service. Dec 13 02:25:44.371642 systemd[1]: Starting kubelet.service... Dec 13 02:25:44.566489 systemd[1]: Started kubelet.service. Dec 13 02:25:44.590034 kubelet[2348]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 02:25:44.590034 kubelet[2348]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Dec 13 02:25:44.590034 kubelet[2348]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 02:25:44.590269 kubelet[2348]: I1213 02:25:44.590030 2348 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 13 02:25:44.859744 kubelet[2348]: I1213 02:25:44.859703 2348 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Dec 13 02:25:44.859744 kubelet[2348]: I1213 02:25:44.859716 2348 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 13 02:25:44.859844 kubelet[2348]: I1213 02:25:44.859838 2348 server.go:919] "Client rotation is on, will bootstrap in background" Dec 13 02:25:44.877040 kubelet[2348]: E1213 02:25:44.877033 2348 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://139.178.70.53:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 139.178.70.53:6443: connect: connection refused Dec 13 02:25:44.877999 kubelet[2348]: I1213 02:25:44.877975 2348 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 13 02:25:44.902458 kubelet[2348]: I1213 02:25:44.902439 2348 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Dec 13 02:25:44.903423 kubelet[2348]: I1213 02:25:44.903375 2348 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 13 02:25:44.903559 kubelet[2348]: I1213 02:25:44.903519 2348 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Dec 13 02:25:44.903559 kubelet[2348]: I1213 02:25:44.903537 2348 topology_manager.go:138] "Creating topology manager with none policy" Dec 13 02:25:44.903559 kubelet[2348]: I1213 02:25:44.903545 2348 container_manager_linux.go:301] "Creating device plugin manager" Dec 13 02:25:44.903713 kubelet[2348]: I1213 02:25:44.903623 2348 state_mem.go:36] "Initialized new in-memory state store" Dec 13 02:25:44.903713 kubelet[2348]: I1213 02:25:44.903684 2348 kubelet.go:396] "Attempting to sync node with API server" Dec 13 02:25:44.903713 kubelet[2348]: I1213 02:25:44.903694 2348 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 13 02:25:44.903779 kubelet[2348]: I1213 02:25:44.903715 2348 kubelet.go:312] "Adding apiserver pod source" Dec 13 02:25:44.903779 kubelet[2348]: I1213 02:25:44.903724 2348 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 13 02:25:44.906687 kubelet[2348]: W1213 02:25:44.906628 2348 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://139.178.70.53:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.6-a-cefcb26589&limit=500&resourceVersion=0": dial tcp 139.178.70.53:6443: connect: connection refused Dec 13 02:25:44.906759 kubelet[2348]: E1213 02:25:44.906717 2348 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://139.178.70.53:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.6-a-cefcb26589&limit=500&resourceVersion=0": dial tcp 139.178.70.53:6443: connect: connection refused Dec 13 02:25:44.906759 kubelet[2348]: W1213 02:25:44.906714 2348 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://139.178.70.53:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 
139.178.70.53:6443: connect: connection refused Dec 13 02:25:44.906759 kubelet[2348]: E1213 02:25:44.906759 2348 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://139.178.70.53:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 139.178.70.53:6443: connect: connection refused Dec 13 02:25:44.907000 kubelet[2348]: I1213 02:25:44.906982 2348 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Dec 13 02:25:44.916378 kubelet[2348]: I1213 02:25:44.916362 2348 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Dec 13 02:25:44.916446 kubelet[2348]: W1213 02:25:44.916408 2348 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Dec 13 02:25:44.916838 kubelet[2348]: I1213 02:25:44.916823 2348 server.go:1256] "Started kubelet" Dec 13 02:25:44.916914 kubelet[2348]: I1213 02:25:44.916896 2348 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Dec 13 02:25:44.916955 kubelet[2348]: I1213 02:25:44.916932 2348 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 13 02:25:44.917167 kubelet[2348]: I1213 02:25:44.917150 2348 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 13 02:25:44.917996 kubelet[2348]: I1213 02:25:44.917959 2348 server.go:461] "Adding debug handlers to kubelet server" Dec 13 02:25:44.921402 kubelet[2348]: E1213 02:25:44.921384 2348 kubelet.go:1462] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 13 02:25:44.925075 kubelet[2348]: E1213 02:25:44.925064 2348 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://139.178.70.53:6443/api/v1/namespaces/default/events\": dial tcp 139.178.70.53:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-3510.3.6-a-cefcb26589.18109b6c0d22d2ce default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-3510.3.6-a-cefcb26589,UID:ci-3510.3.6-a-cefcb26589,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-3510.3.6-a-cefcb26589,},FirstTimestamp:2024-12-13 02:25:44.916800206 +0000 UTC m=+0.347428146,LastTimestamp:2024-12-13 02:25:44.916800206 +0000 UTC m=+0.347428146,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-3510.3.6-a-cefcb26589,}" Dec 13 02:25:44.927052 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped). 
Dec 13 02:25:44.927085 kubelet[2348]: I1213 02:25:44.927058 2348 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 13 02:25:44.927154 kubelet[2348]: I1213 02:25:44.927135 2348 volume_manager.go:291] "Starting Kubelet Volume Manager" Dec 13 02:25:44.927185 kubelet[2348]: I1213 02:25:44.927169 2348 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Dec 13 02:25:44.927185 kubelet[2348]: E1213 02:25:44.927171 2348 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-3510.3.6-a-cefcb26589\" not found" Dec 13 02:25:44.927240 kubelet[2348]: I1213 02:25:44.927207 2348 reconciler_new.go:29] "Reconciler: start to sync state" Dec 13 02:25:44.927377 kubelet[2348]: W1213 02:25:44.927357 2348 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://139.178.70.53:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 139.178.70.53:6443: connect: connection refused Dec 13 02:25:44.927431 kubelet[2348]: E1213 02:25:44.927385 2348 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://139.178.70.53:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 139.178.70.53:6443: connect: connection refused Dec 13 02:25:44.927431 kubelet[2348]: I1213 02:25:44.927424 2348 factory.go:221] Registration of the systemd container factory successfully Dec 13 02:25:44.927521 kubelet[2348]: I1213 02:25:44.927500 2348 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 13 02:25:44.927885 kubelet[2348]: E1213 02:25:44.927876 2348 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.70.53:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.6-a-cefcb26589?timeout=10s\": dial tcp 139.178.70.53:6443: connect: connection refused" interval="200ms" Dec 13 02:25:44.927928 kubelet[2348]: I1213 02:25:44.927921 2348 factory.go:221] Registration of the containerd container factory successfully Dec 13 02:25:44.935986 kubelet[2348]: I1213 02:25:44.935929 2348 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Dec 13 02:25:44.936639 kubelet[2348]: I1213 02:25:44.936616 2348 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Dec 13 02:25:44.936691 kubelet[2348]: I1213 02:25:44.936647 2348 status_manager.go:217] "Starting to sync pod status with apiserver" Dec 13 02:25:44.936691 kubelet[2348]: I1213 02:25:44.936657 2348 kubelet.go:2329] "Starting kubelet main sync loop" Dec 13 02:25:44.936775 kubelet[2348]: E1213 02:25:44.936719 2348 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 13 02:25:44.936951 kubelet[2348]: W1213 02:25:44.936938 2348 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://139.178.70.53:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 139.178.70.53:6443: connect: connection refused Dec 13 02:25:44.936987 kubelet[2348]: E1213 02:25:44.936957 2348 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://139.178.70.53:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 139.178.70.53:6443: connect: connection refused Dec 13 02:25:44.946987 kubelet[2348]: I1213 02:25:44.946977 2348 cpu_manager.go:214] "Starting CPU manager" policy="none" Dec 13 02:25:44.946987 kubelet[2348]: I1213 02:25:44.946986 2348 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Dec 13 02:25:44.947054 kubelet[2348]: I1213 02:25:44.946996 2348 state_mem.go:36] "Initialized new in-memory state store" Dec 13 02:25:44.948004 kubelet[2348]: I1213 02:25:44.947995 2348 policy_none.go:49] "None policy: Start" Dec 13 02:25:44.948256 kubelet[2348]: I1213 02:25:44.948248 2348 memory_manager.go:170] "Starting memorymanager" policy="None" Dec 13 02:25:44.948284 kubelet[2348]: I1213 02:25:44.948261 2348 state_mem.go:35] "Initializing new in-memory state store" Dec 13 02:25:44.950868 kubelet[2348]: I1213 02:25:44.950857 2348 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 13 02:25:44.950973 kubelet[2348]: I1213 02:25:44.950964 2348 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 13 02:25:44.951370 kubelet[2348]: E1213 02:25:44.951362 2348 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-3510.3.6-a-cefcb26589\" not found" Dec 13 02:25:45.031649 kubelet[2348]: I1213 02:25:45.031590 2348 kubelet_node_status.go:73] "Attempting to register node" node="ci-3510.3.6-a-cefcb26589" Dec 13 02:25:45.032391 kubelet[2348]: E1213 02:25:45.032340 2348 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://139.178.70.53:6443/api/v1/nodes\": dial tcp 139.178.70.53:6443: connect: connection refused" node="ci-3510.3.6-a-cefcb26589" Dec 13 02:25:45.037671 kubelet[2348]: I1213 02:25:45.037577 2348 topology_manager.go:215] "Topology Admit Handler" podUID="3b4474806864df1ca52cecb38e41031c" podNamespace="kube-system" podName="kube-apiserver-ci-3510.3.6-a-cefcb26589" Dec 13 02:25:45.041170 kubelet[2348]: I1213 02:25:45.041111 2348 topology_manager.go:215] "Topology Admit Handler" podUID="e71c6420da39f60c3c55e2b4bcaaa12f" podNamespace="kube-system" podName="kube-controller-manager-ci-3510.3.6-a-cefcb26589" Dec 13 02:25:45.044891 kubelet[2348]: I1213 02:25:45.044838 2348 topology_manager.go:215] "Topology Admit Handler" podUID="0de27d5b4bfdb5c84d47472397f0ff07" podNamespace="kube-system" 
podName="kube-scheduler-ci-3510.3.6-a-cefcb26589" Dec 13 02:25:45.128651 kubelet[2348]: I1213 02:25:45.128392 2348 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3b4474806864df1ca52cecb38e41031c-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3510.3.6-a-cefcb26589\" (UID: \"3b4474806864df1ca52cecb38e41031c\") " pod="kube-system/kube-apiserver-ci-3510.3.6-a-cefcb26589" Dec 13 02:25:45.128651 kubelet[2348]: I1213 02:25:45.128618 2348 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e71c6420da39f60c3c55e2b4bcaaa12f-ca-certs\") pod \"kube-controller-manager-ci-3510.3.6-a-cefcb26589\" (UID: \"e71c6420da39f60c3c55e2b4bcaaa12f\") " pod="kube-system/kube-controller-manager-ci-3510.3.6-a-cefcb26589" Dec 13 02:25:45.128985 kubelet[2348]: E1213 02:25:45.128745 2348 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.70.53:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.6-a-cefcb26589?timeout=10s\": dial tcp 139.178.70.53:6443: connect: connection refused" interval="400ms" Dec 13 02:25:45.128985 kubelet[2348]: I1213 02:25:45.128767 2348 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/e71c6420da39f60c3c55e2b4bcaaa12f-flexvolume-dir\") pod \"kube-controller-manager-ci-3510.3.6-a-cefcb26589\" (UID: \"e71c6420da39f60c3c55e2b4bcaaa12f\") " pod="kube-system/kube-controller-manager-ci-3510.3.6-a-cefcb26589" Dec 13 02:25:45.128985 kubelet[2348]: I1213 02:25:45.128875 2348 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e71c6420da39f60c3c55e2b4bcaaa12f-k8s-certs\") pod \"kube-controller-manager-ci-3510.3.6-a-cefcb26589\" (UID: \"e71c6420da39f60c3c55e2b4bcaaa12f\") " pod="kube-system/kube-controller-manager-ci-3510.3.6-a-cefcb26589" Dec 13 02:25:45.128985 kubelet[2348]: I1213 02:25:45.128945 2348 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e71c6420da39f60c3c55e2b4bcaaa12f-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3510.3.6-a-cefcb26589\" (UID: \"e71c6420da39f60c3c55e2b4bcaaa12f\") " pod="kube-system/kube-controller-manager-ci-3510.3.6-a-cefcb26589" Dec 13 02:25:45.129567 kubelet[2348]: I1213 02:25:45.129001 2348 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3b4474806864df1ca52cecb38e41031c-ca-certs\") pod \"kube-apiserver-ci-3510.3.6-a-cefcb26589\" (UID: \"3b4474806864df1ca52cecb38e41031c\") " pod="kube-system/kube-apiserver-ci-3510.3.6-a-cefcb26589" Dec 13 02:25:45.129567 kubelet[2348]: I1213 02:25:45.129055 2348 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3b4474806864df1ca52cecb38e41031c-k8s-certs\") pod \"kube-apiserver-ci-3510.3.6-a-cefcb26589\" (UID: \"3b4474806864df1ca52cecb38e41031c\") " pod="kube-system/kube-apiserver-ci-3510.3.6-a-cefcb26589" Dec 13 02:25:45.129567 kubelet[2348]: I1213 02:25:45.129111 2348 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e71c6420da39f60c3c55e2b4bcaaa12f-kubeconfig\") pod \"kube-controller-manager-ci-3510.3.6-a-cefcb26589\" (UID: \"e71c6420da39f60c3c55e2b4bcaaa12f\") " pod="kube-system/kube-controller-manager-ci-3510.3.6-a-cefcb26589" Dec 13 02:25:45.129567 kubelet[2348]: I1213 02:25:45.129191 2348 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0de27d5b4bfdb5c84d47472397f0ff07-kubeconfig\") pod \"kube-scheduler-ci-3510.3.6-a-cefcb26589\" (UID: \"0de27d5b4bfdb5c84d47472397f0ff07\") " pod="kube-system/kube-scheduler-ci-3510.3.6-a-cefcb26589" Dec 13 02:25:45.236623 kubelet[2348]: I1213 02:25:45.236564 2348 kubelet_node_status.go:73] "Attempting to register node" node="ci-3510.3.6-a-cefcb26589" Dec 13 02:25:45.237407 kubelet[2348]: E1213 02:25:45.237296 2348 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://139.178.70.53:6443/api/v1/nodes\": dial tcp 139.178.70.53:6443: connect: connection refused" node="ci-3510.3.6-a-cefcb26589" Dec 13 02:25:45.352205 env[1663]: time="2024-12-13T02:25:45.352080974Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3510.3.6-a-cefcb26589,Uid:3b4474806864df1ca52cecb38e41031c,Namespace:kube-system,Attempt:0,}" Dec 13 02:25:45.356220 env[1663]: time="2024-12-13T02:25:45.356103368Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3510.3.6-a-cefcb26589,Uid:e71c6420da39f60c3c55e2b4bcaaa12f,Namespace:kube-system,Attempt:0,}" Dec 13 02:25:45.359800 env[1663]: time="2024-12-13T02:25:45.359649776Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3510.3.6-a-cefcb26589,Uid:0de27d5b4bfdb5c84d47472397f0ff07,Namespace:kube-system,Attempt:0,}" Dec 13 02:25:45.531785 kubelet[2348]: E1213 02:25:45.531703 2348 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.70.53:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.6-a-cefcb26589?timeout=10s\": dial tcp 139.178.70.53:6443: connect: connection refused" interval="800ms" Dec 13 02:25:45.642224 kubelet[2348]: I1213 02:25:45.642136 2348 kubelet_node_status.go:73] "Attempting to register node" node="ci-3510.3.6-a-cefcb26589" Dec 13 02:25:45.643058 kubelet[2348]: E1213 02:25:45.642868 2348 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://139.178.70.53:6443/api/v1/nodes\": dial tcp 139.178.70.53:6443: connect: connection refused" node="ci-3510.3.6-a-cefcb26589" Dec 13 02:25:45.743578 kubelet[2348]: W1213 02:25:45.743383 2348 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://139.178.70.53:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 139.178.70.53:6443: connect: connection refused Dec 13 02:25:45.743841 kubelet[2348]: E1213 02:25:45.743592 2348 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://139.178.70.53:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 139.178.70.53:6443: connect: connection refused Dec 13 02:25:45.748264 kubelet[2348]: W1213 02:25:45.748119 2348 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get 
"https://139.178.70.53:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 139.178.70.53:6443: connect: connection refused Dec 13 02:25:45.748264 kubelet[2348]: E1213 02:25:45.748245 2348 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://139.178.70.53:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 139.178.70.53:6443: connect: connection refused Dec 13 02:25:45.865977 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2943341584.mount: Deactivated successfully. Dec 13 02:25:45.867270 env[1663]: time="2024-12-13T02:25:45.867250382Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:25:45.886191 env[1663]: time="2024-12-13T02:25:45.886105923Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:25:45.888614 env[1663]: time="2024-12-13T02:25:45.888534366Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:25:45.892919 env[1663]: time="2024-12-13T02:25:45.892846993Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:25:45.899268 env[1663]: time="2024-12-13T02:25:45.899158650Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:25:45.906795 env[1663]: time="2024-12-13T02:25:45.906699500Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:25:45.917289 env[1663]: time="2024-12-13T02:25:45.917275727Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:25:45.917795 env[1663]: time="2024-12-13T02:25:45.917750454Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:25:45.918182 env[1663]: time="2024-12-13T02:25:45.918172727Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:25:45.918630 env[1663]: time="2024-12-13T02:25:45.918590007Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:25:45.919072 env[1663]: time="2024-12-13T02:25:45.919031169Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:25:45.919446 env[1663]: time="2024-12-13T02:25:45.919412787Z" level=info msg="ImageUpdate event 
&ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:25:45.922844 env[1663]: time="2024-12-13T02:25:45.922809995Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 02:25:45.922844 env[1663]: time="2024-12-13T02:25:45.922833422Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 02:25:45.922926 env[1663]: time="2024-12-13T02:25:45.922844834Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 02:25:45.922954 env[1663]: time="2024-12-13T02:25:45.922926273Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/f853dbb1bcecb21b84f829a9d380451308f905722da195b2de93af8823e23d8c pid=2399 runtime=io.containerd.runc.v2 Dec 13 02:25:45.924112 env[1663]: time="2024-12-13T02:25:45.924082368Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 02:25:45.924112 env[1663]: time="2024-12-13T02:25:45.924102373Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 02:25:45.924112 env[1663]: time="2024-12-13T02:25:45.924109065Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 02:25:45.924201 env[1663]: time="2024-12-13T02:25:45.924175872Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/e2dc4d55aea7296350163e5d76ec9e58e6791c11385884137737acfeb598f2f8 pid=2417 runtime=io.containerd.runc.v2 Dec 13 02:25:45.924745 env[1663]: time="2024-12-13T02:25:45.924714516Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 02:25:45.924745 env[1663]: time="2024-12-13T02:25:45.924739169Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 02:25:45.924793 env[1663]: time="2024-12-13T02:25:45.924750341Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 02:25:45.924864 env[1663]: time="2024-12-13T02:25:45.924841496Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/c5c6d95858391855fa3e195616d57ac8f0397709d13f3ee4a40e03a9180bb2de pid=2427 runtime=io.containerd.runc.v2 Dec 13 02:25:45.950391 env[1663]: time="2024-12-13T02:25:45.950366948Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3510.3.6-a-cefcb26589,Uid:3b4474806864df1ca52cecb38e41031c,Namespace:kube-system,Attempt:0,} returns sandbox id \"f853dbb1bcecb21b84f829a9d380451308f905722da195b2de93af8823e23d8c\"" Dec 13 02:25:45.952181 env[1663]: time="2024-12-13T02:25:45.952167729Z" level=info msg="CreateContainer within sandbox \"f853dbb1bcecb21b84f829a9d380451308f905722da195b2de93af8823e23d8c\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Dec 13 02:25:45.956804 env[1663]: time="2024-12-13T02:25:45.956754739Z" level=info msg="CreateContainer within sandbox \"f853dbb1bcecb21b84f829a9d380451308f905722da195b2de93af8823e23d8c\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"c9e1463e3e93bb9047b48f9bc91344d3b83af6d39fb2dd506824a032a2079639\"" Dec 13 02:25:45.957116 env[1663]: time="2024-12-13T02:25:45.957104883Z" level=info msg="StartContainer for \"c9e1463e3e93bb9047b48f9bc91344d3b83af6d39fb2dd506824a032a2079639\"" Dec 13 02:25:45.958162 env[1663]: time="2024-12-13T02:25:45.958145921Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3510.3.6-a-cefcb26589,Uid:e71c6420da39f60c3c55e2b4bcaaa12f,Namespace:kube-system,Attempt:0,} returns sandbox id \"e2dc4d55aea7296350163e5d76ec9e58e6791c11385884137737acfeb598f2f8\"" Dec 13 02:25:45.958268 env[1663]: time="2024-12-13T02:25:45.958250571Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3510.3.6-a-cefcb26589,Uid:0de27d5b4bfdb5c84d47472397f0ff07,Namespace:kube-system,Attempt:0,} returns sandbox id \"c5c6d95858391855fa3e195616d57ac8f0397709d13f3ee4a40e03a9180bb2de\"" Dec 13 02:25:45.959272 env[1663]: time="2024-12-13T02:25:45.959259886Z" level=info msg="CreateContainer within sandbox \"c5c6d95858391855fa3e195616d57ac8f0397709d13f3ee4a40e03a9180bb2de\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Dec 13 02:25:45.959272 env[1663]: time="2024-12-13T02:25:45.959260270Z" level=info msg="CreateContainer within sandbox \"e2dc4d55aea7296350163e5d76ec9e58e6791c11385884137737acfeb598f2f8\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Dec 13 02:25:45.965382 env[1663]: time="2024-12-13T02:25:45.965355340Z" level=info msg="CreateContainer within sandbox \"c5c6d95858391855fa3e195616d57ac8f0397709d13f3ee4a40e03a9180bb2de\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"2c4f7e14dc32582cdba3187ca154b5d55c5c19cbfa404c2d662704065317d950\"" Dec 13 02:25:45.965601 env[1663]: time="2024-12-13T02:25:45.965561348Z" level=info msg="StartContainer for \"2c4f7e14dc32582cdba3187ca154b5d55c5c19cbfa404c2d662704065317d950\"" Dec 13 02:25:45.965972 env[1663]: time="2024-12-13T02:25:45.965933841Z" level=info msg="CreateContainer within sandbox \"e2dc4d55aea7296350163e5d76ec9e58e6791c11385884137737acfeb598f2f8\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"c9f01cad02097003c4854947340fe5d91759d554407f78c0996bd29cb0e867e4\"" Dec 13 02:25:45.966112 env[1663]: 
time="2024-12-13T02:25:45.966076365Z" level=info msg="StartContainer for \"c9f01cad02097003c4854947340fe5d91759d554407f78c0996bd29cb0e867e4\"" Dec 13 02:25:45.990216 env[1663]: time="2024-12-13T02:25:45.990186607Z" level=info msg="StartContainer for \"c9e1463e3e93bb9047b48f9bc91344d3b83af6d39fb2dd506824a032a2079639\" returns successfully" Dec 13 02:25:45.997702 env[1663]: time="2024-12-13T02:25:45.997673937Z" level=info msg="StartContainer for \"2c4f7e14dc32582cdba3187ca154b5d55c5c19cbfa404c2d662704065317d950\" returns successfully" Dec 13 02:25:45.997831 env[1663]: time="2024-12-13T02:25:45.997674185Z" level=info msg="StartContainer for \"c9f01cad02097003c4854947340fe5d91759d554407f78c0996bd29cb0e867e4\" returns successfully" Dec 13 02:25:46.444559 kubelet[2348]: I1213 02:25:46.444497 2348 kubelet_node_status.go:73] "Attempting to register node" node="ci-3510.3.6-a-cefcb26589" Dec 13 02:25:46.593198 kubelet[2348]: E1213 02:25:46.593155 2348 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-3510.3.6-a-cefcb26589\" not found" node="ci-3510.3.6-a-cefcb26589" Dec 13 02:25:46.595974 kubelet[2348]: I1213 02:25:46.595936 2348 kubelet_node_status.go:76] "Successfully registered node" node="ci-3510.3.6-a-cefcb26589" Dec 13 02:25:46.644207 kubelet[2348]: E1213 02:25:46.644162 2348 event.go:346] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ci-3510.3.6-a-cefcb26589.18109b6c0d22d2ce default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-3510.3.6-a-cefcb26589,UID:ci-3510.3.6-a-cefcb26589,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-3510.3.6-a-cefcb26589,},FirstTimestamp:2024-12-13 02:25:44.916800206 +0000 UTC m=+0.347428146,LastTimestamp:2024-12-13 02:25:44.916800206 +0000 UTC m=+0.347428146,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-3510.3.6-a-cefcb26589,}" Dec 13 02:25:46.697769 kubelet[2348]: E1213 02:25:46.697681 2348 event.go:346] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ci-3510.3.6-a-cefcb26589.18109b6c0d68926a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-3510.3.6-a-cefcb26589,UID:ci-3510.3.6-a-cefcb26589,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:ci-3510.3.6-a-cefcb26589,},FirstTimestamp:2024-12-13 02:25:44.921371242 +0000 UTC m=+0.351999186,LastTimestamp:2024-12-13 02:25:44.921371242 +0000 UTC m=+0.351999186,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-3510.3.6-a-cefcb26589,}" Dec 13 02:25:46.904700 kubelet[2348]: I1213 02:25:46.904598 2348 apiserver.go:52] "Watching apiserver" Dec 13 02:25:46.927410 kubelet[2348]: I1213 02:25:46.927351 2348 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Dec 13 02:25:46.953600 kubelet[2348]: E1213 02:25:46.953440 2348 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-3510.3.6-a-cefcb26589\" is forbidden: no PriorityClass with name system-node-critical was found" 
pod="kube-system/kube-apiserver-ci-3510.3.6-a-cefcb26589" Dec 13 02:25:46.953600 kubelet[2348]: E1213 02:25:46.953484 2348 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ci-3510.3.6-a-cefcb26589\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-3510.3.6-a-cefcb26589" Dec 13 02:25:46.953939 kubelet[2348]: E1213 02:25:46.953676 2348 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-scheduler-ci-3510.3.6-a-cefcb26589\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-3510.3.6-a-cefcb26589" Dec 13 02:25:47.958336 kubelet[2348]: W1213 02:25:47.958247 2348 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Dec 13 02:25:48.218555 kubelet[2348]: W1213 02:25:48.218391 2348 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Dec 13 02:25:49.683751 systemd[1]: Reloading. Dec 13 02:25:49.731715 /usr/lib/systemd/system-generators/torcx-generator[2685]: time="2024-12-13T02:25:49Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.6 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.6 /var/lib/torcx/store]" Dec 13 02:25:49.731736 /usr/lib/systemd/system-generators/torcx-generator[2685]: time="2024-12-13T02:25:49Z" level=info msg="torcx already run" Dec 13 02:25:49.790956 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Dec 13 02:25:49.790963 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Dec 13 02:25:49.803804 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 02:25:49.858997 systemd[1]: Stopping kubelet.service... Dec 13 02:25:49.875698 systemd[1]: kubelet.service: Deactivated successfully. Dec 13 02:25:49.875839 systemd[1]: Stopped kubelet.service. Dec 13 02:25:49.876752 systemd[1]: Starting kubelet.service... Dec 13 02:25:50.028532 systemd[1]: Started kubelet.service. Dec 13 02:25:50.060300 kubelet[2760]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 02:25:50.060300 kubelet[2760]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Dec 13 02:25:50.060300 kubelet[2760]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Dec 13 02:25:50.060599 kubelet[2760]: I1213 02:25:50.060305 2760 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 13 02:25:50.064769 kubelet[2760]: I1213 02:25:50.064726 2760 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Dec 13 02:25:50.064769 kubelet[2760]: I1213 02:25:50.064745 2760 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 13 02:25:50.064943 kubelet[2760]: I1213 02:25:50.064909 2760 server.go:919] "Client rotation is on, will bootstrap in background" Dec 13 02:25:50.066239 kubelet[2760]: I1213 02:25:50.066227 2760 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Dec 13 02:25:50.067742 kubelet[2760]: I1213 02:25:50.067718 2760 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 13 02:25:50.088087 sudo[2785]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Dec 13 02:25:50.088278 sudo[2785]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0) Dec 13 02:25:50.089544 kubelet[2760]: I1213 02:25:50.089504 2760 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Dec 13 02:25:50.090004 kubelet[2760]: I1213 02:25:50.089969 2760 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 13 02:25:50.090178 kubelet[2760]: I1213 02:25:50.090148 2760 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Dec 13 02:25:50.090178 kubelet[2760]: I1213 02:25:50.090167 2760 topology_manager.go:138] "Creating topology manager with none policy" Dec 13 02:25:50.090178 kubelet[2760]: I1213 02:25:50.090176 2760 container_manager_linux.go:301] "Creating device plugin manager" Dec 13 02:25:50.090387 kubelet[2760]: I1213 02:25:50.090199 2760 state_mem.go:36] "Initialized new in-memory state store" Dec 13 02:25:50.090387 kubelet[2760]: I1213 02:25:50.090267 2760 kubelet.go:396] "Attempting to sync node with API server" Dec 13 02:25:50.090387 
kubelet[2760]: I1213 02:25:50.090279 2760 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 13 02:25:50.090387 kubelet[2760]: I1213 02:25:50.090297 2760 kubelet.go:312] "Adding apiserver pod source" Dec 13 02:25:50.090387 kubelet[2760]: I1213 02:25:50.090309 2760 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 13 02:25:50.091283 kubelet[2760]: I1213 02:25:50.091236 2760 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Dec 13 02:25:50.091572 kubelet[2760]: I1213 02:25:50.091554 2760 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Dec 13 02:25:50.092694 kubelet[2760]: I1213 02:25:50.092674 2760 server.go:1256] "Started kubelet" Dec 13 02:25:50.092809 kubelet[2760]: I1213 02:25:50.092757 2760 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Dec 13 02:25:50.092809 kubelet[2760]: I1213 02:25:50.092763 2760 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 13 02:25:50.092966 kubelet[2760]: I1213 02:25:50.092949 2760 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 13 02:25:50.094228 kubelet[2760]: I1213 02:25:50.094208 2760 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 13 02:25:50.094326 kubelet[2760]: I1213 02:25:50.094236 2760 server.go:461] "Adding debug handlers to kubelet server" Dec 13 02:25:50.094393 kubelet[2760]: I1213 02:25:50.094351 2760 volume_manager.go:291] "Starting Kubelet Volume Manager" Dec 13 02:25:50.094447 kubelet[2760]: I1213 02:25:50.094434 2760 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Dec 13 02:25:50.094522 kubelet[2760]: E1213 02:25:50.094268 2760 kubelet.go:1462] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 13 02:25:50.094573 kubelet[2760]: I1213 02:25:50.094561 2760 reconciler_new.go:29] "Reconciler: start to sync state" Dec 13 02:25:50.095138 kubelet[2760]: I1213 02:25:50.095125 2760 factory.go:221] Registration of the systemd container factory successfully Dec 13 02:25:50.095290 kubelet[2760]: I1213 02:25:50.095251 2760 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 13 02:25:50.096590 kubelet[2760]: I1213 02:25:50.096576 2760 factory.go:221] Registration of the containerd container factory successfully Dec 13 02:25:50.109037 kubelet[2760]: I1213 02:25:50.108137 2760 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Dec 13 02:25:50.109165 kubelet[2760]: I1213 02:25:50.109153 2760 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Dec 13 02:25:50.109208 kubelet[2760]: I1213 02:25:50.109175 2760 status_manager.go:217] "Starting to sync pod status with apiserver" Dec 13 02:25:50.109208 kubelet[2760]: I1213 02:25:50.109190 2760 kubelet.go:2329] "Starting kubelet main sync loop" Dec 13 02:25:50.109263 kubelet[2760]: E1213 02:25:50.109230 2760 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 13 02:25:50.134791 kubelet[2760]: I1213 02:25:50.134768 2760 cpu_manager.go:214] "Starting CPU manager" policy="none" Dec 13 02:25:50.134791 kubelet[2760]: I1213 02:25:50.134796 2760 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Dec 13 02:25:50.135000 kubelet[2760]: I1213 02:25:50.134811 2760 state_mem.go:36] "Initialized new in-memory state store" Dec 13 02:25:50.135000 kubelet[2760]: I1213 02:25:50.134982 2760 state_mem.go:88] "Updated default CPUSet" cpuSet="" Dec 13 02:25:50.135053 kubelet[2760]: I1213 02:25:50.135006 2760 state_mem.go:96] "Updated CPUSet assignments" assignments={} Dec 13 02:25:50.135053 kubelet[2760]: I1213 02:25:50.135014 2760 policy_none.go:49] "None policy: Start" Dec 13 02:25:50.135636 kubelet[2760]: I1213 02:25:50.135624 2760 memory_manager.go:170] "Starting memorymanager" policy="None" Dec 13 02:25:50.135670 kubelet[2760]: I1213 02:25:50.135640 2760 state_mem.go:35] "Initializing new in-memory state store" Dec 13 02:25:50.135790 kubelet[2760]: I1213 02:25:50.135783 2760 state_mem.go:75] "Updated machine memory state" Dec 13 02:25:50.136714 kubelet[2760]: I1213 02:25:50.136678 2760 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 13 02:25:50.136880 kubelet[2760]: I1213 02:25:50.136825 2760 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 13 02:25:50.195983 kubelet[2760]: I1213 02:25:50.195956 2760 kubelet_node_status.go:73] "Attempting to register node" node="ci-3510.3.6-a-cefcb26589" Dec 13 02:25:50.201471 kubelet[2760]: I1213 02:25:50.201458 2760 kubelet_node_status.go:112] "Node was previously registered" node="ci-3510.3.6-a-cefcb26589" Dec 13 02:25:50.201517 kubelet[2760]: I1213 02:25:50.201503 2760 kubelet_node_status.go:76] "Successfully registered node" node="ci-3510.3.6-a-cefcb26589" Dec 13 02:25:50.209568 kubelet[2760]: I1213 02:25:50.209547 2760 topology_manager.go:215] "Topology Admit Handler" podUID="3b4474806864df1ca52cecb38e41031c" podNamespace="kube-system" podName="kube-apiserver-ci-3510.3.6-a-cefcb26589" Dec 13 02:25:50.209618 kubelet[2760]: I1213 02:25:50.209603 2760 topology_manager.go:215] "Topology Admit Handler" podUID="e71c6420da39f60c3c55e2b4bcaaa12f" podNamespace="kube-system" podName="kube-controller-manager-ci-3510.3.6-a-cefcb26589" Dec 13 02:25:50.209641 kubelet[2760]: I1213 02:25:50.209623 2760 topology_manager.go:215] "Topology Admit Handler" podUID="0de27d5b4bfdb5c84d47472397f0ff07" podNamespace="kube-system" podName="kube-scheduler-ci-3510.3.6-a-cefcb26589" Dec 13 02:25:50.213787 kubelet[2760]: W1213 02:25:50.213777 2760 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Dec 13 02:25:50.213831 kubelet[2760]: W1213 02:25:50.213803 2760 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Dec 13 02:25:50.213901 kubelet[2760]: 
E1213 02:25:50.213846 2760 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-3510.3.6-a-cefcb26589\" already exists" pod="kube-system/kube-apiserver-ci-3510.3.6-a-cefcb26589" Dec 13 02:25:50.214262 kubelet[2760]: W1213 02:25:50.214256 2760 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Dec 13 02:25:50.214289 kubelet[2760]: E1213 02:25:50.214278 2760 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ci-3510.3.6-a-cefcb26589\" already exists" pod="kube-system/kube-controller-manager-ci-3510.3.6-a-cefcb26589" Dec 13 02:25:50.396630 kubelet[2760]: I1213 02:25:50.396587 2760 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/e71c6420da39f60c3c55e2b4bcaaa12f-flexvolume-dir\") pod \"kube-controller-manager-ci-3510.3.6-a-cefcb26589\" (UID: \"e71c6420da39f60c3c55e2b4bcaaa12f\") " pod="kube-system/kube-controller-manager-ci-3510.3.6-a-cefcb26589" Dec 13 02:25:50.396630 kubelet[2760]: I1213 02:25:50.396620 2760 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e71c6420da39f60c3c55e2b4bcaaa12f-k8s-certs\") pod \"kube-controller-manager-ci-3510.3.6-a-cefcb26589\" (UID: \"e71c6420da39f60c3c55e2b4bcaaa12f\") " pod="kube-system/kube-controller-manager-ci-3510.3.6-a-cefcb26589" Dec 13 02:25:50.396747 kubelet[2760]: I1213 02:25:50.396644 2760 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3b4474806864df1ca52cecb38e41031c-k8s-certs\") pod \"kube-apiserver-ci-3510.3.6-a-cefcb26589\" (UID: \"3b4474806864df1ca52cecb38e41031c\") " pod="kube-system/kube-apiserver-ci-3510.3.6-a-cefcb26589" Dec 13 02:25:50.396747 kubelet[2760]: I1213 02:25:50.396665 2760 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3b4474806864df1ca52cecb38e41031c-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3510.3.6-a-cefcb26589\" (UID: \"3b4474806864df1ca52cecb38e41031c\") " pod="kube-system/kube-apiserver-ci-3510.3.6-a-cefcb26589" Dec 13 02:25:50.396747 kubelet[2760]: I1213 02:25:50.396715 2760 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e71c6420da39f60c3c55e2b4bcaaa12f-ca-certs\") pod \"kube-controller-manager-ci-3510.3.6-a-cefcb26589\" (UID: \"e71c6420da39f60c3c55e2b4bcaaa12f\") " pod="kube-system/kube-controller-manager-ci-3510.3.6-a-cefcb26589" Dec 13 02:25:50.396806 kubelet[2760]: I1213 02:25:50.396763 2760 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e71c6420da39f60c3c55e2b4bcaaa12f-kubeconfig\") pod \"kube-controller-manager-ci-3510.3.6-a-cefcb26589\" (UID: \"e71c6420da39f60c3c55e2b4bcaaa12f\") " pod="kube-system/kube-controller-manager-ci-3510.3.6-a-cefcb26589" Dec 13 02:25:50.396806 kubelet[2760]: I1213 02:25:50.396787 2760 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: 
\"kubernetes.io/host-path/e71c6420da39f60c3c55e2b4bcaaa12f-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3510.3.6-a-cefcb26589\" (UID: \"e71c6420da39f60c3c55e2b4bcaaa12f\") " pod="kube-system/kube-controller-manager-ci-3510.3.6-a-cefcb26589" Dec 13 02:25:50.396843 kubelet[2760]: I1213 02:25:50.396807 2760 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0de27d5b4bfdb5c84d47472397f0ff07-kubeconfig\") pod \"kube-scheduler-ci-3510.3.6-a-cefcb26589\" (UID: \"0de27d5b4bfdb5c84d47472397f0ff07\") " pod="kube-system/kube-scheduler-ci-3510.3.6-a-cefcb26589" Dec 13 02:25:50.396843 kubelet[2760]: I1213 02:25:50.396838 2760 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3b4474806864df1ca52cecb38e41031c-ca-certs\") pod \"kube-apiserver-ci-3510.3.6-a-cefcb26589\" (UID: \"3b4474806864df1ca52cecb38e41031c\") " pod="kube-system/kube-apiserver-ci-3510.3.6-a-cefcb26589" Dec 13 02:25:50.437460 sudo[2785]: pam_unix(sudo:session): session closed for user root Dec 13 02:25:51.091211 kubelet[2760]: I1213 02:25:51.091086 2760 apiserver.go:52] "Watching apiserver" Dec 13 02:25:51.124834 kubelet[2760]: W1213 02:25:51.124757 2760 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Dec 13 02:25:51.125137 kubelet[2760]: E1213 02:25:51.125012 2760 kubelet.go:1921] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-3510.3.6-a-cefcb26589\" already exists" pod="kube-system/kube-apiserver-ci-3510.3.6-a-cefcb26589" Dec 13 02:25:51.174274 kubelet[2760]: I1213 02:25:51.174252 2760 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-3510.3.6-a-cefcb26589" podStartSLOduration=1.17422051 podStartE2EDuration="1.17422051s" podCreationTimestamp="2024-12-13 02:25:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 02:25:51.174170927 +0000 UTC m=+1.142748193" watchObservedRunningTime="2024-12-13 02:25:51.17422051 +0000 UTC m=+1.142797771" Dec 13 02:25:51.179872 kubelet[2760]: I1213 02:25:51.179857 2760 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-3510.3.6-a-cefcb26589" podStartSLOduration=4.179831321 podStartE2EDuration="4.179831321s" podCreationTimestamp="2024-12-13 02:25:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 02:25:51.179818663 +0000 UTC m=+1.148395933" watchObservedRunningTime="2024-12-13 02:25:51.179831321 +0000 UTC m=+1.148408591" Dec 13 02:25:51.185402 kubelet[2760]: I1213 02:25:51.185391 2760 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-3510.3.6-a-cefcb26589" podStartSLOduration=3.185374238 podStartE2EDuration="3.185374238s" podCreationTimestamp="2024-12-13 02:25:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 02:25:51.18534077 +0000 UTC m=+1.153918031" watchObservedRunningTime="2024-12-13 02:25:51.185374238 +0000 UTC m=+1.153951495" Dec 13 02:25:51.195436 kubelet[2760]: I1213 02:25:51.195396 2760 
desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Dec 13 02:25:51.692862 sudo[1883]: pam_unix(sudo:session): session closed for user root Dec 13 02:25:51.693808 sshd[1877]: pam_unix(sshd:session): session closed for user core Dec 13 02:25:51.695330 systemd[1]: sshd@6-139.178.70.53:22-139.178.68.195:50038.service: Deactivated successfully. Dec 13 02:25:51.696069 systemd[1]: session-9.scope: Deactivated successfully. Dec 13 02:25:51.696087 systemd-logind[1706]: Session 9 logged out. Waiting for processes to exit. Dec 13 02:25:51.696713 systemd-logind[1706]: Removed session 9. Dec 13 02:25:57.261640 update_engine[1654]: I1213 02:25:57.261518 1654 update_attempter.cc:509] Updating boot flags... Dec 13 02:26:04.015578 kubelet[2760]: I1213 02:26:04.015544 2760 topology_manager.go:215] "Topology Admit Handler" podUID="2aeec1c1-a940-4c2a-9fe9-cbd2b401aed3" podNamespace="kube-system" podName="kube-proxy-6mmpd" Dec 13 02:26:04.021010 kubelet[2760]: I1213 02:26:04.020986 2760 topology_manager.go:215] "Topology Admit Handler" podUID="45cf36e0-a940-4238-8cb7-0698c781ab88" podNamespace="kube-system" podName="cilium-mdm9t" Dec 13 02:26:04.021236 kubelet[2760]: I1213 02:26:04.021201 2760 topology_manager.go:215] "Topology Admit Handler" podUID="fd9fd862-f7b3-4923-9575-d8d9fa2b991e" podNamespace="kube-system" podName="cilium-operator-5cc964979-kprzt" Dec 13 02:26:04.083445 kubelet[2760]: I1213 02:26:04.083316 2760 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/2aeec1c1-a940-4c2a-9fe9-cbd2b401aed3-kube-proxy\") pod \"kube-proxy-6mmpd\" (UID: \"2aeec1c1-a940-4c2a-9fe9-cbd2b401aed3\") " pod="kube-system/kube-proxy-6mmpd" Dec 13 02:26:04.083736 kubelet[2760]: I1213 02:26:04.083614 2760 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2aeec1c1-a940-4c2a-9fe9-cbd2b401aed3-xtables-lock\") pod \"kube-proxy-6mmpd\" (UID: \"2aeec1c1-a940-4c2a-9fe9-cbd2b401aed3\") " pod="kube-system/kube-proxy-6mmpd" Dec 13 02:26:04.083883 kubelet[2760]: I1213 02:26:04.083738 2760 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/45cf36e0-a940-4238-8cb7-0698c781ab88-hostproc\") pod \"cilium-mdm9t\" (UID: \"45cf36e0-a940-4238-8cb7-0698c781ab88\") " pod="kube-system/cilium-mdm9t" Dec 13 02:26:04.083999 kubelet[2760]: I1213 02:26:04.083942 2760 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/45cf36e0-a940-4238-8cb7-0698c781ab88-clustermesh-secrets\") pod \"cilium-mdm9t\" (UID: \"45cf36e0-a940-4238-8cb7-0698c781ab88\") " pod="kube-system/cilium-mdm9t" Dec 13 02:26:04.084179 kubelet[2760]: I1213 02:26:04.084128 2760 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/45cf36e0-a940-4238-8cb7-0698c781ab88-host-proc-sys-net\") pod \"cilium-mdm9t\" (UID: \"45cf36e0-a940-4238-8cb7-0698c781ab88\") " pod="kube-system/cilium-mdm9t" Dec 13 02:26:04.084453 kubelet[2760]: I1213 02:26:04.084271 2760 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rf5hc\" (UniqueName: 
\"kubernetes.io/projected/fd9fd862-f7b3-4923-9575-d8d9fa2b991e-kube-api-access-rf5hc\") pod \"cilium-operator-5cc964979-kprzt\" (UID: \"fd9fd862-f7b3-4923-9575-d8d9fa2b991e\") " pod="kube-system/cilium-operator-5cc964979-kprzt" Dec 13 02:26:04.084453 kubelet[2760]: I1213 02:26:04.084379 2760 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/45cf36e0-a940-4238-8cb7-0698c781ab88-xtables-lock\") pod \"cilium-mdm9t\" (UID: \"45cf36e0-a940-4238-8cb7-0698c781ab88\") " pod="kube-system/cilium-mdm9t" Dec 13 02:26:04.084756 kubelet[2760]: I1213 02:26:04.084487 2760 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/45cf36e0-a940-4238-8cb7-0698c781ab88-cilium-config-path\") pod \"cilium-mdm9t\" (UID: \"45cf36e0-a940-4238-8cb7-0698c781ab88\") " pod="kube-system/cilium-mdm9t" Dec 13 02:26:04.084756 kubelet[2760]: I1213 02:26:04.084609 2760 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/45cf36e0-a940-4238-8cb7-0698c781ab88-cilium-run\") pod \"cilium-mdm9t\" (UID: \"45cf36e0-a940-4238-8cb7-0698c781ab88\") " pod="kube-system/cilium-mdm9t" Dec 13 02:26:04.084756 kubelet[2760]: I1213 02:26:04.084633 2760 kuberuntime_manager.go:1529] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Dec 13 02:26:04.084756 kubelet[2760]: I1213 02:26:04.084701 2760 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/45cf36e0-a940-4238-8cb7-0698c781ab88-cilium-cgroup\") pod \"cilium-mdm9t\" (UID: \"45cf36e0-a940-4238-8cb7-0698c781ab88\") " pod="kube-system/cilium-mdm9t" Dec 13 02:26:04.085190 kubelet[2760]: I1213 02:26:04.084779 2760 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/45cf36e0-a940-4238-8cb7-0698c781ab88-cni-path\") pod \"cilium-mdm9t\" (UID: \"45cf36e0-a940-4238-8cb7-0698c781ab88\") " pod="kube-system/cilium-mdm9t" Dec 13 02:26:04.085190 kubelet[2760]: I1213 02:26:04.084949 2760 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hccqp\" (UniqueName: \"kubernetes.io/projected/45cf36e0-a940-4238-8cb7-0698c781ab88-kube-api-access-hccqp\") pod \"cilium-mdm9t\" (UID: \"45cf36e0-a940-4238-8cb7-0698c781ab88\") " pod="kube-system/cilium-mdm9t" Dec 13 02:26:04.085190 kubelet[2760]: I1213 02:26:04.085095 2760 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2aeec1c1-a940-4c2a-9fe9-cbd2b401aed3-lib-modules\") pod \"kube-proxy-6mmpd\" (UID: \"2aeec1c1-a940-4c2a-9fe9-cbd2b401aed3\") " pod="kube-system/kube-proxy-6mmpd" Dec 13 02:26:04.085558 kubelet[2760]: I1213 02:26:04.085232 2760 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-frcvm\" (UniqueName: \"kubernetes.io/projected/2aeec1c1-a940-4c2a-9fe9-cbd2b401aed3-kube-api-access-frcvm\") pod \"kube-proxy-6mmpd\" (UID: \"2aeec1c1-a940-4c2a-9fe9-cbd2b401aed3\") " pod="kube-system/kube-proxy-6mmpd" Dec 13 02:26:04.085558 kubelet[2760]: I1213 02:26:04.085308 2760 reconciler_common.go:258] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/fd9fd862-f7b3-4923-9575-d8d9fa2b991e-cilium-config-path\") pod \"cilium-operator-5cc964979-kprzt\" (UID: \"fd9fd862-f7b3-4923-9575-d8d9fa2b991e\") " pod="kube-system/cilium-operator-5cc964979-kprzt" Dec 13 02:26:04.085558 kubelet[2760]: I1213 02:26:04.085392 2760 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/45cf36e0-a940-4238-8cb7-0698c781ab88-bpf-maps\") pod \"cilium-mdm9t\" (UID: \"45cf36e0-a940-4238-8cb7-0698c781ab88\") " pod="kube-system/cilium-mdm9t" Dec 13 02:26:04.085558 kubelet[2760]: I1213 02:26:04.085479 2760 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/45cf36e0-a940-4238-8cb7-0698c781ab88-host-proc-sys-kernel\") pod \"cilium-mdm9t\" (UID: \"45cf36e0-a940-4238-8cb7-0698c781ab88\") " pod="kube-system/cilium-mdm9t" Dec 13 02:26:04.085998 env[1663]: time="2024-12-13T02:26:04.085278289Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Dec 13 02:26:04.086774 kubelet[2760]: I1213 02:26:04.085596 2760 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/45cf36e0-a940-4238-8cb7-0698c781ab88-etc-cni-netd\") pod \"cilium-mdm9t\" (UID: \"45cf36e0-a940-4238-8cb7-0698c781ab88\") " pod="kube-system/cilium-mdm9t" Dec 13 02:26:04.086774 kubelet[2760]: I1213 02:26:04.085690 2760 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Dec 13 02:26:04.086774 kubelet[2760]: I1213 02:26:04.085725 2760 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/45cf36e0-a940-4238-8cb7-0698c781ab88-lib-modules\") pod \"cilium-mdm9t\" (UID: \"45cf36e0-a940-4238-8cb7-0698c781ab88\") " pod="kube-system/cilium-mdm9t" Dec 13 02:26:04.086774 kubelet[2760]: I1213 02:26:04.085812 2760 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/45cf36e0-a940-4238-8cb7-0698c781ab88-hubble-tls\") pod \"cilium-mdm9t\" (UID: \"45cf36e0-a940-4238-8cb7-0698c781ab88\") " pod="kube-system/cilium-mdm9t" Dec 13 02:26:04.319230 env[1663]: time="2024-12-13T02:26:04.319005609Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-6mmpd,Uid:2aeec1c1-a940-4c2a-9fe9-cbd2b401aed3,Namespace:kube-system,Attempt:0,}" Dec 13 02:26:04.325703 env[1663]: time="2024-12-13T02:26:04.325601398Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5cc964979-kprzt,Uid:fd9fd862-f7b3-4923-9575-d8d9fa2b991e,Namespace:kube-system,Attempt:0,}" Dec 13 02:26:04.326000 env[1663]: time="2024-12-13T02:26:04.325648373Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-mdm9t,Uid:45cf36e0-a940-4238-8cb7-0698c781ab88,Namespace:kube-system,Attempt:0,}" Dec 13 02:26:04.347450 env[1663]: time="2024-12-13T02:26:04.347229619Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 02:26:04.347450 env[1663]: time="2024-12-13T02:26:04.347334124Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 02:26:04.347450 env[1663]: time="2024-12-13T02:26:04.347396252Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 02:26:04.348138 env[1663]: time="2024-12-13T02:26:04.347950076Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/9bf3c58f30e37b7d79996e9ef32ee6de2cd515eb42b30f6a7ba11e5b2fc151de pid=2932 runtime=io.containerd.runc.v2 Dec 13 02:26:04.353620 env[1663]: time="2024-12-13T02:26:04.353447506Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 02:26:04.353620 env[1663]: time="2024-12-13T02:26:04.353559248Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 02:26:04.353620 env[1663]: time="2024-12-13T02:26:04.353597866Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 02:26:04.354259 env[1663]: time="2024-12-13T02:26:04.354111499Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 02:26:04.354259 env[1663]: time="2024-12-13T02:26:04.354214430Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 02:26:04.354654 env[1663]: time="2024-12-13T02:26:04.354254607Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 02:26:04.354654 env[1663]: time="2024-12-13T02:26:04.354360590Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/98266f50617ebbbdce29fcf39b8639e39942cecdbbd4871236ff43022dedf376 pid=2953 runtime=io.containerd.runc.v2 Dec 13 02:26:04.354985 env[1663]: time="2024-12-13T02:26:04.354661863Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/73c4f35c6ef179868bfee1fe81cf31c927f548447e34496ba535221819053f52 pid=2954 runtime=io.containerd.runc.v2 Dec 13 02:26:04.387835 env[1663]: time="2024-12-13T02:26:04.387804148Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-6mmpd,Uid:2aeec1c1-a940-4c2a-9fe9-cbd2b401aed3,Namespace:kube-system,Attempt:0,} returns sandbox id \"9bf3c58f30e37b7d79996e9ef32ee6de2cd515eb42b30f6a7ba11e5b2fc151de\"" Dec 13 02:26:04.389702 env[1663]: time="2024-12-13T02:26:04.389674349Z" level=info msg="CreateContainer within sandbox \"9bf3c58f30e37b7d79996e9ef32ee6de2cd515eb42b30f6a7ba11e5b2fc151de\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Dec 13 02:26:04.389995 env[1663]: time="2024-12-13T02:26:04.389971438Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-mdm9t,Uid:45cf36e0-a940-4238-8cb7-0698c781ab88,Namespace:kube-system,Attempt:0,} returns sandbox id \"73c4f35c6ef179868bfee1fe81cf31c927f548447e34496ba535221819053f52\"" Dec 13 02:26:04.390775 env[1663]: time="2024-12-13T02:26:04.390758560Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Dec 13 02:26:04.396069 env[1663]: time="2024-12-13T02:26:04.396019928Z" level=info msg="CreateContainer within sandbox 
\"9bf3c58f30e37b7d79996e9ef32ee6de2cd515eb42b30f6a7ba11e5b2fc151de\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"3f597a1a326451b622de7f9a42df8571d3613f9a66e698c31a85ea65dc9568be\"" Dec 13 02:26:04.396309 env[1663]: time="2024-12-13T02:26:04.396290073Z" level=info msg="StartContainer for \"3f597a1a326451b622de7f9a42df8571d3613f9a66e698c31a85ea65dc9568be\"" Dec 13 02:26:04.404793 env[1663]: time="2024-12-13T02:26:04.404768317Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-5cc964979-kprzt,Uid:fd9fd862-f7b3-4923-9575-d8d9fa2b991e,Namespace:kube-system,Attempt:0,} returns sandbox id \"98266f50617ebbbdce29fcf39b8639e39942cecdbbd4871236ff43022dedf376\"" Dec 13 02:26:04.423297 env[1663]: time="2024-12-13T02:26:04.423269585Z" level=info msg="StartContainer for \"3f597a1a326451b622de7f9a42df8571d3613f9a66e698c31a85ea65dc9568be\" returns successfully" Dec 13 02:26:05.177844 kubelet[2760]: I1213 02:26:05.177772 2760 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-6mmpd" podStartSLOduration=1.177680405 podStartE2EDuration="1.177680405s" podCreationTimestamp="2024-12-13 02:26:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 02:26:05.177262414 +0000 UTC m=+15.145839741" watchObservedRunningTime="2024-12-13 02:26:05.177680405 +0000 UTC m=+15.146257716" Dec 13 02:26:10.201860 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount830459866.mount: Deactivated successfully. Dec 13 02:26:11.912602 env[1663]: time="2024-12-13T02:26:11.912545174Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:26:11.913264 env[1663]: time="2024-12-13T02:26:11.913200598Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:26:11.914084 env[1663]: time="2024-12-13T02:26:11.914038376Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:26:11.914920 env[1663]: time="2024-12-13T02:26:11.914876683Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Dec 13 02:26:11.915288 env[1663]: time="2024-12-13T02:26:11.915245741Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Dec 13 02:26:11.916896 env[1663]: time="2024-12-13T02:26:11.916851459Z" level=info msg="CreateContainer within sandbox \"73c4f35c6ef179868bfee1fe81cf31c927f548447e34496ba535221819053f52\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Dec 13 02:26:11.920987 env[1663]: time="2024-12-13T02:26:11.920948271Z" level=info msg="CreateContainer within sandbox \"73c4f35c6ef179868bfee1fe81cf31c927f548447e34496ba535221819053f52\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id 
\"2e00ad986b3aab3cf8cb40f5f2cf3f9f915207e532f54e8d6bc13f2e78f4227e\"" Dec 13 02:26:11.921242 env[1663]: time="2024-12-13T02:26:11.921229621Z" level=info msg="StartContainer for \"2e00ad986b3aab3cf8cb40f5f2cf3f9f915207e532f54e8d6bc13f2e78f4227e\"" Dec 13 02:26:11.922701 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1238681030.mount: Deactivated successfully. Dec 13 02:26:11.942116 env[1663]: time="2024-12-13T02:26:11.942088811Z" level=info msg="StartContainer for \"2e00ad986b3aab3cf8cb40f5f2cf3f9f915207e532f54e8d6bc13f2e78f4227e\" returns successfully" Dec 13 02:26:12.926095 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2e00ad986b3aab3cf8cb40f5f2cf3f9f915207e532f54e8d6bc13f2e78f4227e-rootfs.mount: Deactivated successfully. Dec 13 02:26:13.746136 env[1663]: time="2024-12-13T02:26:13.746042140Z" level=info msg="shim disconnected" id=2e00ad986b3aab3cf8cb40f5f2cf3f9f915207e532f54e8d6bc13f2e78f4227e Dec 13 02:26:13.747074 env[1663]: time="2024-12-13T02:26:13.746139180Z" level=warning msg="cleaning up after shim disconnected" id=2e00ad986b3aab3cf8cb40f5f2cf3f9f915207e532f54e8d6bc13f2e78f4227e namespace=k8s.io Dec 13 02:26:13.747074 env[1663]: time="2024-12-13T02:26:13.746171798Z" level=info msg="cleaning up dead shim" Dec 13 02:26:13.761900 env[1663]: time="2024-12-13T02:26:13.761759969Z" level=warning msg="cleanup warnings time=\"2024-12-13T02:26:13Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3259 runtime=io.containerd.runc.v2\n" Dec 13 02:26:14.185545 env[1663]: time="2024-12-13T02:26:14.185431389Z" level=info msg="CreateContainer within sandbox \"73c4f35c6ef179868bfee1fe81cf31c927f548447e34496ba535221819053f52\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Dec 13 02:26:14.196372 env[1663]: time="2024-12-13T02:26:14.196348292Z" level=info msg="CreateContainer within sandbox \"73c4f35c6ef179868bfee1fe81cf31c927f548447e34496ba535221819053f52\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"32ec867a2aa47742d0f468ed0de4daa716daa59df9b3ff86195fda0f1f1f98b7\"" Dec 13 02:26:14.196677 env[1663]: time="2024-12-13T02:26:14.196664197Z" level=info msg="StartContainer for \"32ec867a2aa47742d0f468ed0de4daa716daa59df9b3ff86195fda0f1f1f98b7\"" Dec 13 02:26:14.217291 env[1663]: time="2024-12-13T02:26:14.217262439Z" level=info msg="StartContainer for \"32ec867a2aa47742d0f468ed0de4daa716daa59df9b3ff86195fda0f1f1f98b7\" returns successfully" Dec 13 02:26:14.224779 systemd[1]: systemd-sysctl.service: Deactivated successfully. Dec 13 02:26:14.225038 systemd[1]: Stopped systemd-sysctl.service. Dec 13 02:26:14.225183 systemd[1]: Stopping systemd-sysctl.service... Dec 13 02:26:14.226604 systemd[1]: Starting systemd-sysctl.service... Dec 13 02:26:14.232641 systemd[1]: Finished systemd-sysctl.service. 
Dec 13 02:26:14.238389 env[1663]: time="2024-12-13T02:26:14.238363456Z" level=info msg="shim disconnected" id=32ec867a2aa47742d0f468ed0de4daa716daa59df9b3ff86195fda0f1f1f98b7 Dec 13 02:26:14.238522 env[1663]: time="2024-12-13T02:26:14.238389480Z" level=warning msg="cleaning up after shim disconnected" id=32ec867a2aa47742d0f468ed0de4daa716daa59df9b3ff86195fda0f1f1f98b7 namespace=k8s.io Dec 13 02:26:14.238522 env[1663]: time="2024-12-13T02:26:14.238395283Z" level=info msg="cleaning up dead shim" Dec 13 02:26:14.242370 env[1663]: time="2024-12-13T02:26:14.242326960Z" level=warning msg="cleanup warnings time=\"2024-12-13T02:26:14Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3323 runtime=io.containerd.runc.v2\n" Dec 13 02:26:15.192529 env[1663]: time="2024-12-13T02:26:15.192363217Z" level=info msg="CreateContainer within sandbox \"73c4f35c6ef179868bfee1fe81cf31c927f548447e34496ba535221819053f52\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Dec 13 02:26:15.199067 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-32ec867a2aa47742d0f468ed0de4daa716daa59df9b3ff86195fda0f1f1f98b7-rootfs.mount: Deactivated successfully. Dec 13 02:26:15.202817 env[1663]: time="2024-12-13T02:26:15.202771537Z" level=info msg="CreateContainer within sandbox \"73c4f35c6ef179868bfee1fe81cf31c927f548447e34496ba535221819053f52\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"a8ef85e3223eb5ff42ecb91a3c2b25f7d999f9a3d46efa3b10d8955cf118592b\"" Dec 13 02:26:15.203198 env[1663]: time="2024-12-13T02:26:15.203146644Z" level=info msg="StartContainer for \"a8ef85e3223eb5ff42ecb91a3c2b25f7d999f9a3d46efa3b10d8955cf118592b\"" Dec 13 02:26:15.225339 env[1663]: time="2024-12-13T02:26:15.225287157Z" level=info msg="StartContainer for \"a8ef85e3223eb5ff42ecb91a3c2b25f7d999f9a3d46efa3b10d8955cf118592b\" returns successfully" Dec 13 02:26:15.235725 env[1663]: time="2024-12-13T02:26:15.235667588Z" level=info msg="shim disconnected" id=a8ef85e3223eb5ff42ecb91a3c2b25f7d999f9a3d46efa3b10d8955cf118592b Dec 13 02:26:15.235725 env[1663]: time="2024-12-13T02:26:15.235694937Z" level=warning msg="cleaning up after shim disconnected" id=a8ef85e3223eb5ff42ecb91a3c2b25f7d999f9a3d46efa3b10d8955cf118592b namespace=k8s.io Dec 13 02:26:15.235725 env[1663]: time="2024-12-13T02:26:15.235701158Z" level=info msg="cleaning up dead shim" Dec 13 02:26:15.238974 env[1663]: time="2024-12-13T02:26:15.238938709Z" level=warning msg="cleanup warnings time=\"2024-12-13T02:26:15Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3378 runtime=io.containerd.runc.v2\n" Dec 13 02:26:16.170266 env[1663]: time="2024-12-13T02:26:16.170215854Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:26:16.170863 env[1663]: time="2024-12-13T02:26:16.170814127Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:26:16.171464 env[1663]: time="2024-12-13T02:26:16.171425947Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Dec 13 02:26:16.171826 
env[1663]: time="2024-12-13T02:26:16.171762284Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Dec 13 02:26:16.173092 env[1663]: time="2024-12-13T02:26:16.173046622Z" level=info msg="CreateContainer within sandbox \"98266f50617ebbbdce29fcf39b8639e39942cecdbbd4871236ff43022dedf376\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Dec 13 02:26:16.176882 env[1663]: time="2024-12-13T02:26:16.176838946Z" level=info msg="CreateContainer within sandbox \"98266f50617ebbbdce29fcf39b8639e39942cecdbbd4871236ff43022dedf376\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"c15551230cf00eef9991e9c73cf5c4a0ee6ddf0f9589c33a8f3f2d4dbe02cb0f\"" Dec 13 02:26:16.177220 env[1663]: time="2024-12-13T02:26:16.177188403Z" level=info msg="StartContainer for \"c15551230cf00eef9991e9c73cf5c4a0ee6ddf0f9589c33a8f3f2d4dbe02cb0f\"" Dec 13 02:26:16.190467 env[1663]: time="2024-12-13T02:26:16.190414538Z" level=info msg="CreateContainer within sandbox \"73c4f35c6ef179868bfee1fe81cf31c927f548447e34496ba535221819053f52\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Dec 13 02:26:16.195406 env[1663]: time="2024-12-13T02:26:16.195382130Z" level=info msg="CreateContainer within sandbox \"73c4f35c6ef179868bfee1fe81cf31c927f548447e34496ba535221819053f52\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"e8c57c00af1834bbed5572243cdc1a3838c702b75715e04edf59bebd8ea20e6c\"" Dec 13 02:26:16.195678 env[1663]: time="2024-12-13T02:26:16.195659211Z" level=info msg="StartContainer for \"e8c57c00af1834bbed5572243cdc1a3838c702b75715e04edf59bebd8ea20e6c\"" Dec 13 02:26:16.196451 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a8ef85e3223eb5ff42ecb91a3c2b25f7d999f9a3d46efa3b10d8955cf118592b-rootfs.mount: Deactivated successfully. Dec 13 02:26:16.198899 env[1663]: time="2024-12-13T02:26:16.198866368Z" level=info msg="StartContainer for \"c15551230cf00eef9991e9c73cf5c4a0ee6ddf0f9589c33a8f3f2d4dbe02cb0f\" returns successfully" Dec 13 02:26:16.218835 env[1663]: time="2024-12-13T02:26:16.218805147Z" level=info msg="StartContainer for \"e8c57c00af1834bbed5572243cdc1a3838c702b75715e04edf59bebd8ea20e6c\" returns successfully" Dec 13 02:26:16.227628 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e8c57c00af1834bbed5572243cdc1a3838c702b75715e04edf59bebd8ea20e6c-rootfs.mount: Deactivated successfully. 
Dec 13 02:26:16.377099 env[1663]: time="2024-12-13T02:26:16.376926864Z" level=info msg="shim disconnected" id=e8c57c00af1834bbed5572243cdc1a3838c702b75715e04edf59bebd8ea20e6c Dec 13 02:26:16.377099 env[1663]: time="2024-12-13T02:26:16.377057968Z" level=warning msg="cleaning up after shim disconnected" id=e8c57c00af1834bbed5572243cdc1a3838c702b75715e04edf59bebd8ea20e6c namespace=k8s.io Dec 13 02:26:16.377099 env[1663]: time="2024-12-13T02:26:16.377091528Z" level=info msg="cleaning up dead shim" Dec 13 02:26:16.394073 env[1663]: time="2024-12-13T02:26:16.393959098Z" level=warning msg="cleanup warnings time=\"2024-12-13T02:26:16Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3481 runtime=io.containerd.runc.v2\n" Dec 13 02:26:17.204990 env[1663]: time="2024-12-13T02:26:17.204880018Z" level=info msg="CreateContainer within sandbox \"73c4f35c6ef179868bfee1fe81cf31c927f548447e34496ba535221819053f52\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Dec 13 02:26:17.220932 env[1663]: time="2024-12-13T02:26:17.220882569Z" level=info msg="CreateContainer within sandbox \"73c4f35c6ef179868bfee1fe81cf31c927f548447e34496ba535221819053f52\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"3ca28b0866c66feed77f2da0fd122fc37913dcdb3fcf59af3330e9144163177e\"" Dec 13 02:26:17.221217 env[1663]: time="2024-12-13T02:26:17.221199218Z" level=info msg="StartContainer for \"3ca28b0866c66feed77f2da0fd122fc37913dcdb3fcf59af3330e9144163177e\"" Dec 13 02:26:17.231133 kubelet[2760]: I1213 02:26:17.231090 2760 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-operator-5cc964979-kprzt" podStartSLOduration=1.464329046 podStartE2EDuration="13.231054687s" podCreationTimestamp="2024-12-13 02:26:04 +0000 UTC" firstStartedPulling="2024-12-13 02:26:04.405229236 +0000 UTC m=+14.373806492" lastFinishedPulling="2024-12-13 02:26:16.171954875 +0000 UTC m=+26.140532133" observedRunningTime="2024-12-13 02:26:17.230944834 +0000 UTC m=+27.199522094" watchObservedRunningTime="2024-12-13 02:26:17.231054687 +0000 UTC m=+27.199631943" Dec 13 02:26:17.244031 env[1663]: time="2024-12-13T02:26:17.244007634Z" level=info msg="StartContainer for \"3ca28b0866c66feed77f2da0fd122fc37913dcdb3fcf59af3330e9144163177e\" returns successfully" Dec 13 02:26:17.297487 kernel: Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks! Dec 13 02:26:17.302666 kubelet[2760]: I1213 02:26:17.302655 2760 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Dec 13 02:26:17.334142 kubelet[2760]: I1213 02:26:17.334126 2760 topology_manager.go:215] "Topology Admit Handler" podUID="7589da60-67d7-4541-966e-87e06ffbeabb" podNamespace="kube-system" podName="coredns-76f75df574-xlpr4" Dec 13 02:26:17.335050 kubelet[2760]: I1213 02:26:17.335037 2760 topology_manager.go:215] "Topology Admit Handler" podUID="6c8de262-d9eb-4d93-893d-2b2f42f98ab1" podNamespace="kube-system" podName="coredns-76f75df574-jr269" Dec 13 02:26:17.454452 kernel: Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks! 
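The cilium-operator startup-latency entry above is internally consistent: podStartSLOduration is the end-to-end time minus the image-pull window, and plugging the entry's own four timestamps into that formula reproduces the logged figures to within a fraction of a millisecond (the residue comes from the tracker taking its own clock reads). A minimal check in Go, with the timestamps copied verbatim from the entry:

package main

import (
	"fmt"
	"time"
)

func mustParse(s string) time.Time {
	// The layout matches the "2024-12-13 02:26:04 +0000 UTC" form in the entry;
	// Go accepts the optional fractional seconds when parsing.
	t, err := time.Parse("2006-01-02 15:04:05 -0700 MST", s)
	if err != nil {
		panic(err)
	}
	return t
}

func main() {
	created := mustParse("2024-12-13 02:26:04 +0000 UTC")           // podCreationTimestamp
	running := mustParse("2024-12-13 02:26:17.230944834 +0000 UTC") // observedRunningTime
	pullStart := mustParse("2024-12-13 02:26:04.405229236 +0000 UTC")
	pullEnd := mustParse("2024-12-13 02:26:16.171954875 +0000 UTC")

	e2e := running.Sub(created)         // ≈ podStartE2EDuration ("13.231054687s")
	slo := e2e - pullEnd.Sub(pullStart) // ≈ podStartSLOduration (1.464329046)
	fmt.Println(e2e, slo)               // 13.230944834s 1.464219195s
}

The same arithmetic explains the pull-free pods earlier in the journal: with firstStartedPulling and lastFinishedPulling at the zero time, the SLO and E2E durations coincide.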
Dec 13 02:26:17.477778 kubelet[2760]: I1213 02:26:17.477737 2760 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7589da60-67d7-4541-966e-87e06ffbeabb-config-volume\") pod \"coredns-76f75df574-xlpr4\" (UID: \"7589da60-67d7-4541-966e-87e06ffbeabb\") " pod="kube-system/coredns-76f75df574-xlpr4" Dec 13 02:26:17.477778 kubelet[2760]: I1213 02:26:17.477762 2760 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m76lw\" (UniqueName: \"kubernetes.io/projected/7589da60-67d7-4541-966e-87e06ffbeabb-kube-api-access-m76lw\") pod \"coredns-76f75df574-xlpr4\" (UID: \"7589da60-67d7-4541-966e-87e06ffbeabb\") " pod="kube-system/coredns-76f75df574-xlpr4" Dec 13 02:26:17.477778 kubelet[2760]: I1213 02:26:17.477776 2760 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6c8de262-d9eb-4d93-893d-2b2f42f98ab1-config-volume\") pod \"coredns-76f75df574-jr269\" (UID: \"6c8de262-d9eb-4d93-893d-2b2f42f98ab1\") " pod="kube-system/coredns-76f75df574-jr269" Dec 13 02:26:17.477905 kubelet[2760]: I1213 02:26:17.477788 2760 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-swcpc\" (UniqueName: \"kubernetes.io/projected/6c8de262-d9eb-4d93-893d-2b2f42f98ab1-kube-api-access-swcpc\") pod \"coredns-76f75df574-jr269\" (UID: \"6c8de262-d9eb-4d93-893d-2b2f42f98ab1\") " pod="kube-system/coredns-76f75df574-jr269" Dec 13 02:26:17.637931 env[1663]: time="2024-12-13T02:26:17.637838227Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-jr269,Uid:6c8de262-d9eb-4d93-893d-2b2f42f98ab1,Namespace:kube-system,Attempt:0,}" Dec 13 02:26:17.637931 env[1663]: time="2024-12-13T02:26:17.637849608Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-xlpr4,Uid:7589da60-67d7-4541-966e-87e06ffbeabb,Namespace:kube-system,Attempt:0,}" Dec 13 02:26:18.220157 kubelet[2760]: I1213 02:26:18.220138 2760 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-mdm9t" podStartSLOduration=6.695496421 podStartE2EDuration="14.220108411s" podCreationTimestamp="2024-12-13 02:26:04 +0000 UTC" firstStartedPulling="2024-12-13 02:26:04.390511293 +0000 UTC m=+14.359088563" lastFinishedPulling="2024-12-13 02:26:11.915123295 +0000 UTC m=+21.883700553" observedRunningTime="2024-12-13 02:26:18.220036927 +0000 UTC m=+28.188614190" watchObservedRunningTime="2024-12-13 02:26:18.220108411 +0000 UTC m=+28.188685671" Dec 13 02:26:19.857239 systemd-networkd[1405]: cilium_host: Link UP Dec 13 02:26:19.857327 systemd-networkd[1405]: cilium_net: Link UP Dec 13 02:26:19.864508 systemd-networkd[1405]: cilium_net: Gained carrier Dec 13 02:26:19.871674 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_net: link becomes ready Dec 13 02:26:19.871750 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready Dec 13 02:26:19.871761 systemd-networkd[1405]: cilium_host: Gained carrier Dec 13 02:26:19.918286 systemd-networkd[1405]: cilium_vxlan: Link UP Dec 13 02:26:19.918289 systemd-networkd[1405]: cilium_vxlan: Gained carrier Dec 13 02:26:20.050432 kernel: NET: Registered PF_ALG protocol family Dec 13 02:26:20.107571 systemd-networkd[1405]: cilium_net: Gained IPv6LL Dec 13 02:26:20.521679 systemd-networkd[1405]: lxc_health: Link UP Dec 13 02:26:20.548389 
systemd-networkd[1405]: cilium_host: Gained IPv6LL Dec 13 02:26:20.548515 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Dec 13 02:26:20.548568 systemd-networkd[1405]: lxc_health: Gained carrier Dec 13 02:26:20.704503 kernel: eth0: renamed from tmpd63f6 Dec 13 02:26:20.732945 systemd-networkd[1405]: lxcba387f0c54c3: Link UP Dec 13 02:26:20.741351 systemd-networkd[1405]: lxcba387f0c54c3: Gained carrier Dec 13 02:26:20.741440 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxcba387f0c54c3: link becomes ready Dec 13 02:26:20.741453 systemd-networkd[1405]: lxc011414aab500: Link UP Dec 13 02:26:20.758495 kernel: eth0: renamed from tmpb5e51 Dec 13 02:26:20.784447 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc011414aab500: link becomes ready Dec 13 02:26:20.784531 systemd-networkd[1405]: lxc011414aab500: Gained carrier Dec 13 02:26:21.555559 systemd-networkd[1405]: cilium_vxlan: Gained IPv6LL Dec 13 02:26:21.939578 systemd-networkd[1405]: lxcba387f0c54c3: Gained IPv6LL Dec 13 02:26:22.003536 systemd-networkd[1405]: lxc011414aab500: Gained IPv6LL Dec 13 02:26:22.259568 systemd-networkd[1405]: lxc_health: Gained IPv6LL Dec 13 02:26:23.035798 env[1663]: time="2024-12-13T02:26:23.035724859Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 02:26:23.035798 env[1663]: time="2024-12-13T02:26:23.035749506Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 02:26:23.035798 env[1663]: time="2024-12-13T02:26:23.035758065Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 02:26:23.036118 env[1663]: time="2024-12-13T02:26:23.035903756Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/b5e51f37dc4425a6851976107f3660f14c193d9b0fc1ddacfd72bd06942bb88d pid=4161 runtime=io.containerd.runc.v2 Dec 13 02:26:23.036118 env[1663]: time="2024-12-13T02:26:23.036074006Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 02:26:23.036118 env[1663]: time="2024-12-13T02:26:23.036095832Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 02:26:23.036118 env[1663]: time="2024-12-13T02:26:23.036104711Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 02:26:23.036219 env[1663]: time="2024-12-13T02:26:23.036184546Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/d63f6e740f6332d65b843a33dd82ca7db6be155e8c44fd8de05b91848360d377 pid=4162 runtime=io.containerd.runc.v2 Dec 13 02:26:23.064812 env[1663]: time="2024-12-13T02:26:23.064782578Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-xlpr4,Uid:7589da60-67d7-4541-966e-87e06ffbeabb,Namespace:kube-system,Attempt:0,} returns sandbox id \"d63f6e740f6332d65b843a33dd82ca7db6be155e8c44fd8de05b91848360d377\"" Dec 13 02:26:23.064992 env[1663]: time="2024-12-13T02:26:23.064977929Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-jr269,Uid:6c8de262-d9eb-4d93-893d-2b2f42f98ab1,Namespace:kube-system,Attempt:0,} returns sandbox id \"b5e51f37dc4425a6851976107f3660f14c193d9b0fc1ddacfd72bd06942bb88d\"" Dec 13 02:26:23.065935 env[1663]: time="2024-12-13T02:26:23.065920864Z" level=info msg="CreateContainer within sandbox \"d63f6e740f6332d65b843a33dd82ca7db6be155e8c44fd8de05b91848360d377\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 13 02:26:23.065981 env[1663]: time="2024-12-13T02:26:23.065969731Z" level=info msg="CreateContainer within sandbox \"b5e51f37dc4425a6851976107f3660f14c193d9b0fc1ddacfd72bd06942bb88d\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Dec 13 02:26:23.070605 env[1663]: time="2024-12-13T02:26:23.070550669Z" level=info msg="CreateContainer within sandbox \"b5e51f37dc4425a6851976107f3660f14c193d9b0fc1ddacfd72bd06942bb88d\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"7219330ecfca3d9dbc7228dae3905390ddbd3f4694dcd3ec0c3606ecb31fdfe9\"" Dec 13 02:26:23.070748 env[1663]: time="2024-12-13T02:26:23.070730662Z" level=info msg="StartContainer for \"7219330ecfca3d9dbc7228dae3905390ddbd3f4694dcd3ec0c3606ecb31fdfe9\"" Dec 13 02:26:23.071428 env[1663]: time="2024-12-13T02:26:23.071409420Z" level=info msg="CreateContainer within sandbox \"d63f6e740f6332d65b843a33dd82ca7db6be155e8c44fd8de05b91848360d377\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"d3d3e91dde0cae6f4cb3076a1fbbc13ddd1613cd5264a711b2b0432db62ad310\"" Dec 13 02:26:23.071602 env[1663]: time="2024-12-13T02:26:23.071590459Z" level=info msg="StartContainer for \"d3d3e91dde0cae6f4cb3076a1fbbc13ddd1613cd5264a711b2b0432db62ad310\"" Dec 13 02:26:23.162255 env[1663]: time="2024-12-13T02:26:23.162215764Z" level=info msg="StartContainer for \"7219330ecfca3d9dbc7228dae3905390ddbd3f4694dcd3ec0c3606ecb31fdfe9\" returns successfully" Dec 13 02:26:23.162255 env[1663]: time="2024-12-13T02:26:23.162231047Z" level=info msg="StartContainer for \"d3d3e91dde0cae6f4cb3076a1fbbc13ddd1613cd5264a711b2b0432db62ad310\" returns successfully" Dec 13 02:26:23.226829 kubelet[2760]: I1213 02:26:23.226775 2760 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-xlpr4" podStartSLOduration=19.226746773 podStartE2EDuration="19.226746773s" podCreationTimestamp="2024-12-13 02:26:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 02:26:23.226549712 +0000 UTC m=+33.195126977" watchObservedRunningTime="2024-12-13 02:26:23.226746773 +0000 UTC m=+33.195324034" Dec 13 02:26:23.241189 kubelet[2760]: I1213 02:26:23.241164 2760 pod_startup_latency_tracker.go:102] 
"Observed pod startup duration" pod="kube-system/coredns-76f75df574-jr269" podStartSLOduration=19.241132066 podStartE2EDuration="19.241132066s" podCreationTimestamp="2024-12-13 02:26:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 02:26:23.240297313 +0000 UTC m=+33.208874582" watchObservedRunningTime="2024-12-13 02:26:23.241132066 +0000 UTC m=+33.209709328" Dec 13 02:32:32.305583 systemd[1]: Started sshd@7-139.178.70.53:22-139.178.68.195:43778.service. Dec 13 02:32:32.400639 sshd[4371]: Accepted publickey for core from 139.178.68.195 port 43778 ssh2: RSA SHA256:zlnHIdneqLCn2LAFHuCmziN2krffEws9kYgisk+y46U Dec 13 02:32:32.402133 sshd[4371]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 02:32:32.407430 systemd-logind[1706]: New session 10 of user core. Dec 13 02:32:32.408493 systemd[1]: Started session-10.scope. Dec 13 02:32:32.503936 sshd[4371]: pam_unix(sshd:session): session closed for user core Dec 13 02:32:32.505183 systemd[1]: sshd@7-139.178.70.53:22-139.178.68.195:43778.service: Deactivated successfully. Dec 13 02:32:32.505796 systemd-logind[1706]: Session 10 logged out. Waiting for processes to exit. Dec 13 02:32:32.505809 systemd[1]: session-10.scope: Deactivated successfully. Dec 13 02:32:32.506308 systemd-logind[1706]: Removed session 10. Dec 13 02:32:37.511045 systemd[1]: Started sshd@8-139.178.70.53:22-139.178.68.195:49272.service. Dec 13 02:32:37.543712 sshd[4402]: Accepted publickey for core from 139.178.68.195 port 49272 ssh2: RSA SHA256:zlnHIdneqLCn2LAFHuCmziN2krffEws9kYgisk+y46U Dec 13 02:32:37.544404 sshd[4402]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 02:32:37.546636 systemd-logind[1706]: New session 11 of user core. Dec 13 02:32:37.547102 systemd[1]: Started session-11.scope. Dec 13 02:32:37.635076 sshd[4402]: pam_unix(sshd:session): session closed for user core Dec 13 02:32:37.636570 systemd[1]: sshd@8-139.178.70.53:22-139.178.68.195:49272.service: Deactivated successfully. Dec 13 02:32:37.637245 systemd-logind[1706]: Session 11 logged out. Waiting for processes to exit. Dec 13 02:32:37.637278 systemd[1]: session-11.scope: Deactivated successfully. Dec 13 02:32:37.637734 systemd-logind[1706]: Removed session 11. Dec 13 02:32:42.642030 systemd[1]: Started sshd@9-139.178.70.53:22-139.178.68.195:49286.service. Dec 13 02:32:42.675005 sshd[4430]: Accepted publickey for core from 139.178.68.195 port 49286 ssh2: RSA SHA256:zlnHIdneqLCn2LAFHuCmziN2krffEws9kYgisk+y46U Dec 13 02:32:42.675864 sshd[4430]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 02:32:42.678910 systemd-logind[1706]: New session 12 of user core. Dec 13 02:32:42.679461 systemd[1]: Started session-12.scope. Dec 13 02:32:42.806284 sshd[4430]: pam_unix(sshd:session): session closed for user core Dec 13 02:32:42.807770 systemd[1]: sshd@9-139.178.70.53:22-139.178.68.195:49286.service: Deactivated successfully. Dec 13 02:32:42.808361 systemd-logind[1706]: Session 12 logged out. Waiting for processes to exit. Dec 13 02:32:42.808395 systemd[1]: session-12.scope: Deactivated successfully. Dec 13 02:32:42.809024 systemd-logind[1706]: Removed session 12. Dec 13 02:32:47.813185 systemd[1]: Started sshd@10-139.178.70.53:22-139.178.68.195:55022.service. 
Dec 13 02:32:47.846222 sshd[4459]: Accepted publickey for core from 139.178.68.195 port 55022 ssh2: RSA SHA256:zlnHIdneqLCn2LAFHuCmziN2krffEws9kYgisk+y46U Dec 13 02:32:47.847026 sshd[4459]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 02:32:47.849425 systemd-logind[1706]: New session 13 of user core. Dec 13 02:32:47.849897 systemd[1]: Started session-13.scope. Dec 13 02:32:47.934025 sshd[4459]: pam_unix(sshd:session): session closed for user core Dec 13 02:32:47.935607 systemd[1]: Started sshd@11-139.178.70.53:22-139.178.68.195:55038.service. Dec 13 02:32:47.935924 systemd[1]: sshd@10-139.178.70.53:22-139.178.68.195:55022.service: Deactivated successfully. Dec 13 02:32:47.936398 systemd-logind[1706]: Session 13 logged out. Waiting for processes to exit. Dec 13 02:32:47.936440 systemd[1]: session-13.scope: Deactivated successfully. Dec 13 02:32:47.937026 systemd-logind[1706]: Removed session 13. Dec 13 02:32:47.968342 sshd[4486]: Accepted publickey for core from 139.178.68.195 port 55038 ssh2: RSA SHA256:zlnHIdneqLCn2LAFHuCmziN2krffEws9kYgisk+y46U Dec 13 02:32:47.969131 sshd[4486]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 02:32:47.971740 systemd-logind[1706]: New session 14 of user core. Dec 13 02:32:47.972196 systemd[1]: Started session-14.scope. Dec 13 02:32:48.086217 sshd[4486]: pam_unix(sshd:session): session closed for user core Dec 13 02:32:48.088437 systemd[1]: Started sshd@12-139.178.70.53:22-139.178.68.195:55048.service. Dec 13 02:32:48.088935 systemd[1]: sshd@11-139.178.70.53:22-139.178.68.195:55038.service: Deactivated successfully. Dec 13 02:32:48.089719 systemd-logind[1706]: Session 14 logged out. Waiting for processes to exit. Dec 13 02:32:48.089822 systemd[1]: session-14.scope: Deactivated successfully. Dec 13 02:32:48.090551 systemd-logind[1706]: Removed session 14. Dec 13 02:32:48.132345 sshd[4511]: Accepted publickey for core from 139.178.68.195 port 55048 ssh2: RSA SHA256:zlnHIdneqLCn2LAFHuCmziN2krffEws9kYgisk+y46U Dec 13 02:32:48.133463 sshd[4511]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 02:32:48.136924 systemd-logind[1706]: New session 15 of user core. Dec 13 02:32:48.137591 systemd[1]: Started session-15.scope. Dec 13 02:32:48.283621 sshd[4511]: pam_unix(sshd:session): session closed for user core Dec 13 02:32:48.285282 systemd[1]: sshd@12-139.178.70.53:22-139.178.68.195:55048.service: Deactivated successfully. Dec 13 02:32:48.285961 systemd-logind[1706]: Session 15 logged out. Waiting for processes to exit. Dec 13 02:32:48.285976 systemd[1]: session-15.scope: Deactivated successfully. Dec 13 02:32:48.286660 systemd-logind[1706]: Removed session 15. Dec 13 02:32:53.290105 systemd[1]: Started sshd@13-139.178.70.53:22-139.178.68.195:55064.service. Dec 13 02:32:53.323324 sshd[4545]: Accepted publickey for core from 139.178.68.195 port 55064 ssh2: RSA SHA256:zlnHIdneqLCn2LAFHuCmziN2krffEws9kYgisk+y46U Dec 13 02:32:53.324024 sshd[4545]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 02:32:53.326498 systemd-logind[1706]: New session 16 of user core. Dec 13 02:32:53.326906 systemd[1]: Started session-16.scope. Dec 13 02:32:53.410597 sshd[4545]: pam_unix(sshd:session): session closed for user core Dec 13 02:32:53.411903 systemd[1]: sshd@13-139.178.70.53:22-139.178.68.195:55064.service: Deactivated successfully. Dec 13 02:32:53.412475 systemd[1]: session-16.scope: Deactivated successfully. 
Dec 13 02:32:53.412511 systemd-logind[1706]: Session 16 logged out. Waiting for processes to exit. Dec 13 02:32:53.412975 systemd-logind[1706]: Removed session 16. Dec 13 02:32:56.116383 systemd[1]: Started sshd@14-139.178.70.53:22-92.255.85.188:24448.service. Dec 13 02:32:57.360581 sshd[4571]: Invalid user postgres from 92.255.85.188 port 24448 Dec 13 02:32:57.603812 sshd[4571]: pam_faillock(sshd:auth): User unknown Dec 13 02:32:57.604890 sshd[4571]: pam_unix(sshd:auth): check pass; user unknown Dec 13 02:32:57.604988 sshd[4571]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=92.255.85.188 Dec 13 02:32:57.606092 sshd[4571]: pam_faillock(sshd:auth): User unknown Dec 13 02:32:58.417363 systemd[1]: Started sshd@15-139.178.70.53:22-139.178.68.195:39604.service. Dec 13 02:32:58.449818 sshd[4573]: Accepted publickey for core from 139.178.68.195 port 39604 ssh2: RSA SHA256:zlnHIdneqLCn2LAFHuCmziN2krffEws9kYgisk+y46U Dec 13 02:32:58.450723 sshd[4573]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 02:32:58.453960 systemd-logind[1706]: New session 17 of user core. Dec 13 02:32:58.454501 systemd[1]: Started session-17.scope. Dec 13 02:32:58.542829 sshd[4573]: pam_unix(sshd:session): session closed for user core Dec 13 02:32:58.544383 systemd[1]: Started sshd@16-139.178.70.53:22-139.178.68.195:39606.service. Dec 13 02:32:58.544724 systemd[1]: sshd@15-139.178.70.53:22-139.178.68.195:39604.service: Deactivated successfully. Dec 13 02:32:58.545270 systemd-logind[1706]: Session 17 logged out. Waiting for processes to exit. Dec 13 02:32:58.545328 systemd[1]: session-17.scope: Deactivated successfully. Dec 13 02:32:58.545907 systemd-logind[1706]: Removed session 17. Dec 13 02:32:58.577336 sshd[4598]: Accepted publickey for core from 139.178.68.195 port 39606 ssh2: RSA SHA256:zlnHIdneqLCn2LAFHuCmziN2krffEws9kYgisk+y46U Dec 13 02:32:58.578074 sshd[4598]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 02:32:58.580644 systemd-logind[1706]: New session 18 of user core. Dec 13 02:32:58.581073 systemd[1]: Started session-18.scope. Dec 13 02:32:58.706117 sshd[4598]: pam_unix(sshd:session): session closed for user core Dec 13 02:32:58.707698 systemd[1]: Started sshd@17-139.178.70.53:22-139.178.68.195:39614.service. Dec 13 02:32:58.708142 systemd[1]: sshd@16-139.178.70.53:22-139.178.68.195:39606.service: Deactivated successfully. Dec 13 02:32:58.708738 systemd[1]: session-18.scope: Deactivated successfully. Dec 13 02:32:58.708769 systemd-logind[1706]: Session 18 logged out. Waiting for processes to exit. Dec 13 02:32:58.709259 systemd-logind[1706]: Removed session 18. Dec 13 02:32:58.740874 sshd[4622]: Accepted publickey for core from 139.178.68.195 port 39614 ssh2: RSA SHA256:zlnHIdneqLCn2LAFHuCmziN2krffEws9kYgisk+y46U Dec 13 02:32:58.741562 sshd[4622]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 02:32:58.743883 systemd-logind[1706]: New session 19 of user core. Dec 13 02:32:58.744242 systemd[1]: Started session-19.scope. Dec 13 02:32:59.814548 sshd[4622]: pam_unix(sshd:session): session closed for user core Dec 13 02:32:59.817930 systemd[1]: Started sshd@18-139.178.70.53:22-139.178.68.195:39616.service. Dec 13 02:32:59.818495 systemd[1]: sshd@17-139.178.70.53:22-139.178.68.195:39614.service: Deactivated successfully. Dec 13 02:32:59.819660 systemd[1]: session-19.scope: Deactivated successfully. Dec 13 02:32:59.819667 systemd-logind[1706]: Session 19 logged out. 
Waiting for processes to exit. Dec 13 02:32:59.820965 systemd-logind[1706]: Removed session 19. Dec 13 02:32:59.837526 sshd[4571]: Failed password for invalid user postgres from 92.255.85.188 port 24448 ssh2 Dec 13 02:32:59.888674 sshd[4656]: Accepted publickey for core from 139.178.68.195 port 39616 ssh2: RSA SHA256:zlnHIdneqLCn2LAFHuCmziN2krffEws9kYgisk+y46U Dec 13 02:32:59.889964 sshd[4656]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 02:32:59.894057 systemd-logind[1706]: New session 20 of user core. Dec 13 02:32:59.894889 systemd[1]: Started session-20.scope. Dec 13 02:33:00.086461 sshd[4656]: pam_unix(sshd:session): session closed for user core Dec 13 02:33:00.087950 systemd[1]: Started sshd@19-139.178.70.53:22-139.178.68.195:39622.service. Dec 13 02:33:00.088211 systemd[1]: sshd@18-139.178.70.53:22-139.178.68.195:39616.service: Deactivated successfully. Dec 13 02:33:00.088733 systemd-logind[1706]: Session 20 logged out. Waiting for processes to exit. Dec 13 02:33:00.088764 systemd[1]: session-20.scope: Deactivated successfully. Dec 13 02:33:00.089175 systemd-logind[1706]: Removed session 20. Dec 13 02:33:00.120825 sshd[4684]: Accepted publickey for core from 139.178.68.195 port 39622 ssh2: RSA SHA256:zlnHIdneqLCn2LAFHuCmziN2krffEws9kYgisk+y46U Dec 13 02:33:00.121595 sshd[4684]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 02:33:00.124113 systemd-logind[1706]: New session 21 of user core. Dec 13 02:33:00.124524 systemd[1]: Started session-21.scope. Dec 13 02:33:00.253248 sshd[4684]: pam_unix(sshd:session): session closed for user core Dec 13 02:33:00.254718 systemd[1]: sshd@19-139.178.70.53:22-139.178.68.195:39622.service: Deactivated successfully. Dec 13 02:33:00.255346 systemd-logind[1706]: Session 21 logged out. Waiting for processes to exit. Dec 13 02:33:00.255355 systemd[1]: session-21.scope: Deactivated successfully. Dec 13 02:33:00.256007 systemd-logind[1706]: Removed session 21. Dec 13 02:33:01.766441 sshd[4571]: Connection closed by invalid user postgres 92.255.85.188 port 24448 [preauth] Dec 13 02:33:01.767880 systemd[1]: sshd@14-139.178.70.53:22-92.255.85.188:24448.service: Deactivated successfully. Dec 13 02:33:05.260147 systemd[1]: Started sshd@20-139.178.70.53:22-139.178.68.195:39632.service. Dec 13 02:33:05.292788 sshd[4722]: Accepted publickey for core from 139.178.68.195 port 39632 ssh2: RSA SHA256:zlnHIdneqLCn2LAFHuCmziN2krffEws9kYgisk+y46U Dec 13 02:33:05.293494 sshd[4722]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 02:33:05.295894 systemd-logind[1706]: New session 22 of user core. Dec 13 02:33:05.296303 systemd[1]: Started session-22.scope. Dec 13 02:33:05.378870 sshd[4722]: pam_unix(sshd:session): session closed for user core Dec 13 02:33:05.380212 systemd[1]: sshd@20-139.178.70.53:22-139.178.68.195:39632.service: Deactivated successfully. Dec 13 02:33:05.380851 systemd-logind[1706]: Session 22 logged out. Waiting for processes to exit. Dec 13 02:33:05.380856 systemd[1]: session-22.scope: Deactivated successfully. Dec 13 02:33:05.381352 systemd-logind[1706]: Removed session 22. Dec 13 02:33:10.385488 systemd[1]: Started sshd@21-139.178.70.53:22-139.178.68.195:38896.service. 
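The 92.255.85.188 entries above (invalid user postgres, pam_faillock, a failed password, then a preauth disconnect) are a routine brute-force probe against the exposed SSH port, interleaved with the legitimate key-authenticated core sessions. A hedged sketch of the per-source counting a fail2ban-style watcher would apply to such lines; the threshold is an assumption, and nothing in this journal indicates the host ran one:

package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
)

// Matches "Failed password for invalid user postgres from 92.255.85.188 ..."
// as well as failures for existing accounts.
var failed = regexp.MustCompile(`Failed password for (?:invalid user )?\S+ from (\d{1,3}(?:\.\d{1,3}){3})`)

func main() {
	counts := map[string]int{}
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 0, 64*1024), 1024*1024) // journal lines can be long
	for sc.Scan() {
		if m := failed.FindStringSubmatch(sc.Text()); m != nil {
			counts[m[1]]++
		}
	}
	const threshold = 3 // assumed policy for the sketch
	for ip, n := range counts {
		note := ""
		if n >= threshold {
			note = " -> would ban"
		}
		fmt.Printf("%s: %d failure(s)%s\n", ip, n, note)
	}
}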
Dec 13 02:33:10.417962 sshd[4747]: Accepted publickey for core from 139.178.68.195 port 38896 ssh2: RSA SHA256:zlnHIdneqLCn2LAFHuCmziN2krffEws9kYgisk+y46U Dec 13 02:33:10.418847 sshd[4747]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 02:33:10.422090 systemd-logind[1706]: New session 23 of user core. Dec 13 02:33:10.422643 systemd[1]: Started session-23.scope. Dec 13 02:33:10.511853 sshd[4747]: pam_unix(sshd:session): session closed for user core Dec 13 02:33:10.513239 systemd[1]: sshd@21-139.178.70.53:22-139.178.68.195:38896.service: Deactivated successfully. Dec 13 02:33:10.513872 systemd[1]: session-23.scope: Deactivated successfully. Dec 13 02:33:10.513916 systemd-logind[1706]: Session 23 logged out. Waiting for processes to exit. Dec 13 02:33:10.514415 systemd-logind[1706]: Removed session 23. Dec 13 02:33:15.513953 systemd[1]: Started sshd@22-139.178.70.53:22-139.178.68.195:38900.service. Dec 13 02:33:15.546840 sshd[4770]: Accepted publickey for core from 139.178.68.195 port 38900 ssh2: RSA SHA256:zlnHIdneqLCn2LAFHuCmziN2krffEws9kYgisk+y46U Dec 13 02:33:15.547652 sshd[4770]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 02:33:15.550469 systemd-logind[1706]: New session 24 of user core. Dec 13 02:33:15.551036 systemd[1]: Started session-24.scope. Dec 13 02:33:15.634634 sshd[4770]: pam_unix(sshd:session): session closed for user core Dec 13 02:33:15.636235 systemd[1]: Started sshd@23-139.178.70.53:22-139.178.68.195:38908.service. Dec 13 02:33:15.636568 systemd[1]: sshd@22-139.178.70.53:22-139.178.68.195:38900.service: Deactivated successfully. Dec 13 02:33:15.637116 systemd-logind[1706]: Session 24 logged out. Waiting for processes to exit. Dec 13 02:33:15.637158 systemd[1]: session-24.scope: Deactivated successfully. Dec 13 02:33:15.637666 systemd-logind[1706]: Removed session 24. Dec 13 02:33:15.669226 sshd[4793]: Accepted publickey for core from 139.178.68.195 port 38908 ssh2: RSA SHA256:zlnHIdneqLCn2LAFHuCmziN2krffEws9kYgisk+y46U Dec 13 02:33:15.670036 sshd[4793]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 02:33:15.672987 systemd-logind[1706]: New session 25 of user core. Dec 13 02:33:15.673472 systemd[1]: Started session-25.scope. Dec 13 02:33:17.016626 env[1663]: time="2024-12-13T02:33:17.016516577Z" level=info msg="StopContainer for \"c15551230cf00eef9991e9c73cf5c4a0ee6ddf0f9589c33a8f3f2d4dbe02cb0f\" with timeout 30 (s)" Dec 13 02:33:17.017635 env[1663]: time="2024-12-13T02:33:17.017489420Z" level=info msg="Stop container \"c15551230cf00eef9991e9c73cf5c4a0ee6ddf0f9589c33a8f3f2d4dbe02cb0f\" with signal terminated" Dec 13 02:33:17.056092 env[1663]: time="2024-12-13T02:33:17.056002724Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 13 02:33:17.059790 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c15551230cf00eef9991e9c73cf5c4a0ee6ddf0f9589c33a8f3f2d4dbe02cb0f-rootfs.mount: Deactivated successfully. 
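The "failed to reload cni configuration" error above is expected during cilium teardown: containerd watches /etc/cni/net.d for changes and reloads on each event, and removing 05-cilium.conf leaves no network config behind, so the reload finds nothing. A sketch of that watch-and-reload mechanism using the widely used fsnotify package; this illustrates the pattern, it is not containerd's code:

package main

import (
	"log"

	"github.com/fsnotify/fsnotify"
)

func main() {
	w, err := fsnotify.NewWatcher()
	if err != nil {
		log.Fatal(err)
	}
	defer w.Close()
	if err := w.Add("/etc/cni/net.d"); err != nil {
		log.Fatal(err)
	}
	for ev := range w.Events {
		// A REMOVE of the last conf file triggers a reload that then fails
		// with "no network config found", as logged above.
		if ev.Op&fsnotify.Remove != 0 {
			log.Printf("cni config change: %s; reloading", ev)
		}
	}
}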
Dec 13 02:33:17.060233 env[1663]: time="2024-12-13T02:33:17.060187823Z" level=info msg="shim disconnected" id=c15551230cf00eef9991e9c73cf5c4a0ee6ddf0f9589c33a8f3f2d4dbe02cb0f Dec 13 02:33:17.060327 env[1663]: time="2024-12-13T02:33:17.060238761Z" level=warning msg="cleaning up after shim disconnected" id=c15551230cf00eef9991e9c73cf5c4a0ee6ddf0f9589c33a8f3f2d4dbe02cb0f namespace=k8s.io Dec 13 02:33:17.060327 env[1663]: time="2024-12-13T02:33:17.060252412Z" level=info msg="cleaning up dead shim" Dec 13 02:33:17.061859 env[1663]: time="2024-12-13T02:33:17.061801271Z" level=info msg="StopContainer for \"3ca28b0866c66feed77f2da0fd122fc37913dcdb3fcf59af3330e9144163177e\" with timeout 2 (s)" Dec 13 02:33:17.062057 env[1663]: time="2024-12-13T02:33:17.062030296Z" level=info msg="Stop container \"3ca28b0866c66feed77f2da0fd122fc37913dcdb3fcf59af3330e9144163177e\" with signal terminated" Dec 13 02:33:17.067565 env[1663]: time="2024-12-13T02:33:17.067516908Z" level=warning msg="cleanup warnings time=\"2024-12-13T02:33:17Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4855 runtime=io.containerd.runc.v2\n" Dec 13 02:33:17.067996 systemd-networkd[1405]: lxc_health: Link DOWN Dec 13 02:33:17.068002 systemd-networkd[1405]: lxc_health: Lost carrier Dec 13 02:33:17.068777 env[1663]: time="2024-12-13T02:33:17.068747233Z" level=info msg="StopContainer for \"c15551230cf00eef9991e9c73cf5c4a0ee6ddf0f9589c33a8f3f2d4dbe02cb0f\" returns successfully" Dec 13 02:33:17.069380 env[1663]: time="2024-12-13T02:33:17.069349566Z" level=info msg="StopPodSandbox for \"98266f50617ebbbdce29fcf39b8639e39942cecdbbd4871236ff43022dedf376\"" Dec 13 02:33:17.069453 env[1663]: time="2024-12-13T02:33:17.069435742Z" level=info msg="Container to stop \"c15551230cf00eef9991e9c73cf5c4a0ee6ddf0f9589c33a8f3f2d4dbe02cb0f\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 02:33:17.072280 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-98266f50617ebbbdce29fcf39b8639e39942cecdbbd4871236ff43022dedf376-shm.mount: Deactivated successfully. Dec 13 02:33:17.088696 env[1663]: time="2024-12-13T02:33:17.088645452Z" level=info msg="shim disconnected" id=98266f50617ebbbdce29fcf39b8639e39942cecdbbd4871236ff43022dedf376 Dec 13 02:33:17.088696 env[1663]: time="2024-12-13T02:33:17.088692101Z" level=warning msg="cleaning up after shim disconnected" id=98266f50617ebbbdce29fcf39b8639e39942cecdbbd4871236ff43022dedf376 namespace=k8s.io Dec 13 02:33:17.088862 env[1663]: time="2024-12-13T02:33:17.088704603Z" level=info msg="cleaning up dead shim" Dec 13 02:33:17.088924 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-98266f50617ebbbdce29fcf39b8639e39942cecdbbd4871236ff43022dedf376-rootfs.mount: Deactivated successfully. 
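The "must be in running or unknown state, current state CONTAINER_EXITED" line is a state guard logged while the pod sandbox is stopped: only containers that might still be alive get signalled, and an already-exited container is noted and skipped rather than treated as a failure. Expressed against the CRI state enum (a sketch using the published k8s.io/cri-api types, not containerd's internal code):

package main

import (
	"fmt"

	pb "k8s.io/cri-api/pkg/apis/runtime/v1"
)

// stoppable reports whether a stop request should actually signal the
// container: only RUNNING and UNKNOWN states need a kill; CREATED and
// EXITED containers are already not running.
func stoppable(s pb.ContainerState) bool {
	return s == pb.ContainerState_CONTAINER_RUNNING ||
		s == pb.ContainerState_CONTAINER_UNKNOWN
}

func main() {
	fmt.Println(stoppable(pb.ContainerState_CONTAINER_EXITED)) // false, as in the log
}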
Dec 13 02:33:17.094103 env[1663]: time="2024-12-13T02:33:17.094074129Z" level=warning msg="cleanup warnings time=\"2024-12-13T02:33:17Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4896 runtime=io.containerd.runc.v2\n" Dec 13 02:33:17.094343 env[1663]: time="2024-12-13T02:33:17.094320723Z" level=info msg="TearDown network for sandbox \"98266f50617ebbbdce29fcf39b8639e39942cecdbbd4871236ff43022dedf376\" successfully" Dec 13 02:33:17.094388 env[1663]: time="2024-12-13T02:33:17.094341630Z" level=info msg="StopPodSandbox for \"98266f50617ebbbdce29fcf39b8639e39942cecdbbd4871236ff43022dedf376\" returns successfully" Dec 13 02:33:17.133847 env[1663]: time="2024-12-13T02:33:17.133769091Z" level=info msg="shim disconnected" id=3ca28b0866c66feed77f2da0fd122fc37913dcdb3fcf59af3330e9144163177e Dec 13 02:33:17.133847 env[1663]: time="2024-12-13T02:33:17.133823444Z" level=warning msg="cleaning up after shim disconnected" id=3ca28b0866c66feed77f2da0fd122fc37913dcdb3fcf59af3330e9144163177e namespace=k8s.io Dec 13 02:33:17.133847 env[1663]: time="2024-12-13T02:33:17.133838916Z" level=info msg="cleaning up dead shim" Dec 13 02:33:17.134218 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3ca28b0866c66feed77f2da0fd122fc37913dcdb3fcf59af3330e9144163177e-rootfs.mount: Deactivated successfully. Dec 13 02:33:17.139692 env[1663]: time="2024-12-13T02:33:17.139655560Z" level=warning msg="cleanup warnings time=\"2024-12-13T02:33:17Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4922 runtime=io.containerd.runc.v2\n" Dec 13 02:33:17.140845 env[1663]: time="2024-12-13T02:33:17.140786662Z" level=info msg="StopContainer for \"3ca28b0866c66feed77f2da0fd122fc37913dcdb3fcf59af3330e9144163177e\" returns successfully" Dec 13 02:33:17.141252 env[1663]: time="2024-12-13T02:33:17.141204093Z" level=info msg="StopPodSandbox for \"73c4f35c6ef179868bfee1fe81cf31c927f548447e34496ba535221819053f52\"" Dec 13 02:33:17.141318 env[1663]: time="2024-12-13T02:33:17.141270672Z" level=info msg="Container to stop \"2e00ad986b3aab3cf8cb40f5f2cf3f9f915207e532f54e8d6bc13f2e78f4227e\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 02:33:17.141318 env[1663]: time="2024-12-13T02:33:17.141289637Z" level=info msg="Container to stop \"32ec867a2aa47742d0f468ed0de4daa716daa59df9b3ff86195fda0f1f1f98b7\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 02:33:17.141318 env[1663]: time="2024-12-13T02:33:17.141302823Z" level=info msg="Container to stop \"a8ef85e3223eb5ff42ecb91a3c2b25f7d999f9a3d46efa3b10d8955cf118592b\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 02:33:17.141471 env[1663]: time="2024-12-13T02:33:17.141316872Z" level=info msg="Container to stop \"e8c57c00af1834bbed5572243cdc1a3838c702b75715e04edf59bebd8ea20e6c\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 02:33:17.141471 env[1663]: time="2024-12-13T02:33:17.141328684Z" level=info msg="Container to stop \"3ca28b0866c66feed77f2da0fd122fc37913dcdb3fcf59af3330e9144163177e\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Dec 13 02:33:17.159401 env[1663]: time="2024-12-13T02:33:17.159346630Z" level=info msg="shim disconnected" id=73c4f35c6ef179868bfee1fe81cf31c927f548447e34496ba535221819053f52 Dec 13 02:33:17.159401 env[1663]: time="2024-12-13T02:33:17.159399630Z" level=warning msg="cleaning up after shim disconnected" id=73c4f35c6ef179868bfee1fe81cf31c927f548447e34496ba535221819053f52 
namespace=k8s.io Dec 13 02:33:17.159682 env[1663]: time="2024-12-13T02:33:17.159415312Z" level=info msg="cleaning up dead shim" Dec 13 02:33:17.165879 env[1663]: time="2024-12-13T02:33:17.165822989Z" level=warning msg="cleanup warnings time=\"2024-12-13T02:33:17Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4955 runtime=io.containerd.runc.v2\n" Dec 13 02:33:17.166160 env[1663]: time="2024-12-13T02:33:17.166106624Z" level=info msg="TearDown network for sandbox \"73c4f35c6ef179868bfee1fe81cf31c927f548447e34496ba535221819053f52\" successfully" Dec 13 02:33:17.166160 env[1663]: time="2024-12-13T02:33:17.166132694Z" level=info msg="StopPodSandbox for \"73c4f35c6ef179868bfee1fe81cf31c927f548447e34496ba535221819053f52\" returns successfully" Dec 13 02:33:17.271749 kubelet[2760]: I1213 02:33:17.271516 2760 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/45cf36e0-a940-4238-8cb7-0698c781ab88-bpf-maps\") pod \"45cf36e0-a940-4238-8cb7-0698c781ab88\" (UID: \"45cf36e0-a940-4238-8cb7-0698c781ab88\") " Dec 13 02:33:17.271749 kubelet[2760]: I1213 02:33:17.271647 2760 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/45cf36e0-a940-4238-8cb7-0698c781ab88-xtables-lock\") pod \"45cf36e0-a940-4238-8cb7-0698c781ab88\" (UID: \"45cf36e0-a940-4238-8cb7-0698c781ab88\") " Dec 13 02:33:17.271749 kubelet[2760]: I1213 02:33:17.271651 2760 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/45cf36e0-a940-4238-8cb7-0698c781ab88-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "45cf36e0-a940-4238-8cb7-0698c781ab88" (UID: "45cf36e0-a940-4238-8cb7-0698c781ab88"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 02:33:17.273293 kubelet[2760]: I1213 02:33:17.271778 2760 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/fd9fd862-f7b3-4923-9575-d8d9fa2b991e-cilium-config-path\") pod \"fd9fd862-f7b3-4923-9575-d8d9fa2b991e\" (UID: \"fd9fd862-f7b3-4923-9575-d8d9fa2b991e\") " Dec 13 02:33:17.273293 kubelet[2760]: I1213 02:33:17.271798 2760 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/45cf36e0-a940-4238-8cb7-0698c781ab88-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "45cf36e0-a940-4238-8cb7-0698c781ab88" (UID: "45cf36e0-a940-4238-8cb7-0698c781ab88"). InnerVolumeSpecName "xtables-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 02:33:17.273293 kubelet[2760]: I1213 02:33:17.271884 2760 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/45cf36e0-a940-4238-8cb7-0698c781ab88-etc-cni-netd\") pod \"45cf36e0-a940-4238-8cb7-0698c781ab88\" (UID: \"45cf36e0-a940-4238-8cb7-0698c781ab88\") " Dec 13 02:33:17.273293 kubelet[2760]: I1213 02:33:17.271993 2760 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/45cf36e0-a940-4238-8cb7-0698c781ab88-host-proc-sys-net\") pod \"45cf36e0-a940-4238-8cb7-0698c781ab88\" (UID: \"45cf36e0-a940-4238-8cb7-0698c781ab88\") " Dec 13 02:33:17.273293 kubelet[2760]: I1213 02:33:17.271977 2760 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/45cf36e0-a940-4238-8cb7-0698c781ab88-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "45cf36e0-a940-4238-8cb7-0698c781ab88" (UID: "45cf36e0-a940-4238-8cb7-0698c781ab88"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 02:33:17.274250 kubelet[2760]: I1213 02:33:17.272047 2760 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/45cf36e0-a940-4238-8cb7-0698c781ab88-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "45cf36e0-a940-4238-8cb7-0698c781ab88" (UID: "45cf36e0-a940-4238-8cb7-0698c781ab88"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 02:33:17.274250 kubelet[2760]: I1213 02:33:17.272090 2760 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/45cf36e0-a940-4238-8cb7-0698c781ab88-cilium-run\") pod \"45cf36e0-a940-4238-8cb7-0698c781ab88\" (UID: \"45cf36e0-a940-4238-8cb7-0698c781ab88\") " Dec 13 02:33:17.274250 kubelet[2760]: I1213 02:33:17.272154 2760 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/45cf36e0-a940-4238-8cb7-0698c781ab88-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "45cf36e0-a940-4238-8cb7-0698c781ab88" (UID: "45cf36e0-a940-4238-8cb7-0698c781ab88"). InnerVolumeSpecName "cilium-run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 02:33:17.274250 kubelet[2760]: I1213 02:33:17.272206 2760 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/45cf36e0-a940-4238-8cb7-0698c781ab88-clustermesh-secrets\") pod \"45cf36e0-a940-4238-8cb7-0698c781ab88\" (UID: \"45cf36e0-a940-4238-8cb7-0698c781ab88\") " Dec 13 02:33:17.274250 kubelet[2760]: I1213 02:33:17.272337 2760 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rf5hc\" (UniqueName: \"kubernetes.io/projected/fd9fd862-f7b3-4923-9575-d8d9fa2b991e-kube-api-access-rf5hc\") pod \"fd9fd862-f7b3-4923-9575-d8d9fa2b991e\" (UID: \"fd9fd862-f7b3-4923-9575-d8d9fa2b991e\") " Dec 13 02:33:17.274978 kubelet[2760]: I1213 02:33:17.272461 2760 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/45cf36e0-a940-4238-8cb7-0698c781ab88-cilium-cgroup\") pod \"45cf36e0-a940-4238-8cb7-0698c781ab88\" (UID: \"45cf36e0-a940-4238-8cb7-0698c781ab88\") " Dec 13 02:33:17.274978 kubelet[2760]: I1213 02:33:17.272577 2760 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/45cf36e0-a940-4238-8cb7-0698c781ab88-hubble-tls\") pod \"45cf36e0-a940-4238-8cb7-0698c781ab88\" (UID: \"45cf36e0-a940-4238-8cb7-0698c781ab88\") " Dec 13 02:33:17.274978 kubelet[2760]: I1213 02:33:17.272601 2760 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/45cf36e0-a940-4238-8cb7-0698c781ab88-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "45cf36e0-a940-4238-8cb7-0698c781ab88" (UID: "45cf36e0-a940-4238-8cb7-0698c781ab88"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 02:33:17.274978 kubelet[2760]: I1213 02:33:17.272674 2760 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/45cf36e0-a940-4238-8cb7-0698c781ab88-cni-path\") pod \"45cf36e0-a940-4238-8cb7-0698c781ab88\" (UID: \"45cf36e0-a940-4238-8cb7-0698c781ab88\") " Dec 13 02:33:17.274978 kubelet[2760]: I1213 02:33:17.272769 2760 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/45cf36e0-a940-4238-8cb7-0698c781ab88-hostproc\") pod \"45cf36e0-a940-4238-8cb7-0698c781ab88\" (UID: \"45cf36e0-a940-4238-8cb7-0698c781ab88\") " Dec 13 02:33:17.274978 kubelet[2760]: I1213 02:33:17.272793 2760 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/45cf36e0-a940-4238-8cb7-0698c781ab88-cni-path" (OuterVolumeSpecName: "cni-path") pod "45cf36e0-a940-4238-8cb7-0698c781ab88" (UID: "45cf36e0-a940-4238-8cb7-0698c781ab88"). InnerVolumeSpecName "cni-path". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 02:33:17.275774 kubelet[2760]: I1213 02:33:17.272881 2760 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/45cf36e0-a940-4238-8cb7-0698c781ab88-cilium-config-path\") pod \"45cf36e0-a940-4238-8cb7-0698c781ab88\" (UID: \"45cf36e0-a940-4238-8cb7-0698c781ab88\") " Dec 13 02:33:17.275774 kubelet[2760]: I1213 02:33:17.272884 2760 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/45cf36e0-a940-4238-8cb7-0698c781ab88-hostproc" (OuterVolumeSpecName: "hostproc") pod "45cf36e0-a940-4238-8cb7-0698c781ab88" (UID: "45cf36e0-a940-4238-8cb7-0698c781ab88"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 02:33:17.275774 kubelet[2760]: I1213 02:33:17.273002 2760 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hccqp\" (UniqueName: \"kubernetes.io/projected/45cf36e0-a940-4238-8cb7-0698c781ab88-kube-api-access-hccqp\") pod \"45cf36e0-a940-4238-8cb7-0698c781ab88\" (UID: \"45cf36e0-a940-4238-8cb7-0698c781ab88\") " Dec 13 02:33:17.275774 kubelet[2760]: I1213 02:33:17.273122 2760 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/45cf36e0-a940-4238-8cb7-0698c781ab88-host-proc-sys-kernel\") pod \"45cf36e0-a940-4238-8cb7-0698c781ab88\" (UID: \"45cf36e0-a940-4238-8cb7-0698c781ab88\") " Dec 13 02:33:17.275774 kubelet[2760]: I1213 02:33:17.273222 2760 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/45cf36e0-a940-4238-8cb7-0698c781ab88-lib-modules\") pod \"45cf36e0-a940-4238-8cb7-0698c781ab88\" (UID: \"45cf36e0-a940-4238-8cb7-0698c781ab88\") " Dec 13 02:33:17.276470 kubelet[2760]: I1213 02:33:17.273259 2760 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/45cf36e0-a940-4238-8cb7-0698c781ab88-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "45cf36e0-a940-4238-8cb7-0698c781ab88" (UID: "45cf36e0-a940-4238-8cb7-0698c781ab88"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 02:33:17.276470 kubelet[2760]: I1213 02:33:17.273355 2760 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/45cf36e0-a940-4238-8cb7-0698c781ab88-cni-path\") on node \"ci-3510.3.6-a-cefcb26589\" DevicePath \"\"" Dec 13 02:33:17.276470 kubelet[2760]: I1213 02:33:17.273444 2760 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/45cf36e0-a940-4238-8cb7-0698c781ab88-hostproc\") on node \"ci-3510.3.6-a-cefcb26589\" DevicePath \"\"" Dec 13 02:33:17.276470 kubelet[2760]: I1213 02:33:17.273374 2760 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/45cf36e0-a940-4238-8cb7-0698c781ab88-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "45cf36e0-a940-4238-8cb7-0698c781ab88" (UID: "45cf36e0-a940-4238-8cb7-0698c781ab88"). InnerVolumeSpecName "lib-modules". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 02:33:17.276470 kubelet[2760]: I1213 02:33:17.273511 2760 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/45cf36e0-a940-4238-8cb7-0698c781ab88-bpf-maps\") on node \"ci-3510.3.6-a-cefcb26589\" DevicePath \"\"" Dec 13 02:33:17.276470 kubelet[2760]: I1213 02:33:17.273574 2760 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/45cf36e0-a940-4238-8cb7-0698c781ab88-etc-cni-netd\") on node \"ci-3510.3.6-a-cefcb26589\" DevicePath \"\"" Dec 13 02:33:17.276470 kubelet[2760]: I1213 02:33:17.273639 2760 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/45cf36e0-a940-4238-8cb7-0698c781ab88-xtables-lock\") on node \"ci-3510.3.6-a-cefcb26589\" DevicePath \"\"" Dec 13 02:33:17.277350 kubelet[2760]: I1213 02:33:17.273706 2760 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/45cf36e0-a940-4238-8cb7-0698c781ab88-host-proc-sys-net\") on node \"ci-3510.3.6-a-cefcb26589\" DevicePath \"\"" Dec 13 02:33:17.277350 kubelet[2760]: I1213 02:33:17.273766 2760 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/45cf36e0-a940-4238-8cb7-0698c781ab88-cilium-run\") on node \"ci-3510.3.6-a-cefcb26589\" DevicePath \"\"" Dec 13 02:33:17.277350 kubelet[2760]: I1213 02:33:17.273833 2760 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/45cf36e0-a940-4238-8cb7-0698c781ab88-cilium-cgroup\") on node \"ci-3510.3.6-a-cefcb26589\" DevicePath \"\"" Dec 13 02:33:17.278762 kubelet[2760]: I1213 02:33:17.278655 2760 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fd9fd862-f7b3-4923-9575-d8d9fa2b991e-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "fd9fd862-f7b3-4923-9575-d8d9fa2b991e" (UID: "fd9fd862-f7b3-4923-9575-d8d9fa2b991e"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 02:33:17.279024 kubelet[2760]: I1213 02:33:17.278965 2760 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/45cf36e0-a940-4238-8cb7-0698c781ab88-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "45cf36e0-a940-4238-8cb7-0698c781ab88" (UID: "45cf36e0-a940-4238-8cb7-0698c781ab88"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 02:33:17.279283 kubelet[2760]: I1213 02:33:17.279193 2760 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/45cf36e0-a940-4238-8cb7-0698c781ab88-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "45cf36e0-a940-4238-8cb7-0698c781ab88" (UID: "45cf36e0-a940-4238-8cb7-0698c781ab88"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 02:33:17.279455 kubelet[2760]: I1213 02:33:17.279337 2760 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fd9fd862-f7b3-4923-9575-d8d9fa2b991e-kube-api-access-rf5hc" (OuterVolumeSpecName: "kube-api-access-rf5hc") pod "fd9fd862-f7b3-4923-9575-d8d9fa2b991e" (UID: "fd9fd862-f7b3-4923-9575-d8d9fa2b991e"). InnerVolumeSpecName "kube-api-access-rf5hc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 02:33:17.279594 kubelet[2760]: I1213 02:33:17.279412 2760 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/45cf36e0-a940-4238-8cb7-0698c781ab88-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "45cf36e0-a940-4238-8cb7-0698c781ab88" (UID: "45cf36e0-a940-4238-8cb7-0698c781ab88"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 02:33:17.279713 kubelet[2760]: I1213 02:33:17.279648 2760 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/45cf36e0-a940-4238-8cb7-0698c781ab88-kube-api-access-hccqp" (OuterVolumeSpecName: "kube-api-access-hccqp") pod "45cf36e0-a940-4238-8cb7-0698c781ab88" (UID: "45cf36e0-a940-4238-8cb7-0698c781ab88"). InnerVolumeSpecName "kube-api-access-hccqp". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 02:33:17.374183 kubelet[2760]: I1213 02:33:17.374077 2760 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/45cf36e0-a940-4238-8cb7-0698c781ab88-clustermesh-secrets\") on node \"ci-3510.3.6-a-cefcb26589\" DevicePath \"\"" Dec 13 02:33:17.374183 kubelet[2760]: I1213 02:33:17.374156 2760 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-rf5hc\" (UniqueName: \"kubernetes.io/projected/fd9fd862-f7b3-4923-9575-d8d9fa2b991e-kube-api-access-rf5hc\") on node \"ci-3510.3.6-a-cefcb26589\" DevicePath \"\"" Dec 13 02:33:17.374183 kubelet[2760]: I1213 02:33:17.374203 2760 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/45cf36e0-a940-4238-8cb7-0698c781ab88-hubble-tls\") on node \"ci-3510.3.6-a-cefcb26589\" DevicePath \"\"" Dec 13 02:33:17.374718 kubelet[2760]: I1213 02:33:17.374241 2760 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/45cf36e0-a940-4238-8cb7-0698c781ab88-lib-modules\") on node \"ci-3510.3.6-a-cefcb26589\" DevicePath \"\"" Dec 13 02:33:17.374718 kubelet[2760]: I1213 02:33:17.374273 2760 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/45cf36e0-a940-4238-8cb7-0698c781ab88-cilium-config-path\") on node \"ci-3510.3.6-a-cefcb26589\" DevicePath \"\"" Dec 13 02:33:17.374718 kubelet[2760]: I1213 02:33:17.374308 2760 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-hccqp\" (UniqueName: \"kubernetes.io/projected/45cf36e0-a940-4238-8cb7-0698c781ab88-kube-api-access-hccqp\") on node \"ci-3510.3.6-a-cefcb26589\" DevicePath \"\"" Dec 13 02:33:17.374718 kubelet[2760]: I1213 02:33:17.374342 2760 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/45cf36e0-a940-4238-8cb7-0698c781ab88-host-proc-sys-kernel\") on node \"ci-3510.3.6-a-cefcb26589\" DevicePath \"\"" Dec 13 02:33:17.374718 kubelet[2760]: I1213 02:33:17.374375 2760 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/fd9fd862-f7b3-4923-9575-d8d9fa2b991e-cilium-config-path\") on node \"ci-3510.3.6-a-cefcb26589\" DevicePath \"\"" Dec 13 02:33:17.434727 kubelet[2760]: I1213 02:33:17.434625 2760 scope.go:117] "RemoveContainer" containerID="3ca28b0866c66feed77f2da0fd122fc37913dcdb3fcf59af3330e9144163177e" Dec 13 02:33:17.437347 env[1663]: time="2024-12-13T02:33:17.437269506Z" level=info 
msg="RemoveContainer for \"3ca28b0866c66feed77f2da0fd122fc37913dcdb3fcf59af3330e9144163177e\"" Dec 13 02:33:17.443242 env[1663]: time="2024-12-13T02:33:17.443159949Z" level=info msg="RemoveContainer for \"3ca28b0866c66feed77f2da0fd122fc37913dcdb3fcf59af3330e9144163177e\" returns successfully" Dec 13 02:33:17.443354 kubelet[2760]: I1213 02:33:17.443333 2760 scope.go:117] "RemoveContainer" containerID="e8c57c00af1834bbed5572243cdc1a3838c702b75715e04edf59bebd8ea20e6c" Dec 13 02:33:17.444295 env[1663]: time="2024-12-13T02:33:17.444255959Z" level=info msg="RemoveContainer for \"e8c57c00af1834bbed5572243cdc1a3838c702b75715e04edf59bebd8ea20e6c\"" Dec 13 02:33:17.445638 env[1663]: time="2024-12-13T02:33:17.445598448Z" level=info msg="RemoveContainer for \"e8c57c00af1834bbed5572243cdc1a3838c702b75715e04edf59bebd8ea20e6c\" returns successfully" Dec 13 02:33:17.445737 kubelet[2760]: I1213 02:33:17.445684 2760 scope.go:117] "RemoveContainer" containerID="a8ef85e3223eb5ff42ecb91a3c2b25f7d999f9a3d46efa3b10d8955cf118592b" Dec 13 02:33:17.446163 env[1663]: time="2024-12-13T02:33:17.446131536Z" level=info msg="RemoveContainer for \"a8ef85e3223eb5ff42ecb91a3c2b25f7d999f9a3d46efa3b10d8955cf118592b\"" Dec 13 02:33:17.447296 env[1663]: time="2024-12-13T02:33:17.447259218Z" level=info msg="RemoveContainer for \"a8ef85e3223eb5ff42ecb91a3c2b25f7d999f9a3d46efa3b10d8955cf118592b\" returns successfully" Dec 13 02:33:17.447335 kubelet[2760]: I1213 02:33:17.447315 2760 scope.go:117] "RemoveContainer" containerID="32ec867a2aa47742d0f468ed0de4daa716daa59df9b3ff86195fda0f1f1f98b7" Dec 13 02:33:17.447923 env[1663]: time="2024-12-13T02:33:17.447847966Z" level=info msg="RemoveContainer for \"32ec867a2aa47742d0f468ed0de4daa716daa59df9b3ff86195fda0f1f1f98b7\"" Dec 13 02:33:17.448988 env[1663]: time="2024-12-13T02:33:17.448953665Z" level=info msg="RemoveContainer for \"32ec867a2aa47742d0f468ed0de4daa716daa59df9b3ff86195fda0f1f1f98b7\" returns successfully" Dec 13 02:33:17.449107 kubelet[2760]: I1213 02:33:17.449058 2760 scope.go:117] "RemoveContainer" containerID="2e00ad986b3aab3cf8cb40f5f2cf3f9f915207e532f54e8d6bc13f2e78f4227e" Dec 13 02:33:17.449517 env[1663]: time="2024-12-13T02:33:17.449459122Z" level=info msg="RemoveContainer for \"2e00ad986b3aab3cf8cb40f5f2cf3f9f915207e532f54e8d6bc13f2e78f4227e\"" Dec 13 02:33:17.450659 env[1663]: time="2024-12-13T02:33:17.450646152Z" level=info msg="RemoveContainer for \"2e00ad986b3aab3cf8cb40f5f2cf3f9f915207e532f54e8d6bc13f2e78f4227e\" returns successfully" Dec 13 02:33:17.450741 kubelet[2760]: I1213 02:33:17.450733 2760 scope.go:117] "RemoveContainer" containerID="3ca28b0866c66feed77f2da0fd122fc37913dcdb3fcf59af3330e9144163177e" Dec 13 02:33:17.450889 env[1663]: time="2024-12-13T02:33:17.450821558Z" level=error msg="ContainerStatus for \"3ca28b0866c66feed77f2da0fd122fc37913dcdb3fcf59af3330e9144163177e\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"3ca28b0866c66feed77f2da0fd122fc37913dcdb3fcf59af3330e9144163177e\": not found" Dec 13 02:33:17.450968 kubelet[2760]: E1213 02:33:17.450961 2760 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"3ca28b0866c66feed77f2da0fd122fc37913dcdb3fcf59af3330e9144163177e\": not found" containerID="3ca28b0866c66feed77f2da0fd122fc37913dcdb3fcf59af3330e9144163177e" Dec 13 02:33:17.451035 kubelet[2760]: I1213 02:33:17.451030 2760 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"containerd","ID":"3ca28b0866c66feed77f2da0fd122fc37913dcdb3fcf59af3330e9144163177e"} err="failed to get container status \"3ca28b0866c66feed77f2da0fd122fc37913dcdb3fcf59af3330e9144163177e\": rpc error: code = NotFound desc = an error occurred when try to find container \"3ca28b0866c66feed77f2da0fd122fc37913dcdb3fcf59af3330e9144163177e\": not found" Dec 13 02:33:17.451059 kubelet[2760]: I1213 02:33:17.451038 2760 scope.go:117] "RemoveContainer" containerID="e8c57c00af1834bbed5572243cdc1a3838c702b75715e04edf59bebd8ea20e6c" Dec 13 02:33:17.451159 env[1663]: time="2024-12-13T02:33:17.451123393Z" level=error msg="ContainerStatus for \"e8c57c00af1834bbed5572243cdc1a3838c702b75715e04edf59bebd8ea20e6c\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"e8c57c00af1834bbed5572243cdc1a3838c702b75715e04edf59bebd8ea20e6c\": not found" Dec 13 02:33:17.451224 kubelet[2760]: E1213 02:33:17.451218 2760 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"e8c57c00af1834bbed5572243cdc1a3838c702b75715e04edf59bebd8ea20e6c\": not found" containerID="e8c57c00af1834bbed5572243cdc1a3838c702b75715e04edf59bebd8ea20e6c" Dec 13 02:33:17.451247 kubelet[2760]: I1213 02:33:17.451232 2760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"e8c57c00af1834bbed5572243cdc1a3838c702b75715e04edf59bebd8ea20e6c"} err="failed to get container status \"e8c57c00af1834bbed5572243cdc1a3838c702b75715e04edf59bebd8ea20e6c\": rpc error: code = NotFound desc = an error occurred when try to find container \"e8c57c00af1834bbed5572243cdc1a3838c702b75715e04edf59bebd8ea20e6c\": not found" Dec 13 02:33:17.451247 kubelet[2760]: I1213 02:33:17.451237 2760 scope.go:117] "RemoveContainer" containerID="a8ef85e3223eb5ff42ecb91a3c2b25f7d999f9a3d46efa3b10d8955cf118592b" Dec 13 02:33:17.451370 env[1663]: time="2024-12-13T02:33:17.451337266Z" level=error msg="ContainerStatus for \"a8ef85e3223eb5ff42ecb91a3c2b25f7d999f9a3d46efa3b10d8955cf118592b\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"a8ef85e3223eb5ff42ecb91a3c2b25f7d999f9a3d46efa3b10d8955cf118592b\": not found" Dec 13 02:33:17.451414 kubelet[2760]: E1213 02:33:17.451408 2760 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"a8ef85e3223eb5ff42ecb91a3c2b25f7d999f9a3d46efa3b10d8955cf118592b\": not found" containerID="a8ef85e3223eb5ff42ecb91a3c2b25f7d999f9a3d46efa3b10d8955cf118592b" Dec 13 02:33:17.451465 kubelet[2760]: I1213 02:33:17.451427 2760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"a8ef85e3223eb5ff42ecb91a3c2b25f7d999f9a3d46efa3b10d8955cf118592b"} err="failed to get container status \"a8ef85e3223eb5ff42ecb91a3c2b25f7d999f9a3d46efa3b10d8955cf118592b\": rpc error: code = NotFound desc = an error occurred when try to find container \"a8ef85e3223eb5ff42ecb91a3c2b25f7d999f9a3d46efa3b10d8955cf118592b\": not found" Dec 13 02:33:17.451465 kubelet[2760]: I1213 02:33:17.451434 2760 scope.go:117] "RemoveContainer" containerID="32ec867a2aa47742d0f468ed0de4daa716daa59df9b3ff86195fda0f1f1f98b7" Dec 13 02:33:17.451564 env[1663]: time="2024-12-13T02:33:17.451545046Z" level=error msg="ContainerStatus for \"32ec867a2aa47742d0f468ed0de4daa716daa59df9b3ff86195fda0f1f1f98b7\" failed" 
error="rpc error: code = NotFound desc = an error occurred when try to find container \"32ec867a2aa47742d0f468ed0de4daa716daa59df9b3ff86195fda0f1f1f98b7\": not found" Dec 13 02:33:17.451621 kubelet[2760]: E1213 02:33:17.451615 2760 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"32ec867a2aa47742d0f468ed0de4daa716daa59df9b3ff86195fda0f1f1f98b7\": not found" containerID="32ec867a2aa47742d0f468ed0de4daa716daa59df9b3ff86195fda0f1f1f98b7" Dec 13 02:33:17.451659 kubelet[2760]: I1213 02:33:17.451629 2760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"32ec867a2aa47742d0f468ed0de4daa716daa59df9b3ff86195fda0f1f1f98b7"} err="failed to get container status \"32ec867a2aa47742d0f468ed0de4daa716daa59df9b3ff86195fda0f1f1f98b7\": rpc error: code = NotFound desc = an error occurred when try to find container \"32ec867a2aa47742d0f468ed0de4daa716daa59df9b3ff86195fda0f1f1f98b7\": not found" Dec 13 02:33:17.451659 kubelet[2760]: I1213 02:33:17.451635 2760 scope.go:117] "RemoveContainer" containerID="2e00ad986b3aab3cf8cb40f5f2cf3f9f915207e532f54e8d6bc13f2e78f4227e" Dec 13 02:33:17.451765 env[1663]: time="2024-12-13T02:33:17.451745996Z" level=error msg="ContainerStatus for \"2e00ad986b3aab3cf8cb40f5f2cf3f9f915207e532f54e8d6bc13f2e78f4227e\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"2e00ad986b3aab3cf8cb40f5f2cf3f9f915207e532f54e8d6bc13f2e78f4227e\": not found" Dec 13 02:33:17.451865 kubelet[2760]: E1213 02:33:17.451858 2760 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"2e00ad986b3aab3cf8cb40f5f2cf3f9f915207e532f54e8d6bc13f2e78f4227e\": not found" containerID="2e00ad986b3aab3cf8cb40f5f2cf3f9f915207e532f54e8d6bc13f2e78f4227e" Dec 13 02:33:17.451908 kubelet[2760]: I1213 02:33:17.451886 2760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"2e00ad986b3aab3cf8cb40f5f2cf3f9f915207e532f54e8d6bc13f2e78f4227e"} err="failed to get container status \"2e00ad986b3aab3cf8cb40f5f2cf3f9f915207e532f54e8d6bc13f2e78f4227e\": rpc error: code = NotFound desc = an error occurred when try to find container \"2e00ad986b3aab3cf8cb40f5f2cf3f9f915207e532f54e8d6bc13f2e78f4227e\": not found" Dec 13 02:33:17.451908 kubelet[2760]: I1213 02:33:17.451893 2760 scope.go:117] "RemoveContainer" containerID="c15551230cf00eef9991e9c73cf5c4a0ee6ddf0f9589c33a8f3f2d4dbe02cb0f" Dec 13 02:33:17.452295 env[1663]: time="2024-12-13T02:33:17.452285744Z" level=info msg="RemoveContainer for \"c15551230cf00eef9991e9c73cf5c4a0ee6ddf0f9589c33a8f3f2d4dbe02cb0f\"" Dec 13 02:33:17.453384 env[1663]: time="2024-12-13T02:33:17.453371828Z" level=info msg="RemoveContainer for \"c15551230cf00eef9991e9c73cf5c4a0ee6ddf0f9589c33a8f3f2d4dbe02cb0f\" returns successfully" Dec 13 02:33:17.453480 kubelet[2760]: I1213 02:33:17.453431 2760 scope.go:117] "RemoveContainer" containerID="c15551230cf00eef9991e9c73cf5c4a0ee6ddf0f9589c33a8f3f2d4dbe02cb0f" Dec 13 02:33:17.453628 env[1663]: time="2024-12-13T02:33:17.453562498Z" level=error msg="ContainerStatus for \"c15551230cf00eef9991e9c73cf5c4a0ee6ddf0f9589c33a8f3f2d4dbe02cb0f\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"c15551230cf00eef9991e9c73cf5c4a0ee6ddf0f9589c33a8f3f2d4dbe02cb0f\": not found" Dec 13 02:33:17.453664 
kubelet[2760]: E1213 02:33:17.453641 2760 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"c15551230cf00eef9991e9c73cf5c4a0ee6ddf0f9589c33a8f3f2d4dbe02cb0f\": not found" containerID="c15551230cf00eef9991e9c73cf5c4a0ee6ddf0f9589c33a8f3f2d4dbe02cb0f" Dec 13 02:33:17.453664 kubelet[2760]: I1213 02:33:17.453656 2760 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"c15551230cf00eef9991e9c73cf5c4a0ee6ddf0f9589c33a8f3f2d4dbe02cb0f"} err="failed to get container status \"c15551230cf00eef9991e9c73cf5c4a0ee6ddf0f9589c33a8f3f2d4dbe02cb0f\": rpc error: code = NotFound desc = an error occurred when try to find container \"c15551230cf00eef9991e9c73cf5c4a0ee6ddf0f9589c33a8f3f2d4dbe02cb0f\": not found" Dec 13 02:33:18.035634 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-73c4f35c6ef179868bfee1fe81cf31c927f548447e34496ba535221819053f52-rootfs.mount: Deactivated successfully. Dec 13 02:33:18.035711 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-73c4f35c6ef179868bfee1fe81cf31c927f548447e34496ba535221819053f52-shm.mount: Deactivated successfully. Dec 13 02:33:18.035759 systemd[1]: var-lib-kubelet-pods-fd9fd862\x2df7b3\x2d4923\x2d9575\x2dd8d9fa2b991e-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2drf5hc.mount: Deactivated successfully. Dec 13 02:33:18.035810 systemd[1]: var-lib-kubelet-pods-45cf36e0\x2da940\x2d4238\x2d8cb7\x2d0698c781ab88-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dhccqp.mount: Deactivated successfully. Dec 13 02:33:18.035857 systemd[1]: var-lib-kubelet-pods-45cf36e0\x2da940\x2d4238\x2d8cb7\x2d0698c781ab88-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Dec 13 02:33:18.035905 systemd[1]: var-lib-kubelet-pods-45cf36e0\x2da940\x2d4238\x2d8cb7\x2d0698c781ab88-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Dec 13 02:33:18.115130 kubelet[2760]: I1213 02:33:18.115021 2760 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="45cf36e0-a940-4238-8cb7-0698c781ab88" path="/var/lib/kubelet/pods/45cf36e0-a940-4238-8cb7-0698c781ab88/volumes" Dec 13 02:33:18.116842 kubelet[2760]: I1213 02:33:18.116769 2760 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="fd9fd862-f7b3-4923-9575-d8d9fa2b991e" path="/var/lib/kubelet/pods/fd9fd862-f7b3-4923-9575-d8d9fa2b991e/volumes" Dec 13 02:33:18.949286 sshd[4793]: pam_unix(sshd:session): session closed for user core Dec 13 02:33:18.952003 systemd[1]: Started sshd@24-139.178.70.53:22-139.178.68.195:34584.service. Dec 13 02:33:18.952724 systemd[1]: sshd@23-139.178.70.53:22-139.178.68.195:38908.service: Deactivated successfully. Dec 13 02:33:18.953684 systemd-logind[1706]: Session 25 logged out. Waiting for processes to exit. Dec 13 02:33:18.953774 systemd[1]: session-25.scope: Deactivated successfully. Dec 13 02:33:18.954852 systemd-logind[1706]: Removed session 25. Dec 13 02:33:18.987350 sshd[4974]: Accepted publickey for core from 139.178.68.195 port 34584 ssh2: RSA SHA256:zlnHIdneqLCn2LAFHuCmziN2krffEws9kYgisk+y46U Dec 13 02:33:18.988168 sshd[4974]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 02:33:18.991042 systemd-logind[1706]: New session 26 of user core. Dec 13 02:33:18.991700 systemd[1]: Started session-26.scope. 
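The RemoveContainer / ContainerStatus exchange above is a benign race: each container is removed successfully, the kubelet then asks the runtime for its status, and the runtime answers with gRPC NotFound, which is logged and then treated as "already deleted" rather than failing the cleanup. A sketch of that idempotent-delete check, using the real grpc status helpers around a stand-in error:

package main

import (
	"fmt"

	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
)

// A stand-in for the runtime's answer once the container is gone,
// shaped like the NotFound errors in the records above.
var errNotFound = status.Error(codes.NotFound,
	`an error occurred when try to find container "3ca28b08...": not found`)

// alreadyGone treats NotFound as success: removing a container that no
// longer exists is exactly the state the caller wanted.
func alreadyGone(err error) bool {
	return status.Code(err) == codes.NotFound
}

func main() {
	if err := errNotFound; err != nil && !alreadyGone(err) {
		fmt.Println("real failure:", err)
		return
	}
	fmt.Println("container already absent; nothing to do")
}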
Dec 13 02:33:19.470864 sshd[4974]: pam_unix(sshd:session): session closed for user core Dec 13 02:33:19.472846 systemd[1]: Started sshd@25-139.178.70.53:22-139.178.68.195:34586.service. Dec 13 02:33:19.473276 systemd[1]: sshd@24-139.178.70.53:22-139.178.68.195:34584.service: Deactivated successfully. Dec 13 02:33:19.473862 systemd-logind[1706]: Session 26 logged out. Waiting for processes to exit. Dec 13 02:33:19.473917 systemd[1]: session-26.scope: Deactivated successfully. Dec 13 02:33:19.474485 systemd-logind[1706]: Removed session 26. Dec 13 02:33:19.478658 kubelet[2760]: I1213 02:33:19.478620 2760 topology_manager.go:215] "Topology Admit Handler" podUID="dbcda04e-7ee7-40ed-a91e-94166da23ac9" podNamespace="kube-system" podName="cilium-jg66p" Dec 13 02:33:19.478658 kubelet[2760]: E1213 02:33:19.478654 2760 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="fd9fd862-f7b3-4923-9575-d8d9fa2b991e" containerName="cilium-operator" Dec 13 02:33:19.478658 kubelet[2760]: E1213 02:33:19.478661 2760 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="45cf36e0-a940-4238-8cb7-0698c781ab88" containerName="clean-cilium-state" Dec 13 02:33:19.478658 kubelet[2760]: E1213 02:33:19.478666 2760 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="45cf36e0-a940-4238-8cb7-0698c781ab88" containerName="mount-cgroup" Dec 13 02:33:19.478996 kubelet[2760]: E1213 02:33:19.478671 2760 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="45cf36e0-a940-4238-8cb7-0698c781ab88" containerName="apply-sysctl-overwrites" Dec 13 02:33:19.478996 kubelet[2760]: E1213 02:33:19.478675 2760 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="45cf36e0-a940-4238-8cb7-0698c781ab88" containerName="mount-bpf-fs" Dec 13 02:33:19.478996 kubelet[2760]: E1213 02:33:19.478678 2760 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="45cf36e0-a940-4238-8cb7-0698c781ab88" containerName="cilium-agent" Dec 13 02:33:19.478996 kubelet[2760]: I1213 02:33:19.478691 2760 memory_manager.go:354] "RemoveStaleState removing state" podUID="fd9fd862-f7b3-4923-9575-d8d9fa2b991e" containerName="cilium-operator" Dec 13 02:33:19.478996 kubelet[2760]: I1213 02:33:19.478696 2760 memory_manager.go:354] "RemoveStaleState removing state" podUID="45cf36e0-a940-4238-8cb7-0698c781ab88" containerName="cilium-agent" Dec 13 02:33:19.505886 sshd[4998]: Accepted publickey for core from 139.178.68.195 port 34586 ssh2: RSA SHA256:zlnHIdneqLCn2LAFHuCmziN2krffEws9kYgisk+y46U Dec 13 02:33:19.506677 sshd[4998]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 02:33:19.509121 systemd-logind[1706]: New session 27 of user core. Dec 13 02:33:19.509542 systemd[1]: Started session-27.scope. 
Dec 13 02:33:19.597012 kubelet[2760]: I1213 02:33:19.596976 2760 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/dbcda04e-7ee7-40ed-a91e-94166da23ac9-host-proc-sys-kernel\") pod \"cilium-jg66p\" (UID: \"dbcda04e-7ee7-40ed-a91e-94166da23ac9\") " pod="kube-system/cilium-jg66p" Dec 13 02:33:19.597150 kubelet[2760]: I1213 02:33:19.597063 2760 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/dbcda04e-7ee7-40ed-a91e-94166da23ac9-lib-modules\") pod \"cilium-jg66p\" (UID: \"dbcda04e-7ee7-40ed-a91e-94166da23ac9\") " pod="kube-system/cilium-jg66p" Dec 13 02:33:19.597150 kubelet[2760]: I1213 02:33:19.597101 2760 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/dbcda04e-7ee7-40ed-a91e-94166da23ac9-cni-path\") pod \"cilium-jg66p\" (UID: \"dbcda04e-7ee7-40ed-a91e-94166da23ac9\") " pod="kube-system/cilium-jg66p" Dec 13 02:33:19.597150 kubelet[2760]: I1213 02:33:19.597125 2760 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/dbcda04e-7ee7-40ed-a91e-94166da23ac9-xtables-lock\") pod \"cilium-jg66p\" (UID: \"dbcda04e-7ee7-40ed-a91e-94166da23ac9\") " pod="kube-system/cilium-jg66p" Dec 13 02:33:19.597150 kubelet[2760]: I1213 02:33:19.597147 2760 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/dbcda04e-7ee7-40ed-a91e-94166da23ac9-clustermesh-secrets\") pod \"cilium-jg66p\" (UID: \"dbcda04e-7ee7-40ed-a91e-94166da23ac9\") " pod="kube-system/cilium-jg66p" Dec 13 02:33:19.597365 kubelet[2760]: I1213 02:33:19.597198 2760 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-75rd8\" (UniqueName: \"kubernetes.io/projected/dbcda04e-7ee7-40ed-a91e-94166da23ac9-kube-api-access-75rd8\") pod \"cilium-jg66p\" (UID: \"dbcda04e-7ee7-40ed-a91e-94166da23ac9\") " pod="kube-system/cilium-jg66p" Dec 13 02:33:19.597365 kubelet[2760]: I1213 02:33:19.597287 2760 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/dbcda04e-7ee7-40ed-a91e-94166da23ac9-bpf-maps\") pod \"cilium-jg66p\" (UID: \"dbcda04e-7ee7-40ed-a91e-94166da23ac9\") " pod="kube-system/cilium-jg66p" Dec 13 02:33:19.597365 kubelet[2760]: I1213 02:33:19.597330 2760 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/dbcda04e-7ee7-40ed-a91e-94166da23ac9-cilium-ipsec-secrets\") pod \"cilium-jg66p\" (UID: \"dbcda04e-7ee7-40ed-a91e-94166da23ac9\") " pod="kube-system/cilium-jg66p" Dec 13 02:33:19.597365 kubelet[2760]: I1213 02:33:19.597354 2760 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/dbcda04e-7ee7-40ed-a91e-94166da23ac9-hubble-tls\") pod \"cilium-jg66p\" (UID: \"dbcda04e-7ee7-40ed-a91e-94166da23ac9\") " pod="kube-system/cilium-jg66p" Dec 13 02:33:19.597563 kubelet[2760]: I1213 02:33:19.597375 2760 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" 
(UniqueName: \"kubernetes.io/host-path/dbcda04e-7ee7-40ed-a91e-94166da23ac9-etc-cni-netd\") pod \"cilium-jg66p\" (UID: \"dbcda04e-7ee7-40ed-a91e-94166da23ac9\") " pod="kube-system/cilium-jg66p" Dec 13 02:33:19.597563 kubelet[2760]: I1213 02:33:19.597431 2760 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/dbcda04e-7ee7-40ed-a91e-94166da23ac9-cilium-run\") pod \"cilium-jg66p\" (UID: \"dbcda04e-7ee7-40ed-a91e-94166da23ac9\") " pod="kube-system/cilium-jg66p" Dec 13 02:33:19.597563 kubelet[2760]: I1213 02:33:19.597476 2760 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/dbcda04e-7ee7-40ed-a91e-94166da23ac9-hostproc\") pod \"cilium-jg66p\" (UID: \"dbcda04e-7ee7-40ed-a91e-94166da23ac9\") " pod="kube-system/cilium-jg66p" Dec 13 02:33:19.597563 kubelet[2760]: I1213 02:33:19.597526 2760 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/dbcda04e-7ee7-40ed-a91e-94166da23ac9-host-proc-sys-net\") pod \"cilium-jg66p\" (UID: \"dbcda04e-7ee7-40ed-a91e-94166da23ac9\") " pod="kube-system/cilium-jg66p" Dec 13 02:33:19.597563 kubelet[2760]: I1213 02:33:19.597554 2760 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/dbcda04e-7ee7-40ed-a91e-94166da23ac9-cilium-config-path\") pod \"cilium-jg66p\" (UID: \"dbcda04e-7ee7-40ed-a91e-94166da23ac9\") " pod="kube-system/cilium-jg66p" Dec 13 02:33:19.597777 kubelet[2760]: I1213 02:33:19.597581 2760 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/dbcda04e-7ee7-40ed-a91e-94166da23ac9-cilium-cgroup\") pod \"cilium-jg66p\" (UID: \"dbcda04e-7ee7-40ed-a91e-94166da23ac9\") " pod="kube-system/cilium-jg66p" Dec 13 02:33:19.626823 sshd[4998]: pam_unix(sshd:session): session closed for user core Dec 13 02:33:19.628438 systemd[1]: Started sshd@26-139.178.70.53:22-139.178.68.195:34600.service. Dec 13 02:33:19.628765 systemd[1]: sshd@25-139.178.70.53:22-139.178.68.195:34586.service: Deactivated successfully. Dec 13 02:33:19.629276 systemd-logind[1706]: Session 27 logged out. Waiting for processes to exit. Dec 13 02:33:19.629350 systemd[1]: session-27.scope: Deactivated successfully. Dec 13 02:33:19.629857 systemd-logind[1706]: Removed session 27. Dec 13 02:33:19.635628 kubelet[2760]: E1213 02:33:19.635611 2760 pod_workers.go:1298] "Error syncing pod, skipping" err="unmounted volumes=[bpf-maps cilium-cgroup cilium-config-path cilium-ipsec-secrets cilium-run clustermesh-secrets cni-path etc-cni-netd host-proc-sys-kernel host-proc-sys-net hostproc hubble-tls kube-api-access-75rd8 lib-modules xtables-lock], unattached volumes=[], failed to process volumes=[]: context canceled" pod="kube-system/cilium-jg66p" podUID="dbcda04e-7ee7-40ed-a91e-94166da23ac9" Dec 13 02:33:19.660927 sshd[5026]: Accepted publickey for core from 139.178.68.195 port 34600 ssh2: RSA SHA256:zlnHIdneqLCn2LAFHuCmziN2krffEws9kYgisk+y46U Dec 13 02:33:19.661816 sshd[5026]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Dec 13 02:33:19.664258 systemd-logind[1706]: New session 28 of user core. Dec 13 02:33:19.664752 systemd[1]: Started session-28.scope. 
Dec 13 02:33:20.275122 kubelet[2760]: E1213 02:33:20.275051 2760 kubelet.go:2892] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Dec 13 02:33:20.505451 kubelet[2760]: I1213 02:33:20.505372 2760 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/dbcda04e-7ee7-40ed-a91e-94166da23ac9-clustermesh-secrets\") pod \"dbcda04e-7ee7-40ed-a91e-94166da23ac9\" (UID: \"dbcda04e-7ee7-40ed-a91e-94166da23ac9\") " Dec 13 02:33:20.506762 kubelet[2760]: I1213 02:33:20.505491 2760 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/dbcda04e-7ee7-40ed-a91e-94166da23ac9-lib-modules\") pod \"dbcda04e-7ee7-40ed-a91e-94166da23ac9\" (UID: \"dbcda04e-7ee7-40ed-a91e-94166da23ac9\") " Dec 13 02:33:20.506762 kubelet[2760]: I1213 02:33:20.505560 2760 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/dbcda04e-7ee7-40ed-a91e-94166da23ac9-xtables-lock\") pod \"dbcda04e-7ee7-40ed-a91e-94166da23ac9\" (UID: \"dbcda04e-7ee7-40ed-a91e-94166da23ac9\") " Dec 13 02:33:20.506762 kubelet[2760]: I1213 02:33:20.505632 2760 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-75rd8\" (UniqueName: \"kubernetes.io/projected/dbcda04e-7ee7-40ed-a91e-94166da23ac9-kube-api-access-75rd8\") pod \"dbcda04e-7ee7-40ed-a91e-94166da23ac9\" (UID: \"dbcda04e-7ee7-40ed-a91e-94166da23ac9\") " Dec 13 02:33:20.506762 kubelet[2760]: I1213 02:33:20.505630 2760 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dbcda04e-7ee7-40ed-a91e-94166da23ac9-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "dbcda04e-7ee7-40ed-a91e-94166da23ac9" (UID: "dbcda04e-7ee7-40ed-a91e-94166da23ac9"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 02:33:20.506762 kubelet[2760]: I1213 02:33:20.505689 2760 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/dbcda04e-7ee7-40ed-a91e-94166da23ac9-cilium-run\") pod \"dbcda04e-7ee7-40ed-a91e-94166da23ac9\" (UID: \"dbcda04e-7ee7-40ed-a91e-94166da23ac9\") " Dec 13 02:33:20.506762 kubelet[2760]: I1213 02:33:20.505699 2760 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dbcda04e-7ee7-40ed-a91e-94166da23ac9-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "dbcda04e-7ee7-40ed-a91e-94166da23ac9" (UID: "dbcda04e-7ee7-40ed-a91e-94166da23ac9"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 02:33:20.507602 kubelet[2760]: I1213 02:33:20.505738 2760 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dbcda04e-7ee7-40ed-a91e-94166da23ac9-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "dbcda04e-7ee7-40ed-a91e-94166da23ac9" (UID: "dbcda04e-7ee7-40ed-a91e-94166da23ac9"). InnerVolumeSpecName "cilium-run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 02:33:20.507602 kubelet[2760]: I1213 02:33:20.505844 2760 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/dbcda04e-7ee7-40ed-a91e-94166da23ac9-bpf-maps\") pod \"dbcda04e-7ee7-40ed-a91e-94166da23ac9\" (UID: \"dbcda04e-7ee7-40ed-a91e-94166da23ac9\") " Dec 13 02:33:20.507602 kubelet[2760]: I1213 02:33:20.505892 2760 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dbcda04e-7ee7-40ed-a91e-94166da23ac9-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "dbcda04e-7ee7-40ed-a91e-94166da23ac9" (UID: "dbcda04e-7ee7-40ed-a91e-94166da23ac9"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 02:33:20.507602 kubelet[2760]: I1213 02:33:20.505985 2760 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/dbcda04e-7ee7-40ed-a91e-94166da23ac9-cilium-ipsec-secrets\") pod \"dbcda04e-7ee7-40ed-a91e-94166da23ac9\" (UID: \"dbcda04e-7ee7-40ed-a91e-94166da23ac9\") " Dec 13 02:33:20.507602 kubelet[2760]: I1213 02:33:20.506106 2760 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/dbcda04e-7ee7-40ed-a91e-94166da23ac9-hubble-tls\") pod \"dbcda04e-7ee7-40ed-a91e-94166da23ac9\" (UID: \"dbcda04e-7ee7-40ed-a91e-94166da23ac9\") " Dec 13 02:33:20.507602 kubelet[2760]: I1213 02:33:20.506226 2760 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/dbcda04e-7ee7-40ed-a91e-94166da23ac9-cilium-config-path\") pod \"dbcda04e-7ee7-40ed-a91e-94166da23ac9\" (UID: \"dbcda04e-7ee7-40ed-a91e-94166da23ac9\") " Dec 13 02:33:20.508193 kubelet[2760]: I1213 02:33:20.506335 2760 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/dbcda04e-7ee7-40ed-a91e-94166da23ac9-host-proc-sys-kernel\") pod \"dbcda04e-7ee7-40ed-a91e-94166da23ac9\" (UID: \"dbcda04e-7ee7-40ed-a91e-94166da23ac9\") " Dec 13 02:33:20.508193 kubelet[2760]: I1213 02:33:20.506455 2760 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/dbcda04e-7ee7-40ed-a91e-94166da23ac9-cni-path\") pod \"dbcda04e-7ee7-40ed-a91e-94166da23ac9\" (UID: \"dbcda04e-7ee7-40ed-a91e-94166da23ac9\") " Dec 13 02:33:20.508193 kubelet[2760]: I1213 02:33:20.506460 2760 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dbcda04e-7ee7-40ed-a91e-94166da23ac9-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "dbcda04e-7ee7-40ed-a91e-94166da23ac9" (UID: "dbcda04e-7ee7-40ed-a91e-94166da23ac9"). InnerVolumeSpecName "host-proc-sys-kernel". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 02:33:20.508193 kubelet[2760]: I1213 02:33:20.506549 2760 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/dbcda04e-7ee7-40ed-a91e-94166da23ac9-cilium-cgroup\") pod \"dbcda04e-7ee7-40ed-a91e-94166da23ac9\" (UID: \"dbcda04e-7ee7-40ed-a91e-94166da23ac9\") " Dec 13 02:33:20.508193 kubelet[2760]: I1213 02:33:20.506605 2760 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dbcda04e-7ee7-40ed-a91e-94166da23ac9-cni-path" (OuterVolumeSpecName: "cni-path") pod "dbcda04e-7ee7-40ed-a91e-94166da23ac9" (UID: "dbcda04e-7ee7-40ed-a91e-94166da23ac9"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 02:33:20.508709 kubelet[2760]: I1213 02:33:20.506650 2760 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/dbcda04e-7ee7-40ed-a91e-94166da23ac9-etc-cni-netd\") pod \"dbcda04e-7ee7-40ed-a91e-94166da23ac9\" (UID: \"dbcda04e-7ee7-40ed-a91e-94166da23ac9\") " Dec 13 02:33:20.508709 kubelet[2760]: I1213 02:33:20.506690 2760 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dbcda04e-7ee7-40ed-a91e-94166da23ac9-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "dbcda04e-7ee7-40ed-a91e-94166da23ac9" (UID: "dbcda04e-7ee7-40ed-a91e-94166da23ac9"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 02:33:20.508709 kubelet[2760]: I1213 02:33:20.506714 2760 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dbcda04e-7ee7-40ed-a91e-94166da23ac9-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "dbcda04e-7ee7-40ed-a91e-94166da23ac9" (UID: "dbcda04e-7ee7-40ed-a91e-94166da23ac9"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 02:33:20.508709 kubelet[2760]: I1213 02:33:20.506834 2760 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/dbcda04e-7ee7-40ed-a91e-94166da23ac9-host-proc-sys-net\") pod \"dbcda04e-7ee7-40ed-a91e-94166da23ac9\" (UID: \"dbcda04e-7ee7-40ed-a91e-94166da23ac9\") " Dec 13 02:33:20.508709 kubelet[2760]: I1213 02:33:20.506911 2760 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dbcda04e-7ee7-40ed-a91e-94166da23ac9-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "dbcda04e-7ee7-40ed-a91e-94166da23ac9" (UID: "dbcda04e-7ee7-40ed-a91e-94166da23ac9"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 02:33:20.509204 kubelet[2760]: I1213 02:33:20.506945 2760 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/dbcda04e-7ee7-40ed-a91e-94166da23ac9-hostproc\") pod \"dbcda04e-7ee7-40ed-a91e-94166da23ac9\" (UID: \"dbcda04e-7ee7-40ed-a91e-94166da23ac9\") " Dec 13 02:33:20.509204 kubelet[2760]: I1213 02:33:20.507026 2760 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dbcda04e-7ee7-40ed-a91e-94166da23ac9-hostproc" (OuterVolumeSpecName: "hostproc") pod "dbcda04e-7ee7-40ed-a91e-94166da23ac9" (UID: "dbcda04e-7ee7-40ed-a91e-94166da23ac9"). InnerVolumeSpecName "hostproc". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Dec 13 02:33:20.509204 kubelet[2760]: I1213 02:33:20.507086 2760 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/dbcda04e-7ee7-40ed-a91e-94166da23ac9-cni-path\") on node \"ci-3510.3.6-a-cefcb26589\" DevicePath \"\"" Dec 13 02:33:20.509204 kubelet[2760]: I1213 02:33:20.507155 2760 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/dbcda04e-7ee7-40ed-a91e-94166da23ac9-cilium-cgroup\") on node \"ci-3510.3.6-a-cefcb26589\" DevicePath \"\"" Dec 13 02:33:20.509204 kubelet[2760]: I1213 02:33:20.507216 2760 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/dbcda04e-7ee7-40ed-a91e-94166da23ac9-etc-cni-netd\") on node \"ci-3510.3.6-a-cefcb26589\" DevicePath \"\"" Dec 13 02:33:20.509204 kubelet[2760]: I1213 02:33:20.507277 2760 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/dbcda04e-7ee7-40ed-a91e-94166da23ac9-host-proc-sys-net\") on node \"ci-3510.3.6-a-cefcb26589\" DevicePath \"\"" Dec 13 02:33:20.509204 kubelet[2760]: I1213 02:33:20.507338 2760 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/dbcda04e-7ee7-40ed-a91e-94166da23ac9-lib-modules\") on node \"ci-3510.3.6-a-cefcb26589\" DevicePath \"\"" Dec 13 02:33:20.509907 kubelet[2760]: I1213 02:33:20.507407 2760 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/dbcda04e-7ee7-40ed-a91e-94166da23ac9-xtables-lock\") on node \"ci-3510.3.6-a-cefcb26589\" DevicePath \"\"" Dec 13 02:33:20.509907 kubelet[2760]: I1213 02:33:20.507496 2760 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/dbcda04e-7ee7-40ed-a91e-94166da23ac9-cilium-run\") on node \"ci-3510.3.6-a-cefcb26589\" DevicePath \"\"" Dec 13 02:33:20.509907 kubelet[2760]: I1213 02:33:20.507556 2760 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/dbcda04e-7ee7-40ed-a91e-94166da23ac9-bpf-maps\") on node \"ci-3510.3.6-a-cefcb26589\" DevicePath \"\"" Dec 13 02:33:20.509907 kubelet[2760]: I1213 02:33:20.507615 2760 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/dbcda04e-7ee7-40ed-a91e-94166da23ac9-host-proc-sys-kernel\") on node \"ci-3510.3.6-a-cefcb26589\" DevicePath \"\"" Dec 13 02:33:20.511159 kubelet[2760]: I1213 02:33:20.511091 2760 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dbcda04e-7ee7-40ed-a91e-94166da23ac9-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "dbcda04e-7ee7-40ed-a91e-94166da23ac9" (UID: "dbcda04e-7ee7-40ed-a91e-94166da23ac9"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Dec 13 02:33:20.512159 kubelet[2760]: I1213 02:33:20.512143 2760 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dbcda04e-7ee7-40ed-a91e-94166da23ac9-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "dbcda04e-7ee7-40ed-a91e-94166da23ac9" (UID: "dbcda04e-7ee7-40ed-a91e-94166da23ac9"). InnerVolumeSpecName "hubble-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 02:33:20.512210 kubelet[2760]: I1213 02:33:20.512196 2760 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dbcda04e-7ee7-40ed-a91e-94166da23ac9-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "dbcda04e-7ee7-40ed-a91e-94166da23ac9" (UID: "dbcda04e-7ee7-40ed-a91e-94166da23ac9"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 02:33:20.512235 kubelet[2760]: I1213 02:33:20.512204 2760 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dbcda04e-7ee7-40ed-a91e-94166da23ac9-kube-api-access-75rd8" (OuterVolumeSpecName: "kube-api-access-75rd8") pod "dbcda04e-7ee7-40ed-a91e-94166da23ac9" (UID: "dbcda04e-7ee7-40ed-a91e-94166da23ac9"). InnerVolumeSpecName "kube-api-access-75rd8". PluginName "kubernetes.io/projected", VolumeGidValue "" Dec 13 02:33:20.512336 kubelet[2760]: I1213 02:33:20.512323 2760 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dbcda04e-7ee7-40ed-a91e-94166da23ac9-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "dbcda04e-7ee7-40ed-a91e-94166da23ac9" (UID: "dbcda04e-7ee7-40ed-a91e-94166da23ac9"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Dec 13 02:33:20.513326 systemd[1]: var-lib-kubelet-pods-dbcda04e\x2d7ee7\x2d40ed\x2da91e\x2d94166da23ac9-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d75rd8.mount: Deactivated successfully. Dec 13 02:33:20.513406 systemd[1]: var-lib-kubelet-pods-dbcda04e\x2d7ee7\x2d40ed\x2da91e\x2d94166da23ac9-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Dec 13 02:33:20.513467 systemd[1]: var-lib-kubelet-pods-dbcda04e\x2d7ee7\x2d40ed\x2da91e\x2d94166da23ac9-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Dec 13 02:33:20.513516 systemd[1]: var-lib-kubelet-pods-dbcda04e\x2d7ee7\x2d40ed\x2da91e\x2d94166da23ac9-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully. 
Dec 13 02:33:20.608908 kubelet[2760]: I1213 02:33:20.608689 2760 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-75rd8\" (UniqueName: \"kubernetes.io/projected/dbcda04e-7ee7-40ed-a91e-94166da23ac9-kube-api-access-75rd8\") on node \"ci-3510.3.6-a-cefcb26589\" DevicePath \"\""
Dec 13 02:33:20.608908 kubelet[2760]: I1213 02:33:20.608769 2760 reconciler_common.go:300] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/dbcda04e-7ee7-40ed-a91e-94166da23ac9-cilium-ipsec-secrets\") on node \"ci-3510.3.6-a-cefcb26589\" DevicePath \"\""
Dec 13 02:33:20.608908 kubelet[2760]: I1213 02:33:20.608807 2760 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/dbcda04e-7ee7-40ed-a91e-94166da23ac9-hubble-tls\") on node \"ci-3510.3.6-a-cefcb26589\" DevicePath \"\""
Dec 13 02:33:20.608908 kubelet[2760]: I1213 02:33:20.608845 2760 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/dbcda04e-7ee7-40ed-a91e-94166da23ac9-cilium-config-path\") on node \"ci-3510.3.6-a-cefcb26589\" DevicePath \"\""
Dec 13 02:33:20.608908 kubelet[2760]: I1213 02:33:20.608879 2760 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/dbcda04e-7ee7-40ed-a91e-94166da23ac9-hostproc\") on node \"ci-3510.3.6-a-cefcb26589\" DevicePath \"\""
Dec 13 02:33:20.608908 kubelet[2760]: I1213 02:33:20.608913 2760 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/dbcda04e-7ee7-40ed-a91e-94166da23ac9-clustermesh-secrets\") on node \"ci-3510.3.6-a-cefcb26589\" DevicePath \"\""
Dec 13 02:33:21.484285 kubelet[2760]: I1213 02:33:21.484187 2760 topology_manager.go:215] "Topology Admit Handler" podUID="c6617fcf-8796-4f7e-ba82-a91a2f9e5790" podNamespace="kube-system" podName="cilium-8gvln"
Dec 13 02:33:21.514894 kubelet[2760]: I1213 02:33:21.514846 2760 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/c6617fcf-8796-4f7e-ba82-a91a2f9e5790-hostproc\") pod \"cilium-8gvln\" (UID: \"c6617fcf-8796-4f7e-ba82-a91a2f9e5790\") " pod="kube-system/cilium-8gvln"
Dec 13 02:33:21.515646 kubelet[2760]: I1213 02:33:21.514922 2760 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/c6617fcf-8796-4f7e-ba82-a91a2f9e5790-cilium-cgroup\") pod \"cilium-8gvln\" (UID: \"c6617fcf-8796-4f7e-ba82-a91a2f9e5790\") " pod="kube-system/cilium-8gvln"
Dec 13 02:33:21.515646 kubelet[2760]: I1213 02:33:21.515060 2760 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c6617fcf-8796-4f7e-ba82-a91a2f9e5790-cilium-config-path\") pod \"cilium-8gvln\" (UID: \"c6617fcf-8796-4f7e-ba82-a91a2f9e5790\") " pod="kube-system/cilium-8gvln"
Dec 13 02:33:21.515646 kubelet[2760]: I1213 02:33:21.515207 2760 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/c6617fcf-8796-4f7e-ba82-a91a2f9e5790-bpf-maps\") pod \"cilium-8gvln\" (UID: \"c6617fcf-8796-4f7e-ba82-a91a2f9e5790\") " pod="kube-system/cilium-8gvln"
Dec 13 02:33:21.515646 kubelet[2760]: I1213 02:33:21.515260 2760 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wxgrp\" (UniqueName: \"kubernetes.io/projected/c6617fcf-8796-4f7e-ba82-a91a2f9e5790-kube-api-access-wxgrp\") pod \"cilium-8gvln\" (UID: \"c6617fcf-8796-4f7e-ba82-a91a2f9e5790\") " pod="kube-system/cilium-8gvln"
Dec 13 02:33:21.515646 kubelet[2760]: I1213 02:33:21.515393 2760 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/c6617fcf-8796-4f7e-ba82-a91a2f9e5790-clustermesh-secrets\") pod \"cilium-8gvln\" (UID: \"c6617fcf-8796-4f7e-ba82-a91a2f9e5790\") " pod="kube-system/cilium-8gvln"
Dec 13 02:33:21.516104 kubelet[2760]: I1213 02:33:21.515487 2760 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/c6617fcf-8796-4f7e-ba82-a91a2f9e5790-host-proc-sys-net\") pod \"cilium-8gvln\" (UID: \"c6617fcf-8796-4f7e-ba82-a91a2f9e5790\") " pod="kube-system/cilium-8gvln"
Dec 13 02:33:21.516104 kubelet[2760]: I1213 02:33:21.515541 2760 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/c6617fcf-8796-4f7e-ba82-a91a2f9e5790-host-proc-sys-kernel\") pod \"cilium-8gvln\" (UID: \"c6617fcf-8796-4f7e-ba82-a91a2f9e5790\") " pod="kube-system/cilium-8gvln"
Dec 13 02:33:21.516104 kubelet[2760]: I1213 02:33:21.515672 2760 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c6617fcf-8796-4f7e-ba82-a91a2f9e5790-xtables-lock\") pod \"cilium-8gvln\" (UID: \"c6617fcf-8796-4f7e-ba82-a91a2f9e5790\") " pod="kube-system/cilium-8gvln"
Dec 13 02:33:21.516104 kubelet[2760]: I1213 02:33:21.515783 2760 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/c6617fcf-8796-4f7e-ba82-a91a2f9e5790-cilium-run\") pod \"cilium-8gvln\" (UID: \"c6617fcf-8796-4f7e-ba82-a91a2f9e5790\") " pod="kube-system/cilium-8gvln"
Dec 13 02:33:21.516104 kubelet[2760]: I1213 02:33:21.515868 2760 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c6617fcf-8796-4f7e-ba82-a91a2f9e5790-etc-cni-netd\") pod \"cilium-8gvln\" (UID: \"c6617fcf-8796-4f7e-ba82-a91a2f9e5790\") " pod="kube-system/cilium-8gvln"
Dec 13 02:33:21.516104 kubelet[2760]: I1213 02:33:21.515975 2760 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/c6617fcf-8796-4f7e-ba82-a91a2f9e5790-hubble-tls\") pod \"cilium-8gvln\" (UID: \"c6617fcf-8796-4f7e-ba82-a91a2f9e5790\") " pod="kube-system/cilium-8gvln"
Dec 13 02:33:21.516580 kubelet[2760]: I1213 02:33:21.516048 2760 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/c6617fcf-8796-4f7e-ba82-a91a2f9e5790-cni-path\") pod \"cilium-8gvln\" (UID: \"c6617fcf-8796-4f7e-ba82-a91a2f9e5790\") " pod="kube-system/cilium-8gvln"
Dec 13 02:33:21.516580 kubelet[2760]: I1213 02:33:21.516097 2760 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c6617fcf-8796-4f7e-ba82-a91a2f9e5790-lib-modules\") pod \"cilium-8gvln\" (UID: \"c6617fcf-8796-4f7e-ba82-a91a2f9e5790\") " pod="kube-system/cilium-8gvln"
pod \"cilium-8gvln\" (UID: \"c6617fcf-8796-4f7e-ba82-a91a2f9e5790\") " pod="kube-system/cilium-8gvln" Dec 13 02:33:21.516580 kubelet[2760]: I1213 02:33:21.516141 2760 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/c6617fcf-8796-4f7e-ba82-a91a2f9e5790-cilium-ipsec-secrets\") pod \"cilium-8gvln\" (UID: \"c6617fcf-8796-4f7e-ba82-a91a2f9e5790\") " pod="kube-system/cilium-8gvln" Dec 13 02:33:21.793618 env[1663]: time="2024-12-13T02:33:21.793366611Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-8gvln,Uid:c6617fcf-8796-4f7e-ba82-a91a2f9e5790,Namespace:kube-system,Attempt:0,}" Dec 13 02:33:21.815406 env[1663]: time="2024-12-13T02:33:21.815223587Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 02:33:21.815406 env[1663]: time="2024-12-13T02:33:21.815324946Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 02:33:21.815406 env[1663]: time="2024-12-13T02:33:21.815373380Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 02:33:21.815914 env[1663]: time="2024-12-13T02:33:21.815759345Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/95f77db63d71af7002d2dfe9fd91a0f511caf8b263eb59a7af43e591599624ae pid=5068 runtime=io.containerd.runc.v2 Dec 13 02:33:21.871052 env[1663]: time="2024-12-13T02:33:21.871001891Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-8gvln,Uid:c6617fcf-8796-4f7e-ba82-a91a2f9e5790,Namespace:kube-system,Attempt:0,} returns sandbox id \"95f77db63d71af7002d2dfe9fd91a0f511caf8b263eb59a7af43e591599624ae\"" Dec 13 02:33:21.874085 env[1663]: time="2024-12-13T02:33:21.874013932Z" level=info msg="CreateContainer within sandbox \"95f77db63d71af7002d2dfe9fd91a0f511caf8b263eb59a7af43e591599624ae\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Dec 13 02:33:21.882479 env[1663]: time="2024-12-13T02:33:21.882408217Z" level=info msg="CreateContainer within sandbox \"95f77db63d71af7002d2dfe9fd91a0f511caf8b263eb59a7af43e591599624ae\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"bac9418cd78c5906f6d0a411e0bfc647d4df64c6c6e14da5f21c139cc9d71ff4\"" Dec 13 02:33:21.883034 env[1663]: time="2024-12-13T02:33:21.882953059Z" level=info msg="StartContainer for \"bac9418cd78c5906f6d0a411e0bfc647d4df64c6c6e14da5f21c139cc9d71ff4\"" Dec 13 02:33:21.933793 env[1663]: time="2024-12-13T02:33:21.933736288Z" level=info msg="StartContainer for \"bac9418cd78c5906f6d0a411e0bfc647d4df64c6c6e14da5f21c139cc9d71ff4\" returns successfully" Dec 13 02:33:21.978510 env[1663]: time="2024-12-13T02:33:21.978415298Z" level=info msg="shim disconnected" id=bac9418cd78c5906f6d0a411e0bfc647d4df64c6c6e14da5f21c139cc9d71ff4 Dec 13 02:33:21.978762 env[1663]: time="2024-12-13T02:33:21.978510722Z" level=warning msg="cleaning up after shim disconnected" id=bac9418cd78c5906f6d0a411e0bfc647d4df64c6c6e14da5f21c139cc9d71ff4 namespace=k8s.io Dec 13 02:33:21.978762 env[1663]: time="2024-12-13T02:33:21.978532387Z" level=info msg="cleaning up dead shim" Dec 13 02:33:21.990197 env[1663]: time="2024-12-13T02:33:21.990091767Z" level=warning msg="cleanup warnings time=\"2024-12-13T02:33:21Z\" level=info msg=\"starting signal loop\" 
Dec 13 02:33:22.115086 kubelet[2760]: I1213 02:33:22.114919 2760 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="dbcda04e-7ee7-40ed-a91e-94166da23ac9" path="/var/lib/kubelet/pods/dbcda04e-7ee7-40ed-a91e-94166da23ac9/volumes"
Dec 13 02:33:22.464903 env[1663]: time="2024-12-13T02:33:22.464808160Z" level=info msg="CreateContainer within sandbox \"95f77db63d71af7002d2dfe9fd91a0f511caf8b263eb59a7af43e591599624ae\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Dec 13 02:33:22.477241 env[1663]: time="2024-12-13T02:33:22.477118409Z" level=info msg="CreateContainer within sandbox \"95f77db63d71af7002d2dfe9fd91a0f511caf8b263eb59a7af43e591599624ae\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"a60bc7041845a6be92bcb5a2cc6f2d069edb2a1d503214ebb8fa99603b96f573\""
Dec 13 02:33:22.478146 env[1663]: time="2024-12-13T02:33:22.478029561Z" level=info msg="StartContainer for \"a60bc7041845a6be92bcb5a2cc6f2d069edb2a1d503214ebb8fa99603b96f573\""
Dec 13 02:33:22.541161 env[1663]: time="2024-12-13T02:33:22.541089701Z" level=info msg="StartContainer for \"a60bc7041845a6be92bcb5a2cc6f2d069edb2a1d503214ebb8fa99603b96f573\" returns successfully"
Dec 13 02:33:22.566532 env[1663]: time="2024-12-13T02:33:22.566452618Z" level=info msg="shim disconnected" id=a60bc7041845a6be92bcb5a2cc6f2d069edb2a1d503214ebb8fa99603b96f573
Dec 13 02:33:22.566532 env[1663]: time="2024-12-13T02:33:22.566503435Z" level=warning msg="cleaning up after shim disconnected" id=a60bc7041845a6be92bcb5a2cc6f2d069edb2a1d503214ebb8fa99603b96f573 namespace=k8s.io
Dec 13 02:33:22.566532 env[1663]: time="2024-12-13T02:33:22.566520072Z" level=info msg="cleaning up dead shim"
Dec 13 02:33:22.574020 env[1663]: time="2024-12-13T02:33:22.573980019Z" level=warning msg="cleanup warnings time=\"2024-12-13T02:33:22Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=5211 runtime=io.containerd.runc.v2\n"
Dec 13 02:33:23.472819 env[1663]: time="2024-12-13T02:33:23.472581729Z" level=info msg="CreateContainer within sandbox \"95f77db63d71af7002d2dfe9fd91a0f511caf8b263eb59a7af43e591599624ae\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Dec 13 02:33:23.482563 env[1663]: time="2024-12-13T02:33:23.482455858Z" level=info msg="CreateContainer within sandbox \"95f77db63d71af7002d2dfe9fd91a0f511caf8b263eb59a7af43e591599624ae\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"5a708e1e6f008dd58e7e32c00ed6de0db04cb9fc7bb3f440bdbb4e36d0839c55\""
Dec 13 02:33:23.483117 env[1663]: time="2024-12-13T02:33:23.483030449Z" level=info msg="StartContainer for \"5a708e1e6f008dd58e7e32c00ed6de0db04cb9fc7bb3f440bdbb4e36d0839c55\""
Dec 13 02:33:23.507042 env[1663]: time="2024-12-13T02:33:23.507014563Z" level=info msg="StartContainer for \"5a708e1e6f008dd58e7e32c00ed6de0db04cb9fc7bb3f440bdbb4e36d0839c55\" returns successfully"
Dec 13 02:33:23.518033 env[1663]: time="2024-12-13T02:33:23.517969471Z" level=info msg="shim disconnected" id=5a708e1e6f008dd58e7e32c00ed6de0db04cb9fc7bb3f440bdbb4e36d0839c55
Dec 13 02:33:23.518033 env[1663]: time="2024-12-13T02:33:23.518002558Z" level=warning msg="cleaning up after shim disconnected" id=5a708e1e6f008dd58e7e32c00ed6de0db04cb9fc7bb3f440bdbb4e36d0839c55 namespace=k8s.io
Dec 13 02:33:23.518033 env[1663]: time="2024-12-13T02:33:23.518009407Z" level=info msg="cleaning up dead shim"
Dec 13 02:33:23.522072 env[1663]: time="2024-12-13T02:33:23.522027919Z" level=warning msg="cleanup warnings time=\"2024-12-13T02:33:23Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=5266 runtime=io.containerd.runc.v2\n"
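By this point containerd has created three containers inside sandbox 95f77db6... in strict sequence: mount-cgroup, then apply-sysctl-overwrites, then mount-bpf-fs. The names match Cilium's standard init containers, which run one after another to completion. A sketch that recovers that order and spacing from the "CreateContainer within sandbox ... for container" request lines follows; it parses the time= attribute as RFC 3339, and the message shape is read off this log rather than taken from a containerd API.

    // container_sequence.go - a sketch that lists, in order, the containers
    // created in a sandbox, with the gap between consecutive creations.
    package main

    import (
        "bufio"
        "fmt"
        "os"
        "regexp"
        "time"
    )

    var createRE = regexp.MustCompile(
        `time="([^"]+)" level=info msg="CreateContainer within sandbox \\"([0-9a-f]{64})\\" for container &ContainerMetadata\{Name:([^,]+),`)

    func main() {
        sc := bufio.NewScanner(os.Stdin)
        sc.Buffer(make([]byte, 0, 1<<20), 1<<20)
        var prev time.Time
        for sc.Scan() {
            m := createRE.FindStringSubmatch(sc.Text())
            if m == nil {
                continue
            }
            t, err := time.Parse(time.RFC3339Nano, m[1])
            if err != nil {
                continue
            }
            if prev.IsZero() {
                fmt.Printf("%-24s (sandbox %s)\n", m[3], m[2][:12])
            } else {
                fmt.Printf("%-24s +%v after previous\n", m[3], t.Sub(prev).Round(time.Millisecond))
            }
            prev = t
        }
    }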
level=warning msg="cleanup warnings time=\"2024-12-13T02:33:23Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=5266 runtime=io.containerd.runc.v2\n" Dec 13 02:33:23.629100 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5a708e1e6f008dd58e7e32c00ed6de0db04cb9fc7bb3f440bdbb4e36d0839c55-rootfs.mount: Deactivated successfully. Dec 13 02:33:24.481001 env[1663]: time="2024-12-13T02:33:24.480892289Z" level=info msg="CreateContainer within sandbox \"95f77db63d71af7002d2dfe9fd91a0f511caf8b263eb59a7af43e591599624ae\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Dec 13 02:33:24.490782 env[1663]: time="2024-12-13T02:33:24.490762127Z" level=info msg="CreateContainer within sandbox \"95f77db63d71af7002d2dfe9fd91a0f511caf8b263eb59a7af43e591599624ae\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"4b59e652d9affab117118aa5d7743b8788210ced76450df8fb88d55e9d3095d7\"" Dec 13 02:33:24.491298 env[1663]: time="2024-12-13T02:33:24.491251009Z" level=info msg="StartContainer for \"4b59e652d9affab117118aa5d7743b8788210ced76450df8fb88d55e9d3095d7\"" Dec 13 02:33:24.512950 env[1663]: time="2024-12-13T02:33:24.512894037Z" level=info msg="StartContainer for \"4b59e652d9affab117118aa5d7743b8788210ced76450df8fb88d55e9d3095d7\" returns successfully" Dec 13 02:33:24.521533 env[1663]: time="2024-12-13T02:33:24.521505204Z" level=info msg="shim disconnected" id=4b59e652d9affab117118aa5d7743b8788210ced76450df8fb88d55e9d3095d7 Dec 13 02:33:24.521533 env[1663]: time="2024-12-13T02:33:24.521532233Z" level=warning msg="cleaning up after shim disconnected" id=4b59e652d9affab117118aa5d7743b8788210ced76450df8fb88d55e9d3095d7 namespace=k8s.io Dec 13 02:33:24.521533 env[1663]: time="2024-12-13T02:33:24.521538835Z" level=info msg="cleaning up dead shim" Dec 13 02:33:24.525168 env[1663]: time="2024-12-13T02:33:24.525114679Z" level=warning msg="cleanup warnings time=\"2024-12-13T02:33:24Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=5320 runtime=io.containerd.runc.v2\n" Dec 13 02:33:24.629115 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4b59e652d9affab117118aa5d7743b8788210ced76450df8fb88d55e9d3095d7-rootfs.mount: Deactivated successfully. Dec 13 02:33:25.277315 kubelet[2760]: E1213 02:33:25.277189 2760 kubelet.go:2892] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Dec 13 02:33:25.491040 env[1663]: time="2024-12-13T02:33:25.490953630Z" level=info msg="CreateContainer within sandbox \"95f77db63d71af7002d2dfe9fd91a0f511caf8b263eb59a7af43e591599624ae\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Dec 13 02:33:25.518241 env[1663]: time="2024-12-13T02:33:25.518219104Z" level=info msg="CreateContainer within sandbox \"95f77db63d71af7002d2dfe9fd91a0f511caf8b263eb59a7af43e591599624ae\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"64c0252efe10b1eac2714b993961f5be3c4e0c820f30c7bae3e5b676b7af0b69\"" Dec 13 02:33:25.518629 env[1663]: time="2024-12-13T02:33:25.518610657Z" level=info msg="StartContainer for \"64c0252efe10b1eac2714b993961f5be3c4e0c820f30c7bae3e5b676b7af0b69\"" Dec 13 02:33:25.520559 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3123511156.mount: Deactivated successfully. 
Dec 13 02:33:25.540102 env[1663]: time="2024-12-13T02:33:25.540049101Z" level=info msg="StartContainer for \"64c0252efe10b1eac2714b993961f5be3c4e0c820f30c7bae3e5b676b7af0b69\" returns successfully"
Dec 13 02:33:25.687426 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
Dec 13 02:33:26.514067 kubelet[2760]: I1213 02:33:26.514024 2760 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-8gvln" podStartSLOduration=5.514001399 podStartE2EDuration="5.514001399s" podCreationTimestamp="2024-12-13 02:33:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-12-13 02:33:26.513869109 +0000 UTC m=+456.482446367" watchObservedRunningTime="2024-12-13 02:33:26.514001399 +0000 UTC m=+456.482578653"
Dec 13 02:33:28.702436 systemd-networkd[1405]: lxc_health: Link UP
Dec 13 02:33:28.720334 systemd-networkd[1405]: lxc_health: Gained carrier
Dec 13 02:33:28.720488 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready
Dec 13 02:33:29.091558 kubelet[2760]: I1213 02:33:29.091413 2760 setters.go:568] "Node became not ready" node="ci-3510.3.6-a-cefcb26589" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2024-12-13T02:33:29Z","lastTransitionTime":"2024-12-13T02:33:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Dec 13 02:33:30.483557 systemd-networkd[1405]: lxc_health: Gained IPv6LL
Dec 13 02:33:34.294331 sshd[5026]: pam_unix(sshd:session): session closed for user core
Dec 13 02:33:34.296453 systemd[1]: sshd@26-139.178.70.53:22-139.178.68.195:34600.service: Deactivated successfully.
Dec 13 02:33:34.297292 systemd-logind[1706]: Session 28 logged out. Waiting for processes to exit.
Dec 13 02:33:34.297342 systemd[1]: session-28.scope: Deactivated successfully.
Dec 13 02:33:34.298095 systemd-logind[1706]: Removed session 28.
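The closing entries show why the node briefly reported NotReady: the Ready condition embedded in the setters.go entry carries reason KubeletNotReady until the newly started cilium-agent initializes the CNI plugin, and the lxc_health link coming up (and later gaining IPv6LL) signals the agent's health interface recovering. The kernel's seqiv(rfc4106(gcm(aes))) line is consistent with IPsec transforms being loaded (the pod mounts cilium-ipsec-secrets) and is informational. The condition itself is plain JSON inside the log line; here is a sketch that decodes it with a struct mirroring only the fields visible above, rather than importing k8s.io/api.

    // node_condition.go - decodes the condition JSON embedded in the
    // kubelet "Node became not ready" entry above.
    package main

    import (
        "encoding/json"
        "fmt"
    )

    type nodeCondition struct {
        Type               string `json:"type"`
        Status             string `json:"status"`
        LastHeartbeatTime  string `json:"lastHeartbeatTime"`
        LastTransitionTime string `json:"lastTransitionTime"`
        Reason             string `json:"reason"`
        Message            string `json:"message"`
    }

    func main() {
        raw := `{"type":"Ready","status":"False","lastHeartbeatTime":"2024-12-13T02:33:29Z","lastTransitionTime":"2024-12-13T02:33:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}`
        var c nodeCondition
        if err := json.Unmarshal([]byte(raw), &c); err != nil {
            fmt.Println("parse error:", err)
            return
        }
        fmt.Printf("%s=%s (%s): %s\n", c.Type, c.Status, c.Reason, c.Message)
    }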