Feb 13 19:17:04.757523 kernel: Linux version 6.6.74-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.43 p3) 2.43.1) #1 SMP PREEMPT_DYNAMIC Thu Feb 13 17:40:15 -00 2025
Feb 13 19:17:04.757543 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=vmware flatcar.autologin verity.usrhash=f28373bbaddf11103b551b595069cf5faacb27d62f1aab4f9911393ba418b416
Feb 13 19:17:04.757550 kernel: Disabled fast string operations
Feb 13 19:17:04.757554 kernel: BIOS-provided physical RAM map:
Feb 13 19:17:04.757558 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ebff] usable
Feb 13 19:17:04.757563 kernel: BIOS-e820: [mem 0x000000000009ec00-0x000000000009ffff] reserved
Feb 13 19:17:04.757569 kernel: BIOS-e820: [mem 0x00000000000dc000-0x00000000000fffff] reserved
Feb 13 19:17:04.757573 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007fedffff] usable
Feb 13 19:17:04.758259 kernel: BIOS-e820: [mem 0x000000007fee0000-0x000000007fefefff] ACPI data
Feb 13 19:17:04.758268 kernel: BIOS-e820: [mem 0x000000007feff000-0x000000007fefffff] ACPI NVS
Feb 13 19:17:04.758273 kernel: BIOS-e820: [mem 0x000000007ff00000-0x000000007fffffff] usable
Feb 13 19:17:04.758277 kernel: BIOS-e820: [mem 0x00000000f0000000-0x00000000f7ffffff] reserved
Feb 13 19:17:04.758282 kernel: BIOS-e820: [mem 0x00000000fec00000-0x00000000fec0ffff] reserved
Feb 13 19:17:04.758287 kernel: BIOS-e820: [mem 0x00000000fee00000-0x00000000fee00fff] reserved
Feb 13 19:17:04.758295 kernel: BIOS-e820: [mem 0x00000000fffe0000-0x00000000ffffffff] reserved
Feb 13 19:17:04.758300 kernel: NX (Execute Disable) protection: active
Feb 13 19:17:04.758306 kernel: APIC: Static calls initialized
Feb 13 19:17:04.758312 kernel: SMBIOS 2.7 present.
Feb 13 19:17:04.758317 kernel: DMI: VMware, Inc. VMware Virtual Platform/440BX Desktop Reference Platform, BIOS 6.00 05/28/2020
Feb 13 19:17:04.758322 kernel: vmware: hypercall mode: 0x00
Feb 13 19:17:04.758327 kernel: Hypervisor detected: VMware
Feb 13 19:17:04.758332 kernel: vmware: TSC freq read from hypervisor : 3408.000 MHz
Feb 13 19:17:04.758338 kernel: vmware: Host bus clock speed read from hypervisor : 66000000 Hz
Feb 13 19:17:04.758343 kernel: vmware: using clock offset of 2485724256 ns
Feb 13 19:17:04.758348 kernel: tsc: Detected 3408.000 MHz processor
Feb 13 19:17:04.758354 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Feb 13 19:17:04.758359 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Feb 13 19:17:04.758364 kernel: last_pfn = 0x80000 max_arch_pfn = 0x400000000
Feb 13 19:17:04.758369 kernel: total RAM covered: 3072M
Feb 13 19:17:04.758374 kernel: Found optimal setting for mtrr clean up
Feb 13 19:17:04.758380 kernel: gran_size: 64K chunk_size: 64K num_reg: 2 lose cover RAM: 0G
Feb 13 19:17:04.758386 kernel: MTRR map: 6 entries (5 fixed + 1 variable; max 21), built from 8 variable MTRRs
Feb 13 19:17:04.758394 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Feb 13 19:17:04.758401 kernel: Using GB pages for direct mapping
Feb 13 19:17:04.758406 kernel: ACPI: Early table checksum verification disabled
Feb 13 19:17:04.758411 kernel: ACPI: RSDP 0x00000000000F6A00 000024 (v02 PTLTD )
Feb 13 19:17:04.758417 kernel: ACPI: XSDT 0x000000007FEE965B 00005C (v01 INTEL 440BX 06040000 VMW 01324272)
Feb 13 19:17:04.758423 kernel: ACPI: FACP 0x000000007FEFEE73 0000F4 (v04 INTEL 440BX 06040000 PTL 000F4240)
Feb 13 19:17:04.758428 kernel: ACPI: DSDT 0x000000007FEEAD55 01411E (v01 PTLTD Custom 06040000 MSFT 03000001)
Feb 13 19:17:04.758434 kernel: ACPI: FACS 0x000000007FEFFFC0 000040
Feb 13 19:17:04.758442 kernel: ACPI: FACS 0x000000007FEFFFC0 000040
Feb 13 19:17:04.758447 kernel: ACPI: BOOT 0x000000007FEEAD2D 000028 (v01 PTLTD $SBFTBL$ 06040000 LTP 00000001)
Feb 13 19:17:04.758452 kernel: ACPI: APIC 0x000000007FEEA5EB 000742 (v01 PTLTD ? APIC 06040000 LTP 00000000)
Feb 13 19:17:04.758458 kernel: ACPI: MCFG 0x000000007FEEA5AF 00003C (v01 PTLTD $PCITBL$ 06040000 LTP 00000001)
Feb 13 19:17:04.758463 kernel: ACPI: SRAT 0x000000007FEE9757 0008A8 (v02 VMWARE MEMPLUG 06040000 VMW 00000001)
Feb 13 19:17:04.758471 kernel: ACPI: HPET 0x000000007FEE971F 000038 (v01 VMWARE VMW HPET 06040000 VMW 00000001)
Feb 13 19:17:04.758480 kernel: ACPI: WAET 0x000000007FEE96F7 000028 (v01 VMWARE VMW WAET 06040000 VMW 00000001)
Feb 13 19:17:04.758486 kernel: ACPI: Reserving FACP table memory at [mem 0x7fefee73-0x7fefef66]
Feb 13 19:17:04.758492 kernel: ACPI: Reserving DSDT table memory at [mem 0x7feead55-0x7fefee72]
Feb 13 19:17:04.758498 kernel: ACPI: Reserving FACS table memory at [mem 0x7fefffc0-0x7fefffff]
Feb 13 19:17:04.758505 kernel: ACPI: Reserving FACS table memory at [mem 0x7fefffc0-0x7fefffff]
Feb 13 19:17:04.758512 kernel: ACPI: Reserving BOOT table memory at [mem 0x7feead2d-0x7feead54]
Feb 13 19:17:04.758518 kernel: ACPI: Reserving APIC table memory at [mem 0x7feea5eb-0x7feead2c]
Feb 13 19:17:04.758524 kernel: ACPI: Reserving MCFG table memory at [mem 0x7feea5af-0x7feea5ea]
Feb 13 19:17:04.758532 kernel: ACPI: Reserving SRAT table memory at [mem 0x7fee9757-0x7fee9ffe]
Feb 13 19:17:04.758541 kernel: ACPI: Reserving HPET table memory at [mem 0x7fee971f-0x7fee9756]
Feb 13 19:17:04.758546 kernel: ACPI: Reserving WAET table memory at [mem 0x7fee96f7-0x7fee971e]
Feb 13 19:17:04.758551 kernel: system APIC only can use physical flat
Feb 13 19:17:04.758557 kernel: APIC: Switched APIC routing to: physical flat
Feb 13 19:17:04.758563 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Feb 13 19:17:04.758570 kernel: SRAT: PXM 0 -> APIC 0x02 -> Node 0
Feb 13 19:17:04.758575 kernel: SRAT: PXM 0 -> APIC 0x04 -> Node 0
Feb 13 19:17:04.758638 kernel: SRAT: PXM 0 -> APIC 0x06 -> Node 0
Feb 13 19:17:04.758643 kernel: SRAT: PXM 0 -> APIC 0x08 -> Node 0
Feb 13 19:17:04.758648 kernel: SRAT: PXM 0 -> APIC 0x0a -> Node 0
Feb 13 19:17:04.758655 kernel: SRAT: PXM 0 -> APIC 0x0c -> Node 0
Feb 13 19:17:04.758660 kernel: SRAT: PXM 0 -> APIC 0x0e -> Node 0
Feb 13 19:17:04.758666 kernel: SRAT: PXM 0 -> APIC 0x10 -> Node 0
Feb 13 19:17:04.758671 kernel: SRAT: PXM 0 -> APIC 0x12 -> Node 0
Feb 13 19:17:04.758676 kernel: SRAT: PXM 0 -> APIC 0x14 -> Node 0
Feb 13 19:17:04.758681 kernel: SRAT: PXM 0 -> APIC 0x16 -> Node 0
Feb 13 19:17:04.758686 kernel: SRAT: PXM 0 -> APIC 0x18 -> Node 0
Feb 13 19:17:04.758691 kernel: SRAT: PXM 0 -> APIC 0x1a -> Node 0
Feb 13 19:17:04.758696 kernel: SRAT: PXM 0 -> APIC 0x1c -> Node 0
Feb 13 19:17:04.758701 kernel: SRAT: PXM 0 -> APIC 0x1e -> Node 0
Feb 13 19:17:04.758707 kernel: SRAT: PXM 0 -> APIC 0x20 -> Node 0
Feb 13 19:17:04.758713 kernel: SRAT: PXM 0 -> APIC 0x22 -> Node 0
Feb 13 19:17:04.758718 kernel: SRAT: PXM 0 -> APIC 0x24 -> Node 0
Feb 13 19:17:04.758723 kernel: SRAT: PXM 0 -> APIC 0x26 -> Node 0
Feb 13 19:17:04.758728 kernel: SRAT: PXM 0 -> APIC 0x28 -> Node 0
Feb 13 19:17:04.758733 kernel: SRAT: PXM 0 -> APIC 0x2a -> Node 0
Feb 13 19:17:04.758738 kernel: SRAT: PXM 0 -> APIC 0x2c -> Node 0
Feb 13 19:17:04.758743 kernel: SRAT: PXM 0 -> APIC 0x2e -> Node 0
Feb 13 19:17:04.758749 kernel: SRAT: PXM 0 -> APIC 0x30 -> Node 0
Feb 13 19:17:04.758754 kernel: SRAT: PXM 0 -> APIC 0x32 -> Node 0
Feb 13 19:17:04.758760 kernel: SRAT: PXM 0 -> APIC 0x34 -> Node 0
Feb 13 19:17:04.758765 kernel: SRAT: PXM 0 -> APIC 0x36 -> Node 0
Feb 13 19:17:04.758770 kernel: SRAT: PXM 0 -> APIC 0x38 -> Node 0
Feb 13 19:17:04.758776 kernel: SRAT: PXM 0 -> APIC 0x3a -> Node 0
Feb 13 19:17:04.758781 kernel: SRAT: PXM 0 -> APIC 0x3c -> Node 0
Feb 13 19:17:04.758786 kernel: SRAT: PXM 0 -> APIC 0x3e -> Node 0
Feb 13 19:17:04.758791 kernel: SRAT: PXM 0 -> APIC 0x40 -> Node 0
Feb 13 19:17:04.758796 kernel: SRAT: PXM 0 -> APIC 0x42 -> Node 0
Feb 13 19:17:04.758801 kernel: SRAT: PXM 0 -> APIC 0x44 -> Node 0
Feb 13 19:17:04.758806 kernel: SRAT: PXM 0 -> APIC 0x46 -> Node 0
Feb 13 19:17:04.758812 kernel: SRAT: PXM 0 -> APIC 0x48 -> Node 0
Feb 13 19:17:04.758817 kernel: SRAT: PXM 0 -> APIC 0x4a -> Node 0
Feb 13 19:17:04.758822 kernel: SRAT: PXM 0 -> APIC 0x4c -> Node 0
Feb 13 19:17:04.758828 kernel: SRAT: PXM 0 -> APIC 0x4e -> Node 0
Feb 13 19:17:04.758833 kernel: SRAT: PXM 0 -> APIC 0x50 -> Node 0
Feb 13 19:17:04.758837 kernel: SRAT: PXM 0 -> APIC 0x52 -> Node 0
Feb 13 19:17:04.758843 kernel: SRAT: PXM 0 -> APIC 0x54 -> Node 0
Feb 13 19:17:04.758848 kernel: SRAT: PXM 0 -> APIC 0x56 -> Node 0
Feb 13 19:17:04.758853 kernel: SRAT: PXM 0 -> APIC 0x58 -> Node 0
Feb 13 19:17:04.758858 kernel: SRAT: PXM 0 -> APIC 0x5a -> Node 0
Feb 13 19:17:04.758863 kernel: SRAT: PXM 0 -> APIC 0x5c -> Node 0
Feb 13 19:17:04.758869 kernel: SRAT: PXM 0 -> APIC 0x5e -> Node 0
Feb 13 19:17:04.758874 kernel: SRAT: PXM 0 -> APIC 0x60 -> Node 0
Feb 13 19:17:04.758880 kernel: SRAT: PXM 0 -> APIC 0x62 -> Node 0
Feb 13 19:17:04.758885 kernel: SRAT: PXM 0 -> APIC 0x64 -> Node 0
Feb 13 19:17:04.758890 kernel: SRAT: PXM 0 -> APIC 0x66 -> Node 0
Feb 13 19:17:04.758895 kernel: SRAT: PXM 0 -> APIC 0x68 -> Node 0
Feb 13 19:17:04.758900 kernel: SRAT: PXM 0 -> APIC 0x6a -> Node 0
Feb 13 19:17:04.758905 kernel: SRAT: PXM 0 -> APIC 0x6c -> Node 0
Feb 13 19:17:04.758910 kernel: SRAT: PXM 0 -> APIC 0x6e -> Node 0
Feb 13 19:17:04.758915 kernel: SRAT: PXM 0 -> APIC 0x70 -> Node 0
Feb 13 19:17:04.758921 kernel: SRAT: PXM 0 -> APIC 0x72 -> Node 0
Feb 13 19:17:04.758927 kernel: SRAT: PXM 0 -> APIC 0x74 -> Node 0
Feb 13 19:17:04.758936 kernel: SRAT: PXM 0 -> APIC 0x76 -> Node 0
Feb 13 19:17:04.758942 kernel: SRAT: PXM 0 -> APIC 0x78 -> Node 0
Feb 13 19:17:04.758948 kernel: SRAT: PXM 0 -> APIC 0x7a -> Node 0
Feb 13 19:17:04.758953 kernel: SRAT: PXM 0 -> APIC 0x7c -> Node 0
Feb 13 19:17:04.758958 kernel: SRAT: PXM 0 -> APIC 0x7e -> Node 0
Feb 13 19:17:04.758964 kernel: SRAT: PXM 0 -> APIC 0x80 -> Node 0
Feb 13 19:17:04.758969 kernel: SRAT: PXM 0 -> APIC 0x82 -> Node 0
Feb 13 19:17:04.758976 kernel: SRAT: PXM 0 -> APIC 0x84 -> Node 0
Feb 13 19:17:04.758982 kernel: SRAT: PXM 0 -> APIC 0x86 -> Node 0
Feb 13 19:17:04.758987 kernel: SRAT: PXM 0 -> APIC 0x88 -> Node 0
Feb 13 19:17:04.758993 kernel: SRAT: PXM 0 -> APIC 0x8a -> Node 0
Feb 13 19:17:04.758998 kernel: SRAT: PXM 0 -> APIC 0x8c -> Node 0
Feb 13 19:17:04.759004 kernel: SRAT: PXM 0 -> APIC 0x8e -> Node 0
Feb 13 19:17:04.759009 kernel: SRAT: PXM 0 -> APIC 0x90 -> Node 0
Feb 13 19:17:04.759015 kernel: SRAT: PXM 0 -> APIC 0x92 -> Node 0
Feb 13 19:17:04.759020 kernel: SRAT: PXM 0 -> APIC 0x94 -> Node 0
Feb 13 19:17:04.759026 kernel: SRAT: PXM 0 -> APIC 0x96 -> Node 0
Feb 13 19:17:04.759032 kernel: SRAT: PXM 0 -> APIC 0x98 -> Node 0
Feb 13 19:17:04.759038 kernel: SRAT: PXM 0 -> APIC 0x9a -> Node 0
Feb 13 19:17:04.759043 kernel: SRAT: PXM 0 -> APIC 0x9c -> Node 0
Feb 13 19:17:04.759048 kernel: SRAT: PXM 0 -> APIC 0x9e -> Node 0
Feb 13 19:17:04.759054 kernel: SRAT: PXM 0 -> APIC 0xa0 -> Node 0
Feb 13 19:17:04.759059 kernel: SRAT: PXM 0 -> APIC 0xa2 -> Node 0
Feb 13 19:17:04.759065 kernel: SRAT: PXM 0 -> APIC 0xa4 -> Node 0
Feb 13 19:17:04.759070 kernel: SRAT: PXM 0 -> APIC 0xa6 -> Node 0
Feb 13 19:17:04.759076 kernel: SRAT: PXM 0 -> APIC 0xa8 -> Node 0
Feb 13 19:17:04.759081 kernel: SRAT: PXM 0 -> APIC 0xaa -> Node 0
Feb 13 19:17:04.759087 kernel: SRAT: PXM 0 -> APIC 0xac -> Node 0
Feb 13 19:17:04.759093 kernel: SRAT: PXM 0 -> APIC 0xae -> Node 0
Feb 13 19:17:04.759099 kernel: SRAT: PXM 0 -> APIC 0xb0 -> Node 0
Feb 13 19:17:04.759104 kernel: SRAT: PXM 0 -> APIC 0xb2 -> Node 0
Feb 13 19:17:04.759110 kernel: SRAT: PXM 0 -> APIC 0xb4 -> Node 0
Feb 13 19:17:04.759115 kernel: SRAT: PXM 0 -> APIC 0xb6 -> Node 0
Feb 13 19:17:04.759120 kernel: SRAT: PXM 0 -> APIC 0xb8 -> Node 0
Feb 13 19:17:04.759126 kernel: SRAT: PXM 0 -> APIC 0xba -> Node 0
Feb 13 19:17:04.759131 kernel: SRAT: PXM 0 -> APIC 0xbc -> Node 0
Feb 13 19:17:04.759137 kernel: SRAT: PXM 0 -> APIC 0xbe -> Node 0
Feb 13 19:17:04.759143 kernel: SRAT: PXM 0 -> APIC 0xc0 -> Node 0
Feb 13 19:17:04.759149 kernel: SRAT: PXM 0 -> APIC 0xc2 -> Node 0
Feb 13 19:17:04.759155 kernel: SRAT: PXM 0 -> APIC 0xc4 -> Node 0
Feb 13 19:17:04.759160 kernel: SRAT: PXM 0 -> APIC 0xc6 -> Node 0
Feb 13 19:17:04.759166 kernel: SRAT: PXM 0 -> APIC 0xc8 -> Node 0
Feb 13 19:17:04.759171 kernel: SRAT: PXM 0 -> APIC 0xca -> Node 0
Feb 13 19:17:04.759176 kernel: SRAT: PXM 0 -> APIC 0xcc -> Node 0
Feb 13 19:17:04.759182 kernel: SRAT: PXM 0 -> APIC 0xce -> Node 0
Feb 13 19:17:04.759187 kernel: SRAT: PXM 0 -> APIC 0xd0 -> Node 0
Feb 13 19:17:04.759193 kernel: SRAT: PXM 0 -> APIC 0xd2 -> Node 0
Feb 13 19:17:04.759198 kernel: SRAT: PXM 0 -> APIC 0xd4 -> Node 0
Feb 13 19:17:04.759204 kernel: SRAT: PXM 0 -> APIC 0xd6 -> Node 0
Feb 13 19:17:04.759210 kernel: SRAT: PXM 0 -> APIC 0xd8 -> Node 0
Feb 13 19:17:04.759215 kernel: SRAT: PXM 0 -> APIC 0xda -> Node 0
Feb 13 19:17:04.759220 kernel: SRAT: PXM 0 -> APIC 0xdc -> Node 0
Feb 13 19:17:04.759227 kernel: SRAT: PXM 0 -> APIC 0xde -> Node 0
Feb 13 19:17:04.759236 kernel: SRAT: PXM 0 -> APIC 0xe0 -> Node 0
Feb 13 19:17:04.759245 kernel: SRAT: PXM 0 -> APIC 0xe2 -> Node 0
Feb 13 19:17:04.759252 kernel: SRAT: PXM 0 -> APIC 0xe4 -> Node 0
Feb 13 19:17:04.759258 kernel: SRAT: PXM 0 -> APIC 0xe6 -> Node 0
Feb 13 19:17:04.759263 kernel: SRAT: PXM 0 -> APIC 0xe8 -> Node 0
Feb 13 19:17:04.759270 kernel: SRAT: PXM 0 -> APIC 0xea -> Node 0
Feb 13 19:17:04.759276 kernel: SRAT: PXM 0 -> APIC 0xec -> Node 0
Feb 13 19:17:04.759281 kernel: SRAT: PXM 0 -> APIC 0xee -> Node 0
Feb 13 19:17:04.759286 kernel: SRAT: PXM 0 -> APIC 0xf0 -> Node 0
Feb 13 19:17:04.759291 kernel: SRAT: PXM 0 -> APIC 0xf2 -> Node 0
Feb 13 19:17:04.759297 kernel: SRAT: PXM 0 -> APIC 0xf4 -> Node 0
Feb 13 19:17:04.759303 kernel: SRAT: PXM 0 -> APIC 0xf6 -> Node 0
Feb 13 19:17:04.759308 kernel: SRAT: PXM 0 -> APIC 0xf8 -> Node 0
Feb 13 19:17:04.759314 kernel: SRAT: PXM 0 -> APIC 0xfa -> Node 0
Feb 13 19:17:04.759319 kernel: SRAT: PXM 0 -> APIC 0xfc -> Node 0
Feb 13 19:17:04.759326 kernel: SRAT: PXM 0 -> APIC 0xfe -> Node 0
Feb 13 19:17:04.759331 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff]
Feb 13 19:17:04.759337 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff]
Feb 13 19:17:04.759343 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000-0xbfffffff] hotplug
Feb 13 19:17:04.759348 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0x7fffffff] -> [mem 0x00000000-0x7fffffff]
Feb 13 19:17:04.759354 kernel: NODE_DATA(0) allocated [mem 0x7fffa000-0x7fffffff]
Feb 13 19:17:04.759360 kernel: Zone ranges:
Feb 13 19:17:04.759366 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Feb 13 19:17:04.759371 kernel: DMA32 [mem 0x0000000001000000-0x000000007fffffff]
Feb 13 19:17:04.759378 kernel: Normal empty
Feb 13 19:17:04.759386 kernel: Movable zone start for each node
Feb 13 19:17:04.759392 kernel: Early memory node ranges
Feb 13 19:17:04.759398 kernel: node 0: [mem 0x0000000000001000-0x000000000009dfff]
Feb 13 19:17:04.759403 kernel: node 0: [mem 0x0000000000100000-0x000000007fedffff]
Feb 13 19:17:04.759409 kernel: node 0: [mem 0x000000007ff00000-0x000000007fffffff]
Feb 13 19:17:04.759414 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007fffffff]
Feb 13 19:17:04.759420 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Feb 13 19:17:04.759425 kernel: On node 0, zone DMA: 98 pages in unavailable ranges
Feb 13 19:17:04.759431 kernel: On node 0, zone DMA32: 32 pages in unavailable ranges
Feb 13 19:17:04.759438 kernel: ACPI: PM-Timer IO Port: 0x1008
Feb 13 19:17:04.759443 kernel: system APIC only can use physical flat
Feb 13 19:17:04.759449 kernel: ACPI: LAPIC_NMI (acpi_id[0x00] high edge lint[0x1])
Feb 13 19:17:04.759454 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] high edge lint[0x1])
Feb 13 19:17:04.759460 kernel: ACPI: LAPIC_NMI (acpi_id[0x02] high edge lint[0x1])
Feb 13 19:17:04.759466 kernel: ACPI: LAPIC_NMI (acpi_id[0x03] high edge lint[0x1])
Feb 13 19:17:04.759471 kernel: ACPI: LAPIC_NMI (acpi_id[0x04] high edge lint[0x1])
Feb 13 19:17:04.759477 kernel: ACPI: LAPIC_NMI (acpi_id[0x05] high edge lint[0x1])
Feb 13 19:17:04.759482 kernel: ACPI: LAPIC_NMI (acpi_id[0x06] high edge lint[0x1])
Feb 13 19:17:04.759488 kernel: ACPI: LAPIC_NMI (acpi_id[0x07] high edge lint[0x1])
Feb 13 19:17:04.759494 kernel: ACPI: LAPIC_NMI (acpi_id[0x08] high edge lint[0x1])
Feb 13 19:17:04.759500 kernel: ACPI: LAPIC_NMI (acpi_id[0x09] high edge lint[0x1])
Feb 13 19:17:04.759505 kernel: ACPI: LAPIC_NMI (acpi_id[0x0a] high edge lint[0x1])
Feb 13 19:17:04.759511 kernel: ACPI: LAPIC_NMI (acpi_id[0x0b] high edge lint[0x1])
Feb 13 19:17:04.759516 kernel: ACPI: LAPIC_NMI (acpi_id[0x0c] high edge lint[0x1])
Feb 13 19:17:04.759522 kernel: ACPI: LAPIC_NMI (acpi_id[0x0d] high edge lint[0x1])
Feb 13 19:17:04.759527 kernel: ACPI: LAPIC_NMI (acpi_id[0x0e] high edge lint[0x1])
Feb 13 19:17:04.759533 kernel: ACPI: LAPIC_NMI (acpi_id[0x0f] high edge lint[0x1])
Feb 13 19:17:04.759538 kernel: ACPI: LAPIC_NMI (acpi_id[0x10] high edge lint[0x1])
Feb 13 19:17:04.759543 kernel: ACPI: LAPIC_NMI (acpi_id[0x11] high edge lint[0x1])
Feb 13 19:17:04.759550 kernel: ACPI: LAPIC_NMI (acpi_id[0x12] high edge lint[0x1])
Feb 13 19:17:04.759556 kernel: ACPI: LAPIC_NMI (acpi_id[0x13] high edge lint[0x1])
Feb 13 19:17:04.759561 kernel: ACPI: LAPIC_NMI (acpi_id[0x14] high edge lint[0x1])
Feb 13 19:17:04.759567 kernel: ACPI: LAPIC_NMI (acpi_id[0x15] high edge lint[0x1])
Feb 13 19:17:04.759572 kernel: ACPI: LAPIC_NMI (acpi_id[0x16] high edge lint[0x1])
Feb 13 19:17:04.759585 kernel: ACPI: LAPIC_NMI (acpi_id[0x17] high edge lint[0x1])
Feb 13 19:17:04.759597 kernel: ACPI: LAPIC_NMI (acpi_id[0x18] high edge lint[0x1])
Feb 13 19:17:04.759603 kernel: ACPI: LAPIC_NMI (acpi_id[0x19] high edge lint[0x1])
Feb 13 19:17:04.759609 kernel: ACPI: LAPIC_NMI (acpi_id[0x1a] high edge lint[0x1])
Feb 13 19:17:04.759616 kernel: ACPI: LAPIC_NMI (acpi_id[0x1b] high edge lint[0x1])
Feb 13 19:17:04.759622 kernel: ACPI: LAPIC_NMI (acpi_id[0x1c] high edge lint[0x1])
Feb 13 19:17:04.759627 kernel: ACPI: LAPIC_NMI (acpi_id[0x1d] high edge lint[0x1])
Feb 13 19:17:04.759633 kernel: ACPI: LAPIC_NMI (acpi_id[0x1e] high edge lint[0x1])
Feb 13 19:17:04.759638 kernel: ACPI: LAPIC_NMI (acpi_id[0x1f] high edge lint[0x1])
Feb 13 19:17:04.759644 kernel: ACPI: LAPIC_NMI (acpi_id[0x20] high edge lint[0x1])
Feb 13 19:17:04.759649 kernel: ACPI: LAPIC_NMI (acpi_id[0x21] high edge lint[0x1])
Feb 13 19:17:04.759655 kernel: ACPI: LAPIC_NMI (acpi_id[0x22] high edge lint[0x1])
Feb 13 19:17:04.759661 kernel: ACPI: LAPIC_NMI (acpi_id[0x23] high edge lint[0x1])
Feb 13 19:17:04.759666 kernel: ACPI: LAPIC_NMI (acpi_id[0x24] high edge lint[0x1])
Feb 13 19:17:04.759673 kernel: ACPI: LAPIC_NMI (acpi_id[0x25] high edge lint[0x1])
Feb 13 19:17:04.759679 kernel: ACPI: LAPIC_NMI (acpi_id[0x26] high edge lint[0x1])
Feb 13 19:17:04.759684 kernel: ACPI: LAPIC_NMI (acpi_id[0x27] high edge lint[0x1])
Feb 13 19:17:04.759690 kernel: ACPI: LAPIC_NMI (acpi_id[0x28] high edge lint[0x1])
Feb 13 19:17:04.759695 kernel: ACPI: LAPIC_NMI (acpi_id[0x29] high edge lint[0x1])
Feb 13 19:17:04.759701 kernel: ACPI: LAPIC_NMI (acpi_id[0x2a] high edge lint[0x1])
Feb 13 19:17:04.759706 kernel: ACPI: LAPIC_NMI (acpi_id[0x2b] high edge lint[0x1])
Feb 13 19:17:04.759712 kernel: ACPI: LAPIC_NMI (acpi_id[0x2c] high edge lint[0x1])
Feb 13 19:17:04.759717 kernel: ACPI: LAPIC_NMI (acpi_id[0x2d] high edge lint[0x1])
Feb 13 19:17:04.759722 kernel: ACPI: LAPIC_NMI (acpi_id[0x2e] high edge lint[0x1])
Feb 13 19:17:04.759729 kernel: ACPI: LAPIC_NMI (acpi_id[0x2f] high edge lint[0x1])
Feb 13 19:17:04.759734 kernel: ACPI: LAPIC_NMI (acpi_id[0x30] high edge lint[0x1])
Feb 13 19:17:04.759740 kernel: ACPI: LAPIC_NMI (acpi_id[0x31] high edge lint[0x1])
Feb 13 19:17:04.759745 kernel: ACPI: LAPIC_NMI (acpi_id[0x32] high edge lint[0x1])
Feb 13 19:17:04.759751 kernel: ACPI: LAPIC_NMI (acpi_id[0x33] high edge lint[0x1])
Feb 13 19:17:04.759756 kernel: ACPI: LAPIC_NMI (acpi_id[0x34] high edge lint[0x1])
Feb 13 19:17:04.759762 kernel: ACPI: LAPIC_NMI (acpi_id[0x35] high edge lint[0x1])
Feb 13 19:17:04.759767 kernel: ACPI: LAPIC_NMI (acpi_id[0x36] high edge lint[0x1])
Feb 13 19:17:04.759772 kernel: ACPI: LAPIC_NMI (acpi_id[0x37] high edge lint[0x1])
Feb 13 19:17:04.759778 kernel: ACPI: LAPIC_NMI (acpi_id[0x38] high edge lint[0x1])
Feb 13 19:17:04.759784 kernel: ACPI: LAPIC_NMI (acpi_id[0x39] high edge lint[0x1])
Feb 13 19:17:04.759790 kernel: ACPI: LAPIC_NMI (acpi_id[0x3a] high edge lint[0x1])
Feb 13 19:17:04.759795 kernel: ACPI: LAPIC_NMI (acpi_id[0x3b] high edge lint[0x1])
Feb 13 19:17:04.759801 kernel: ACPI: LAPIC_NMI (acpi_id[0x3c] high edge lint[0x1])
Feb 13 19:17:04.759807 kernel: ACPI: LAPIC_NMI (acpi_id[0x3d] high edge lint[0x1])
Feb 13 19:17:04.759814 kernel: ACPI: LAPIC_NMI (acpi_id[0x3e] high edge lint[0x1])
Feb 13 19:17:04.759820 kernel: ACPI: LAPIC_NMI (acpi_id[0x3f] high edge lint[0x1])
Feb 13 19:17:04.759826 kernel: ACPI: LAPIC_NMI (acpi_id[0x40] high edge lint[0x1])
Feb 13 19:17:04.759832 kernel: ACPI: LAPIC_NMI (acpi_id[0x41] high edge lint[0x1])
Feb 13 19:17:04.759839 kernel: ACPI: LAPIC_NMI (acpi_id[0x42] high edge lint[0x1])
Feb 13 19:17:04.759848 kernel: ACPI: LAPIC_NMI (acpi_id[0x43] high edge lint[0x1])
Feb 13 19:17:04.759855 kernel: ACPI: LAPIC_NMI (acpi_id[0x44] high edge lint[0x1])
Feb 13 19:17:04.759861 kernel: ACPI: LAPIC_NMI (acpi_id[0x45] high edge lint[0x1])
Feb 13 19:17:04.759869 kernel: ACPI: LAPIC_NMI (acpi_id[0x46] high edge lint[0x1])
Feb 13 19:17:04.759874 kernel: ACPI: LAPIC_NMI (acpi_id[0x47] high edge lint[0x1])
Feb 13 19:17:04.759880 kernel: ACPI: LAPIC_NMI (acpi_id[0x48] high edge lint[0x1])
Feb 13 19:17:04.759886 kernel: ACPI: LAPIC_NMI (acpi_id[0x49] high edge lint[0x1])
Feb 13 19:17:04.759892 kernel: ACPI: LAPIC_NMI (acpi_id[0x4a] high edge lint[0x1])
Feb 13 19:17:04.759897 kernel: ACPI: LAPIC_NMI (acpi_id[0x4b] high edge lint[0x1])
Feb 13 19:17:04.759904 kernel: ACPI: LAPIC_NMI (acpi_id[0x4c] high edge lint[0x1])
Feb 13 19:17:04.759910 kernel: ACPI: LAPIC_NMI (acpi_id[0x4d] high edge lint[0x1])
Feb 13 19:17:04.759917 kernel: ACPI: LAPIC_NMI (acpi_id[0x4e] high edge lint[0x1])
Feb 13 19:17:04.759923 kernel: ACPI: LAPIC_NMI (acpi_id[0x4f] high edge lint[0x1])
Feb 13 19:17:04.759929 kernel: ACPI: LAPIC_NMI (acpi_id[0x50] high edge lint[0x1])
Feb 13 19:17:04.759936 kernel: ACPI: LAPIC_NMI (acpi_id[0x51] high edge lint[0x1])
Feb 13 19:17:04.759943 kernel: ACPI: LAPIC_NMI (acpi_id[0x52] high edge lint[0x1])
Feb 13 19:17:04.759950 kernel: ACPI: LAPIC_NMI (acpi_id[0x53] high edge lint[0x1])
Feb 13 19:17:04.759956 kernel: ACPI: LAPIC_NMI (acpi_id[0x54] high edge lint[0x1])
Feb 13 19:17:04.759962 kernel: ACPI: LAPIC_NMI (acpi_id[0x55] high edge lint[0x1])
Feb 13 19:17:04.759972 kernel: ACPI: LAPIC_NMI (acpi_id[0x56] high edge lint[0x1])
Feb 13 19:17:04.759979 kernel: ACPI: LAPIC_NMI (acpi_id[0x57] high edge lint[0x1])
Feb 13 19:17:04.759987 kernel: ACPI: LAPIC_NMI (acpi_id[0x58] high edge lint[0x1])
Feb 13 19:17:04.759993 kernel: ACPI: LAPIC_NMI (acpi_id[0x59] high edge lint[0x1])
Feb 13 19:17:04.760000 kernel: ACPI: LAPIC_NMI (acpi_id[0x5a] high edge lint[0x1])
Feb 13 19:17:04.760006 kernel: ACPI: LAPIC_NMI (acpi_id[0x5b] high edge lint[0x1])
Feb 13 19:17:04.760013 kernel: ACPI: LAPIC_NMI (acpi_id[0x5c] high edge lint[0x1])
Feb 13 19:17:04.760019 kernel: ACPI: LAPIC_NMI (acpi_id[0x5d] high edge lint[0x1])
Feb 13 19:17:04.760026 kernel: ACPI: LAPIC_NMI (acpi_id[0x5e] high edge lint[0x1])
Feb 13 19:17:04.760032 kernel: ACPI: LAPIC_NMI (acpi_id[0x5f] high edge lint[0x1])
Feb 13 19:17:04.760040 kernel: ACPI: LAPIC_NMI (acpi_id[0x60] high edge lint[0x1])
Feb 13 19:17:04.760046 kernel: ACPI: LAPIC_NMI (acpi_id[0x61] high edge lint[0x1])
Feb 13 19:17:04.760051 kernel: ACPI: LAPIC_NMI (acpi_id[0x62] high edge lint[0x1])
Feb 13 19:17:04.760057 kernel: ACPI: LAPIC_NMI (acpi_id[0x63] high edge lint[0x1])
Feb 13 19:17:04.760062 kernel: ACPI: LAPIC_NMI (acpi_id[0x64] high edge lint[0x1])
Feb 13 19:17:04.760068 kernel: ACPI: LAPIC_NMI (acpi_id[0x65] high edge lint[0x1])
Feb 13 19:17:04.760073 kernel: ACPI: LAPIC_NMI (acpi_id[0x66] high edge lint[0x1])
Feb 13 19:17:04.760079 kernel: ACPI: LAPIC_NMI (acpi_id[0x67] high edge lint[0x1])
Feb 13 19:17:04.760084 kernel: ACPI: LAPIC_NMI (acpi_id[0x68] high edge lint[0x1])
Feb 13 19:17:04.760091 kernel: ACPI: LAPIC_NMI (acpi_id[0x69] high edge lint[0x1])
Feb 13 19:17:04.760097 kernel: ACPI: LAPIC_NMI (acpi_id[0x6a] high edge lint[0x1])
Feb 13 19:17:04.760102 kernel: ACPI: LAPIC_NMI (acpi_id[0x6b] high edge lint[0x1])
Feb 13 19:17:04.760108 kernel: ACPI: LAPIC_NMI (acpi_id[0x6c] high edge lint[0x1])
Feb 13 19:17:04.760113 kernel: ACPI: LAPIC_NMI (acpi_id[0x6d] high edge lint[0x1])
Feb 13 19:17:04.760119 kernel: ACPI: LAPIC_NMI (acpi_id[0x6e] high edge lint[0x1])
Feb 13 19:17:04.760124 kernel: ACPI: LAPIC_NMI (acpi_id[0x6f] high edge lint[0x1])
Feb 13 19:17:04.760130 kernel: ACPI: LAPIC_NMI (acpi_id[0x70] high edge lint[0x1])
Feb 13 19:17:04.760136 kernel: ACPI: LAPIC_NMI (acpi_id[0x71] high edge lint[0x1])
Feb 13 19:17:04.760141 kernel: ACPI: LAPIC_NMI (acpi_id[0x72] high edge lint[0x1])
Feb 13 19:17:04.760148 kernel: ACPI: LAPIC_NMI (acpi_id[0x73] high edge lint[0x1])
Feb 13 19:17:04.760153 kernel: ACPI: LAPIC_NMI (acpi_id[0x74] high edge lint[0x1])
Feb 13 19:17:04.760159 kernel: ACPI: LAPIC_NMI (acpi_id[0x75] high edge lint[0x1])
Feb 13 19:17:04.760164 kernel: ACPI: LAPIC_NMI (acpi_id[0x76] high edge lint[0x1])
Feb 13 19:17:04.760170 kernel: ACPI: LAPIC_NMI (acpi_id[0x77] high edge lint[0x1])
Feb 13 19:17:04.760175 kernel: ACPI: LAPIC_NMI (acpi_id[0x78] high edge lint[0x1])
Feb 13 19:17:04.760181 kernel: ACPI: LAPIC_NMI (acpi_id[0x79] high edge lint[0x1])
Feb 13 19:17:04.760186 kernel: ACPI: LAPIC_NMI (acpi_id[0x7a] high edge lint[0x1])
Feb 13 19:17:04.760192 kernel: ACPI: LAPIC_NMI (acpi_id[0x7b] high edge lint[0x1])
Feb 13 19:17:04.760197 kernel: ACPI: LAPIC_NMI (acpi_id[0x7c] high edge lint[0x1])
Feb 13 19:17:04.760204 kernel: ACPI: LAPIC_NMI (acpi_id[0x7d] high edge lint[0x1])
Feb 13 19:17:04.760209 kernel: ACPI: LAPIC_NMI (acpi_id[0x7e] high edge lint[0x1])
Feb 13 19:17:04.760215 kernel: ACPI: LAPIC_NMI (acpi_id[0x7f] high edge lint[0x1])
Feb 13 19:17:04.760220 kernel: IOAPIC[0]: apic_id 1, version 17, address 0xfec00000, GSI 0-23
Feb 13 19:17:04.760226 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 high edge)
Feb 13 19:17:04.760232 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Feb 13 19:17:04.760238 kernel: ACPI: HPET id: 0x8086af01 base: 0xfed00000
Feb 13 19:17:04.760246 kernel: TSC deadline timer available
Feb 13 19:17:04.760255 kernel: smpboot: Allowing 128 CPUs, 126 hotplug CPUs
Feb 13 19:17:04.760263 kernel: [mem 0x80000000-0xefffffff] available for PCI devices
Feb 13 19:17:04.760268 kernel: Booting paravirtualized kernel on VMware hypervisor
Feb 13 19:17:04.760274 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Feb 13 19:17:04.760280 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:128 nr_cpu_ids:128 nr_node_ids:1
Feb 13 19:17:04.760286 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u262144
Feb 13 19:17:04.760291 kernel: pcpu-alloc: s197032 r8192 d32344 u262144 alloc=1*2097152
Feb 13 19:17:04.760297 kernel: pcpu-alloc: [0] 000 001 002 003 004 005 006 007
Feb 13 19:17:04.760302 kernel: pcpu-alloc: [0] 008 009 010 011 012 013 014 015
Feb 13 19:17:04.760308 kernel: pcpu-alloc: [0] 016 017 018 019 020 021 022 023
Feb 13 19:17:04.760314 kernel: pcpu-alloc: [0] 024 025 026 027 028 029 030 031
Feb 13 19:17:04.760320 kernel: pcpu-alloc: [0] 032 033 034 035 036 037 038 039
Feb 13 19:17:04.760333 kernel: pcpu-alloc: [0] 040 041 042 043 044 045 046 047
Feb 13 19:17:04.760340 kernel: pcpu-alloc: [0] 048 049 050 051 052 053 054 055
Feb 13 19:17:04.760346 kernel: pcpu-alloc: [0] 056 057 058 059 060 061 062 063
Feb 13 19:17:04.760352 kernel: pcpu-alloc: [0] 064 065 066 067 068 069 070 071
Feb 13 19:17:04.760358 kernel: pcpu-alloc: [0] 072 073 074 075 076 077 078 079
Feb 13 19:17:04.760363 kernel: pcpu-alloc: [0] 080 081 082 083 084 085 086 087
Feb 13 19:17:04.760369 kernel: pcpu-alloc: [0] 088 089 090 091 092 093 094 095
Feb 13 19:17:04.760377 kernel: pcpu-alloc: [0] 096 097 098 099 100 101 102 103
Feb 13 19:17:04.760382 kernel: pcpu-alloc: [0] 104 105 106 107 108 109 110 111
Feb 13 19:17:04.760388 kernel: pcpu-alloc: [0] 112 113 114 115 116 117 118 119
Feb 13 19:17:04.760394 kernel: pcpu-alloc: [0] 120 121 122 123 124 125 126 127
Feb 13 19:17:04.760400 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=vmware flatcar.autologin verity.usrhash=f28373bbaddf11103b551b595069cf5faacb27d62f1aab4f9911393ba418b416
Feb 13 19:17:04.760407 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Feb 13 19:17:04.760412 kernel: random: crng init done
Feb 13 19:17:04.760418 kernel: printk: log_buf_len individual max cpu contribution: 4096 bytes
Feb 13 19:17:04.760425 kernel: printk: log_buf_len total cpu_extra contributions: 520192 bytes
Feb 13 19:17:04.760431 kernel: printk: log_buf_len min size: 262144 bytes
Feb 13 19:17:04.760437 kernel: printk: log_buf_len: 1048576 bytes
Feb 13 19:17:04.760443 kernel: printk: early log buf free: 239648(91%)
Feb 13 19:17:04.760449 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Feb 13 19:17:04.760455 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Feb 13 19:17:04.760462 kernel: Fallback order for Node 0: 0
Feb 13 19:17:04.760469 kernel: Built 1 zonelists, mobility grouping on. Total pages: 515808
Feb 13 19:17:04.760477 kernel: Policy zone: DMA32
Feb 13 19:17:04.760484 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Feb 13 19:17:04.760497 kernel: Memory: 1934316K/2096628K available (14336K kernel code, 2301K rwdata, 22852K rodata, 43476K init, 1596K bss, 162052K reserved, 0K cma-reserved)
Feb 13 19:17:04.760516 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=128, Nodes=1
Feb 13 19:17:04.760538 kernel: ftrace: allocating 37893 entries in 149 pages
Feb 13 19:17:04.760547 kernel: ftrace: allocated 149 pages with 4 groups
Feb 13 19:17:04.760557 kernel: Dynamic Preempt: voluntary
Feb 13 19:17:04.760575 kernel: rcu: Preemptible hierarchical RCU implementation.
Feb 13 19:17:04.760593 kernel: rcu: RCU event tracing is enabled.
Feb 13 19:17:04.760600 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=128.
Feb 13 19:17:04.760606 kernel: Trampoline variant of Tasks RCU enabled.
Feb 13 19:17:04.760612 kernel: Rude variant of Tasks RCU enabled.
Feb 13 19:17:04.760618 kernel: Tracing variant of Tasks RCU enabled.
Feb 13 19:17:04.760623 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Feb 13 19:17:04.760629 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=128
Feb 13 19:17:04.760635 kernel: NR_IRQS: 33024, nr_irqs: 1448, preallocated irqs: 16
Feb 13 19:17:04.760644 kernel: rcu: srcu_init: Setting srcu_struct sizes to big.
Feb 13 19:17:04.760650 kernel: Console: colour VGA+ 80x25
Feb 13 19:17:04.760655 kernel: printk: console [tty0] enabled
Feb 13 19:17:04.760661 kernel: printk: console [ttyS0] enabled
Feb 13 19:17:04.760667 kernel: ACPI: Core revision 20230628
Feb 13 19:17:04.760673 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 133484882848 ns
Feb 13 19:17:04.760679 kernel: APIC: Switch to symmetric I/O mode setup
Feb 13 19:17:04.760685 kernel: x2apic enabled
Feb 13 19:17:04.760691 kernel: APIC: Switched APIC routing to: physical x2apic
Feb 13 19:17:04.760699 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Feb 13 19:17:04.760705 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x311fd3cd494, max_idle_ns: 440795223879 ns
Feb 13 19:17:04.760712 kernel: Calibrating delay loop (skipped) preset value.. 6816.00 BogoMIPS (lpj=3408000)
Feb 13 19:17:04.760722 kernel: Disabled fast string operations
Feb 13 19:17:04.760730 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8
Feb 13 19:17:04.760739 kernel: Last level dTLB entries: 4KB 64, 2MB 0, 4MB 0, 1GB 4
Feb 13 19:17:04.760747 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Feb 13 19:17:04.760753 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on vm exit
Feb 13 19:17:04.760759 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall
Feb 13 19:17:04.760767 kernel: Spectre V2 : Mitigation: Enhanced / Automatic IBRS
Feb 13 19:17:04.760773 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Feb 13 19:17:04.760779 kernel: Spectre V2 : Spectre v2 / PBRSB-eIBRS: Retire a single CALL on VMEXIT
Feb 13 19:17:04.760785 kernel: RETBleed: Mitigation: Enhanced IBRS
Feb 13 19:17:04.760792 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Feb 13 19:17:04.760798 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Feb 13 19:17:04.760804 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Feb 13 19:17:04.760814 kernel: SRBDS: Unknown: Dependent on hypervisor status
Feb 13 19:17:04.760820 kernel: GDS: Unknown: Dependent on hypervisor status
Feb 13 19:17:04.760828 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Feb 13 19:17:04.760834 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Feb 13 19:17:04.760840 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Feb 13 19:17:04.760846 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Feb 13 19:17:04.760852 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Feb 13 19:17:04.760858 kernel: Freeing SMP alternatives memory: 32K
Feb 13 19:17:04.760864 kernel: pid_max: default: 131072 minimum: 1024
Feb 13 19:17:04.760870 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Feb 13 19:17:04.760876 kernel: landlock: Up and running.
Feb 13 19:17:04.760883 kernel: SELinux: Initializing.
Feb 13 19:17:04.760889 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Feb 13 19:17:04.760896 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Feb 13 19:17:04.760902 kernel: smpboot: CPU0: Intel(R) Xeon(R) E-2278G CPU @ 3.40GHz (family: 0x6, model: 0x9e, stepping: 0xd)
Feb 13 19:17:04.760908 kernel: RCU Tasks: Setting shift to 7 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=128.
Feb 13 19:17:04.760914 kernel: RCU Tasks Rude: Setting shift to 7 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=128.
Feb 13 19:17:04.760920 kernel: RCU Tasks Trace: Setting shift to 7 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=128.
Feb 13 19:17:04.760928 kernel: Performance Events: Skylake events, core PMU driver.
Feb 13 19:17:04.760937 kernel: core: CPUID marked event: 'cpu cycles' unavailable
Feb 13 19:17:04.760943 kernel: core: CPUID marked event: 'instructions' unavailable
Feb 13 19:17:04.760949 kernel: core: CPUID marked event: 'bus cycles' unavailable
Feb 13 19:17:04.760955 kernel: core: CPUID marked event: 'cache references' unavailable
Feb 13 19:17:04.760961 kernel: core: CPUID marked event: 'cache misses' unavailable
Feb 13 19:17:04.760967 kernel: core: CPUID marked event: 'branch instructions' unavailable
Feb 13 19:17:04.760973 kernel: core: CPUID marked event: 'branch misses' unavailable
Feb 13 19:17:04.760979 kernel: ... version: 1
Feb 13 19:17:04.760985 kernel: ... bit width: 48
Feb 13 19:17:04.760992 kernel: ... generic registers: 4
Feb 13 19:17:04.760998 kernel: ... value mask: 0000ffffffffffff
Feb 13 19:17:04.761004 kernel: ...
max period: 000000007fffffff Feb 13 19:17:04.761010 kernel: ... fixed-purpose events: 0 Feb 13 19:17:04.761017 kernel: ... event mask: 000000000000000f Feb 13 19:17:04.761023 kernel: signal: max sigframe size: 1776 Feb 13 19:17:04.761029 kernel: rcu: Hierarchical SRCU implementation. Feb 13 19:17:04.761035 kernel: rcu: Max phase no-delay instances is 400. Feb 13 19:17:04.761041 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Feb 13 19:17:04.761048 kernel: smp: Bringing up secondary CPUs ... Feb 13 19:17:04.761054 kernel: smpboot: x86: Booting SMP configuration: Feb 13 19:17:04.761061 kernel: .... node #0, CPUs: #1 Feb 13 19:17:04.761069 kernel: Disabled fast string operations Feb 13 19:17:04.761075 kernel: smpboot: CPU 1 Converting physical 2 to logical package 1 Feb 13 19:17:04.761081 kernel: smpboot: CPU 1 Converting physical 0 to logical die 1 Feb 13 19:17:04.761086 kernel: smp: Brought up 1 node, 2 CPUs Feb 13 19:17:04.761095 kernel: smpboot: Max logical packages: 128 Feb 13 19:17:04.761101 kernel: smpboot: Total of 2 processors activated (13632.00 BogoMIPS) Feb 13 19:17:04.761107 kernel: devtmpfs: initialized Feb 13 19:17:04.761114 kernel: x86/mm: Memory block size: 128MB Feb 13 19:17:04.761120 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x7feff000-0x7fefffff] (4096 bytes) Feb 13 19:17:04.761126 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Feb 13 19:17:04.761133 kernel: futex hash table entries: 32768 (order: 9, 2097152 bytes, linear) Feb 13 19:17:04.761138 kernel: pinctrl core: initialized pinctrl subsystem Feb 13 19:17:04.761145 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Feb 13 19:17:04.761151 kernel: audit: initializing netlink subsys (disabled) Feb 13 19:17:04.761157 kernel: audit: type=2000 audit(1739474223.067:1): state=initialized audit_enabled=0 res=1 Feb 13 19:17:04.761164 kernel: thermal_sys: Registered thermal governor 'step_wise' Feb 13 19:17:04.761171 
kernel: thermal_sys: Registered thermal governor 'user_space' Feb 13 19:17:04.761177 kernel: cpuidle: using governor menu Feb 13 19:17:04.761183 kernel: Simple Boot Flag at 0x36 set to 0x80 Feb 13 19:17:04.761189 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Feb 13 19:17:04.761195 kernel: dca service started, version 1.12.1 Feb 13 19:17:04.761201 kernel: PCI: MMCONFIG for domain 0000 [bus 00-7f] at [mem 0xf0000000-0xf7ffffff] (base 0xf0000000) Feb 13 19:17:04.761207 kernel: PCI: Using configuration type 1 for base access Feb 13 19:17:04.761213 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. Feb 13 19:17:04.761219 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Feb 13 19:17:04.761228 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Feb 13 19:17:04.761233 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Feb 13 19:17:04.761239 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Feb 13 19:17:04.761245 kernel: ACPI: Added _OSI(Module Device) Feb 13 19:17:04.761254 kernel: ACPI: Added _OSI(Processor Device) Feb 13 19:17:04.761280 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Feb 13 19:17:04.761309 kernel: ACPI: Added _OSI(Processor Aggregator Device) Feb 13 19:17:04.761332 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Feb 13 19:17:04.761338 kernel: ACPI: [Firmware Bug]: BIOS _OSI(Linux) query ignored Feb 13 19:17:04.761346 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Feb 13 19:17:04.761352 kernel: ACPI: Interpreter enabled Feb 13 19:17:04.761358 kernel: ACPI: PM: (supports S0 S1 S5) Feb 13 19:17:04.761364 kernel: ACPI: Using IOAPIC for interrupt routing Feb 13 19:17:04.761370 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Feb 13 19:17:04.761376 kernel: PCI: Using E820 reservations for host bridge windows Feb 13 19:17:04.761382 kernel: ACPI: Enabled 4 
GPEs in block 00 to 0F Feb 13 19:17:04.761388 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-7f]) Feb 13 19:17:04.761504 kernel: acpi PNP0A03:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Feb 13 19:17:04.761566 kernel: acpi PNP0A03:00: _OSC: platform does not support [AER LTR] Feb 13 19:17:04.761639 kernel: acpi PNP0A03:00: _OSC: OS now controls [PCIeHotplug PME PCIeCapability] Feb 13 19:17:04.761649 kernel: PCI host bridge to bus 0000:00 Feb 13 19:17:04.761703 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Feb 13 19:17:04.761750 kernel: pci_bus 0000:00: root bus resource [mem 0x000cc000-0x000dbfff window] Feb 13 19:17:04.761805 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] Feb 13 19:17:04.761881 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Feb 13 19:17:04.761927 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xfeff window] Feb 13 19:17:04.761972 kernel: pci_bus 0000:00: root bus resource [bus 00-7f] Feb 13 19:17:04.762034 kernel: pci 0000:00:00.0: [8086:7190] type 00 class 0x060000 Feb 13 19:17:04.762096 kernel: pci 0000:00:01.0: [8086:7191] type 01 class 0x060400 Feb 13 19:17:04.762155 kernel: pci 0000:00:07.0: [8086:7110] type 00 class 0x060100 Feb 13 19:17:04.762238 kernel: pci 0000:00:07.1: [8086:7111] type 00 class 0x01018a Feb 13 19:17:04.762303 kernel: pci 0000:00:07.1: reg 0x20: [io 0x1060-0x106f] Feb 13 19:17:04.762371 kernel: pci 0000:00:07.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7] Feb 13 19:17:04.762434 kernel: pci 0000:00:07.1: legacy IDE quirk: reg 0x14: [io 0x03f6] Feb 13 19:17:04.762485 kernel: pci 0000:00:07.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177] Feb 13 19:17:04.762536 kernel: pci 0000:00:07.1: legacy IDE quirk: reg 0x1c: [io 0x0376] Feb 13 19:17:04.763036 kernel: pci 0000:00:07.3: [8086:7113] type 00 class 0x068000 Feb 13 19:17:04.763105 kernel: pci 0000:00:07.3: quirk: [io 0x1000-0x103f] claimed 
by PIIX4 ACPI Feb 13 19:17:04.763163 kernel: pci 0000:00:07.3: quirk: [io 0x1040-0x104f] claimed by PIIX4 SMB Feb 13 19:17:04.763220 kernel: pci 0000:00:07.7: [15ad:0740] type 00 class 0x088000 Feb 13 19:17:04.763282 kernel: pci 0000:00:07.7: reg 0x10: [io 0x1080-0x10bf] Feb 13 19:17:04.763338 kernel: pci 0000:00:07.7: reg 0x14: [mem 0xfebfe000-0xfebfffff 64bit] Feb 13 19:17:04.763397 kernel: pci 0000:00:0f.0: [15ad:0405] type 00 class 0x030000 Feb 13 19:17:04.763452 kernel: pci 0000:00:0f.0: reg 0x10: [io 0x1070-0x107f] Feb 13 19:17:04.763503 kernel: pci 0000:00:0f.0: reg 0x14: [mem 0xe8000000-0xefffffff pref] Feb 13 19:17:04.763554 kernel: pci 0000:00:0f.0: reg 0x18: [mem 0xfe000000-0xfe7fffff] Feb 13 19:17:04.763618 kernel: pci 0000:00:0f.0: reg 0x30: [mem 0x00000000-0x00007fff pref] Feb 13 19:17:04.763670 kernel: pci 0000:00:0f.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Feb 13 19:17:04.765712 kernel: pci 0000:00:11.0: [15ad:0790] type 01 class 0x060401 Feb 13 19:17:04.765822 kernel: pci 0000:00:15.0: [15ad:07a0] type 01 class 0x060400 Feb 13 19:17:04.765896 kernel: pci 0000:00:15.0: PME# supported from D0 D3hot D3cold Feb 13 19:17:04.765955 kernel: pci 0000:00:15.1: [15ad:07a0] type 01 class 0x060400 Feb 13 19:17:04.766009 kernel: pci 0000:00:15.1: PME# supported from D0 D3hot D3cold Feb 13 19:17:04.766066 kernel: pci 0000:00:15.2: [15ad:07a0] type 01 class 0x060400 Feb 13 19:17:04.766120 kernel: pci 0000:00:15.2: PME# supported from D0 D3hot D3cold Feb 13 19:17:04.766176 kernel: pci 0000:00:15.3: [15ad:07a0] type 01 class 0x060400 Feb 13 19:17:04.766232 kernel: pci 0000:00:15.3: PME# supported from D0 D3hot D3cold Feb 13 19:17:04.766302 kernel: pci 0000:00:15.4: [15ad:07a0] type 01 class 0x060400 Feb 13 19:17:04.766361 kernel: pci 0000:00:15.4: PME# supported from D0 D3hot D3cold Feb 13 19:17:04.766423 kernel: pci 0000:00:15.5: [15ad:07a0] type 01 class 0x060400 Feb 13 19:17:04.766476 kernel: pci 0000:00:15.5: PME# supported from D0 D3hot 
D3cold Feb 13 19:17:04.766531 kernel: pci 0000:00:15.6: [15ad:07a0] type 01 class 0x060400 Feb 13 19:17:04.766961 kernel: pci 0000:00:15.6: PME# supported from D0 D3hot D3cold Feb 13 19:17:04.767035 kernel: pci 0000:00:15.7: [15ad:07a0] type 01 class 0x060400 Feb 13 19:17:04.767093 kernel: pci 0000:00:15.7: PME# supported from D0 D3hot D3cold Feb 13 19:17:04.767153 kernel: pci 0000:00:16.0: [15ad:07a0] type 01 class 0x060400 Feb 13 19:17:04.767211 kernel: pci 0000:00:16.0: PME# supported from D0 D3hot D3cold Feb 13 19:17:04.767273 kernel: pci 0000:00:16.1: [15ad:07a0] type 01 class 0x060400 Feb 13 19:17:04.767339 kernel: pci 0000:00:16.1: PME# supported from D0 D3hot D3cold Feb 13 19:17:04.767397 kernel: pci 0000:00:16.2: [15ad:07a0] type 01 class 0x060400 Feb 13 19:17:04.767450 kernel: pci 0000:00:16.2: PME# supported from D0 D3hot D3cold Feb 13 19:17:04.767532 kernel: pci 0000:00:16.3: [15ad:07a0] type 01 class 0x060400 Feb 13 19:17:04.767655 kernel: pci 0000:00:16.3: PME# supported from D0 D3hot D3cold Feb 13 19:17:04.767805 kernel: pci 0000:00:16.4: [15ad:07a0] type 01 class 0x060400 Feb 13 19:17:04.768675 kernel: pci 0000:00:16.4: PME# supported from D0 D3hot D3cold Feb 13 19:17:04.768746 kernel: pci 0000:00:16.5: [15ad:07a0] type 01 class 0x060400 Feb 13 19:17:04.768802 kernel: pci 0000:00:16.5: PME# supported from D0 D3hot D3cold Feb 13 19:17:04.768860 kernel: pci 0000:00:16.6: [15ad:07a0] type 01 class 0x060400 Feb 13 19:17:04.768915 kernel: pci 0000:00:16.6: PME# supported from D0 D3hot D3cold Feb 13 19:17:04.768970 kernel: pci 0000:00:16.7: [15ad:07a0] type 01 class 0x060400 Feb 13 19:17:04.769027 kernel: pci 0000:00:16.7: PME# supported from D0 D3hot D3cold Feb 13 19:17:04.769085 kernel: pci 0000:00:17.0: [15ad:07a0] type 01 class 0x060400 Feb 13 19:17:04.769138 kernel: pci 0000:00:17.0: PME# supported from D0 D3hot D3cold Feb 13 19:17:04.769193 kernel: pci 0000:00:17.1: [15ad:07a0] type 01 class 0x060400 Feb 13 19:17:04.769245 kernel: pci 0000:00:17.1: 
PME# supported from D0 D3hot D3cold Feb 13 19:17:04.769301 kernel: pci 0000:00:17.2: [15ad:07a0] type 01 class 0x060400 Feb 13 19:17:04.769366 kernel: pci 0000:00:17.2: PME# supported from D0 D3hot D3cold Feb 13 19:17:04.769423 kernel: pci 0000:00:17.3: [15ad:07a0] type 01 class 0x060400 Feb 13 19:17:04.769475 kernel: pci 0000:00:17.3: PME# supported from D0 D3hot D3cold Feb 13 19:17:04.769530 kernel: pci 0000:00:17.4: [15ad:07a0] type 01 class 0x060400 Feb 13 19:17:04.770614 kernel: pci 0000:00:17.4: PME# supported from D0 D3hot D3cold Feb 13 19:17:04.770696 kernel: pci 0000:00:17.5: [15ad:07a0] type 01 class 0x060400 Feb 13 19:17:04.770758 kernel: pci 0000:00:17.5: PME# supported from D0 D3hot D3cold Feb 13 19:17:04.770832 kernel: pci 0000:00:17.6: [15ad:07a0] type 01 class 0x060400 Feb 13 19:17:04.770895 kernel: pci 0000:00:17.6: PME# supported from D0 D3hot D3cold Feb 13 19:17:04.770956 kernel: pci 0000:00:17.7: [15ad:07a0] type 01 class 0x060400 Feb 13 19:17:04.771009 kernel: pci 0000:00:17.7: PME# supported from D0 D3hot D3cold Feb 13 19:17:04.771066 kernel: pci 0000:00:18.0: [15ad:07a0] type 01 class 0x060400 Feb 13 19:17:04.771119 kernel: pci 0000:00:18.0: PME# supported from D0 D3hot D3cold Feb 13 19:17:04.771179 kernel: pci 0000:00:18.1: [15ad:07a0] type 01 class 0x060400 Feb 13 19:17:04.771232 kernel: pci 0000:00:18.1: PME# supported from D0 D3hot D3cold Feb 13 19:17:04.771288 kernel: pci 0000:00:18.2: [15ad:07a0] type 01 class 0x060400 Feb 13 19:17:04.771342 kernel: pci 0000:00:18.2: PME# supported from D0 D3hot D3cold Feb 13 19:17:04.771405 kernel: pci 0000:00:18.3: [15ad:07a0] type 01 class 0x060400 Feb 13 19:17:04.771459 kernel: pci 0000:00:18.3: PME# supported from D0 D3hot D3cold Feb 13 19:17:04.771518 kernel: pci 0000:00:18.4: [15ad:07a0] type 01 class 0x060400 Feb 13 19:17:04.771571 kernel: pci 0000:00:18.4: PME# supported from D0 D3hot D3cold Feb 13 19:17:04.772489 kernel: pci 0000:00:18.5: [15ad:07a0] type 01 class 0x060400 Feb 13 
19:17:04.773620 kernel: pci 0000:00:18.5: PME# supported from D0 D3hot D3cold Feb 13 19:17:04.773701 kernel: pci 0000:00:18.6: [15ad:07a0] type 01 class 0x060400 Feb 13 19:17:04.773759 kernel: pci 0000:00:18.6: PME# supported from D0 D3hot D3cold Feb 13 19:17:04.773823 kernel: pci 0000:00:18.7: [15ad:07a0] type 01 class 0x060400 Feb 13 19:17:04.773878 kernel: pci 0000:00:18.7: PME# supported from D0 D3hot D3cold Feb 13 19:17:04.773933 kernel: pci_bus 0000:01: extended config space not accessible Feb 13 19:17:04.773991 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] Feb 13 19:17:04.774044 kernel: pci_bus 0000:02: extended config space not accessible Feb 13 19:17:04.774053 kernel: acpiphp: Slot [32] registered Feb 13 19:17:04.774060 kernel: acpiphp: Slot [33] registered Feb 13 19:17:04.774068 kernel: acpiphp: Slot [34] registered Feb 13 19:17:04.774075 kernel: acpiphp: Slot [35] registered Feb 13 19:17:04.774081 kernel: acpiphp: Slot [36] registered Feb 13 19:17:04.774087 kernel: acpiphp: Slot [37] registered Feb 13 19:17:04.774093 kernel: acpiphp: Slot [38] registered Feb 13 19:17:04.774099 kernel: acpiphp: Slot [39] registered Feb 13 19:17:04.774105 kernel: acpiphp: Slot [40] registered Feb 13 19:17:04.774111 kernel: acpiphp: Slot [41] registered Feb 13 19:17:04.774118 kernel: acpiphp: Slot [42] registered Feb 13 19:17:04.774123 kernel: acpiphp: Slot [43] registered Feb 13 19:17:04.774131 kernel: acpiphp: Slot [44] registered Feb 13 19:17:04.774137 kernel: acpiphp: Slot [45] registered Feb 13 19:17:04.774147 kernel: acpiphp: Slot [46] registered Feb 13 19:17:04.774157 kernel: acpiphp: Slot [47] registered Feb 13 19:17:04.774167 kernel: acpiphp: Slot [48] registered Feb 13 19:17:04.774175 kernel: acpiphp: Slot [49] registered Feb 13 19:17:04.774181 kernel: acpiphp: Slot [50] registered Feb 13 19:17:04.774187 kernel: acpiphp: Slot [51] registered Feb 13 19:17:04.774193 kernel: acpiphp: Slot [52] registered Feb 13 19:17:04.774201 kernel: acpiphp: Slot [53] registered 
Feb 13 19:17:04.774207 kernel: acpiphp: Slot [54] registered Feb 13 19:17:04.774213 kernel: acpiphp: Slot [55] registered Feb 13 19:17:04.774219 kernel: acpiphp: Slot [56] registered Feb 13 19:17:04.774225 kernel: acpiphp: Slot [57] registered Feb 13 19:17:04.774231 kernel: acpiphp: Slot [58] registered Feb 13 19:17:04.774237 kernel: acpiphp: Slot [59] registered Feb 13 19:17:04.774242 kernel: acpiphp: Slot [60] registered Feb 13 19:17:04.774248 kernel: acpiphp: Slot [61] registered Feb 13 19:17:04.774254 kernel: acpiphp: Slot [62] registered Feb 13 19:17:04.774261 kernel: acpiphp: Slot [63] registered Feb 13 19:17:04.774321 kernel: pci 0000:00:11.0: PCI bridge to [bus 02] (subtractive decode) Feb 13 19:17:04.774374 kernel: pci 0000:00:11.0: bridge window [io 0x2000-0x3fff] Feb 13 19:17:04.774425 kernel: pci 0000:00:11.0: bridge window [mem 0xfd600000-0xfdffffff] Feb 13 19:17:04.774475 kernel: pci 0000:00:11.0: bridge window [mem 0xe7b00000-0xe7ffffff 64bit pref] Feb 13 19:17:04.774529 kernel: pci 0000:00:11.0: bridge window [mem 0x000a0000-0x000bffff window] (subtractive decode) Feb 13 19:17:04.775607 kernel: pci 0000:00:11.0: bridge window [mem 0x000cc000-0x000dbfff window] (subtractive decode) Feb 13 19:17:04.775670 kernel: pci 0000:00:11.0: bridge window [mem 0xc0000000-0xfebfffff window] (subtractive decode) Feb 13 19:17:04.775723 kernel: pci 0000:00:11.0: bridge window [io 0x0000-0x0cf7 window] (subtractive decode) Feb 13 19:17:04.775775 kernel: pci 0000:00:11.0: bridge window [io 0x0d00-0xfeff window] (subtractive decode) Feb 13 19:17:04.775838 kernel: pci 0000:03:00.0: [15ad:07c0] type 00 class 0x010700 Feb 13 19:17:04.775892 kernel: pci 0000:03:00.0: reg 0x10: [io 0x4000-0x4007] Feb 13 19:17:04.775944 kernel: pci 0000:03:00.0: reg 0x14: [mem 0xfd5f8000-0xfd5fffff 64bit] Feb 13 19:17:04.775997 kernel: pci 0000:03:00.0: reg 0x30: [mem 0x00000000-0x0000ffff pref] Feb 13 19:17:04.776053 kernel: pci 0000:03:00.0: PME# supported from D0 D3hot D3cold Feb 13 
19:17:04.776107 kernel: pci 0000:03:00.0: disabling ASPM on pre-1.1 PCIe device. You can enable it with 'pcie_aspm=force' Feb 13 19:17:04.776171 kernel: pci 0000:00:15.0: PCI bridge to [bus 03] Feb 13 19:17:04.776229 kernel: pci 0000:00:15.0: bridge window [io 0x4000-0x4fff] Feb 13 19:17:04.776283 kernel: pci 0000:00:15.0: bridge window [mem 0xfd500000-0xfd5fffff] Feb 13 19:17:04.776338 kernel: pci 0000:00:15.1: PCI bridge to [bus 04] Feb 13 19:17:04.776390 kernel: pci 0000:00:15.1: bridge window [io 0x8000-0x8fff] Feb 13 19:17:04.776441 kernel: pci 0000:00:15.1: bridge window [mem 0xfd100000-0xfd1fffff] Feb 13 19:17:04.776497 kernel: pci 0000:00:15.1: bridge window [mem 0xe7800000-0xe78fffff 64bit pref] Feb 13 19:17:04.776550 kernel: pci 0000:00:15.2: PCI bridge to [bus 05] Feb 13 19:17:04.779697 kernel: pci 0000:00:15.2: bridge window [io 0xc000-0xcfff] Feb 13 19:17:04.779770 kernel: pci 0000:00:15.2: bridge window [mem 0xfcd00000-0xfcdfffff] Feb 13 19:17:04.779825 kernel: pci 0000:00:15.2: bridge window [mem 0xe7400000-0xe74fffff 64bit pref] Feb 13 19:17:04.779882 kernel: pci 0000:00:15.3: PCI bridge to [bus 06] Feb 13 19:17:04.779935 kernel: pci 0000:00:15.3: bridge window [mem 0xfc900000-0xfc9fffff] Feb 13 19:17:04.779987 kernel: pci 0000:00:15.3: bridge window [mem 0xe7000000-0xe70fffff 64bit pref] Feb 13 19:17:04.780047 kernel: pci 0000:00:15.4: PCI bridge to [bus 07] Feb 13 19:17:04.780205 kernel: pci 0000:00:15.4: bridge window [mem 0xfc500000-0xfc5fffff] Feb 13 19:17:04.780451 kernel: pci 0000:00:15.4: bridge window [mem 0xe6c00000-0xe6cfffff 64bit pref] Feb 13 19:17:04.780513 kernel: pci 0000:00:15.5: PCI bridge to [bus 08] Feb 13 19:17:04.780572 kernel: pci 0000:00:15.5: bridge window [mem 0xfc100000-0xfc1fffff] Feb 13 19:17:04.781766 kernel: pci 0000:00:15.5: bridge window [mem 0xe6800000-0xe68fffff 64bit pref] Feb 13 19:17:04.781850 kernel: pci 0000:00:15.6: PCI bridge to [bus 09] Feb 13 19:17:04.781906 kernel: pci 0000:00:15.6: bridge window [mem 
0xfbd00000-0xfbdfffff] Feb 13 19:17:04.781972 kernel: pci 0000:00:15.6: bridge window [mem 0xe6400000-0xe64fffff 64bit pref] Feb 13 19:17:04.782045 kernel: pci 0000:00:15.7: PCI bridge to [bus 0a] Feb 13 19:17:04.782119 kernel: pci 0000:00:15.7: bridge window [mem 0xfb900000-0xfb9fffff] Feb 13 19:17:04.782183 kernel: pci 0000:00:15.7: bridge window [mem 0xe6000000-0xe60fffff 64bit pref] Feb 13 19:17:04.782252 kernel: pci 0000:0b:00.0: [15ad:07b0] type 00 class 0x020000 Feb 13 19:17:04.782307 kernel: pci 0000:0b:00.0: reg 0x10: [mem 0xfd4fc000-0xfd4fcfff] Feb 13 19:17:04.782361 kernel: pci 0000:0b:00.0: reg 0x14: [mem 0xfd4fd000-0xfd4fdfff] Feb 13 19:17:04.782421 kernel: pci 0000:0b:00.0: reg 0x18: [mem 0xfd4fe000-0xfd4fffff] Feb 13 19:17:04.782483 kernel: pci 0000:0b:00.0: reg 0x1c: [io 0x5000-0x500f] Feb 13 19:17:04.782537 kernel: pci 0000:0b:00.0: reg 0x30: [mem 0x00000000-0x0000ffff pref] Feb 13 19:17:04.782608 kernel: pci 0000:0b:00.0: supports D1 D2 Feb 13 19:17:04.782672 kernel: pci 0000:0b:00.0: PME# supported from D0 D1 D2 D3hot D3cold Feb 13 19:17:04.782727 kernel: pci 0000:0b:00.0: disabling ASPM on pre-1.1 PCIe device. 
You can enable it with 'pcie_aspm=force' Feb 13 19:17:04.782782 kernel: pci 0000:00:16.0: PCI bridge to [bus 0b] Feb 13 19:17:04.782834 kernel: pci 0000:00:16.0: bridge window [io 0x5000-0x5fff] Feb 13 19:17:04.782886 kernel: pci 0000:00:16.0: bridge window [mem 0xfd400000-0xfd4fffff] Feb 13 19:17:04.782940 kernel: pci 0000:00:16.1: PCI bridge to [bus 0c] Feb 13 19:17:04.782992 kernel: pci 0000:00:16.1: bridge window [io 0x9000-0x9fff] Feb 13 19:17:04.783043 kernel: pci 0000:00:16.1: bridge window [mem 0xfd000000-0xfd0fffff] Feb 13 19:17:04.783098 kernel: pci 0000:00:16.1: bridge window [mem 0xe7700000-0xe77fffff 64bit pref] Feb 13 19:17:04.783152 kernel: pci 0000:00:16.2: PCI bridge to [bus 0d] Feb 13 19:17:04.783204 kernel: pci 0000:00:16.2: bridge window [io 0xd000-0xdfff] Feb 13 19:17:04.783255 kernel: pci 0000:00:16.2: bridge window [mem 0xfcc00000-0xfccfffff] Feb 13 19:17:04.783307 kernel: pci 0000:00:16.2: bridge window [mem 0xe7300000-0xe73fffff 64bit pref] Feb 13 19:17:04.783361 kernel: pci 0000:00:16.3: PCI bridge to [bus 0e] Feb 13 19:17:04.783419 kernel: pci 0000:00:16.3: bridge window [mem 0xfc800000-0xfc8fffff] Feb 13 19:17:04.783472 kernel: pci 0000:00:16.3: bridge window [mem 0xe6f00000-0xe6ffffff 64bit pref] Feb 13 19:17:04.783529 kernel: pci 0000:00:16.4: PCI bridge to [bus 0f] Feb 13 19:17:04.784608 kernel: pci 0000:00:16.4: bridge window [mem 0xfc400000-0xfc4fffff] Feb 13 19:17:04.784668 kernel: pci 0000:00:16.4: bridge window [mem 0xe6b00000-0xe6bfffff 64bit pref] Feb 13 19:17:04.784723 kernel: pci 0000:00:16.5: PCI bridge to [bus 10] Feb 13 19:17:04.784774 kernel: pci 0000:00:16.5: bridge window [mem 0xfc000000-0xfc0fffff] Feb 13 19:17:04.784831 kernel: pci 0000:00:16.5: bridge window [mem 0xe6700000-0xe67fffff 64bit pref] Feb 13 19:17:04.784885 kernel: pci 0000:00:16.6: PCI bridge to [bus 11] Feb 13 19:17:04.784938 kernel: pci 0000:00:16.6: bridge window [mem 0xfbc00000-0xfbcfffff] Feb 13 19:17:04.784992 kernel: pci 0000:00:16.6: bridge 
window [mem 0xe6300000-0xe63fffff 64bit pref] Feb 13 19:17:04.785045 kernel: pci 0000:00:16.7: PCI bridge to [bus 12] Feb 13 19:17:04.785097 kernel: pci 0000:00:16.7: bridge window [mem 0xfb800000-0xfb8fffff] Feb 13 19:17:04.785148 kernel: pci 0000:00:16.7: bridge window [mem 0xe5f00000-0xe5ffffff 64bit pref] Feb 13 19:17:04.785202 kernel: pci 0000:00:17.0: PCI bridge to [bus 13] Feb 13 19:17:04.785253 kernel: pci 0000:00:17.0: bridge window [io 0x6000-0x6fff] Feb 13 19:17:04.785305 kernel: pci 0000:00:17.0: bridge window [mem 0xfd300000-0xfd3fffff] Feb 13 19:17:04.785357 kernel: pci 0000:00:17.0: bridge window [mem 0xe7a00000-0xe7afffff 64bit pref] Feb 13 19:17:04.785414 kernel: pci 0000:00:17.1: PCI bridge to [bus 14] Feb 13 19:17:04.785466 kernel: pci 0000:00:17.1: bridge window [io 0xa000-0xafff] Feb 13 19:17:04.785517 kernel: pci 0000:00:17.1: bridge window [mem 0xfcf00000-0xfcffffff] Feb 13 19:17:04.785568 kernel: pci 0000:00:17.1: bridge window [mem 0xe7600000-0xe76fffff 64bit pref] Feb 13 19:17:04.787940 kernel: pci 0000:00:17.2: PCI bridge to [bus 15] Feb 13 19:17:04.788001 kernel: pci 0000:00:17.2: bridge window [io 0xe000-0xefff] Feb 13 19:17:04.788053 kernel: pci 0000:00:17.2: bridge window [mem 0xfcb00000-0xfcbfffff] Feb 13 19:17:04.788111 kernel: pci 0000:00:17.2: bridge window [mem 0xe7200000-0xe72fffff 64bit pref] Feb 13 19:17:04.788166 kernel: pci 0000:00:17.3: PCI bridge to [bus 16] Feb 13 19:17:04.788218 kernel: pci 0000:00:17.3: bridge window [mem 0xfc700000-0xfc7fffff] Feb 13 19:17:04.788270 kernel: pci 0000:00:17.3: bridge window [mem 0xe6e00000-0xe6efffff 64bit pref] Feb 13 19:17:04.788323 kernel: pci 0000:00:17.4: PCI bridge to [bus 17] Feb 13 19:17:04.788374 kernel: pci 0000:00:17.4: bridge window [mem 0xfc300000-0xfc3fffff] Feb 13 19:17:04.788425 kernel: pci 0000:00:17.4: bridge window [mem 0xe6a00000-0xe6afffff 64bit pref] Feb 13 19:17:04.788479 kernel: pci 0000:00:17.5: PCI bridge to [bus 18] Feb 13 19:17:04.788534 kernel: pci 
0000:00:17.5: bridge window [mem 0xfbf00000-0xfbffffff] Feb 13 19:17:04.788594 kernel: pci 0000:00:17.5: bridge window [mem 0xe6600000-0xe66fffff 64bit pref] Feb 13 19:17:04.788648 kernel: pci 0000:00:17.6: PCI bridge to [bus 19] Feb 13 19:17:04.788700 kernel: pci 0000:00:17.6: bridge window [mem 0xfbb00000-0xfbbfffff] Feb 13 19:17:04.788753 kernel: pci 0000:00:17.6: bridge window [mem 0xe6200000-0xe62fffff 64bit pref] Feb 13 19:17:04.788805 kernel: pci 0000:00:17.7: PCI bridge to [bus 1a] Feb 13 19:17:04.788856 kernel: pci 0000:00:17.7: bridge window [mem 0xfb700000-0xfb7fffff] Feb 13 19:17:04.788907 kernel: pci 0000:00:17.7: bridge window [mem 0xe5e00000-0xe5efffff 64bit pref] Feb 13 19:17:04.788963 kernel: pci 0000:00:18.0: PCI bridge to [bus 1b] Feb 13 19:17:04.789016 kernel: pci 0000:00:18.0: bridge window [io 0x7000-0x7fff] Feb 13 19:17:04.789068 kernel: pci 0000:00:18.0: bridge window [mem 0xfd200000-0xfd2fffff] Feb 13 19:17:04.789119 kernel: pci 0000:00:18.0: bridge window [mem 0xe7900000-0xe79fffff 64bit pref] Feb 13 19:17:04.789173 kernel: pci 0000:00:18.1: PCI bridge to [bus 1c] Feb 13 19:17:04.789224 kernel: pci 0000:00:18.1: bridge window [io 0xb000-0xbfff] Feb 13 19:17:04.789276 kernel: pci 0000:00:18.1: bridge window [mem 0xfce00000-0xfcefffff] Feb 13 19:17:04.789327 kernel: pci 0000:00:18.1: bridge window [mem 0xe7500000-0xe75fffff 64bit pref] Feb 13 19:17:04.789384 kernel: pci 0000:00:18.2: PCI bridge to [bus 1d] Feb 13 19:17:04.789435 kernel: pci 0000:00:18.2: bridge window [mem 0xfca00000-0xfcafffff] Feb 13 19:17:04.789487 kernel: pci 0000:00:18.2: bridge window [mem 0xe7100000-0xe71fffff 64bit pref] Feb 13 19:17:04.789540 kernel: pci 0000:00:18.3: PCI bridge to [bus 1e] Feb 13 19:17:04.791485 kernel: pci 0000:00:18.3: bridge window [mem 0xfc600000-0xfc6fffff] Feb 13 19:17:04.791553 kernel: pci 0000:00:18.3: bridge window [mem 0xe6d00000-0xe6dfffff 64bit pref] Feb 13 19:17:04.791630 kernel: pci 0000:00:18.4: PCI bridge to [bus 1f] Feb 13 
19:17:04.791684 kernel: pci 0000:00:18.4: bridge window [mem 0xfc200000-0xfc2fffff] Feb 13 19:17:04.791742 kernel: pci 0000:00:18.4: bridge window [mem 0xe6900000-0xe69fffff 64bit pref] Feb 13 19:17:04.791797 kernel: pci 0000:00:18.5: PCI bridge to [bus 20] Feb 13 19:17:04.791855 kernel: pci 0000:00:18.5: bridge window [mem 0xfbe00000-0xfbefffff] Feb 13 19:17:04.791908 kernel: pci 0000:00:18.5: bridge window [mem 0xe6500000-0xe65fffff 64bit pref] Feb 13 19:17:04.791963 kernel: pci 0000:00:18.6: PCI bridge to [bus 21] Feb 13 19:17:04.792014 kernel: pci 0000:00:18.6: bridge window [mem 0xfba00000-0xfbafffff] Feb 13 19:17:04.792066 kernel: pci 0000:00:18.6: bridge window [mem 0xe6100000-0xe61fffff 64bit pref] Feb 13 19:17:04.792121 kernel: pci 0000:00:18.7: PCI bridge to [bus 22] Feb 13 19:17:04.792175 kernel: pci 0000:00:18.7: bridge window [mem 0xfb600000-0xfb6fffff] Feb 13 19:17:04.792227 kernel: pci 0000:00:18.7: bridge window [mem 0xe5d00000-0xe5dfffff 64bit pref] Feb 13 19:17:04.792235 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 9 Feb 13 19:17:04.792242 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 0 Feb 13 19:17:04.792248 kernel: ACPI: PCI: Interrupt link LNKB disabled Feb 13 19:17:04.792254 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Feb 13 19:17:04.792260 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 10 Feb 13 19:17:04.792266 kernel: iommu: Default domain type: Translated Feb 13 19:17:04.792274 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Feb 13 19:17:04.792281 kernel: PCI: Using ACPI for IRQ routing Feb 13 19:17:04.792287 kernel: PCI: pci_cache_line_size set to 64 bytes Feb 13 19:17:04.792293 kernel: e820: reserve RAM buffer [mem 0x0009ec00-0x0009ffff] Feb 13 19:17:04.792299 kernel: e820: reserve RAM buffer [mem 0x7fee0000-0x7fffffff] Feb 13 19:17:04.792350 kernel: pci 0000:00:0f.0: vgaarb: setting as boot VGA device Feb 13 19:17:04.792402 kernel: pci 0000:00:0f.0: vgaarb: bridge control 
possible Feb 13 19:17:04.792453 kernel: pci 0000:00:0f.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Feb 13 19:17:04.792462 kernel: vgaarb: loaded Feb 13 19:17:04.792470 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 Feb 13 19:17:04.792476 kernel: hpet0: 16 comparators, 64-bit 14.318180 MHz counter Feb 13 19:17:04.792482 kernel: clocksource: Switched to clocksource tsc-early Feb 13 19:17:04.792488 kernel: VFS: Disk quotas dquot_6.6.0 Feb 13 19:17:04.792494 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Feb 13 19:17:04.792500 kernel: pnp: PnP ACPI init Feb 13 19:17:04.792556 kernel: system 00:00: [io 0x1000-0x103f] has been reserved Feb 13 19:17:04.792623 kernel: system 00:00: [io 0x1040-0x104f] has been reserved Feb 13 19:17:04.792674 kernel: system 00:00: [io 0x0cf0-0x0cf1] has been reserved Feb 13 19:17:04.792726 kernel: system 00:04: [mem 0xfed00000-0xfed003ff] has been reserved Feb 13 19:17:04.792777 kernel: pnp 00:06: [dma 2] Feb 13 19:17:04.792832 kernel: system 00:07: [io 0xfce0-0xfcff] has been reserved Feb 13 19:17:04.792881 kernel: system 00:07: [mem 0xf0000000-0xf7ffffff] has been reserved Feb 13 19:17:04.792929 kernel: system 00:07: [mem 0xfe800000-0xfe9fffff] has been reserved Feb 13 19:17:04.792938 kernel: pnp: PnP ACPI: found 8 devices Feb 13 19:17:04.792946 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Feb 13 19:17:04.792953 kernel: NET: Registered PF_INET protocol family Feb 13 19:17:04.792959 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear) Feb 13 19:17:04.792965 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear) Feb 13 19:17:04.792971 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Feb 13 19:17:04.792977 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear) Feb 13 19:17:04.792983 kernel: 
TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear) Feb 13 19:17:04.792990 kernel: TCP: Hash tables configured (established 16384 bind 16384) Feb 13 19:17:04.792996 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear) Feb 13 19:17:04.793003 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear) Feb 13 19:17:04.793009 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Feb 13 19:17:04.793015 kernel: NET: Registered PF_XDP protocol family Feb 13 19:17:04.793068 kernel: pci 0000:00:15.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 03] add_size 200000 add_align 100000 Feb 13 19:17:04.793124 kernel: pci 0000:00:15.3: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000 Feb 13 19:17:04.793179 kernel: pci 0000:00:15.4: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000 Feb 13 19:17:04.793235 kernel: pci 0000:00:15.5: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000 Feb 13 19:17:04.793299 kernel: pci 0000:00:15.6: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000 Feb 13 19:17:04.793357 kernel: pci 0000:00:15.7: bridge window [io 0x1000-0x0fff] to [bus 0a] add_size 1000 Feb 13 19:17:04.793412 kernel: pci 0000:00:16.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 0b] add_size 200000 add_align 100000 Feb 13 19:17:04.793468 kernel: pci 0000:00:16.3: bridge window [io 0x1000-0x0fff] to [bus 0e] add_size 1000 Feb 13 19:17:04.793531 kernel: pci 0000:00:16.4: bridge window [io 0x1000-0x0fff] to [bus 0f] add_size 1000 Feb 13 19:17:04.793952 kernel: pci 0000:00:16.5: bridge window [io 0x1000-0x0fff] to [bus 10] add_size 1000 Feb 13 19:17:04.794020 kernel: pci 0000:00:16.6: bridge window [io 0x1000-0x0fff] to [bus 11] add_size 1000 Feb 13 19:17:04.794077 kernel: pci 0000:00:16.7: bridge window [io 0x1000-0x0fff] to [bus 12] add_size 1000 Feb 13 19:17:04.794134 kernel: pci 0000:00:17.3: bridge window [io 0x1000-0x0fff] to [bus 16] add_size 1000 Feb 13 
19:17:04.794190 kernel: pci 0000:00:17.4: bridge window [io 0x1000-0x0fff] to [bus 17] add_size 1000 Feb 13 19:17:04.794245 kernel: pci 0000:00:17.5: bridge window [io 0x1000-0x0fff] to [bus 18] add_size 1000 Feb 13 19:17:04.794302 kernel: pci 0000:00:17.6: bridge window [io 0x1000-0x0fff] to [bus 19] add_size 1000 Feb 13 19:17:04.794356 kernel: pci 0000:00:17.7: bridge window [io 0x1000-0x0fff] to [bus 1a] add_size 1000 Feb 13 19:17:04.794411 kernel: pci 0000:00:18.2: bridge window [io 0x1000-0x0fff] to [bus 1d] add_size 1000 Feb 13 19:17:04.794465 kernel: pci 0000:00:18.3: bridge window [io 0x1000-0x0fff] to [bus 1e] add_size 1000 Feb 13 19:17:04.794518 kernel: pci 0000:00:18.4: bridge window [io 0x1000-0x0fff] to [bus 1f] add_size 1000 Feb 13 19:17:04.794573 kernel: pci 0000:00:18.5: bridge window [io 0x1000-0x0fff] to [bus 20] add_size 1000 Feb 13 19:17:04.794637 kernel: pci 0000:00:18.6: bridge window [io 0x1000-0x0fff] to [bus 21] add_size 1000 Feb 13 19:17:04.794689 kernel: pci 0000:00:18.7: bridge window [io 0x1000-0x0fff] to [bus 22] add_size 1000 Feb 13 19:17:04.794743 kernel: pci 0000:00:15.0: BAR 15: assigned [mem 0xc0000000-0xc01fffff 64bit pref] Feb 13 19:17:04.794797 kernel: pci 0000:00:16.0: BAR 15: assigned [mem 0xc0200000-0xc03fffff 64bit pref] Feb 13 19:17:04.794850 kernel: pci 0000:00:15.3: BAR 13: no space for [io size 0x1000] Feb 13 19:17:04.794902 kernel: pci 0000:00:15.3: BAR 13: failed to assign [io size 0x1000] Feb 13 19:17:04.794955 kernel: pci 0000:00:15.4: BAR 13: no space for [io size 0x1000] Feb 13 19:17:04.795009 kernel: pci 0000:00:15.4: BAR 13: failed to assign [io size 0x1000] Feb 13 19:17:04.795062 kernel: pci 0000:00:15.5: BAR 13: no space for [io size 0x1000] Feb 13 19:17:04.795114 kernel: pci 0000:00:15.5: BAR 13: failed to assign [io size 0x1000] Feb 13 19:17:04.795166 kernel: pci 0000:00:15.6: BAR 13: no space for [io size 0x1000] Feb 13 19:17:04.795218 kernel: pci 0000:00:15.6: BAR 13: failed to assign [io size 0x1000] Feb 
13 19:17:04.795280 kernel: pci 0000:00:15.7: BAR 13: no space for [io size 0x1000] Feb 13 19:17:04.795336 kernel: pci 0000:00:15.7: BAR 13: failed to assign [io size 0x1000] Feb 13 19:17:04.795390 kernel: pci 0000:00:16.3: BAR 13: no space for [io size 0x1000] Feb 13 19:17:04.795446 kernel: pci 0000:00:16.3: BAR 13: failed to assign [io size 0x1000] Feb 13 19:17:04.795499 kernel: pci 0000:00:16.4: BAR 13: no space for [io size 0x1000] Feb 13 19:17:04.795552 kernel: pci 0000:00:16.4: BAR 13: failed to assign [io size 0x1000] Feb 13 19:17:04.795681 kernel: pci 0000:00:16.5: BAR 13: no space for [io size 0x1000] Feb 13 19:17:04.795735 kernel: pci 0000:00:16.5: BAR 13: failed to assign [io size 0x1000] Feb 13 19:17:04.795786 kernel: pci 0000:00:16.6: BAR 13: no space for [io size 0x1000] Feb 13 19:17:04.795848 kernel: pci 0000:00:16.6: BAR 13: failed to assign [io size 0x1000] Feb 13 19:17:04.795903 kernel: pci 0000:00:16.7: BAR 13: no space for [io size 0x1000] Feb 13 19:17:04.795959 kernel: pci 0000:00:16.7: BAR 13: failed to assign [io size 0x1000] Feb 13 19:17:04.796012 kernel: pci 0000:00:17.3: BAR 13: no space for [io size 0x1000] Feb 13 19:17:04.796064 kernel: pci 0000:00:17.3: BAR 13: failed to assign [io size 0x1000] Feb 13 19:17:04.796117 kernel: pci 0000:00:17.4: BAR 13: no space for [io size 0x1000] Feb 13 19:17:04.796170 kernel: pci 0000:00:17.4: BAR 13: failed to assign [io size 0x1000] Feb 13 19:17:04.796222 kernel: pci 0000:00:17.5: BAR 13: no space for [io size 0x1000] Feb 13 19:17:04.796274 kernel: pci 0000:00:17.5: BAR 13: failed to assign [io size 0x1000] Feb 13 19:17:04.796328 kernel: pci 0000:00:17.6: BAR 13: no space for [io size 0x1000] Feb 13 19:17:04.796383 kernel: pci 0000:00:17.6: BAR 13: failed to assign [io size 0x1000] Feb 13 19:17:04.796436 kernel: pci 0000:00:17.7: BAR 13: no space for [io size 0x1000] Feb 13 19:17:04.796488 kernel: pci 0000:00:17.7: BAR 13: failed to assign [io size 0x1000] Feb 13 19:17:04.796542 kernel: pci 
0000:00:18.2: BAR 13: no space for [io size 0x1000] Feb 13 19:17:04.796609 kernel: pci 0000:00:18.2: BAR 13: failed to assign [io size 0x1000] Feb 13 19:17:04.796663 kernel: pci 0000:00:18.3: BAR 13: no space for [io size 0x1000] Feb 13 19:17:04.796717 kernel: pci 0000:00:18.3: BAR 13: failed to assign [io size 0x1000] Feb 13 19:17:04.796769 kernel: pci 0000:00:18.4: BAR 13: no space for [io size 0x1000] Feb 13 19:17:04.796824 kernel: pci 0000:00:18.4: BAR 13: failed to assign [io size 0x1000] Feb 13 19:17:04.796876 kernel: pci 0000:00:18.5: BAR 13: no space for [io size 0x1000] Feb 13 19:17:04.796930 kernel: pci 0000:00:18.5: BAR 13: failed to assign [io size 0x1000] Feb 13 19:17:04.796983 kernel: pci 0000:00:18.6: BAR 13: no space for [io size 0x1000] Feb 13 19:17:04.797035 kernel: pci 0000:00:18.6: BAR 13: failed to assign [io size 0x1000] Feb 13 19:17:04.797088 kernel: pci 0000:00:18.7: BAR 13: no space for [io size 0x1000] Feb 13 19:17:04.797141 kernel: pci 0000:00:18.7: BAR 13: failed to assign [io size 0x1000] Feb 13 19:17:04.797193 kernel: pci 0000:00:18.7: BAR 13: no space for [io size 0x1000] Feb 13 19:17:04.797244 kernel: pci 0000:00:18.7: BAR 13: failed to assign [io size 0x1000] Feb 13 19:17:04.797299 kernel: pci 0000:00:18.6: BAR 13: no space for [io size 0x1000] Feb 13 19:17:04.797350 kernel: pci 0000:00:18.6: BAR 13: failed to assign [io size 0x1000] Feb 13 19:17:04.797402 kernel: pci 0000:00:18.5: BAR 13: no space for [io size 0x1000] Feb 13 19:17:04.797454 kernel: pci 0000:00:18.5: BAR 13: failed to assign [io size 0x1000] Feb 13 19:17:04.797506 kernel: pci 0000:00:18.4: BAR 13: no space for [io size 0x1000] Feb 13 19:17:04.797558 kernel: pci 0000:00:18.4: BAR 13: failed to assign [io size 0x1000] Feb 13 19:17:04.797742 kernel: pci 0000:00:18.3: BAR 13: no space for [io size 0x1000] Feb 13 19:17:04.797796 kernel: pci 0000:00:18.3: BAR 13: failed to assign [io size 0x1000] Feb 13 19:17:04.797848 kernel: pci 0000:00:18.2: BAR 13: no space for [io 
size 0x1000] Feb 13 19:17:04.797903 kernel: pci 0000:00:18.2: BAR 13: failed to assign [io size 0x1000] Feb 13 19:17:04.797954 kernel: pci 0000:00:17.7: BAR 13: no space for [io size 0x1000] Feb 13 19:17:04.798005 kernel: pci 0000:00:17.7: BAR 13: failed to assign [io size 0x1000] Feb 13 19:17:04.798055 kernel: pci 0000:00:17.6: BAR 13: no space for [io size 0x1000] Feb 13 19:17:04.798107 kernel: pci 0000:00:17.6: BAR 13: failed to assign [io size 0x1000] Feb 13 19:17:04.798158 kernel: pci 0000:00:17.5: BAR 13: no space for [io size 0x1000] Feb 13 19:17:04.798209 kernel: pci 0000:00:17.5: BAR 13: failed to assign [io size 0x1000] Feb 13 19:17:04.798260 kernel: pci 0000:00:17.4: BAR 13: no space for [io size 0x1000] Feb 13 19:17:04.798312 kernel: pci 0000:00:17.4: BAR 13: failed to assign [io size 0x1000] Feb 13 19:17:04.798364 kernel: pci 0000:00:17.3: BAR 13: no space for [io size 0x1000] Feb 13 19:17:04.798419 kernel: pci 0000:00:17.3: BAR 13: failed to assign [io size 0x1000] Feb 13 19:17:04.798471 kernel: pci 0000:00:16.7: BAR 13: no space for [io size 0x1000] Feb 13 19:17:04.798522 kernel: pci 0000:00:16.7: BAR 13: failed to assign [io size 0x1000] Feb 13 19:17:04.798574 kernel: pci 0000:00:16.6: BAR 13: no space for [io size 0x1000] Feb 13 19:17:04.798666 kernel: pci 0000:00:16.6: BAR 13: failed to assign [io size 0x1000] Feb 13 19:17:04.798717 kernel: pci 0000:00:16.5: BAR 13: no space for [io size 0x1000] Feb 13 19:17:04.798768 kernel: pci 0000:00:16.5: BAR 13: failed to assign [io size 0x1000] Feb 13 19:17:04.798827 kernel: pci 0000:00:16.4: BAR 13: no space for [io size 0x1000] Feb 13 19:17:04.798886 kernel: pci 0000:00:16.4: BAR 13: failed to assign [io size 0x1000] Feb 13 19:17:04.798943 kernel: pci 0000:00:16.3: BAR 13: no space for [io size 0x1000] Feb 13 19:17:04.798994 kernel: pci 0000:00:16.3: BAR 13: failed to assign [io size 0x1000] Feb 13 19:17:04.799046 kernel: pci 0000:00:15.7: BAR 13: no space for [io size 0x1000] Feb 13 19:17:04.799099 
kernel: pci 0000:00:15.7: BAR 13: failed to assign [io size 0x1000] Feb 13 19:17:04.799151 kernel: pci 0000:00:15.6: BAR 13: no space for [io size 0x1000] Feb 13 19:17:04.799204 kernel: pci 0000:00:15.6: BAR 13: failed to assign [io size 0x1000] Feb 13 19:17:04.799256 kernel: pci 0000:00:15.5: BAR 13: no space for [io size 0x1000] Feb 13 19:17:04.799309 kernel: pci 0000:00:15.5: BAR 13: failed to assign [io size 0x1000] Feb 13 19:17:04.799362 kernel: pci 0000:00:15.4: BAR 13: no space for [io size 0x1000] Feb 13 19:17:04.799414 kernel: pci 0000:00:15.4: BAR 13: failed to assign [io size 0x1000] Feb 13 19:17:04.799469 kernel: pci 0000:00:15.3: BAR 13: no space for [io size 0x1000] Feb 13 19:17:04.799522 kernel: pci 0000:00:15.3: BAR 13: failed to assign [io size 0x1000] Feb 13 19:17:04.799575 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] Feb 13 19:17:04.799638 kernel: pci 0000:00:11.0: PCI bridge to [bus 02] Feb 13 19:17:04.799690 kernel: pci 0000:00:11.0: bridge window [io 0x2000-0x3fff] Feb 13 19:17:04.799741 kernel: pci 0000:00:11.0: bridge window [mem 0xfd600000-0xfdffffff] Feb 13 19:17:04.799792 kernel: pci 0000:00:11.0: bridge window [mem 0xe7b00000-0xe7ffffff 64bit pref] Feb 13 19:17:04.799849 kernel: pci 0000:03:00.0: BAR 6: assigned [mem 0xfd500000-0xfd50ffff pref] Feb 13 19:17:04.799905 kernel: pci 0000:00:15.0: PCI bridge to [bus 03] Feb 13 19:17:04.799958 kernel: pci 0000:00:15.0: bridge window [io 0x4000-0x4fff] Feb 13 19:17:04.800010 kernel: pci 0000:00:15.0: bridge window [mem 0xfd500000-0xfd5fffff] Feb 13 19:17:04.800062 kernel: pci 0000:00:15.0: bridge window [mem 0xc0000000-0xc01fffff 64bit pref] Feb 13 19:17:04.800115 kernel: pci 0000:00:15.1: PCI bridge to [bus 04] Feb 13 19:17:04.800167 kernel: pci 0000:00:15.1: bridge window [io 0x8000-0x8fff] Feb 13 19:17:04.800219 kernel: pci 0000:00:15.1: bridge window [mem 0xfd100000-0xfd1fffff] Feb 13 19:17:04.800271 kernel: pci 0000:00:15.1: bridge window [mem 0xe7800000-0xe78fffff 64bit pref] Feb 13 
19:17:04.800325 kernel: pci 0000:00:15.2: PCI bridge to [bus 05] Feb 13 19:17:04.800379 kernel: pci 0000:00:15.2: bridge window [io 0xc000-0xcfff] Feb 13 19:17:04.800431 kernel: pci 0000:00:15.2: bridge window [mem 0xfcd00000-0xfcdfffff] Feb 13 19:17:04.800483 kernel: pci 0000:00:15.2: bridge window [mem 0xe7400000-0xe74fffff 64bit pref] Feb 13 19:17:04.800535 kernel: pci 0000:00:15.3: PCI bridge to [bus 06] Feb 13 19:17:04.800597 kernel: pci 0000:00:15.3: bridge window [mem 0xfc900000-0xfc9fffff] Feb 13 19:17:04.800662 kernel: pci 0000:00:15.3: bridge window [mem 0xe7000000-0xe70fffff 64bit pref] Feb 13 19:17:04.800715 kernel: pci 0000:00:15.4: PCI bridge to [bus 07] Feb 13 19:17:04.800767 kernel: pci 0000:00:15.4: bridge window [mem 0xfc500000-0xfc5fffff] Feb 13 19:17:04.800820 kernel: pci 0000:00:15.4: bridge window [mem 0xe6c00000-0xe6cfffff 64bit pref] Feb 13 19:17:04.800877 kernel: pci 0000:00:15.5: PCI bridge to [bus 08] Feb 13 19:17:04.800929 kernel: pci 0000:00:15.5: bridge window [mem 0xfc100000-0xfc1fffff] Feb 13 19:17:04.800981 kernel: pci 0000:00:15.5: bridge window [mem 0xe6800000-0xe68fffff 64bit pref] Feb 13 19:17:04.801035 kernel: pci 0000:00:15.6: PCI bridge to [bus 09] Feb 13 19:17:04.801087 kernel: pci 0000:00:15.6: bridge window [mem 0xfbd00000-0xfbdfffff] Feb 13 19:17:04.801138 kernel: pci 0000:00:15.6: bridge window [mem 0xe6400000-0xe64fffff 64bit pref] Feb 13 19:17:04.801193 kernel: pci 0000:00:15.7: PCI bridge to [bus 0a] Feb 13 19:17:04.801245 kernel: pci 0000:00:15.7: bridge window [mem 0xfb900000-0xfb9fffff] Feb 13 19:17:04.801297 kernel: pci 0000:00:15.7: bridge window [mem 0xe6000000-0xe60fffff 64bit pref] Feb 13 19:17:04.801354 kernel: pci 0000:0b:00.0: BAR 6: assigned [mem 0xfd400000-0xfd40ffff pref] Feb 13 19:17:04.801407 kernel: pci 0000:00:16.0: PCI bridge to [bus 0b] Feb 13 19:17:04.801459 kernel: pci 0000:00:16.0: bridge window [io 0x5000-0x5fff] Feb 13 19:17:04.801522 kernel: pci 0000:00:16.0: bridge window [mem 
0xfd400000-0xfd4fffff] Feb 13 19:17:04.801574 kernel: pci 0000:00:16.0: bridge window [mem 0xc0200000-0xc03fffff 64bit pref] Feb 13 19:17:04.801650 kernel: pci 0000:00:16.1: PCI bridge to [bus 0c] Feb 13 19:17:04.801706 kernel: pci 0000:00:16.1: bridge window [io 0x9000-0x9fff] Feb 13 19:17:04.801759 kernel: pci 0000:00:16.1: bridge window [mem 0xfd000000-0xfd0fffff] Feb 13 19:17:04.801825 kernel: pci 0000:00:16.1: bridge window [mem 0xe7700000-0xe77fffff 64bit pref] Feb 13 19:17:04.801884 kernel: pci 0000:00:16.2: PCI bridge to [bus 0d] Feb 13 19:17:04.801937 kernel: pci 0000:00:16.2: bridge window [io 0xd000-0xdfff] Feb 13 19:17:04.801989 kernel: pci 0000:00:16.2: bridge window [mem 0xfcc00000-0xfccfffff] Feb 13 19:17:04.802041 kernel: pci 0000:00:16.2: bridge window [mem 0xe7300000-0xe73fffff 64bit pref] Feb 13 19:17:04.802094 kernel: pci 0000:00:16.3: PCI bridge to [bus 0e] Feb 13 19:17:04.802146 kernel: pci 0000:00:16.3: bridge window [mem 0xfc800000-0xfc8fffff] Feb 13 19:17:04.802211 kernel: pci 0000:00:16.3: bridge window [mem 0xe6f00000-0xe6ffffff 64bit pref] Feb 13 19:17:04.802269 kernel: pci 0000:00:16.4: PCI bridge to [bus 0f] Feb 13 19:17:04.802322 kernel: pci 0000:00:16.4: bridge window [mem 0xfc400000-0xfc4fffff] Feb 13 19:17:04.802376 kernel: pci 0000:00:16.4: bridge window [mem 0xe6b00000-0xe6bfffff 64bit pref] Feb 13 19:17:04.802429 kernel: pci 0000:00:16.5: PCI bridge to [bus 10] Feb 13 19:17:04.802482 kernel: pci 0000:00:16.5: bridge window [mem 0xfc000000-0xfc0fffff] Feb 13 19:17:04.802534 kernel: pci 0000:00:16.5: bridge window [mem 0xe6700000-0xe67fffff 64bit pref] Feb 13 19:17:04.802596 kernel: pci 0000:00:16.6: PCI bridge to [bus 11] Feb 13 19:17:04.802650 kernel: pci 0000:00:16.6: bridge window [mem 0xfbc00000-0xfbcfffff] Feb 13 19:17:04.802703 kernel: pci 0000:00:16.6: bridge window [mem 0xe6300000-0xe63fffff 64bit pref] Feb 13 19:17:04.802759 kernel: pci 0000:00:16.7: PCI bridge to [bus 12] Feb 13 19:17:04.802814 kernel: pci 0000:00:16.7: 
bridge window [mem 0xfb800000-0xfb8fffff] Feb 13 19:17:04.802867 kernel: pci 0000:00:16.7: bridge window [mem 0xe5f00000-0xe5ffffff 64bit pref] Feb 13 19:17:04.802941 kernel: pci 0000:00:17.0: PCI bridge to [bus 13] Feb 13 19:17:04.802995 kernel: pci 0000:00:17.0: bridge window [io 0x6000-0x6fff] Feb 13 19:17:04.803047 kernel: pci 0000:00:17.0: bridge window [mem 0xfd300000-0xfd3fffff] Feb 13 19:17:04.803099 kernel: pci 0000:00:17.0: bridge window [mem 0xe7a00000-0xe7afffff 64bit pref] Feb 13 19:17:04.803153 kernel: pci 0000:00:17.1: PCI bridge to [bus 14] Feb 13 19:17:04.803207 kernel: pci 0000:00:17.1: bridge window [io 0xa000-0xafff] Feb 13 19:17:04.803259 kernel: pci 0000:00:17.1: bridge window [mem 0xfcf00000-0xfcffffff] Feb 13 19:17:04.803315 kernel: pci 0000:00:17.1: bridge window [mem 0xe7600000-0xe76fffff 64bit pref] Feb 13 19:17:04.803370 kernel: pci 0000:00:17.2: PCI bridge to [bus 15] Feb 13 19:17:04.803423 kernel: pci 0000:00:17.2: bridge window [io 0xe000-0xefff] Feb 13 19:17:04.803474 kernel: pci 0000:00:17.2: bridge window [mem 0xfcb00000-0xfcbfffff] Feb 13 19:17:04.803527 kernel: pci 0000:00:17.2: bridge window [mem 0xe7200000-0xe72fffff 64bit pref] Feb 13 19:17:04.803822 kernel: pci 0000:00:17.3: PCI bridge to [bus 16] Feb 13 19:17:04.803887 kernel: pci 0000:00:17.3: bridge window [mem 0xfc700000-0xfc7fffff] Feb 13 19:17:04.803939 kernel: pci 0000:00:17.3: bridge window [mem 0xe6e00000-0xe6efffff 64bit pref] Feb 13 19:17:04.803993 kernel: pci 0000:00:17.4: PCI bridge to [bus 17] Feb 13 19:17:04.804056 kernel: pci 0000:00:17.4: bridge window [mem 0xfc300000-0xfc3fffff] Feb 13 19:17:04.804108 kernel: pci 0000:00:17.4: bridge window [mem 0xe6a00000-0xe6afffff 64bit pref] Feb 13 19:17:04.804167 kernel: pci 0000:00:17.5: PCI bridge to [bus 18] Feb 13 19:17:04.804221 kernel: pci 0000:00:17.5: bridge window [mem 0xfbf00000-0xfbffffff] Feb 13 19:17:04.804276 kernel: pci 0000:00:17.5: bridge window [mem 0xe6600000-0xe66fffff 64bit pref] Feb 13 
19:17:04.804330 kernel: pci 0000:00:17.6: PCI bridge to [bus 19] Feb 13 19:17:04.804384 kernel: pci 0000:00:17.6: bridge window [mem 0xfbb00000-0xfbbfffff] Feb 13 19:17:04.804436 kernel: pci 0000:00:17.6: bridge window [mem 0xe6200000-0xe62fffff 64bit pref] Feb 13 19:17:04.804490 kernel: pci 0000:00:17.7: PCI bridge to [bus 1a] Feb 13 19:17:04.804541 kernel: pci 0000:00:17.7: bridge window [mem 0xfb700000-0xfb7fffff] Feb 13 19:17:04.804618 kernel: pci 0000:00:17.7: bridge window [mem 0xe5e00000-0xe5efffff 64bit pref] Feb 13 19:17:04.804674 kernel: pci 0000:00:18.0: PCI bridge to [bus 1b] Feb 13 19:17:04.804741 kernel: pci 0000:00:18.0: bridge window [io 0x7000-0x7fff] Feb 13 19:17:04.804806 kernel: pci 0000:00:18.0: bridge window [mem 0xfd200000-0xfd2fffff] Feb 13 19:17:04.804866 kernel: pci 0000:00:18.0: bridge window [mem 0xe7900000-0xe79fffff 64bit pref] Feb 13 19:17:04.804921 kernel: pci 0000:00:18.1: PCI bridge to [bus 1c] Feb 13 19:17:04.806587 kernel: pci 0000:00:18.1: bridge window [io 0xb000-0xbfff] Feb 13 19:17:04.806682 kernel: pci 0000:00:18.1: bridge window [mem 0xfce00000-0xfcefffff] Feb 13 19:17:04.806739 kernel: pci 0000:00:18.1: bridge window [mem 0xe7500000-0xe75fffff 64bit pref] Feb 13 19:17:04.806800 kernel: pci 0000:00:18.2: PCI bridge to [bus 1d] Feb 13 19:17:04.806861 kernel: pci 0000:00:18.2: bridge window [mem 0xfca00000-0xfcafffff] Feb 13 19:17:04.806914 kernel: pci 0000:00:18.2: bridge window [mem 0xe7100000-0xe71fffff 64bit pref] Feb 13 19:17:04.806970 kernel: pci 0000:00:18.3: PCI bridge to [bus 1e] Feb 13 19:17:04.807028 kernel: pci 0000:00:18.3: bridge window [mem 0xfc600000-0xfc6fffff] Feb 13 19:17:04.807081 kernel: pci 0000:00:18.3: bridge window [mem 0xe6d00000-0xe6dfffff 64bit pref] Feb 13 19:17:04.807153 kernel: pci 0000:00:18.4: PCI bridge to [bus 1f] Feb 13 19:17:04.807206 kernel: pci 0000:00:18.4: bridge window [mem 0xfc200000-0xfc2fffff] Feb 13 19:17:04.807262 kernel: pci 0000:00:18.4: bridge window [mem 0xe6900000-0xe69fffff 
64bit pref] Feb 13 19:17:04.807318 kernel: pci 0000:00:18.5: PCI bridge to [bus 20] Feb 13 19:17:04.807392 kernel: pci 0000:00:18.5: bridge window [mem 0xfbe00000-0xfbefffff] Feb 13 19:17:04.807448 kernel: pci 0000:00:18.5: bridge window [mem 0xe6500000-0xe65fffff 64bit pref] Feb 13 19:17:04.807519 kernel: pci 0000:00:18.6: PCI bridge to [bus 21] Feb 13 19:17:04.807618 kernel: pci 0000:00:18.6: bridge window [mem 0xfba00000-0xfbafffff] Feb 13 19:17:04.807674 kernel: pci 0000:00:18.6: bridge window [mem 0xe6100000-0xe61fffff 64bit pref] Feb 13 19:17:04.807728 kernel: pci 0000:00:18.7: PCI bridge to [bus 22] Feb 13 19:17:04.807781 kernel: pci 0000:00:18.7: bridge window [mem 0xfb600000-0xfb6fffff] Feb 13 19:17:04.807839 kernel: pci 0000:00:18.7: bridge window [mem 0xe5d00000-0xe5dfffff 64bit pref] Feb 13 19:17:04.807893 kernel: pci_bus 0000:00: resource 4 [mem 0x000a0000-0x000bffff window] Feb 13 19:17:04.807943 kernel: pci_bus 0000:00: resource 5 [mem 0x000cc000-0x000dbfff window] Feb 13 19:17:04.807990 kernel: pci_bus 0000:00: resource 6 [mem 0xc0000000-0xfebfffff window] Feb 13 19:17:04.808036 kernel: pci_bus 0000:00: resource 7 [io 0x0000-0x0cf7 window] Feb 13 19:17:04.808081 kernel: pci_bus 0000:00: resource 8 [io 0x0d00-0xfeff window] Feb 13 19:17:04.808142 kernel: pci_bus 0000:02: resource 0 [io 0x2000-0x3fff] Feb 13 19:17:04.808194 kernel: pci_bus 0000:02: resource 1 [mem 0xfd600000-0xfdffffff] Feb 13 19:17:04.808242 kernel: pci_bus 0000:02: resource 2 [mem 0xe7b00000-0xe7ffffff 64bit pref] Feb 13 19:17:04.808293 kernel: pci_bus 0000:02: resource 4 [mem 0x000a0000-0x000bffff window] Feb 13 19:17:04.808342 kernel: pci_bus 0000:02: resource 5 [mem 0x000cc000-0x000dbfff window] Feb 13 19:17:04.808390 kernel: pci_bus 0000:02: resource 6 [mem 0xc0000000-0xfebfffff window] Feb 13 19:17:04.808437 kernel: pci_bus 0000:02: resource 7 [io 0x0000-0x0cf7 window] Feb 13 19:17:04.808492 kernel: pci_bus 0000:02: resource 8 [io 0x0d00-0xfeff window] Feb 13 19:17:04.808547 
kernel: pci_bus 0000:03: resource 0 [io 0x4000-0x4fff] Feb 13 19:17:04.808714 kernel: pci_bus 0000:03: resource 1 [mem 0xfd500000-0xfd5fffff] Feb 13 19:17:04.808800 kernel: pci_bus 0000:03: resource 2 [mem 0xc0000000-0xc01fffff 64bit pref] Feb 13 19:17:04.808891 kernel: pci_bus 0000:04: resource 0 [io 0x8000-0x8fff] Feb 13 19:17:04.808953 kernel: pci_bus 0000:04: resource 1 [mem 0xfd100000-0xfd1fffff] Feb 13 19:17:04.809001 kernel: pci_bus 0000:04: resource 2 [mem 0xe7800000-0xe78fffff 64bit pref] Feb 13 19:17:04.809069 kernel: pci_bus 0000:05: resource 0 [io 0xc000-0xcfff] Feb 13 19:17:04.809118 kernel: pci_bus 0000:05: resource 1 [mem 0xfcd00000-0xfcdfffff] Feb 13 19:17:04.809165 kernel: pci_bus 0000:05: resource 2 [mem 0xe7400000-0xe74fffff 64bit pref] Feb 13 19:17:04.809216 kernel: pci_bus 0000:06: resource 1 [mem 0xfc900000-0xfc9fffff] Feb 13 19:17:04.809268 kernel: pci_bus 0000:06: resource 2 [mem 0xe7000000-0xe70fffff 64bit pref] Feb 13 19:17:04.809325 kernel: pci_bus 0000:07: resource 1 [mem 0xfc500000-0xfc5fffff] Feb 13 19:17:04.809372 kernel: pci_bus 0000:07: resource 2 [mem 0xe6c00000-0xe6cfffff 64bit pref] Feb 13 19:17:04.809424 kernel: pci_bus 0000:08: resource 1 [mem 0xfc100000-0xfc1fffff] Feb 13 19:17:04.809471 kernel: pci_bus 0000:08: resource 2 [mem 0xe6800000-0xe68fffff 64bit pref] Feb 13 19:17:04.809543 kernel: pci_bus 0000:09: resource 1 [mem 0xfbd00000-0xfbdfffff] Feb 13 19:17:04.810095 kernel: pci_bus 0000:09: resource 2 [mem 0xe6400000-0xe64fffff 64bit pref] Feb 13 19:17:04.810164 kernel: pci_bus 0000:0a: resource 1 [mem 0xfb900000-0xfb9fffff] Feb 13 19:17:04.810230 kernel: pci_bus 0000:0a: resource 2 [mem 0xe6000000-0xe60fffff 64bit pref] Feb 13 19:17:04.810286 kernel: pci_bus 0000:0b: resource 0 [io 0x5000-0x5fff] Feb 13 19:17:04.810335 kernel: pci_bus 0000:0b: resource 1 [mem 0xfd400000-0xfd4fffff] Feb 13 19:17:04.810387 kernel: pci_bus 0000:0b: resource 2 [mem 0xc0200000-0xc03fffff 64bit pref] Feb 13 19:17:04.810441 kernel: pci_bus 
0000:0c: resource 0 [io 0x9000-0x9fff] Feb 13 19:17:04.810488 kernel: pci_bus 0000:0c: resource 1 [mem 0xfd000000-0xfd0fffff] Feb 13 19:17:04.810536 kernel: pci_bus 0000:0c: resource 2 [mem 0xe7700000-0xe77fffff 64bit pref] Feb 13 19:17:04.810599 kernel: pci_bus 0000:0d: resource 0 [io 0xd000-0xdfff] Feb 13 19:17:04.810657 kernel: pci_bus 0000:0d: resource 1 [mem 0xfcc00000-0xfccfffff] Feb 13 19:17:04.810704 kernel: pci_bus 0000:0d: resource 2 [mem 0xe7300000-0xe73fffff 64bit pref] Feb 13 19:17:04.810757 kernel: pci_bus 0000:0e: resource 1 [mem 0xfc800000-0xfc8fffff] Feb 13 19:17:04.810806 kernel: pci_bus 0000:0e: resource 2 [mem 0xe6f00000-0xe6ffffff 64bit pref] Feb 13 19:17:04.810875 kernel: pci_bus 0000:0f: resource 1 [mem 0xfc400000-0xfc4fffff] Feb 13 19:17:04.810923 kernel: pci_bus 0000:0f: resource 2 [mem 0xe6b00000-0xe6bfffff 64bit pref] Feb 13 19:17:04.810975 kernel: pci_bus 0000:10: resource 1 [mem 0xfc000000-0xfc0fffff] Feb 13 19:17:04.811024 kernel: pci_bus 0000:10: resource 2 [mem 0xe6700000-0xe67fffff 64bit pref] Feb 13 19:17:04.811091 kernel: pci_bus 0000:11: resource 1 [mem 0xfbc00000-0xfbcfffff] Feb 13 19:17:04.811138 kernel: pci_bus 0000:11: resource 2 [mem 0xe6300000-0xe63fffff 64bit pref] Feb 13 19:17:04.811190 kernel: pci_bus 0000:12: resource 1 [mem 0xfb800000-0xfb8fffff] Feb 13 19:17:04.811237 kernel: pci_bus 0000:12: resource 2 [mem 0xe5f00000-0xe5ffffff 64bit pref] Feb 13 19:17:04.812451 kernel: pci_bus 0000:13: resource 0 [io 0x6000-0x6fff] Feb 13 19:17:04.812570 kernel: pci_bus 0000:13: resource 1 [mem 0xfd300000-0xfd3fffff] Feb 13 19:17:04.812640 kernel: pci_bus 0000:13: resource 2 [mem 0xe7a00000-0xe7afffff 64bit pref] Feb 13 19:17:04.812699 kernel: pci_bus 0000:14: resource 0 [io 0xa000-0xafff] Feb 13 19:17:04.812748 kernel: pci_bus 0000:14: resource 1 [mem 0xfcf00000-0xfcffffff] Feb 13 19:17:04.812798 kernel: pci_bus 0000:14: resource 2 [mem 0xe7600000-0xe76fffff 64bit pref] Feb 13 19:17:04.812850 kernel: pci_bus 0000:15: resource 0 
[io 0xe000-0xefff] Feb 13 19:17:04.812898 kernel: pci_bus 0000:15: resource 1 [mem 0xfcb00000-0xfcbfffff] Feb 13 19:17:04.812946 kernel: pci_bus 0000:15: resource 2 [mem 0xe7200000-0xe72fffff 64bit pref] Feb 13 19:17:04.812999 kernel: pci_bus 0000:16: resource 1 [mem 0xfc700000-0xfc7fffff] Feb 13 19:17:04.813059 kernel: pci_bus 0000:16: resource 2 [mem 0xe6e00000-0xe6efffff 64bit pref] Feb 13 19:17:04.813133 kernel: pci_bus 0000:17: resource 1 [mem 0xfc300000-0xfc3fffff] Feb 13 19:17:04.813195 kernel: pci_bus 0000:17: resource 2 [mem 0xe6a00000-0xe6afffff 64bit pref] Feb 13 19:17:04.813251 kernel: pci_bus 0000:18: resource 1 [mem 0xfbf00000-0xfbffffff] Feb 13 19:17:04.813300 kernel: pci_bus 0000:18: resource 2 [mem 0xe6600000-0xe66fffff 64bit pref] Feb 13 19:17:04.813354 kernel: pci_bus 0000:19: resource 1 [mem 0xfbb00000-0xfbbfffff] Feb 13 19:17:04.813405 kernel: pci_bus 0000:19: resource 2 [mem 0xe6200000-0xe62fffff 64bit pref] Feb 13 19:17:04.813458 kernel: pci_bus 0000:1a: resource 1 [mem 0xfb700000-0xfb7fffff] Feb 13 19:17:04.813524 kernel: pci_bus 0000:1a: resource 2 [mem 0xe5e00000-0xe5efffff 64bit pref] Feb 13 19:17:04.814675 kernel: pci_bus 0000:1b: resource 0 [io 0x7000-0x7fff] Feb 13 19:17:04.814746 kernel: pci_bus 0000:1b: resource 1 [mem 0xfd200000-0xfd2fffff] Feb 13 19:17:04.814805 kernel: pci_bus 0000:1b: resource 2 [mem 0xe7900000-0xe79fffff 64bit pref] Feb 13 19:17:04.814864 kernel: pci_bus 0000:1c: resource 0 [io 0xb000-0xbfff] Feb 13 19:17:04.814912 kernel: pci_bus 0000:1c: resource 1 [mem 0xfce00000-0xfcefffff] Feb 13 19:17:04.814961 kernel: pci_bus 0000:1c: resource 2 [mem 0xe7500000-0xe75fffff 64bit pref] Feb 13 19:17:04.815014 kernel: pci_bus 0000:1d: resource 1 [mem 0xfca00000-0xfcafffff] Feb 13 19:17:04.815062 kernel: pci_bus 0000:1d: resource 2 [mem 0xe7100000-0xe71fffff 64bit pref] Feb 13 19:17:04.815115 kernel: pci_bus 0000:1e: resource 1 [mem 0xfc600000-0xfc6fffff] Feb 13 19:17:04.815176 kernel: pci_bus 0000:1e: resource 2 [mem 
0xe6d00000-0xe6dfffff 64bit pref] Feb 13 19:17:04.815240 kernel: pci_bus 0000:1f: resource 1 [mem 0xfc200000-0xfc2fffff] Feb 13 19:17:04.815287 kernel: pci_bus 0000:1f: resource 2 [mem 0xe6900000-0xe69fffff 64bit pref] Feb 13 19:17:04.815341 kernel: pci_bus 0000:20: resource 1 [mem 0xfbe00000-0xfbefffff] Feb 13 19:17:04.815392 kernel: pci_bus 0000:20: resource 2 [mem 0xe6500000-0xe65fffff 64bit pref] Feb 13 19:17:04.815445 kernel: pci_bus 0000:21: resource 1 [mem 0xfba00000-0xfbafffff] Feb 13 19:17:04.815497 kernel: pci_bus 0000:21: resource 2 [mem 0xe6100000-0xe61fffff 64bit pref] Feb 13 19:17:04.815549 kernel: pci_bus 0000:22: resource 1 [mem 0xfb600000-0xfb6fffff] Feb 13 19:17:04.816054 kernel: pci_bus 0000:22: resource 2 [mem 0xe5d00000-0xe5dfffff 64bit pref] Feb 13 19:17:04.816119 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers Feb 13 19:17:04.816130 kernel: PCI: CLS 32 bytes, default 64 Feb 13 19:17:04.816137 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Feb 13 19:17:04.816149 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x311fd3cd494, max_idle_ns: 440795223879 ns Feb 13 19:17:04.816163 kernel: clocksource: Switched to clocksource tsc Feb 13 19:17:04.816170 kernel: Initialise system trusted keyrings Feb 13 19:17:04.816176 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Feb 13 19:17:04.816183 kernel: Key type asymmetric registered Feb 13 19:17:04.816189 kernel: Asymmetric key parser 'x509' registered Feb 13 19:17:04.816195 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Feb 13 19:17:04.816202 kernel: io scheduler mq-deadline registered Feb 13 19:17:04.816209 kernel: io scheduler kyber registered Feb 13 19:17:04.816215 kernel: io scheduler bfq registered Feb 13 19:17:04.816288 kernel: pcieport 0000:00:15.0: PME: Signaling with IRQ 24 Feb 13 19:17:04.816359 kernel: pcieport 0000:00:15.0: pciehp: Slot #160 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- 
HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Feb 13 19:17:04.816433 kernel: pcieport 0000:00:15.1: PME: Signaling with IRQ 25
Feb 13 19:17:04.816506 kernel: pcieport 0000:00:15.1: pciehp: Slot #161 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Feb 13 19:17:04.816570 kernel: pcieport 0000:00:15.2: PME: Signaling with IRQ 26
Feb 13 19:17:04.817692 kernel: pcieport 0000:00:15.2: pciehp: Slot #162 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Feb 13 19:17:04.817753 kernel: pcieport 0000:00:15.3: PME: Signaling with IRQ 27
Feb 13 19:17:04.817813 kernel: pcieport 0000:00:15.3: pciehp: Slot #163 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Feb 13 19:17:04.818575 kernel: pcieport 0000:00:15.4: PME: Signaling with IRQ 28
Feb 13 19:17:04.818693 kernel: pcieport 0000:00:15.4: pciehp: Slot #164 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Feb 13 19:17:04.818754 kernel: pcieport 0000:00:15.5: PME: Signaling with IRQ 29
Feb 13 19:17:04.818827 kernel: pcieport 0000:00:15.5: pciehp: Slot #165 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Feb 13 19:17:04.818888 kernel: pcieport 0000:00:15.6: PME: Signaling with IRQ 30
Feb 13 19:17:04.818949 kernel: pcieport 0000:00:15.6: pciehp: Slot #166 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Feb 13 19:17:04.819005 kernel: pcieport 0000:00:15.7: PME: Signaling with IRQ 31
Feb 13 19:17:04.819061 kernel: pcieport 0000:00:15.7: pciehp: Slot #167 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Feb 13 19:17:04.819114 kernel: pcieport 0000:00:16.0: PME: Signaling with IRQ 32
Feb 13 19:17:04.819167 kernel: pcieport 0000:00:16.0: pciehp: Slot #192 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Feb 13 19:17:04.819220 kernel: pcieport 0000:00:16.1: PME: Signaling with IRQ 33
Feb 13 19:17:04.819281 kernel: pcieport 0000:00:16.1: pciehp: Slot #193 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Feb 13 19:17:04.819335 kernel: pcieport 0000:00:16.2: PME: Signaling with IRQ 34
Feb 13 19:17:04.819388 kernel: pcieport 0000:00:16.2: pciehp: Slot #194 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Feb 13 19:17:04.819443 kernel: pcieport 0000:00:16.3: PME: Signaling with IRQ 35
Feb 13 19:17:04.819500 kernel: pcieport 0000:00:16.3: pciehp: Slot #195 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Feb 13 19:17:04.819553 kernel: pcieport 0000:00:16.4: PME: Signaling with IRQ 36
Feb 13 19:17:04.819625 kernel: pcieport 0000:00:16.4: pciehp: Slot #196 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Feb 13 19:17:04.819688 kernel: pcieport 0000:00:16.5: PME: Signaling with IRQ 37
Feb 13 19:17:04.819744 kernel: pcieport 0000:00:16.5: pciehp: Slot #197 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Feb 13 19:17:04.819813 kernel: pcieport 0000:00:16.6: PME: Signaling with IRQ 38
Feb 13 19:17:04.819878 kernel: pcieport 0000:00:16.6: pciehp: Slot #198 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Feb 13 19:17:04.819944 kernel: pcieport 0000:00:16.7: PME: Signaling with IRQ 39
Feb 13 19:17:04.820005 kernel: pcieport 0000:00:16.7: pciehp: Slot #199 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Feb 13 19:17:04.820065 kernel: pcieport 0000:00:17.0: PME: Signaling with IRQ 40
Feb 13 19:17:04.820119 kernel: pcieport 0000:00:17.0: pciehp: Slot #224 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Feb 13 19:17:04.820184 kernel: pcieport 0000:00:17.1: PME: Signaling with IRQ 41
Feb 13 19:17:04.820239 kernel: pcieport 0000:00:17.1: pciehp: Slot #225 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Feb 13 19:17:04.820293 kernel: pcieport 0000:00:17.2: PME: Signaling with IRQ 42
Feb 13 19:17:04.820349 kernel: pcieport 0000:00:17.2: pciehp: Slot #226 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Feb 13 19:17:04.820402 kernel: pcieport 0000:00:17.3: PME: Signaling with IRQ 43
Feb 13 19:17:04.820454 kernel: pcieport 0000:00:17.3: pciehp: Slot #227 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Feb 13 19:17:04.820520 kernel: pcieport 0000:00:17.4: PME: Signaling with IRQ 44
Feb 13 19:17:04.821224 kernel: pcieport 0000:00:17.4: pciehp: Slot #228 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Feb 13 19:17:04.821305 kernel: pcieport 0000:00:17.5: PME: Signaling with IRQ 45
Feb 13 19:17:04.821369 kernel: pcieport 0000:00:17.5: pciehp: Slot #229 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Feb 13 19:17:04.821426 kernel: pcieport 0000:00:17.6: PME: Signaling with IRQ 46
Feb 13 19:17:04.821481 kernel: pcieport 0000:00:17.6: pciehp: Slot #230 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Feb 13 19:17:04.821536 kernel: pcieport 0000:00:17.7: PME: Signaling with IRQ 47
Feb 13 19:17:04.821659 kernel: pcieport 0000:00:17.7: pciehp: Slot #231 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Feb 13 19:17:04.821721 kernel: pcieport 0000:00:18.0: PME: Signaling with IRQ 48
Feb 13 19:17:04.821794 kernel: pcieport 0000:00:18.0: pciehp: Slot #256 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Feb 13 19:17:04.821857 kernel: pcieport 0000:00:18.1: PME: Signaling with IRQ 49
Feb 13 19:17:04.821910 kernel: pcieport 0000:00:18.1: pciehp: Slot #257 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Feb 13 19:17:04.821964 kernel: pcieport 0000:00:18.2: PME: Signaling with IRQ 50
Feb 13 19:17:04.822017 kernel: pcieport 0000:00:18.2: pciehp: Slot #258 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Feb 13 19:17:04.822075 kernel: pcieport 0000:00:18.3: PME: Signaling with IRQ 51
Feb 13 19:17:04.822129 kernel: pcieport 0000:00:18.3: pciehp: Slot #259 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Feb 13 19:17:04.822192 kernel: pcieport 0000:00:18.4: PME: Signaling with IRQ 52
Feb 13 19:17:04.822262 kernel: pcieport 0000:00:18.4: pciehp: Slot #260 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Feb 13 19:17:04.822336 kernel: pcieport 0000:00:18.5: PME: Signaling with IRQ 53
Feb 13 19:17:04.822389 kernel: pcieport 0000:00:18.5: pciehp: Slot #261 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Feb 13 19:17:04.822445 kernel: pcieport 0000:00:18.6: PME: Signaling with IRQ 54
Feb 13 19:17:04.822498 kernel: pcieport 0000:00:18.6: pciehp: Slot #262 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Feb 13 19:17:04.822553 kernel: pcieport 0000:00:18.7: PME: Signaling with IRQ 55
Feb 13 19:17:04.822617 kernel: pcieport 0000:00:18.7: pciehp: Slot #263 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+
Feb 13 19:17:04.822626 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Feb 13 19:17:04.822635 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Feb 13 19:17:04.822642 kernel: 00:05: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Feb 13 19:17:04.822648 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBC,PNP0f13:MOUS] at 0x60,0x64 irq 1,12
Feb 13 19:17:04.822655 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Feb 13 19:17:04.822661 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Feb 13 19:17:04.822714 kernel: rtc_cmos 00:01: registered as rtc0
Feb 13 19:17:04.822766 kernel: rtc_cmos 00:01: setting system clock to 2025-02-13T19:17:04 UTC (1739474224)
Feb 13 19:17:04.822813 kernel: rtc_cmos 00:01: alarms up to one month, y3k, 114 bytes nvram
Feb 13 19:17:04.822824 kernel: intel_pstate: CPU model not supported
Feb 13 19:17:04.822830 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Feb 13 19:17:04.822837 kernel: NET: Registered PF_INET6 protocol family
Feb 13 19:17:04.822843 kernel: Segment Routing with IPv6
Feb 13 19:17:04.822850 kernel: In-situ OAM (IOAM) with IPv6
Feb 13 19:17:04.822856 kernel: NET: Registered PF_PACKET protocol family
Feb 13 19:17:04.822862 kernel: Key type dns_resolver registered
Feb 13 19:17:04.822869 kernel: IPI shorthand broadcast: enabled
Feb 13 19:17:04.822875 kernel: sched_clock: Marking stable (942234160, 224678914)->(1227459765, -60546691)
Feb 13 19:17:04.822883 kernel: registered taskstats version 1
Feb 13 19:17:04.822889 kernel: Loading compiled-in X.509 certificates
Feb 13 19:17:04.822897 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.74-flatcar: 6c364ddae48101e091a28279a8d953535f596d53'
Feb 13 19:17:04.822903 kernel: Key type .fscrypt registered
Feb 13 19:17:04.822910 kernel: Key type fscrypt-provisioning registered
Feb 13 19:17:04.822916 kernel: ima: No TPM chip found, activating TPM-bypass!
Feb 13 19:17:04.822922 kernel: ima: Allocated hash algorithm: sha1
Feb 13 19:17:04.822929 kernel: ima: No architecture policies found
Feb 13 19:17:04.822936 kernel: clk: Disabling unused clocks
Feb 13 19:17:04.822943 kernel: Freeing unused kernel image (initmem) memory: 43476K
Feb 13 19:17:04.822949 kernel: Write protecting the kernel read-only data: 38912k
Feb 13 19:17:04.822956 kernel: Freeing unused kernel image (rodata/data gap) memory: 1724K
Feb 13 19:17:04.822962 kernel: Run /init as init process
Feb 13 19:17:04.822968 kernel: with arguments:
Feb 13 19:17:04.822974 kernel: /init
Feb 13 19:17:04.822980 kernel: with environment:
Feb 13 19:17:04.822986 kernel: HOME=/
Feb 13 19:17:04.822992 kernel: TERM=linux
Feb 13 19:17:04.823000 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Feb 13 19:17:04.823007 systemd[1]: Successfully made /usr/ read-only.
Feb 13 19:17:04.823016 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Feb 13 19:17:04.823023 systemd[1]: Detected virtualization vmware.
Feb 13 19:17:04.823029 systemd[1]: Detected architecture x86-64.
Feb 13 19:17:04.823036 systemd[1]: Running in initrd.
Feb 13 19:17:04.823042 systemd[1]: No hostname configured, using default hostname.
Feb 13 19:17:04.823050 systemd[1]: Hostname set to .
Feb 13 19:17:04.823056 systemd[1]: Initializing machine ID from random generator.
Feb 13 19:17:04.823062 systemd[1]: Queued start job for default target initrd.target.
Feb 13 19:17:04.823069 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Feb 13 19:17:04.823075 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Feb 13 19:17:04.823083 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Feb 13 19:17:04.823089 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Feb 13 19:17:04.823096 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Feb 13 19:17:04.823104 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Feb 13 19:17:04.823111 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Feb 13 19:17:04.823118 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Feb 13 19:17:04.823125 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Feb 13 19:17:04.823131 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Feb 13 19:17:04.823138 systemd[1]: Reached target paths.target - Path Units.
Feb 13 19:17:04.823145 systemd[1]: Reached target slices.target - Slice Units.
Feb 13 19:17:04.823154 systemd[1]: Reached target swap.target - Swaps.
Feb 13 19:17:04.823162 systemd[1]: Reached target timers.target - Timer Units.
Feb 13 19:17:04.823171 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Feb 13 19:17:04.823178 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Feb 13 19:17:04.823185 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Feb 13 19:17:04.823191 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Feb 13 19:17:04.823198 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Feb 13 19:17:04.823204 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Feb 13 19:17:04.823211 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Feb 13 19:17:04.823219 systemd[1]: Reached target sockets.target - Socket Units.
Feb 13 19:17:04.823226 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Feb 13 19:17:04.823232 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Feb 13 19:17:04.823239 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Feb 13 19:17:04.823245 systemd[1]: Starting systemd-fsck-usr.service...
Feb 13 19:17:04.823252 systemd[1]: Starting systemd-journald.service - Journal Service...
Feb 13 19:17:04.823258 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Feb 13 19:17:04.823265 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 19:17:04.823286 systemd-journald[216]: Collecting audit messages is disabled.
Feb 13 19:17:04.823304 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Feb 13 19:17:04.823312 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Feb 13 19:17:04.823320 systemd[1]: Finished systemd-fsck-usr.service.
Feb 13 19:17:04.823327 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Feb 13 19:17:04.823334 kernel: Bridge firewalling registered
Feb 13 19:17:04.823342 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Feb 13 19:17:04.823349 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Feb 13 19:17:04.823356 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 19:17:04.823364 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 19:17:04.823371 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Feb 13 19:17:04.823377 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Feb 13 19:17:04.823384 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Feb 13 19:17:04.823391 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Feb 13 19:17:04.823398 systemd-journald[216]: Journal started
Feb 13 19:17:04.823415 systemd-journald[216]: Runtime Journal (/run/log/journal/f95acbabbd2e49018e2a8731a11288ff) is 4.8M, max 38.6M, 33.8M free.
Feb 13 19:17:04.761476 systemd-modules-load[217]: Inserted module 'overlay'
Feb 13 19:17:04.825269 systemd[1]: Started systemd-journald.service - Journal Service.
Feb 13 19:17:04.782106 systemd-modules-load[217]: Inserted module 'br_netfilter'
Feb 13 19:17:04.824802 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Feb 13 19:17:04.832712 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Feb 13 19:17:04.833001 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 19:17:04.835505 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Feb 13 19:17:04.839915 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Feb 13 19:17:04.842220 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Feb 13 19:17:04.846808 dracut-cmdline[250]: dracut-dracut-053
Feb 13 19:17:04.849302 dracut-cmdline[250]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=vmware flatcar.autologin verity.usrhash=f28373bbaddf11103b551b595069cf5faacb27d62f1aab4f9911393ba418b416
Feb 13 19:17:04.864777 systemd-resolved[252]: Positive Trust Anchors:
Feb 13 19:17:04.864785 systemd-resolved[252]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Feb 13 19:17:04.864808 systemd-resolved[252]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Feb 13 19:17:04.867649 systemd-resolved[252]: Defaulting to hostname 'linux'.
Feb 13 19:17:04.868462 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Feb 13 19:17:04.868943 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Feb 13 19:17:04.893607 kernel: SCSI subsystem initialized
Feb 13 19:17:04.899595 kernel: Loading iSCSI transport class v2.0-870.
Feb 13 19:17:04.906594 kernel: iscsi: registered transport (tcp)
Feb 13 19:17:04.919904 kernel: iscsi: registered transport (qla4xxx)
Feb 13 19:17:04.919992 kernel: QLogic iSCSI HBA Driver
Feb 13 19:17:04.945883 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Feb 13 19:17:04.949713 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Feb 13 19:17:04.968673 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Feb 13 19:17:04.968733 kernel: device-mapper: uevent: version 1.0.3
Feb 13 19:17:04.968747 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Feb 13 19:17:05.003613 kernel: raid6: avx2x4 gen() 36536 MB/s
Feb 13 19:17:05.019609 kernel: raid6: avx2x2 gen() 47668 MB/s
Feb 13 19:17:05.036791 kernel: raid6: avx2x1 gen() 42908 MB/s
Feb 13 19:17:05.036844 kernel: raid6: using algorithm avx2x2 gen() 47668 MB/s
Feb 13 19:17:05.054805 kernel: raid6: .... xor() 31209 MB/s, rmw enabled
Feb 13 19:17:05.054866 kernel: raid6: using avx2x2 recovery algorithm
Feb 13 19:17:05.068599 kernel: xor: automatically using best checksumming function avx
Feb 13 19:17:05.157593 kernel: Btrfs loaded, zoned=no, fsverity=no
Feb 13 19:17:05.162763 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Feb 13 19:17:05.167724 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Feb 13 19:17:05.175876 systemd-udevd[435]: Using default interface naming scheme 'v255'.
Feb 13 19:17:05.178759 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Feb 13 19:17:05.185668 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Feb 13 19:17:05.192275 dracut-pre-trigger[440]: rd.md=0: removing MD RAID activation
Feb 13 19:17:05.208217 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Feb 13 19:17:05.212756 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Feb 13 19:17:05.289060 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Feb 13 19:17:05.294787 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Feb 13 19:17:05.310780 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Feb 13 19:17:05.312326 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Feb 13 19:17:05.312545 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Feb 13 19:17:05.312799 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Feb 13 19:17:05.317547 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Feb 13 19:17:05.330467 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Feb 13 19:17:05.379595 kernel: VMware PVSCSI driver - version 1.0.7.0-k
Feb 13 19:17:05.382008 kernel: vmw_pvscsi: using 64bit dma
Feb 13 19:17:05.382045 kernel: vmw_pvscsi: max_id: 16
Feb 13 19:17:05.382055 kernel: vmw_pvscsi: setting ring_pages to 8
Feb 13 19:17:05.385747 kernel: vmw_pvscsi: enabling reqCallThreshold
Feb 13 19:17:05.385811 kernel: vmw_pvscsi: driver-based request coalescing enabled
Feb 13 19:17:05.385821 kernel: vmw_pvscsi: using MSI-X
Feb 13 19:17:05.385829 kernel: scsi host0: VMware PVSCSI storage adapter rev 2, req/cmp/msg rings: 8/8/1 pages, cmd_per_lun=254
Feb 13 19:17:05.387303 kernel: vmw_pvscsi 0000:03:00.0: VMware PVSCSI rev 2 host #0
Feb 13 19:17:05.390161 kernel: scsi 0:0:0:0: Direct-Access VMware Virtual disk 2.0 PQ: 0 ANSI: 6
Feb 13 19:17:05.404711 kernel: VMware vmxnet3 virtual NIC driver - version 1.7.0.0-k-NAPI
Feb 13 19:17:05.409598 kernel: vmxnet3 0000:0b:00.0: # of Tx queues : 2, # of Rx queues : 2
Feb 13 19:17:05.414167 kernel: vmxnet3 0000:0b:00.0 eth0: NIC Link is Up 10000 Mbps
Feb 13 19:17:05.414262 kernel: libata version 3.00 loaded.
Feb 13 19:17:05.421699 kernel: ata_piix 0000:00:07.1: version 2.13
Feb 13 19:17:05.436259 kernel: scsi host1: ata_piix
Feb 13 19:17:05.436347 kernel: scsi host2: ata_piix
Feb 13 19:17:05.436415 kernel: ata1: PATA max UDMA/33 cmd 0x1f0 ctl 0x3f6 bmdma 0x1060 irq 14
Feb 13 19:17:05.436425 kernel: ata2: PATA max UDMA/33 cmd 0x170 ctl 0x376 bmdma 0x1068 irq 15
Feb 13 19:17:05.436433 kernel: vmxnet3 0000:0b:00.0 ens192: renamed from eth0
Feb 13 19:17:05.436511 kernel: cryptd: max_cpu_qlen set to 1000
Feb 13 19:17:05.439648 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Feb 13 19:17:05.439739 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 19:17:05.440113 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 19:17:05.440235 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Feb 13 19:17:05.440309 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 19:17:05.440548 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 19:17:05.445771 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 19:17:05.459488 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 19:17:05.464701 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 19:17:05.473643 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 19:17:05.595595 kernel: ata2.00: ATAPI: VMware Virtual IDE CDROM Drive, 00000001, max UDMA/33
Feb 13 19:17:05.601595 kernel: scsi 2:0:0:0: CD-ROM NECVMWar VMware IDE CDR10 1.00 PQ: 0 ANSI: 5
Feb 13 19:17:05.615410 kernel: AVX2 version of gcm_enc/dec engaged.
Feb 13 19:17:05.615448 kernel: AES CTR mode by8 optimization enabled
Feb 13 19:17:05.619125 kernel: sd 0:0:0:0: [sda] 17805312 512-byte logical blocks: (9.12 GB/8.49 GiB)
Feb 13 19:17:05.667391 kernel: sd 0:0:0:0: [sda] Write Protect is off
Feb 13 19:17:05.667486 kernel: sd 0:0:0:0: [sda] Mode Sense: 31 00 00 00
Feb 13 19:17:05.667556 kernel: sd 0:0:0:0: [sda] Cache data unavailable
Feb 13 19:17:05.667638 kernel: sd 0:0:0:0: [sda] Assuming drive cache: write through
Feb 13 19:17:05.667704 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 1x/1x writer dvd-ram cd/rw xa/form2 cdda tray
Feb 13 19:17:05.667782 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Feb 13 19:17:05.667796 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Feb 13 19:17:05.667875 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Feb 13 19:17:05.667884 kernel: sd 0:0:0:0: [sda] Attached SCSI disk
Feb 13 19:17:05.720463 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by (udev-worker) (494)
Feb 13 19:17:05.722209 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_disk ROOT.
Feb 13 19:17:05.734574 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_disk OEM.
Feb 13 19:17:05.740608 kernel: BTRFS: device fsid 60f89c25-9096-4268-99ca-ef7992742f2b devid 1 transid 39 /dev/sda3 scanned by (udev-worker) (486)
Feb 13 19:17:05.744174 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_disk EFI-SYSTEM.
Feb 13 19:17:05.750700 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_disk USR-A.
Feb 13 19:17:05.750872 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_disk USR-A.
Feb 13 19:17:05.761738 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Feb 13 19:17:05.822298 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Feb 13 19:17:06.850460 disk-uuid[595]: The operation has completed successfully.
Feb 13 19:17:06.850733 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Feb 13 19:17:06.901166 systemd[1]: disk-uuid.service: Deactivated successfully.
Feb 13 19:17:06.901246 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Feb 13 19:17:06.910682 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Feb 13 19:17:06.912559 sh[611]: Success
Feb 13 19:17:06.920589 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
Feb 13 19:17:06.965565 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Feb 13 19:17:06.974757 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Feb 13 19:17:06.975718 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Feb 13 19:17:06.990645 kernel: BTRFS info (device dm-0): first mount of filesystem 60f89c25-9096-4268-99ca-ef7992742f2b
Feb 13 19:17:06.990686 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Feb 13 19:17:06.990696 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Feb 13 19:17:06.993104 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Feb 13 19:17:06.993121 kernel: BTRFS info (device dm-0): using free space tree
Feb 13 19:17:06.999595 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Feb 13 19:17:07.001250 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Feb 13 19:17:07.011731 systemd[1]: Starting afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments...
Feb 13 19:17:07.013620 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Feb 13 19:17:07.029657 kernel: BTRFS info (device sda6): first mount of filesystem 9d862461-eab1-477f-8790-b61f63b2958e
Feb 13 19:17:07.029693 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Feb 13 19:17:07.031168 kernel: BTRFS info (device sda6): using free space tree
Feb 13 19:17:07.039732 kernel: BTRFS info (device sda6): enabling ssd optimizations
Feb 13 19:17:07.043929 systemd[1]: mnt-oem.mount: Deactivated successfully.
Feb 13 19:17:07.045590 kernel: BTRFS info (device sda6): last unmount of filesystem 9d862461-eab1-477f-8790-b61f63b2958e
Feb 13 19:17:07.047630 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Feb 13 19:17:07.052710 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Feb 13 19:17:07.096400 systemd[1]: Finished afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments.
Feb 13 19:17:07.104213 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Feb 13 19:17:07.136475 ignition[672]: Ignition 2.20.0
Feb 13 19:17:07.136482 ignition[672]: Stage: fetch-offline
Feb 13 19:17:07.136501 ignition[672]: no configs at "/usr/lib/ignition/base.d"
Feb 13 19:17:07.136505 ignition[672]: no config dir at "/usr/lib/ignition/base.platform.d/vmware"
Feb 13 19:17:07.136552 ignition[672]: parsed url from cmdline: ""
Feb 13 19:17:07.136554 ignition[672]: no config URL provided
Feb 13 19:17:07.136557 ignition[672]: reading system config file "/usr/lib/ignition/user.ign"
Feb 13 19:17:07.136561 ignition[672]: no config at "/usr/lib/ignition/user.ign"
Feb 13 19:17:07.136943 ignition[672]: config successfully fetched
Feb 13 19:17:07.136952 ignition[672]: parsing config with SHA512: 114ae0cb971f25d843b7a38f6c63ed3ad88c5d1ab97fd9c1073cd2f340e0609aecfa2330dab25826918f79b97cf86220e02f14bfd553362cdecf717f857728e3
Feb 13 19:17:07.139162 unknown[672]: fetched base config from "system"
Feb 13 19:17:07.139171 unknown[672]: fetched user config from "vmware"
Feb 13 19:17:07.139336 ignition[672]: fetch-offline: fetch-offline passed
Feb 13 19:17:07.139379 ignition[672]: Ignition finished successfully
Feb 13 19:17:07.140333 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Feb 13 19:17:07.165285 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Feb 13 19:17:07.169654 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Feb 13 19:17:07.183299 systemd-networkd[807]: lo: Link UP
Feb 13 19:17:07.183304 systemd-networkd[807]: lo: Gained carrier
Feb 13 19:17:07.184387 systemd-networkd[807]: Enumeration completed
Feb 13 19:17:07.184556 systemd[1]: Started systemd-networkd.service - Network Configuration.
Feb 13 19:17:07.184643 systemd-networkd[807]: ens192: Configuring with /etc/systemd/network/10-dracut-cmdline-99.network.
Feb 13 19:17:07.185176 systemd[1]: Reached target network.target - Network.
Feb 13 19:17:07.185264 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Feb 13 19:17:07.186923 kernel: vmxnet3 0000:0b:00.0 ens192: intr type 3, mode 0, 3 vectors allocated
Feb 13 19:17:07.188465 systemd-networkd[807]: ens192: Link UP
Feb 13 19:17:07.188839 kernel: vmxnet3 0000:0b:00.0 ens192: NIC Link is Up 10000 Mbps
Feb 13 19:17:07.188471 systemd-networkd[807]: ens192: Gained carrier
Feb 13 19:17:07.189712 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Feb 13 19:17:07.198546 ignition[810]: Ignition 2.20.0
Feb 13 19:17:07.198552 ignition[810]: Stage: kargs
Feb 13 19:17:07.198699 ignition[810]: no configs at "/usr/lib/ignition/base.d"
Feb 13 19:17:07.198706 ignition[810]: no config dir at "/usr/lib/ignition/base.platform.d/vmware"
Feb 13 19:17:07.199107 ignition[810]: kargs: kargs passed
Feb 13 19:17:07.199128 ignition[810]: Ignition finished successfully
Feb 13 19:17:07.200179 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Feb 13 19:17:07.204704 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Feb 13 19:17:07.212138 ignition[818]: Ignition 2.20.0
Feb 13 19:17:07.212376 ignition[818]: Stage: disks
Feb 13 19:17:07.212497 ignition[818]: no configs at "/usr/lib/ignition/base.d"
Feb 13 19:17:07.212504 ignition[818]: no config dir at "/usr/lib/ignition/base.platform.d/vmware"
Feb 13 19:17:07.213437 ignition[818]: disks: disks passed
Feb 13 19:17:07.213551 ignition[818]: Ignition finished successfully
Feb 13 19:17:07.214143 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Feb 13 19:17:07.214688 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Feb 13 19:17:07.214916 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Feb 13 19:17:07.215145 systemd[1]: Reached target local-fs.target - Local File Systems.
Feb 13 19:17:07.215346 systemd[1]: Reached target sysinit.target - System Initialization.
Feb 13 19:17:07.215551 systemd[1]: Reached target basic.target - Basic System.
Feb 13 19:17:07.218653 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Feb 13 19:17:07.229044 systemd-fsck[827]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks
Feb 13 19:17:07.230036 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Feb 13 19:17:07.308522 systemd-resolved[252]: Detected conflict on linux IN A 139.178.70.110
Feb 13 19:17:07.308531 systemd-resolved[252]: Hostname conflict, changing published hostname from 'linux' to 'linux6'.
Feb 13 19:17:07.993676 systemd[1]: Mounting sysroot.mount - /sysroot...
Feb 13 19:17:08.083600 kernel: EXT4-fs (sda9): mounted filesystem 157595f2-1515-4117-a2d1-73fe2ed647fc r/w with ordered data mode. Quota mode: none.
Feb 13 19:17:08.084299 systemd[1]: Mounted sysroot.mount - /sysroot.
Feb 13 19:17:08.084717 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Feb 13 19:17:08.107655 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Feb 13 19:17:08.110423 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Feb 13 19:17:08.110939 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Feb 13 19:17:08.111136 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Feb 13 19:17:08.111153 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Feb 13 19:17:08.114448 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Feb 13 19:17:08.115316 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Feb 13 19:17:08.121595 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by mount (835)
Feb 13 19:17:08.130829 kernel: BTRFS info (device sda6): first mount of filesystem 9d862461-eab1-477f-8790-b61f63b2958e
Feb 13 19:17:08.130869 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Feb 13 19:17:08.130877 kernel: BTRFS info (device sda6): using free space tree
Feb 13 19:17:08.139595 kernel: BTRFS info (device sda6): enabling ssd optimizations
Feb 13 19:17:08.140866 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Feb 13 19:17:08.180639 initrd-setup-root[859]: cut: /sysroot/etc/passwd: No such file or directory
Feb 13 19:17:08.184171 initrd-setup-root[866]: cut: /sysroot/etc/group: No such file or directory
Feb 13 19:17:08.186481 initrd-setup-root[873]: cut: /sysroot/etc/shadow: No such file or directory
Feb 13 19:17:08.188393 initrd-setup-root[880]: cut: /sysroot/etc/gshadow: No such file or directory
Feb 13 19:17:08.274851 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Feb 13 19:17:08.283699 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Feb 13 19:17:08.286176 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Feb 13 19:17:08.291593 kernel: BTRFS info (device sda6): last unmount of filesystem 9d862461-eab1-477f-8790-b61f63b2958e
Feb 13 19:17:08.306429 ignition[947]: INFO : Ignition 2.20.0
Feb 13 19:17:08.306429 ignition[947]: INFO : Stage: mount
Feb 13 19:17:08.306900 ignition[947]: INFO : no configs at "/usr/lib/ignition/base.d"
Feb 13 19:17:08.306900 ignition[947]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/vmware"
Feb 13 19:17:08.307337 ignition[947]: INFO : mount: mount passed
Feb 13 19:17:08.307431 ignition[947]: INFO : Ignition finished successfully
Feb 13 19:17:08.308314 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Feb 13 19:17:08.311677 systemd[1]: Starting ignition-files.service - Ignition (files)...
Feb 13 19:17:08.312089 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Feb 13 19:17:08.430686 systemd-networkd[807]: ens192: Gained IPv6LL
Feb 13 19:17:08.989139 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Feb 13 19:17:08.993710 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Feb 13 19:17:09.074599 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/sda6 scanned by mount (960)
Feb 13 19:17:09.088243 kernel: BTRFS info (device sda6): first mount of filesystem 9d862461-eab1-477f-8790-b61f63b2958e
Feb 13 19:17:09.088292 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Feb 13 19:17:09.088301 kernel: BTRFS info (device sda6): using free space tree
Feb 13 19:17:09.133604 kernel: BTRFS info (device sda6): enabling ssd optimizations
Feb 13 19:17:09.139884 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Feb 13 19:17:09.155782 ignition[977]: INFO : Ignition 2.20.0
Feb 13 19:17:09.155782 ignition[977]: INFO : Stage: files
Feb 13 19:17:09.156290 ignition[977]: INFO : no configs at "/usr/lib/ignition/base.d"
Feb 13 19:17:09.156290 ignition[977]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/vmware"
Feb 13 19:17:09.156610 ignition[977]: DEBUG : files: compiled without relabeling support, skipping
Feb 13 19:17:09.165566 ignition[977]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Feb 13 19:17:09.165566 ignition[977]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Feb 13 19:17:09.200134 ignition[977]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Feb 13 19:17:09.200349 ignition[977]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Feb 13 19:17:09.200509 ignition[977]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Feb 13 19:17:09.200370 unknown[977]: wrote ssh authorized keys file for user: core
Feb 13 19:17:09.218957 ignition[977]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/home/core/install.sh"
Feb 13 19:17:09.219190 ignition[977]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/home/core/install.sh"
Feb 13 19:17:09.219190 ignition[977]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/etc/flatcar/update.conf"
Feb 13 19:17:09.219524 ignition[977]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Feb 13 19:17:09.219524 ignition[977]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
Feb 13 19:17:09.219524 ignition[977]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
Feb 13 19:17:09.219524 ignition[977]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
Feb 13 19:17:09.219524 ignition[977]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-x86-64.raw: attempt #1
Feb 13 19:17:09.637293 ignition[977]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET result: OK
Feb 13 19:17:09.876450 ignition[977]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
Feb 13 19:17:09.876775 ignition[977]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/etc/systemd/network/00-vmware.network"
Feb 13 19:17:09.876775 ignition[977]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/etc/systemd/network/00-vmware.network"
Feb 13 19:17:09.876775 ignition[977]: INFO : files: op(8): [started] processing unit "coreos-metadata.service"
Feb 13 19:17:09.887401 ignition[977]: INFO : files: op(8): op(9): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Feb 13 19:17:09.887672 ignition[977]: INFO : files: op(8): op(9): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Feb 13 19:17:09.887672 ignition[977]: INFO : files: op(8): [finished] processing unit "coreos-metadata.service"
Feb 13 19:17:09.887672 ignition[977]: INFO : files: op(a): [started] setting preset to disabled for "coreos-metadata.service"
Feb 13 19:17:10.499882 ignition[977]: INFO : files: op(a): op(b): [started] removing enablement symlink(s) for "coreos-metadata.service"
Feb 13 19:17:10.502240 ignition[977]: INFO : files: op(a): op(b): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Feb 13 19:17:10.502240 ignition[977]: INFO : files: op(a): [finished] setting preset to disabled for "coreos-metadata.service"
Feb 13 19:17:10.502240 ignition[977]: INFO : files: createResultFile: createFiles: op(c): [started] writing file "/sysroot/etc/.ignition-result.json"
Feb 13 19:17:10.502240 ignition[977]: INFO : files: createResultFile: createFiles: op(c): [finished] writing file "/sysroot/etc/.ignition-result.json"
Feb 13 19:17:10.502240 ignition[977]: INFO : files: files passed
Feb 13 19:17:10.502240 ignition[977]: INFO : Ignition finished successfully
Feb 13 19:17:10.503145 systemd[1]: Finished ignition-files.service - Ignition (files).
Feb 13 19:17:10.506724 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Feb 13 19:17:10.507917 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Feb 13 19:17:10.518347 initrd-setup-root-after-ignition[1006]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Feb 13 19:17:10.518347 initrd-setup-root-after-ignition[1006]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Feb 13 19:17:10.519386 initrd-setup-root-after-ignition[1010]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Feb 13 19:17:10.520357 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Feb 13 19:17:10.520599 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Feb 13 19:17:10.523683 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Feb 13 19:17:10.523930 systemd[1]: ignition-quench.service: Deactivated successfully.
Feb 13 19:17:10.523979 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Feb 13 19:17:10.536029 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Feb 13 19:17:10.536115 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Feb 13 19:17:10.536378 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Feb 13 19:17:10.536500 systemd[1]: Reached target initrd.target - Initrd Default Target.
Feb 13 19:17:10.536704 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Feb 13 19:17:10.537179 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Feb 13 19:17:10.553013 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Feb 13 19:17:10.556721 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Feb 13 19:17:10.562745 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Feb 13 19:17:10.563081 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Feb 13 19:17:10.563252 systemd[1]: Stopped target timers.target - Timer Units.
Feb 13 19:17:10.563405 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Feb 13 19:17:10.563492 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Feb 13 19:17:10.563839 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Feb 13 19:17:10.564069 systemd[1]: Stopped target basic.target - Basic System.
Feb 13 19:17:10.564249 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Feb 13 19:17:10.564456 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Feb 13 19:17:10.564653 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Feb 13 19:17:10.564839 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Feb 13 19:17:10.565036 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Feb 13 19:17:10.565253 systemd[1]: Stopped target sysinit.target - System Initialization.
Feb 13 19:17:10.565448 systemd[1]: Stopped target local-fs.target - Local File Systems.
Feb 13 19:17:10.565668 systemd[1]: Stopped target swap.target - Swaps.
Feb 13 19:17:10.565814 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Feb 13 19:17:10.565888 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Feb 13 19:17:10.566146 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Feb 13 19:17:10.566358 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Feb 13 19:17:10.566529 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Feb 13 19:17:10.566575 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Feb 13 19:17:10.566735 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Feb 13 19:17:10.566796 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Feb 13 19:17:10.567069 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Feb 13 19:17:10.567132 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Feb 13 19:17:10.567343 systemd[1]: Stopped target paths.target - Path Units.
Feb 13 19:17:10.567673 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Feb 13 19:17:10.567728 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Feb 13 19:17:10.567914 systemd[1]: Stopped target slices.target - Slice Units.
Feb 13 19:17:10.568140 systemd[1]: Stopped target sockets.target - Socket Units.
Feb 13 19:17:10.568321 systemd[1]: iscsid.socket: Deactivated successfully.
Feb 13 19:17:10.568379 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Feb 13 19:17:10.568540 systemd[1]: iscsiuio.socket: Deactivated successfully.
Feb 13 19:17:10.568598 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Feb 13 19:17:10.568789 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Feb 13 19:17:10.568856 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Feb 13 19:17:10.569131 systemd[1]: ignition-files.service: Deactivated successfully.
Feb 13 19:17:10.569193 systemd[1]: Stopped ignition-files.service - Ignition (files).
Feb 13 19:17:10.577781 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Feb 13 19:17:10.579758 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Feb 13 19:17:10.579902 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Feb 13 19:17:10.579984 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Feb 13 19:17:10.580152 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Feb 13 19:17:10.580217 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Feb 13 19:17:10.584320 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Feb 13 19:17:10.584640 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Feb 13 19:17:10.585921 ignition[1031]: INFO : Ignition 2.20.0
Feb 13 19:17:10.585921 ignition[1031]: INFO : Stage: umount
Feb 13 19:17:10.592634 ignition[1031]: INFO : no configs at "/usr/lib/ignition/base.d"
Feb 13 19:17:10.592634 ignition[1031]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/vmware"
Feb 13 19:17:10.592634 ignition[1031]: INFO : umount: umount passed
Feb 13 19:17:10.592634 ignition[1031]: INFO : Ignition finished successfully
Feb 13 19:17:10.592175 systemd[1]: ignition-mount.service: Deactivated successfully.
Feb 13 19:17:10.592247 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Feb 13 19:17:10.592683 systemd[1]: Stopped target network.target - Network.
Feb 13 19:17:10.594050 systemd[1]: ignition-disks.service: Deactivated successfully.
Feb 13 19:17:10.594095 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Feb 13 19:17:10.594207 systemd[1]: ignition-kargs.service: Deactivated successfully.
Feb 13 19:17:10.594230 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Feb 13 19:17:10.594324 systemd[1]: ignition-setup.service: Deactivated successfully.
Feb 13 19:17:10.594347 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Feb 13 19:17:10.594438 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Feb 13 19:17:10.594461 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Feb 13 19:17:10.594637 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Feb 13 19:17:10.594759 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Feb 13 19:17:10.599768 systemd[1]: systemd-resolved.service: Deactivated successfully.
Feb 13 19:17:10.599837 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Feb 13 19:17:10.601762 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Feb 13 19:17:10.601993 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Feb 13 19:17:10.602027 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Feb 13 19:17:10.602991 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Feb 13 19:17:10.606608 systemd[1]: systemd-networkd.service: Deactivated successfully.
Feb 13 19:17:10.606673 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Feb 13 19:17:10.607808 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Feb 13 19:17:10.607922 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Feb 13 19:17:10.607941 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Feb 13 19:17:10.610708 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Feb 13 19:17:10.610969 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Feb 13 19:17:10.611006 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Feb 13 19:17:10.611528 systemd[1]: afterburn-network-kargs.service: Deactivated successfully.
Feb 13 19:17:10.611554 systemd[1]: Stopped afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments.
Feb 13 19:17:10.612561 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Feb 13 19:17:10.612603 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Feb 13 19:17:10.612818 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Feb 13 19:17:10.612845 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Feb 13 19:17:10.612989 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Feb 13 19:17:10.614277 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Feb 13 19:17:10.614797 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Feb 13 19:17:10.620274 systemd[1]: network-cleanup.service: Deactivated successfully.
Feb 13 19:17:10.620335 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Feb 13 19:17:10.624823 systemd[1]: systemd-udevd.service: Deactivated successfully.
Feb 13 19:17:10.625014 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Feb 13 19:17:10.625525 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Feb 13 19:17:10.625554 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Feb 13 19:17:10.625683 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Feb 13 19:17:10.625701 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Feb 13 19:17:10.625806 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Feb 13 19:17:10.625830 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Feb 13 19:17:10.625993 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Feb 13 19:17:10.626017 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Feb 13 19:17:10.626151 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Feb 13 19:17:10.626173 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 19:17:10.631684 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Feb 13 19:17:10.631789 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Feb 13 19:17:10.631826 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Feb 13 19:17:10.632005 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Feb 13 19:17:10.632029 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Feb 13 19:17:10.632151 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Feb 13 19:17:10.632173 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Feb 13 19:17:10.632289 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Feb 13 19:17:10.632311 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 19:17:10.634464 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Feb 13 19:17:10.634516 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Feb 13 19:17:10.720058 systemd[1]: sysroot-boot.service: Deactivated successfully.
Feb 13 19:17:10.720123 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Feb 13 19:17:10.720424 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Feb 13 19:17:10.720537 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Feb 13 19:17:10.720566 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Feb 13 19:17:10.724700 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Feb 13 19:17:10.729797 systemd[1]: Switching root.
Feb 13 19:17:10.765372 systemd-journald[216]: Journal stopped
Feb 13 19:17:13.303783 systemd-journald[216]: Received SIGTERM from PID 1 (systemd).
Feb 13 19:17:13.303808 kernel: SELinux: policy capability network_peer_controls=1
Feb 13 19:17:13.303817 kernel: SELinux: policy capability open_perms=1
Feb 13 19:17:13.303823 kernel: SELinux: policy capability extended_socket_class=1
Feb 13 19:17:13.303828 kernel: SELinux: policy capability always_check_network=0
Feb 13 19:17:13.303834 kernel: SELinux: policy capability cgroup_seclabel=1
Feb 13 19:17:13.303843 kernel: SELinux: policy capability nnp_nosuid_transition=1
Feb 13 19:17:13.303849 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Feb 13 19:17:13.303855 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Feb 13 19:17:13.303861 kernel: audit: type=1403 audit(1739474231.499:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Feb 13 19:17:13.303868 systemd[1]: Successfully loaded SELinux policy in 76.536ms.
Feb 13 19:17:13.303875 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 7.266ms.
Feb 13 19:17:13.303883 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Feb 13 19:17:13.303892 systemd[1]: Detected virtualization vmware.
Feb 13 19:17:13.303899 systemd[1]: Detected architecture x86-64.
Feb 13 19:17:13.303906 systemd[1]: Detected first boot.
Feb 13 19:17:13.303913 systemd[1]: Initializing machine ID from random generator.
Feb 13 19:17:13.303922 zram_generator::config[1075]: No configuration found.
Feb 13 19:17:13.304023 kernel: vmw_vmci 0000:00:07.7: Using capabilities 0xc
Feb 13 19:17:13.304036 kernel: Guest personality initialized and is active
Feb 13 19:17:13.304042 kernel: VMCI host device registered (name=vmci, major=10, minor=125)
Feb 13 19:17:13.304049 kernel: Initialized host personality
Feb 13 19:17:13.304055 kernel: NET: Registered PF_VSOCK protocol family
Feb 13 19:17:13.304062 systemd[1]: Populated /etc with preset unit settings.
Feb 13 19:17:13.304073 systemd[1]: /etc/systemd/system/coreos-metadata.service:11: Ignoring unknown escape sequences: "echo "COREOS_CUSTOM_PRIVATE_IPV4=$(ip addr show ens192 | grep "inet 10." | grep -Po "inet \K[\d.]+")
Feb 13 19:17:13.304081 systemd[1]: COREOS_CUSTOM_PUBLIC_IPV4=$(ip addr show ens192 | grep -v "inet 10." | grep -Po "inet \K[\d.]+")" > ${OUTPUT}"
Feb 13 19:17:13.304088 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Feb 13 19:17:13.304095 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Feb 13 19:17:13.304102 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Feb 13 19:17:13.304109 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Feb 13 19:17:13.304118 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Feb 13 19:17:13.304125 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Feb 13 19:17:13.304133 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Feb 13 19:17:13.304140 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Feb 13 19:17:13.304147 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Feb 13 19:17:13.304154 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Feb 13 19:17:13.304162 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Feb 13 19:17:13.304170 systemd[1]: Created slice user.slice - User and Session Slice.
Feb 13 19:17:13.304179 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Feb 13 19:17:13.304187 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Feb 13 19:17:13.304196 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Feb 13 19:17:13.304203 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Feb 13 19:17:13.304211 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Feb 13 19:17:13.304218 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Feb 13 19:17:13.304226 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Feb 13 19:17:13.304233 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Feb 13 19:17:13.304242 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Feb 13 19:17:13.304249 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Feb 13 19:17:13.304256 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Feb 13 19:17:13.304264 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Feb 13 19:17:13.304271 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Feb 13 19:17:13.304278 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Feb 13 19:17:13.304286 systemd[1]: Reached target slices.target - Slice Units.
Feb 13 19:17:13.304293 systemd[1]: Reached target swap.target - Swaps.
Feb 13 19:17:13.304302 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Feb 13 19:17:13.304309 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Feb 13 19:17:13.304317 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Feb 13 19:17:13.304324 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Feb 13 19:17:13.304332 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Feb 13 19:17:13.304342 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Feb 13 19:17:13.304349 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Feb 13 19:17:13.304356 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Feb 13 19:17:13.304364 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Feb 13 19:17:13.304371 systemd[1]: Mounting media.mount - External Media Directory...
Feb 13 19:17:13.304379 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 13 19:17:13.304386 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Feb 13 19:17:13.304394 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Feb 13 19:17:13.304402 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Feb 13 19:17:13.304410 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Feb 13 19:17:13.304418 systemd[1]: Reached target machines.target - Containers.
Feb 13 19:17:13.304426 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Feb 13 19:17:13.304433 systemd[1]: Starting ignition-delete-config.service - Ignition (delete config)...
Feb 13 19:17:13.304440 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Feb 13 19:17:13.304448 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Feb 13 19:17:13.304456 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Feb 13 19:17:13.304465 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Feb 13 19:17:13.304472 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Feb 13 19:17:13.304480 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Feb 13 19:17:13.304487 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Feb 13 19:17:13.304496 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Feb 13 19:17:13.304503 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Feb 13 19:17:13.304511 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Feb 13 19:17:13.304519 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Feb 13 19:17:13.304526 systemd[1]: Stopped systemd-fsck-usr.service.
Feb 13 19:17:13.304537 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Feb 13 19:17:13.304544 systemd[1]: Starting systemd-journald.service - Journal Service...
Feb 13 19:17:13.304552 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Feb 13 19:17:13.304559 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Feb 13 19:17:13.304567 kernel: fuse: init (API version 7.39)
Feb 13 19:17:13.304574 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Feb 13 19:17:13.309851 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Feb 13 19:17:13.309866 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Feb 13 19:17:13.309877 systemd[1]: verity-setup.service: Deactivated successfully.
Feb 13 19:17:13.309885 systemd[1]: Stopped verity-setup.service.
Feb 13 19:17:13.309912 systemd-journald[1158]: Collecting audit messages is disabled.
Feb 13 19:17:13.309931 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 13 19:17:13.309941 kernel: loop: module loaded
Feb 13 19:17:13.309948 kernel: ACPI: bus type drm_connector registered
Feb 13 19:17:13.309957 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Feb 13 19:17:13.309964 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Feb 13 19:17:13.309972 systemd[1]: Mounted media.mount - External Media Directory.
Feb 13 19:17:13.309979 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Feb 13 19:17:13.309986 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Feb 13 19:17:13.309994 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Feb 13 19:17:13.310001 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Feb 13 19:17:13.310011 systemd-journald[1158]: Journal started
Feb 13 19:17:13.310027 systemd-journald[1158]: Runtime Journal (/run/log/journal/1c35b49170fb45dd83e0254d38b05cbf) is 4.8M, max 38.6M, 33.8M free.
Feb 13 19:17:13.064449 systemd[1]: Queued start job for default target multi-user.target.
Feb 13 19:17:13.074004 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6.
Feb 13 19:17:13.074263 systemd[1]: systemd-journald.service: Deactivated successfully.
Feb 13 19:17:13.310671 jq[1145]: true
Feb 13 19:17:13.310950 systemd[1]: Started systemd-journald.service - Journal Service.
Feb 13 19:17:13.311543 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Feb 13 19:17:13.312716 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Feb 13 19:17:13.313048 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Feb 13 19:17:13.313202 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Feb 13 19:17:13.313534 systemd[1]: modprobe@drm.service: Deactivated successfully.
Feb 13 19:17:13.313715 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Feb 13 19:17:13.313963 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Feb 13 19:17:13.314080 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Feb 13 19:17:13.314337 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Feb 13 19:17:13.314434 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Feb 13 19:17:13.314747 systemd[1]: modprobe@loop.service: Deactivated successfully.
Feb 13 19:17:13.314854 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Feb 13 19:17:13.315790 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Feb 13 19:17:13.316235 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Feb 13 19:17:13.332303 jq[1177]: true
Feb 13 19:17:13.334663 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Feb 13 19:17:13.337104 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Feb 13 19:17:13.337255 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Feb 13 19:17:13.337278 systemd[1]: Reached target local-fs.target - Local File Systems.
Feb 13 19:17:13.338045 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Feb 13 19:17:13.339772 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Feb 13 19:17:13.341696 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Feb 13 19:17:13.341899 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 19:17:13.348072 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Feb 13 19:17:13.350679 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Feb 13 19:17:13.350825 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 13 19:17:13.351685 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Feb 13 19:17:13.351846 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Feb 13 19:17:13.354253 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Feb 13 19:17:13.358680 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Feb 13 19:17:13.359697 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Feb 13 19:17:13.361636 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Feb 13 19:17:13.362449 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Feb 13 19:17:13.363983 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Feb 13 19:17:13.364744 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Feb 13 19:17:13.365450 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Feb 13 19:17:13.369249 systemd[1]: Reached target network-pre.target - Preparation for Network. Feb 13 19:17:13.387276 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. 
Feb 13 19:17:13.392065 systemd-journald[1158]: Time spent on flushing to /var/log/journal/1c35b49170fb45dd83e0254d38b05cbf is 172.803ms for 1835 entries. Feb 13 19:17:13.392065 systemd-journald[1158]: System Journal (/var/log/journal/1c35b49170fb45dd83e0254d38b05cbf) is 8M, max 584.8M, 576.8M free. Feb 13 19:17:13.593557 systemd-journald[1158]: Received client request to flush runtime journal. Feb 13 19:17:13.593605 kernel: loop0: detected capacity change from 0 to 138176 Feb 13 19:17:13.412696 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Feb 13 19:17:13.412921 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Feb 13 19:17:13.417110 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Feb 13 19:17:13.449846 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Feb 13 19:17:13.468875 systemd-tmpfiles[1214]: ACLs are not supported, ignoring. Feb 13 19:17:13.468884 systemd-tmpfiles[1214]: ACLs are not supported, ignoring. Feb 13 19:17:13.476039 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Feb 13 19:17:13.486712 systemd[1]: Starting systemd-sysusers.service - Create System Users... Feb 13 19:17:13.489136 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Feb 13 19:17:13.493641 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Feb 13 19:17:13.500313 udevadm[1237]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Feb 13 19:17:13.594991 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. 
Feb 13 19:17:13.597554 ignition[1215]: Ignition 2.20.0 Feb 13 19:17:13.597738 ignition[1215]: deleting config from guestinfo properties Feb 13 19:17:13.640635 ignition[1215]: Successfully deleted config Feb 13 19:17:13.642131 systemd[1]: Finished ignition-delete-config.service - Ignition (delete config). Feb 13 19:17:13.673523 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Feb 13 19:17:13.720602 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Feb 13 19:17:13.754601 kernel: loop1: detected capacity change from 0 to 205544 Feb 13 19:17:13.775082 systemd[1]: Finished systemd-sysusers.service - Create System Users. Feb 13 19:17:13.782737 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Feb 13 19:17:13.793043 systemd-tmpfiles[1249]: ACLs are not supported, ignoring. Feb 13 19:17:13.793268 systemd-tmpfiles[1249]: ACLs are not supported, ignoring. Feb 13 19:17:13.796518 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Feb 13 19:17:14.076174 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Feb 13 19:17:14.124612 kernel: loop2: detected capacity change from 0 to 2960 Feb 13 19:17:14.351605 kernel: loop3: detected capacity change from 0 to 147912 Feb 13 19:17:14.476623 kernel: loop4: detected capacity change from 0 to 138176 Feb 13 19:17:14.540356 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Feb 13 19:17:14.545726 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Feb 13 19:17:14.562601 kernel: loop5: detected capacity change from 0 to 205544 Feb 13 19:17:14.563233 systemd-udevd[1258]: Using default interface naming scheme 'v255'. Feb 13 19:17:14.591605 kernel: loop6: detected capacity change from 0 to 2960 Feb 13 19:17:14.609225 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. 
Feb 13 19:17:14.620056 kernel: loop7: detected capacity change from 0 to 147912 Feb 13 19:17:14.618786 systemd[1]: Starting systemd-networkd.service - Network Configuration... Feb 13 19:17:14.636012 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Feb 13 19:17:14.646626 (sd-merge)[1256]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-vmware'. Feb 13 19:17:14.647045 (sd-merge)[1256]: Merged extensions into '/usr'. Feb 13 19:17:14.656567 systemd[1]: Reload requested from client PID 1213 ('systemd-sysext') (unit systemd-sysext.service)... Feb 13 19:17:14.656593 systemd[1]: Reloading... Feb 13 19:17:14.764590 zram_generator::config[1308]: No configuration found. Feb 13 19:17:14.764658 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Feb 13 19:17:14.794548 kernel: ACPI: button: Power Button [PWRF] Feb 13 19:17:14.905232 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 39 scanned by (udev-worker) (1268) Feb 13 19:17:14.931386 systemd-networkd[1260]: lo: Link UP Feb 13 19:17:14.933545 systemd-networkd[1260]: lo: Gained carrier Feb 13 19:17:14.937738 systemd-networkd[1260]: Enumeration completed Feb 13 19:17:14.939044 systemd-networkd[1260]: ens192: Configuring with /etc/systemd/network/00-vmware.network. Feb 13 19:17:14.946146 kernel: vmxnet3 0000:0b:00.0 ens192: intr type 3, mode 0, 3 vectors allocated Feb 13 19:17:14.946320 kernel: vmxnet3 0000:0b:00.0 ens192: NIC Link is Up 10000 Mbps Feb 13 19:17:14.949745 systemd-networkd[1260]: ens192: Link UP Feb 13 19:17:14.949924 systemd-networkd[1260]: ens192: Gained carrier Feb 13 19:17:14.953115 systemd[1]: /etc/systemd/system/coreos-metadata.service:11: Ignoring unknown escape sequences: "echo "COREOS_CUSTOM_PRIVATE_IPV4=$(ip addr show ens192 | grep "inet 10." | grep -Po "inet \K[\d.]+")
Feb 13 19:17:14.985014 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 19:17:15.002598 kernel: piix4_smbus 0000:00:07.3: SMBus Host Controller not enabled! Feb 13 19:17:15.021822 kernel: input: ImPS/2 Generic Wheel Mouse as /devices/platform/i8042/serio1/input/input3 Feb 13 19:17:15.041312 kernel: mousedev: PS/2 mouse device common for all mice Feb 13 19:17:15.041566 (udev-worker)[1276]: id: Truncating stdout of 'dmi_memory_id' up to 16384 byte. Feb 13 19:17:15.071955 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Feb 13 19:17:15.072212 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_disk OEM. Feb 13 19:17:15.072680 systemd[1]: Reloading finished in 415 ms. Feb 13 19:17:15.094000 systemd[1]: Started systemd-userdbd.service - User Database Manager. Feb 13 19:17:15.094347 systemd[1]: Started systemd-networkd.service - Network Configuration. Feb 13 19:17:15.094699 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Feb 13 19:17:15.100871 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Feb 13 19:17:15.121629 systemd[1]: Starting ensure-sysext.service... Feb 13 19:17:15.125727 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Feb 13 19:17:15.127603 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Feb 13 19:17:15.130715 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Feb 13 19:17:15.134932 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Feb 13 19:17:15.137393 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Feb 13 19:17:15.139729 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 19:17:15.156665 systemd[1]: Reload requested from client PID 1379 ('systemctl') (unit ensure-sysext.service)... Feb 13 19:17:15.156682 systemd[1]: Reloading... Feb 13 19:17:15.172594 lvm[1380]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 13 19:17:15.197189 systemd-tmpfiles[1384]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Feb 13 19:17:15.197778 systemd-tmpfiles[1384]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Feb 13 19:17:15.198338 systemd-tmpfiles[1384]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Feb 13 19:17:15.199260 systemd-tmpfiles[1384]: ACLs are not supported, ignoring. Feb 13 19:17:15.199348 systemd-tmpfiles[1384]: ACLs are not supported, ignoring. Feb 13 19:17:15.213589 zram_generator::config[1418]: No configuration found. Feb 13 19:17:15.225805 systemd-tmpfiles[1384]: Detected autofs mount point /boot during canonicalization of boot. Feb 13 19:17:15.225815 systemd-tmpfiles[1384]: Skipping /boot Feb 13 19:17:15.232040 systemd-tmpfiles[1384]: Detected autofs mount point /boot during canonicalization of boot. Feb 13 19:17:15.232048 systemd-tmpfiles[1384]: Skipping /boot Feb 13 19:17:15.285717 systemd[1]: /etc/systemd/system/coreos-metadata.service:11: Ignoring unknown escape sequences: "echo "COREOS_CUSTOM_PRIVATE_IPV4=$(ip addr show ens192 | grep "inet 10." | grep -Po "inet \K[\d.]+") Feb 13 19:17:15.304783 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
Feb 13 19:17:15.365551 systemd[1]: Reloading finished in 208 ms. Feb 13 19:17:15.383866 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Feb 13 19:17:15.384214 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Feb 13 19:17:15.384503 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Feb 13 19:17:15.384819 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Feb 13 19:17:15.390418 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Feb 13 19:17:15.393849 systemd[1]: Starting audit-rules.service - Load Audit Rules... Feb 13 19:17:15.399765 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Feb 13 19:17:15.404679 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Feb 13 19:17:15.406873 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Feb 13 19:17:15.410925 lvm[1485]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 13 19:17:15.414769 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Feb 13 19:17:15.420760 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Feb 13 19:17:15.422652 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 13 19:17:15.424069 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 13 19:17:15.428042 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Feb 13 19:17:15.430121 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Feb 13 19:17:15.431627 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. 
Feb 13 19:17:15.432761 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Feb 13 19:17:15.432826 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 13 19:17:15.433521 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 13 19:17:15.433658 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 13 19:17:15.440045 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 13 19:17:15.440182 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Feb 13 19:17:15.440552 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 13 19:17:15.440676 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Feb 13 19:17:15.446932 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 13 19:17:15.453755 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 13 19:17:15.455437 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Feb 13 19:17:15.458526 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Feb 13 19:17:15.458951 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 19:17:15.459028 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). 
Feb 13 19:17:15.459094 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 13 19:17:15.460305 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Feb 13 19:17:15.461157 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Feb 13 19:17:15.464247 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 13 19:17:15.468055 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 13 19:17:15.477188 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 13 19:17:15.477809 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Feb 13 19:17:15.478233 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 13 19:17:15.478607 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Feb 13 19:17:15.480545 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 13 19:17:15.481407 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 13 19:17:15.482674 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Feb 13 19:17:15.482867 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 19:17:15.482894 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Feb 13 19:17:15.482927 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). 
Feb 13 19:17:15.482966 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 13 19:17:15.484622 systemd[1]: Finished ensure-sysext.service. Feb 13 19:17:15.486037 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 13 19:17:15.486158 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 13 19:17:15.486690 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Feb 13 19:17:15.488708 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Feb 13 19:17:15.489017 systemd[1]: modprobe@drm.service: Deactivated successfully. Feb 13 19:17:15.489239 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Feb 13 19:17:15.553273 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Feb 13 19:17:15.553634 systemd[1]: Reached target time-set.target - System Time Set. Feb 13 19:17:15.565801 systemd-resolved[1487]: Positive Trust Anchors: Feb 13 19:17:15.565813 systemd-resolved[1487]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 13 19:17:15.565838 systemd-resolved[1487]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Feb 13 19:17:15.596571 systemd-resolved[1487]: Defaulting to hostname 'linux'. Feb 13 19:17:15.597889 systemd[1]: Started systemd-resolved.service - Network Name Resolution. 
Feb 13 19:17:15.598051 systemd[1]: Reached target network.target - Network. Feb 13 19:17:15.598127 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Feb 13 19:17:15.603440 augenrules[1530]: No rules Feb 13 19:17:15.604173 systemd[1]: audit-rules.service: Deactivated successfully. Feb 13 19:17:15.604312 systemd[1]: Finished audit-rules.service - Load Audit Rules. Feb 13 19:17:15.641672 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Feb 13 19:17:15.688619 ldconfig[1204]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Feb 13 19:17:15.695272 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Feb 13 19:17:15.698235 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 19:17:15.703017 systemd[1]: Starting systemd-update-done.service - Update is Completed... Feb 13 19:17:15.711154 systemd[1]: Finished systemd-update-done.service - Update is Completed. Feb 13 19:17:15.738082 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Feb 13 19:17:15.738355 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Feb 13 19:17:15.738381 systemd[1]: Reached target sysinit.target - System Initialization. Feb 13 19:17:15.738543 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Feb 13 19:17:15.738689 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Feb 13 19:17:15.738917 systemd[1]: Started logrotate.timer - Daily rotation of log files. Feb 13 19:17:15.739100 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. 
Feb 13 19:17:15.739248 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Feb 13 19:17:15.739391 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Feb 13 19:17:15.739419 systemd[1]: Reached target paths.target - Path Units. Feb 13 19:17:15.739533 systemd[1]: Reached target timers.target - Timer Units. Feb 13 19:17:15.740968 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Feb 13 19:17:15.742774 systemd[1]: Starting docker.socket - Docker Socket for the API... Feb 13 19:17:15.744683 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Feb 13 19:17:15.744949 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Feb 13 19:17:15.745081 systemd[1]: Reached target ssh-access.target - SSH Access Available. Feb 13 19:17:15.746910 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Feb 13 19:17:15.747307 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Feb 13 19:17:15.747961 systemd[1]: Listening on docker.socket - Docker Socket for the API. Feb 13 19:17:15.748127 systemd[1]: Reached target sockets.target - Socket Units. Feb 13 19:17:15.748232 systemd[1]: Reached target basic.target - Basic System. Feb 13 19:17:15.748364 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Feb 13 19:17:15.748384 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Feb 13 19:17:15.749301 systemd[1]: Starting containerd.service - containerd container runtime... Feb 13 19:17:15.751733 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Feb 13 19:17:15.754159 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... 
Feb 13 19:17:15.761818 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Feb 13 19:17:15.762184 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Feb 13 19:17:15.763899 jq[1548]: false Feb 13 19:17:15.763964 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Feb 13 19:17:15.766741 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Feb 13 19:17:15.769712 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Feb 13 19:17:15.775563 systemd[1]: Starting systemd-logind.service - User Login Management... Feb 13 19:17:15.776242 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Feb 13 19:17:15.777001 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Feb 13 19:17:15.783716 systemd[1]: Starting update-engine.service - Update Engine... Feb 13 19:17:15.785700 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Feb 13 19:17:15.789118 systemd[1]: Starting vgauthd.service - VGAuth Service for open-vm-tools... Feb 13 19:17:15.790843 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Feb 13 19:17:15.790989 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Feb 13 19:17:15.791205 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Feb 13 19:17:15.791573 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. 
Feb 13 19:17:15.799610 jq[1558]: true Feb 13 19:17:15.812315 update_engine[1556]: I20250213 19:17:15.812230 1556 main.cc:92] Flatcar Update Engine starting Feb 13 19:17:15.817553 jq[1569]: true Feb 13 19:17:15.816316 (ntainerd)[1572]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Feb 13 19:17:15.817901 extend-filesystems[1549]: Found loop4 Feb 13 19:17:15.818098 extend-filesystems[1549]: Found loop5 Feb 13 19:17:15.818098 extend-filesystems[1549]: Found loop6 Feb 13 19:17:15.818098 extend-filesystems[1549]: Found loop7 Feb 13 19:17:15.818098 extend-filesystems[1549]: Found sda Feb 13 19:17:15.818098 extend-filesystems[1549]: Found sda1 Feb 13 19:17:15.818098 extend-filesystems[1549]: Found sda2 Feb 13 19:17:15.818098 extend-filesystems[1549]: Found sda3 Feb 13 19:17:15.818098 extend-filesystems[1549]: Found usr Feb 13 19:17:15.818098 extend-filesystems[1549]: Found sda4 Feb 13 19:17:15.818098 extend-filesystems[1549]: Found sda6 Feb 13 19:17:15.818098 extend-filesystems[1549]: Found sda7 Feb 13 19:17:15.818098 extend-filesystems[1549]: Found sda9 Feb 13 19:17:15.822256 extend-filesystems[1549]: Checking size of /dev/sda9 Feb 13 19:17:15.819265 systemd[1]: motdgen.service: Deactivated successfully. Feb 13 19:17:15.819631 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Feb 13 19:17:15.829677 systemd[1]: Started vgauthd.service - VGAuth Service for open-vm-tools. Feb 13 19:17:15.838718 systemd[1]: Starting vmtoolsd.service - Service for virtual machines hosted on VMware... Feb 13 19:18:30.449556 systemd-resolved[1487]: Clock change detected. Flushing caches. Feb 13 19:18:30.449580 systemd-timesyncd[1521]: Contacted time server 155.248.202.205:123 (0.flatcar.pool.ntp.org). Feb 13 19:18:30.449606 systemd-timesyncd[1521]: Initial clock synchronization to Thu 2025-02-13 19:18:30.449524 UTC. 
Feb 13 19:18:30.454204 systemd[1]: Started vmtoolsd.service - Service for virtual machines hosted on VMware. Feb 13 19:18:30.458901 systemd-logind[1554]: Watching system buttons on /dev/input/event1 (Power Button) Feb 13 19:18:30.458916 systemd-logind[1554]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Feb 13 19:18:30.460302 systemd-logind[1554]: New seat seat0. Feb 13 19:18:30.462424 extend-filesystems[1549]: Old size kept for /dev/sda9 Feb 13 19:18:30.464375 extend-filesystems[1549]: Found sr0 Feb 13 19:18:30.463075 systemd[1]: extend-filesystems.service: Deactivated successfully. Feb 13 19:18:30.463318 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Feb 13 19:18:30.464952 systemd[1]: Started systemd-logind.service - User Login Management. Feb 13 19:18:30.474064 unknown[1579]: Pref_Init: Using '/etc/vmware-tools/vgauth.conf' as preferences filepath Feb 13 19:18:30.477617 unknown[1579]: Core dump limit set to -1 Feb 13 19:18:30.528737 sshd_keygen[1574]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Feb 13 19:18:31.100246 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 39 scanned by (udev-worker) (1261) Feb 13 19:18:31.102541 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Feb 13 19:18:31.117337 systemd[1]: Starting issuegen.service - Generate /run/issue... Feb 13 19:18:31.121070 systemd[1]: issuegen.service: Deactivated successfully. Feb 13 19:18:31.128402 update_engine[1556]: I20250213 19:18:31.123405 1556 update_check_scheduler.cc:74] Next update check in 3m20s Feb 13 19:18:31.121386 dbus-daemon[1547]: [system] SELinux support is enabled Feb 13 19:18:31.121230 systemd[1]: Finished issuegen.service - Generate /run/issue. Feb 13 19:18:31.126093 dbus-daemon[1547]: [system] Successfully activated service 'org.freedesktop.systemd1' Feb 13 19:18:31.121786 systemd[1]: Started dbus.service - D-Bus System Message Bus. 
Feb 13 19:18:31.124181 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Feb 13 19:18:31.124199 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Feb 13 19:18:31.131718 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Feb 13 19:18:31.131876 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Feb 13 19:18:31.131895 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Feb 13 19:18:31.132155 systemd[1]: Started update-engine.service - Update Engine. Feb 13 19:18:31.135288 systemd[1]: Started locksmithd.service - Cluster reboot manager. Feb 13 19:18:31.153252 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Feb 13 19:18:31.158019 systemd[1]: Started getty@tty1.service - Getty on tty1. Feb 13 19:18:31.163329 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Feb 13 19:18:31.163562 systemd[1]: Reached target getty.target - Login Prompts. Feb 13 19:18:31.200513 bash[1596]: Updated "/home/core/.ssh/authorized_keys" Feb 13 19:18:31.201362 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Feb 13 19:18:31.202586 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Feb 13 19:18:31.213469 systemd-networkd[1260]: ens192: Gained IPv6LL Feb 13 19:18:31.217775 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Feb 13 19:18:31.218573 systemd[1]: Reached target network-online.target - Network is Online. Feb 13 19:18:31.227373 systemd[1]: Starting coreos-metadata.service - VMware metadata agent... 
Feb 13 19:18:31.235503 locksmithd[1627]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Feb 13 19:18:31.247277 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Feb 13 19:18:31.248610 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Feb 13 19:18:31.300034 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Feb 13 19:18:31.316193 systemd[1]: coreos-metadata.service: Deactivated successfully.
Feb 13 19:18:31.316884 systemd[1]: Finished coreos-metadata.service - VMware metadata agent.
Feb 13 19:18:31.317509 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Feb 13 19:18:31.505515 containerd[1572]: time="2025-02-13T19:18:31.505448226Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23
Feb 13 19:18:31.518590 containerd[1572]: time="2025-02-13T19:18:31.518565196Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Feb 13 19:18:31.520073 containerd[1572]: time="2025-02-13T19:18:31.519424654Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.74-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Feb 13 19:18:31.520073 containerd[1572]: time="2025-02-13T19:18:31.519442203Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Feb 13 19:18:31.520073 containerd[1572]: time="2025-02-13T19:18:31.519452243Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Feb 13 19:18:31.520073 containerd[1572]: time="2025-02-13T19:18:31.519538624Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Feb 13 19:18:31.520073 containerd[1572]: time="2025-02-13T19:18:31.519548173Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Feb 13 19:18:31.520073 containerd[1572]: time="2025-02-13T19:18:31.519612156Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Feb 13 19:18:31.520073 containerd[1572]: time="2025-02-13T19:18:31.519620791Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Feb 13 19:18:31.520073 containerd[1572]: time="2025-02-13T19:18:31.519731282Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Feb 13 19:18:31.520073 containerd[1572]: time="2025-02-13T19:18:31.519739460Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Feb 13 19:18:31.520073 containerd[1572]: time="2025-02-13T19:18:31.519746564Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Feb 13 19:18:31.520073 containerd[1572]: time="2025-02-13T19:18:31.519751556Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Feb 13 19:18:31.520268 containerd[1572]: time="2025-02-13T19:18:31.519790413Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Feb 13 19:18:31.520268 containerd[1572]: time="2025-02-13T19:18:31.519900016Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Feb 13 19:18:31.520268 containerd[1572]: time="2025-02-13T19:18:31.519964408Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Feb 13 19:18:31.520268 containerd[1572]: time="2025-02-13T19:18:31.519971936Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Feb 13 19:18:31.520268 containerd[1572]: time="2025-02-13T19:18:31.520016778Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Feb 13 19:18:31.520268 containerd[1572]: time="2025-02-13T19:18:31.520046293Z" level=info msg="metadata content store policy set" policy=shared
Feb 13 19:18:31.544328 containerd[1572]: time="2025-02-13T19:18:31.544296354Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Feb 13 19:18:31.544449 containerd[1572]: time="2025-02-13T19:18:31.544347023Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Feb 13 19:18:31.544449 containerd[1572]: time="2025-02-13T19:18:31.544382217Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Feb 13 19:18:31.544449 containerd[1572]: time="2025-02-13T19:18:31.544399390Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Feb 13 19:18:31.544449 containerd[1572]: time="2025-02-13T19:18:31.544410342Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Feb 13 19:18:31.544536 containerd[1572]: time="2025-02-13T19:18:31.544522162Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Feb 13 19:18:31.544721 containerd[1572]: time="2025-02-13T19:18:31.544708837Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Feb 13 19:18:31.544800 containerd[1572]: time="2025-02-13T19:18:31.544786718Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Feb 13 19:18:31.544827 containerd[1572]: time="2025-02-13T19:18:31.544801498Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Feb 13 19:18:31.544827 containerd[1572]: time="2025-02-13T19:18:31.544813687Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Feb 13 19:18:31.544827 containerd[1572]: time="2025-02-13T19:18:31.544823971Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Feb 13 19:18:31.544890 containerd[1572]: time="2025-02-13T19:18:31.544833144Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Feb 13 19:18:31.544890 containerd[1572]: time="2025-02-13T19:18:31.544841486Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Feb 13 19:18:31.544890 containerd[1572]: time="2025-02-13T19:18:31.544857472Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Feb 13 19:18:31.544890 containerd[1572]: time="2025-02-13T19:18:31.544869127Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Feb 13 19:18:31.544890 containerd[1572]: time="2025-02-13T19:18:31.544879491Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Feb 13 19:18:31.544890 containerd[1572]: time="2025-02-13T19:18:31.544888114Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Feb 13 19:18:31.544995 containerd[1572]: time="2025-02-13T19:18:31.544895998Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Feb 13 19:18:31.544995 containerd[1572]: time="2025-02-13T19:18:31.544910157Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Feb 13 19:18:31.544995 containerd[1572]: time="2025-02-13T19:18:31.544919364Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Feb 13 19:18:31.544995 containerd[1572]: time="2025-02-13T19:18:31.544928414Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Feb 13 19:18:31.544995 containerd[1572]: time="2025-02-13T19:18:31.544938001Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Feb 13 19:18:31.544995 containerd[1572]: time="2025-02-13T19:18:31.544946715Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Feb 13 19:18:31.544995 containerd[1572]: time="2025-02-13T19:18:31.544955566Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Feb 13 19:18:31.544995 containerd[1572]: time="2025-02-13T19:18:31.544964334Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Feb 13 19:18:31.544995 containerd[1572]: time="2025-02-13T19:18:31.544973209Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Feb 13 19:18:31.544995 containerd[1572]: time="2025-02-13T19:18:31.544982546Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Feb 13 19:18:31.544995 containerd[1572]: time="2025-02-13T19:18:31.544992680Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Feb 13 19:18:31.545282 containerd[1572]: time="2025-02-13T19:18:31.545000685Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Feb 13 19:18:31.545282 containerd[1572]: time="2025-02-13T19:18:31.545009279Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Feb 13 19:18:31.545282 containerd[1572]: time="2025-02-13T19:18:31.545017327Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Feb 13 19:18:31.545282 containerd[1572]: time="2025-02-13T19:18:31.545026617Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Feb 13 19:18:31.545282 containerd[1572]: time="2025-02-13T19:18:31.545040366Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Feb 13 19:18:31.545282 containerd[1572]: time="2025-02-13T19:18:31.545049435Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Feb 13 19:18:31.545282 containerd[1572]: time="2025-02-13T19:18:31.545057430Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Feb 13 19:18:31.545282 containerd[1572]: time="2025-02-13T19:18:31.545090312Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Feb 13 19:18:31.545282 containerd[1572]: time="2025-02-13T19:18:31.545104469Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Feb 13 19:18:31.545282 containerd[1572]: time="2025-02-13T19:18:31.545112301Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Feb 13 19:18:31.545282 containerd[1572]: time="2025-02-13T19:18:31.545142749Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Feb 13 19:18:31.545282 containerd[1572]: time="2025-02-13T19:18:31.545150296Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Feb 13 19:18:31.545282 containerd[1572]: time="2025-02-13T19:18:31.545166553Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Feb 13 19:18:31.545282 containerd[1572]: time="2025-02-13T19:18:31.545174406Z" level=info msg="NRI interface is disabled by configuration."
Feb 13 19:18:31.546689 containerd[1572]: time="2025-02-13T19:18:31.545182544Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Feb 13 19:18:31.546720 containerd[1572]: time="2025-02-13T19:18:31.545388942Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Feb 13 19:18:31.546720 containerd[1572]: time="2025-02-13T19:18:31.545423667Z" level=info msg="Connect containerd service"
Feb 13 19:18:31.546720 containerd[1572]: time="2025-02-13T19:18:31.545444910Z" level=info msg="using legacy CRI server"
Feb 13 19:18:31.546720 containerd[1572]: time="2025-02-13T19:18:31.545450299Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Feb 13 19:18:31.546720 containerd[1572]: time="2025-02-13T19:18:31.545608435Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Feb 13 19:18:31.546720 containerd[1572]: time="2025-02-13T19:18:31.546202838Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Feb 13 19:18:31.546720 containerd[1572]: time="2025-02-13T19:18:31.546664789Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Feb 13 19:18:31.546720 containerd[1572]: time="2025-02-13T19:18:31.546701128Z" level=info msg=serving... address=/run/containerd/containerd.sock
Feb 13 19:18:31.546961 containerd[1572]: time="2025-02-13T19:18:31.546769522Z" level=info msg="Start subscribing containerd event"
Feb 13 19:18:31.546961 containerd[1572]: time="2025-02-13T19:18:31.546800819Z" level=info msg="Start recovering state"
Feb 13 19:18:31.546961 containerd[1572]: time="2025-02-13T19:18:31.546863458Z" level=info msg="Start event monitor"
Feb 13 19:18:31.546961 containerd[1572]: time="2025-02-13T19:18:31.546878174Z" level=info msg="Start snapshots syncer"
Feb 13 19:18:31.546961 containerd[1572]: time="2025-02-13T19:18:31.546885259Z" level=info msg="Start cni network conf syncer for default"
Feb 13 19:18:31.546961 containerd[1572]: time="2025-02-13T19:18:31.546911914Z" level=info msg="Start streaming server"
Feb 13 19:18:31.547021 systemd[1]: Started containerd.service - containerd container runtime.
Feb 13 19:18:31.548019 containerd[1572]: time="2025-02-13T19:18:31.547737226Z" level=info msg="containerd successfully booted in 0.042788s"
Feb 13 19:18:32.638202 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Feb 13 19:18:32.638618 systemd[1]: Reached target multi-user.target - Multi-User System.
Feb 13 19:18:32.640515 systemd[1]: Startup finished in 1.027s (kernel) + 6.825s (initrd) + 6.625s (userspace) = 14.478s.
Feb 13 19:18:32.644809 (kubelet)[1716]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Feb 13 19:18:32.685184 login[1636]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0)
Feb 13 19:18:32.686714 login[1637]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0)
Feb 13 19:18:32.693128 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Feb 13 19:18:32.703315 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Feb 13 19:18:32.707256 systemd-logind[1554]: New session 2 of user core.
Feb 13 19:18:32.710127 systemd-logind[1554]: New session 1 of user core.
Feb 13 19:18:32.714225 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Feb 13 19:18:32.721420 systemd[1]: Starting user@500.service - User Manager for UID 500...
Feb 13 19:18:32.723753 (systemd)[1723]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Feb 13 19:18:32.725440 systemd-logind[1554]: New session c1 of user core.
Feb 13 19:18:32.829983 systemd[1723]: Queued start job for default target default.target.
Feb 13 19:18:32.839063 systemd[1723]: Created slice app.slice - User Application Slice.
Feb 13 19:18:32.839163 systemd[1723]: Reached target paths.target - Paths.
Feb 13 19:18:32.839276 systemd[1723]: Reached target timers.target - Timers.
Feb 13 19:18:32.840143 systemd[1723]: Starting dbus.socket - D-Bus User Message Bus Socket...
Feb 13 19:18:32.849332 systemd[1723]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Feb 13 19:18:32.849375 systemd[1723]: Reached target sockets.target - Sockets.
Feb 13 19:18:32.849403 systemd[1723]: Reached target basic.target - Basic System.
Feb 13 19:18:32.849427 systemd[1723]: Reached target default.target - Main User Target.
Feb 13 19:18:32.849444 systemd[1723]: Startup finished in 119ms.
Feb 13 19:18:32.849550 systemd[1]: Started user@500.service - User Manager for UID 500.
Feb 13 19:18:32.852219 systemd[1]: Started session-1.scope - Session 1 of User core.
Feb 13 19:18:32.853015 systemd[1]: Started session-2.scope - Session 2 of User core.
Feb 13 19:18:33.243200 kubelet[1716]: E0213 19:18:33.243136    1716 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Feb 13 19:18:33.244518 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Feb 13 19:18:33.244613 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Feb 13 19:18:33.244833 systemd[1]: kubelet.service: Consumed 624ms CPU time, 235.9M memory peak.
Feb 13 19:18:43.279733 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Feb 13 19:18:43.289282 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Feb 13 19:18:43.496208 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Feb 13 19:18:43.499134 (kubelet)[1767]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Feb 13 19:18:43.583318 kubelet[1767]: E0213 19:18:43.583214    1767 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Feb 13 19:18:43.585628 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Feb 13 19:18:43.585721 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Feb 13 19:18:43.586014 systemd[1]: kubelet.service: Consumed 90ms CPU time, 98.3M memory peak.
Feb 13 19:18:53.779721 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Feb 13 19:18:53.790318 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Feb 13 19:18:53.853410 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Feb 13 19:18:53.855729 (kubelet)[1781]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Feb 13 19:18:53.889307 kubelet[1781]: E0213 19:18:53.889252    1781 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Feb 13 19:18:53.890799 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Feb 13 19:18:53.890937 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Feb 13 19:18:53.891162 systemd[1]: kubelet.service: Consumed 76ms CPU time, 97.8M memory peak.
Feb 13 19:19:04.029663 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.
Feb 13 19:19:04.042281 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Feb 13 19:19:04.242291 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Feb 13 19:19:04.254290 (kubelet)[1796]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Feb 13 19:19:04.281209 kubelet[1796]: E0213 19:19:04.281130    1796 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Feb 13 19:19:04.282583 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Feb 13 19:19:04.282677 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Feb 13 19:19:04.282964 systemd[1]: kubelet.service: Consumed 77ms CPU time, 97.5M memory peak.
Feb 13 19:19:10.545110 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Feb 13 19:19:10.546261 systemd[1]: Started sshd@0-139.178.70.110:22-139.178.89.65:44874.service - OpenSSH per-connection server daemon (139.178.89.65:44874).
Feb 13 19:19:10.628207 sshd[1804]: Accepted publickey for core from 139.178.89.65 port 44874 ssh2: RSA SHA256:NL/G37P9/eR99zDJKW+V9taUH0wkZ8ddZwzfBGT7QcM
Feb 13 19:19:10.628963 sshd-session[1804]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:19:10.632715 systemd-logind[1554]: New session 3 of user core.
Feb 13 19:19:10.638239 systemd[1]: Started session-3.scope - Session 3 of User core.
Feb 13 19:19:10.691279 systemd[1]: Started sshd@1-139.178.70.110:22-139.178.89.65:44882.service - OpenSSH per-connection server daemon (139.178.89.65:44882).
Feb 13 19:19:10.723460 sshd[1809]: Accepted publickey for core from 139.178.89.65 port 44882 ssh2: RSA SHA256:NL/G37P9/eR99zDJKW+V9taUH0wkZ8ddZwzfBGT7QcM
Feb 13 19:19:10.724533 sshd-session[1809]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:19:10.727621 systemd-logind[1554]: New session 4 of user core.
Feb 13 19:19:10.732219 systemd[1]: Started session-4.scope - Session 4 of User core.
Feb 13 19:19:10.782352 sshd[1811]: Connection closed by 139.178.89.65 port 44882
Feb 13 19:19:10.782603 sshd-session[1809]: pam_unix(sshd:session): session closed for user core
Feb 13 19:19:10.794248 systemd[1]: sshd@1-139.178.70.110:22-139.178.89.65:44882.service: Deactivated successfully.
Feb 13 19:19:10.795485 systemd[1]: session-4.scope: Deactivated successfully.
Feb 13 19:19:10.796661 systemd-logind[1554]: Session 4 logged out. Waiting for processes to exit.
Feb 13 19:19:10.800328 systemd[1]: Started sshd@2-139.178.70.110:22-139.178.89.65:44898.service - OpenSSH per-connection server daemon (139.178.89.65:44898).
Feb 13 19:19:10.801794 systemd-logind[1554]: Removed session 4.
Feb 13 19:19:10.832549 sshd[1816]: Accepted publickey for core from 139.178.89.65 port 44898 ssh2: RSA SHA256:NL/G37P9/eR99zDJKW+V9taUH0wkZ8ddZwzfBGT7QcM
Feb 13 19:19:10.833535 sshd-session[1816]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:19:10.838175 systemd-logind[1554]: New session 5 of user core.
Feb 13 19:19:10.844281 systemd[1]: Started session-5.scope - Session 5 of User core.
Feb 13 19:19:10.890105 sshd[1819]: Connection closed by 139.178.89.65 port 44898
Feb 13 19:19:10.890960 sshd-session[1816]: pam_unix(sshd:session): session closed for user core
Feb 13 19:19:10.899357 systemd[1]: Started sshd@3-139.178.70.110:22-139.178.89.65:44908.service - OpenSSH per-connection server daemon (139.178.89.65:44908).
Feb 13 19:19:10.899704 systemd[1]: sshd@2-139.178.70.110:22-139.178.89.65:44898.service: Deactivated successfully.
Feb 13 19:19:10.900509 systemd[1]: session-5.scope: Deactivated successfully.
Feb 13 19:19:10.901723 systemd-logind[1554]: Session 5 logged out. Waiting for processes to exit.
Feb 13 19:19:10.902398 systemd-logind[1554]: Removed session 5.
Feb 13 19:19:10.933447 sshd[1822]: Accepted publickey for core from 139.178.89.65 port 44908 ssh2: RSA SHA256:NL/G37P9/eR99zDJKW+V9taUH0wkZ8ddZwzfBGT7QcM
Feb 13 19:19:10.934167 sshd-session[1822]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:19:10.936980 systemd-logind[1554]: New session 6 of user core.
Feb 13 19:19:10.943276 systemd[1]: Started session-6.scope - Session 6 of User core.
Feb 13 19:19:10.992131 sshd[1827]: Connection closed by 139.178.89.65 port 44908
Feb 13 19:19:10.992425 sshd-session[1822]: pam_unix(sshd:session): session closed for user core
Feb 13 19:19:11.001204 systemd[1]: sshd@3-139.178.70.110:22-139.178.89.65:44908.service: Deactivated successfully.
Feb 13 19:19:11.002174 systemd[1]: session-6.scope: Deactivated successfully.
Feb 13 19:19:11.002674 systemd-logind[1554]: Session 6 logged out. Waiting for processes to exit.
Feb 13 19:19:11.006639 systemd[1]: Started sshd@4-139.178.70.110:22-139.178.89.65:44918.service - OpenSSH per-connection server daemon (139.178.89.65:44918).
Feb 13 19:19:11.008371 systemd-logind[1554]: Removed session 6.
Feb 13 19:19:11.036148 sshd[1832]: Accepted publickey for core from 139.178.89.65 port 44918 ssh2: RSA SHA256:NL/G37P9/eR99zDJKW+V9taUH0wkZ8ddZwzfBGT7QcM
Feb 13 19:19:11.037044 sshd-session[1832]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:19:11.040767 systemd-logind[1554]: New session 7 of user core.
Feb 13 19:19:11.051294 systemd[1]: Started session-7.scope - Session 7 of User core.
Feb 13 19:19:11.169222 sudo[1836]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Feb 13 19:19:11.169411 sudo[1836]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Feb 13 19:19:11.188672 sudo[1836]: pam_unix(sudo:session): session closed for user root
Feb 13 19:19:11.189478 sshd[1835]: Connection closed by 139.178.89.65 port 44918
Feb 13 19:19:11.189798 sshd-session[1832]: pam_unix(sshd:session): session closed for user core
Feb 13 19:19:11.199085 systemd[1]: sshd@4-139.178.70.110:22-139.178.89.65:44918.service: Deactivated successfully.
Feb 13 19:19:11.199945 systemd[1]: session-7.scope: Deactivated successfully.
Feb 13 19:19:11.200422 systemd-logind[1554]: Session 7 logged out. Waiting for processes to exit.
Feb 13 19:19:11.205342 systemd[1]: Started sshd@5-139.178.70.110:22-139.178.89.65:44926.service - OpenSSH per-connection server daemon (139.178.89.65:44926).
Feb 13 19:19:11.207231 systemd-logind[1554]: Removed session 7.
Feb 13 19:19:11.235645 sshd[1841]: Accepted publickey for core from 139.178.89.65 port 44926 ssh2: RSA SHA256:NL/G37P9/eR99zDJKW+V9taUH0wkZ8ddZwzfBGT7QcM
Feb 13 19:19:11.236446 sshd-session[1841]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:19:11.238982 systemd-logind[1554]: New session 8 of user core.
Feb 13 19:19:11.248233 systemd[1]: Started session-8.scope - Session 8 of User core.
Feb 13 19:19:11.297509 sudo[1846]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Feb 13 19:19:11.297673 sudo[1846]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Feb 13 19:19:11.299659 sudo[1846]: pam_unix(sudo:session): session closed for user root
Feb 13 19:19:11.302727 sudo[1845]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
Feb 13 19:19:11.303340 sudo[1845]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Feb 13 19:19:11.311357 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Feb 13 19:19:11.327809 augenrules[1868]: No rules
Feb 13 19:19:11.328631 systemd[1]: audit-rules.service: Deactivated successfully.
Feb 13 19:19:11.328768 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Feb 13 19:19:11.329367 sudo[1845]: pam_unix(sudo:session): session closed for user root
Feb 13 19:19:11.330859 sshd[1844]: Connection closed by 139.178.89.65 port 44926
Feb 13 19:19:11.331027 sshd-session[1841]: pam_unix(sshd:session): session closed for user core
Feb 13 19:19:11.335897 systemd[1]: sshd@5-139.178.70.110:22-139.178.89.65:44926.service: Deactivated successfully.
Feb 13 19:19:11.336744 systemd[1]: session-8.scope: Deactivated successfully.
Feb 13 19:19:11.337267 systemd-logind[1554]: Session 8 logged out. Waiting for processes to exit.
Feb 13 19:19:11.342286 systemd[1]: Started sshd@6-139.178.70.110:22-139.178.89.65:44928.service - OpenSSH per-connection server daemon (139.178.89.65:44928).
Feb 13 19:19:11.343413 systemd-logind[1554]: Removed session 8.
Feb 13 19:19:11.371156 sshd[1876]: Accepted publickey for core from 139.178.89.65 port 44928 ssh2: RSA SHA256:NL/G37P9/eR99zDJKW+V9taUH0wkZ8ddZwzfBGT7QcM
Feb 13 19:19:11.371925 sshd-session[1876]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 19:19:11.374670 systemd-logind[1554]: New session 9 of user core.
Feb 13 19:19:11.381289 systemd[1]: Started session-9.scope - Session 9 of User core.
Feb 13 19:19:11.429101 sudo[1880]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Feb 13 19:19:11.429282 sudo[1880]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Feb 13 19:19:11.444386 systemd[1]: Starting coreos-metadata.service - VMware metadata agent...
Feb 13 19:19:11.460167 systemd[1]: coreos-metadata.service: Deactivated successfully.
Feb 13 19:19:11.460312 systemd[1]: Finished coreos-metadata.service - VMware metadata agent.
Feb 13 19:19:11.885830 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Feb 13 19:19:11.885935 systemd[1]: kubelet.service: Consumed 77ms CPU time, 97.5M memory peak.
Feb 13 19:19:11.891275 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Feb 13 19:19:11.909248 systemd[1]: Reload requested from client PID 1926 ('systemctl') (unit session-9.scope)...
Feb 13 19:19:11.909259 systemd[1]: Reloading...
Feb 13 19:19:11.981198 zram_generator::config[1977]: No configuration found.
Feb 13 19:19:12.042710 systemd[1]: /etc/systemd/system/coreos-metadata.service:11: Ignoring unknown escape sequences: "echo "COREOS_CUSTOM_PRIVATE_IPV4=$(ip addr show ens192 | grep "inet 10." | grep -Po "inet \K[\d.]+")
Feb 13 19:19:12.060712 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb 13 19:19:12.125221 systemd[1]: Reloading finished in 215 ms.
Feb 13 19:19:12.161220 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Feb 13 19:19:12.162009 (kubelet)[2030]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Feb 13 19:19:12.167372 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Feb 13 19:19:12.169499 systemd[1]: kubelet.service: Deactivated successfully.
Feb 13 19:19:12.169641 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Feb 13 19:19:12.169676 systemd[1]: kubelet.service: Consumed 53ms CPU time, 85.4M memory peak.
Feb 13 19:19:12.177371 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Feb 13 19:19:12.541554 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Feb 13 19:19:12.545127 (kubelet)[2047]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Feb 13 19:19:12.583732 kubelet[2047]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 13 19:19:12.583732 kubelet[2047]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Feb 13 19:19:12.583732 kubelet[2047]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 13 19:19:12.583990 kubelet[2047]: I0213 19:19:12.583774 2047 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 13 19:19:12.890340 kubelet[2047]: I0213 19:19:12.890280 2047 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" Feb 13 19:19:12.890340 kubelet[2047]: I0213 19:19:12.890299 2047 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 13 19:19:12.890619 kubelet[2047]: I0213 19:19:12.890592 2047 server.go:929] "Client rotation is on, will bootstrap in background" Feb 13 19:19:12.907629 kubelet[2047]: I0213 19:19:12.907547 2047 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 13 19:19:12.920520 kubelet[2047]: E0213 19:19:12.920491 2047 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Feb 13 19:19:12.920520 kubelet[2047]: I0213 19:19:12.920518 2047 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Feb 13 19:19:12.923702 kubelet[2047]: I0213 19:19:12.923680 2047 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Feb 13 19:19:12.925679 kubelet[2047]: I0213 19:19:12.925641 2047 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Feb 13 19:19:12.925847 kubelet[2047]: I0213 19:19:12.925821 2047 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 13 19:19:12.925971 kubelet[2047]: I0213 19:19:12.925847 2047 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"10.67.124.142","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPol
icyOptions":null,"CgroupVersion":2} Feb 13 19:19:12.926050 kubelet[2047]: I0213 19:19:12.925979 2047 topology_manager.go:138] "Creating topology manager with none policy" Feb 13 19:19:12.926050 kubelet[2047]: I0213 19:19:12.925984 2047 container_manager_linux.go:300] "Creating device plugin manager" Feb 13 19:19:12.926084 kubelet[2047]: I0213 19:19:12.926063 2047 state_mem.go:36] "Initialized new in-memory state store" Feb 13 19:19:12.936720 kubelet[2047]: I0213 19:19:12.936671 2047 kubelet.go:408] "Attempting to sync node with API server" Feb 13 19:19:12.936720 kubelet[2047]: I0213 19:19:12.936717 2047 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 13 19:19:12.936882 kubelet[2047]: I0213 19:19:12.936744 2047 kubelet.go:314] "Adding apiserver pod source" Feb 13 19:19:12.936882 kubelet[2047]: I0213 19:19:12.936759 2047 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 13 19:19:12.943646 kubelet[2047]: E0213 19:19:12.943456 2047 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:19:12.943646 kubelet[2047]: E0213 19:19:12.943498 2047 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:19:12.944068 kubelet[2047]: I0213 19:19:12.943936 2047 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Feb 13 19:19:12.945679 kubelet[2047]: I0213 19:19:12.945650 2047 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Feb 13 19:19:12.946486 kubelet[2047]: W0213 19:19:12.946469 2047 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
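The `HardEvictionThresholds` in the NodeConfig dump above encode the kubelet defaults: memory.available < 100Mi, nodefs.available < 10%, nodefs.inodesFree < 5%, imagefs.available < 15%, imagefs.inodesFree < 5%. A rough sketch of how such a threshold table is evaluated (a simplification of mine; the real logic lives in the kubelet's eviction manager):

```python
# Simplified sketch of hard-eviction threshold checks; quantities in bytes.
# Each threshold is either an absolute quantity OR a percentage of capacity,
# mirroring the Quantity/Percentage pairs in the NodeConfig dump.
MI = 1024 * 1024

THRESHOLDS = {
    "memory.available":   {"quantity": 100 * MI, "percentage": None},
    "nodefs.available":   {"quantity": None,     "percentage": 0.10},
    "nodefs.inodesFree":  {"quantity": None,     "percentage": 0.05},
    "imagefs.available":  {"quantity": None,     "percentage": 0.15},
    "imagefs.inodesFree": {"quantity": None,     "percentage": 0.05},
}

def breached(signal: str, available: float, capacity: float) -> bool:
    """Return True when the signal's available amount falls below its limit."""
    t = THRESHOLDS[signal]
    limit = t["quantity"] if t["quantity"] is not None else t["percentage"] * capacity
    return available < limit

# 90Mi free on a 4Gi node breaches the 100Mi memory threshold.
print(breached("memory.available", 90 * MI, 4096 * MI))  # -> True
# 20% free disk does not breach the 10% nodefs threshold.
print(breached("nodefs.available", 20.0, 100.0))         # -> False
```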
Feb 13 19:19:12.946862 kubelet[2047]: I0213 19:19:12.946848 2047 server.go:1269] "Started kubelet" Feb 13 19:19:12.949129 kubelet[2047]: I0213 19:19:12.948694 2047 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Feb 13 19:19:12.949442 kubelet[2047]: I0213 19:19:12.949433 2047 server.go:460] "Adding debug handlers to kubelet server" Feb 13 19:19:12.950844 kubelet[2047]: I0213 19:19:12.950826 2047 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 13 19:19:12.953938 kubelet[2047]: I0213 19:19:12.953888 2047 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Feb 13 19:19:12.954656 kubelet[2047]: I0213 19:19:12.954100 2047 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 13 19:19:12.955279 kubelet[2047]: W0213 19:19:12.955259 2047 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes "10.67.124.142" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Feb 13 19:19:12.955349 kubelet[2047]: E0213 19:19:12.955288 2047 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes \"10.67.124.142\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" Feb 13 19:19:12.955349 kubelet[2047]: W0213 19:19:12.955336 2047 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Feb 13 19:19:12.955349 kubelet[2047]: E0213 19:19:12.955344 2047 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list 
resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" Feb 13 19:19:12.955547 kubelet[2047]: I0213 19:19:12.955531 2047 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Feb 13 19:19:12.962501 kubelet[2047]: E0213 19:19:12.958635 2047 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{10.67.124.142.1823dab5374c66bd default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:10.67.124.142,UID:10.67.124.142,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:10.67.124.142,},FirstTimestamp:2025-02-13 19:19:12.946833085 +0000 UTC m=+0.397975441,LastTimestamp:2025-02-13 19:19:12.946833085 +0000 UTC m=+0.397975441,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:10.67.124.142,}" Feb 13 19:19:12.963787 kubelet[2047]: E0213 19:19:12.963256 2047 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.67.124.142\" not found" Feb 13 19:19:12.963787 kubelet[2047]: I0213 19:19:12.963277 2047 volume_manager.go:289] "Starting Kubelet Volume Manager" Feb 13 19:19:12.963787 kubelet[2047]: I0213 19:19:12.963372 2047 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Feb 13 19:19:12.963787 kubelet[2047]: I0213 19:19:12.963395 2047 reconciler.go:26] "Reconciler: start to sync state" Feb 13 19:19:12.964098 kubelet[2047]: E0213 19:19:12.964066 2047 kubelet.go:1478] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 13 19:19:12.964231 kubelet[2047]: I0213 19:19:12.964220 2047 factory.go:221] Registration of the systemd container factory successfully Feb 13 19:19:12.964278 kubelet[2047]: I0213 19:19:12.964267 2047 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Feb 13 19:19:12.965302 kubelet[2047]: I0213 19:19:12.965157 2047 factory.go:221] Registration of the containerd container factory successfully Feb 13 19:19:12.974695 kubelet[2047]: E0213 19:19:12.974629 2047 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{10.67.124.142.1823dab538534285 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:10.67.124.142,UID:10.67.124.142,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:10.67.124.142,},FirstTimestamp:2025-02-13 19:19:12.964059781 +0000 UTC m=+0.415202140,LastTimestamp:2025-02-13 19:19:12.964059781 +0000 UTC m=+0.415202140,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:10.67.124.142,}" Feb 13 19:19:12.981179 kubelet[2047]: E0213 19:19:12.981036 2047 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{10.67.124.142.1823dab5394df6cc default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:10.67.124.142,UID:10.67.124.142,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node 10.67.124.142 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:10.67.124.142,},FirstTimestamp:2025-02-13 19:19:12.980489932 +0000 UTC m=+0.431632288,LastTimestamp:2025-02-13 19:19:12.980489932 +0000 UTC m=+0.431632288,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:10.67.124.142,}" Feb 13 19:19:12.981378 kubelet[2047]: E0213 19:19:12.981262 2047 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"10.67.124.142\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="200ms" Feb 13 19:19:12.981564 kubelet[2047]: I0213 19:19:12.981505 2047 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 13 19:19:12.981564 kubelet[2047]: I0213 19:19:12.981513 2047 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 13 19:19:12.981564 kubelet[2047]: I0213 19:19:12.981522 2047 state_mem.go:36] "Initialized new in-memory state store" Feb 13 19:19:12.981737 kubelet[2047]: W0213 19:19:12.981720 2047 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Feb 13 19:19:12.981758 kubelet[2047]: E0213 19:19:12.981735 2047 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" Feb 13 
19:19:12.982822 kubelet[2047]: I0213 19:19:12.982768 2047 policy_none.go:49] "None policy: Start" Feb 13 19:19:12.984186 kubelet[2047]: I0213 19:19:12.983089 2047 memory_manager.go:170] "Starting memorymanager" policy="None" Feb 13 19:19:12.984186 kubelet[2047]: I0213 19:19:12.983529 2047 state_mem.go:35] "Initializing new in-memory state store" Feb 13 19:19:12.984857 kubelet[2047]: E0213 19:19:12.984807 2047 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{10.67.124.142.1823dab5394e0191 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:10.67.124.142,UID:10.67.124.142,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node 10.67.124.142 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:10.67.124.142,},FirstTimestamp:2025-02-13 19:19:12.980492689 +0000 UTC m=+0.431635047,LastTimestamp:2025-02-13 19:19:12.980492689 +0000 UTC m=+0.431635047,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:10.67.124.142,}" Feb 13 19:19:12.988814 kubelet[2047]: E0213 19:19:12.988701 2047 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{10.67.124.142.1823dab5394e07a8 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:10.67.124.142,UID:10.67.124.142,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node 10.67.124.142 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:10.67.124.142,},FirstTimestamp:2025-02-13 19:19:12.980494248 
+0000 UTC m=+0.431636605,LastTimestamp:2025-02-13 19:19:12.980494248 +0000 UTC m=+0.431636605,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:10.67.124.142,}" Feb 13 19:19:12.991086 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Feb 13 19:19:13.000700 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Feb 13 19:19:13.003226 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Feb 13 19:19:13.010065 kubelet[2047]: I0213 19:19:13.008707 2047 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 13 19:19:13.010065 kubelet[2047]: I0213 19:19:13.008829 2047 eviction_manager.go:189] "Eviction manager: starting control loop" Feb 13 19:19:13.010065 kubelet[2047]: I0213 19:19:13.008835 2047 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Feb 13 19:19:13.010065 kubelet[2047]: I0213 19:19:13.009189 2047 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 13 19:19:13.010065 kubelet[2047]: E0213 19:19:13.010007 2047 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"10.67.124.142\" not found" Feb 13 19:19:13.016344 kubelet[2047]: I0213 19:19:13.016311 2047 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 13 19:19:13.017508 kubelet[2047]: I0213 19:19:13.017297 2047 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Feb 13 19:19:13.017508 kubelet[2047]: I0213 19:19:13.017310 2047 status_manager.go:217] "Starting to sync pod status with apiserver" Feb 13 19:19:13.017508 kubelet[2047]: I0213 19:19:13.017321 2047 kubelet.go:2321] "Starting kubelet main sync loop" Feb 13 19:19:13.017508 kubelet[2047]: E0213 19:19:13.017376 2047 kubelet.go:2345] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful" Feb 13 19:19:13.110146 kubelet[2047]: I0213 19:19:13.110005 2047 kubelet_node_status.go:72] "Attempting to register node" node="10.67.124.142" Feb 13 19:19:13.120798 kubelet[2047]: I0213 19:19:13.120746 2047 kubelet_node_status.go:75] "Successfully registered node" node="10.67.124.142" Feb 13 19:19:13.120798 kubelet[2047]: E0213 19:19:13.120772 2047 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"10.67.124.142\": node \"10.67.124.142\" not found" Feb 13 19:19:13.129313 kubelet[2047]: E0213 19:19:13.129286 2047 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.67.124.142\" not found" Feb 13 19:19:13.230448 kubelet[2047]: E0213 19:19:13.230367 2047 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.67.124.142\" not found" Feb 13 19:19:13.331060 kubelet[2047]: E0213 19:19:13.331031 2047 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.67.124.142\" not found" Feb 13 19:19:13.431751 kubelet[2047]: E0213 19:19:13.431712 2047 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.67.124.142\" not found" Feb 13 19:19:13.532018 kubelet[2047]: E0213 19:19:13.531997 2047 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.67.124.142\" not found" Feb 13 19:19:13.564195 sudo[1880]: pam_unix(sudo:session): session closed for user root Feb 13 19:19:13.565134 sshd[1879]: Connection closed by 139.178.89.65 port 44928 Feb 13 
19:19:13.565374 sshd-session[1876]: pam_unix(sshd:session): session closed for user core Feb 13 19:19:13.567439 systemd[1]: sshd@6-139.178.70.110:22-139.178.89.65:44928.service: Deactivated successfully. Feb 13 19:19:13.568566 systemd[1]: session-9.scope: Deactivated successfully. Feb 13 19:19:13.568673 systemd[1]: session-9.scope: Consumed 276ms CPU time, 73.4M memory peak. Feb 13 19:19:13.569430 systemd-logind[1554]: Session 9 logged out. Waiting for processes to exit. Feb 13 19:19:13.569923 systemd-logind[1554]: Removed session 9. Feb 13 19:19:13.632808 kubelet[2047]: E0213 19:19:13.632776 2047 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.67.124.142\" not found" Feb 13 19:19:13.733321 kubelet[2047]: E0213 19:19:13.733295 2047 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.67.124.142\" not found" Feb 13 19:19:13.833825 kubelet[2047]: E0213 19:19:13.833737 2047 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.67.124.142\" not found" Feb 13 19:19:13.892158 kubelet[2047]: I0213 19:19:13.891986 2047 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Feb 13 19:19:13.892158 kubelet[2047]: W0213 19:19:13.892107 2047 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.RuntimeClass ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Feb 13 19:19:13.934285 kubelet[2047]: I0213 19:19:13.934266 2047 kuberuntime_manager.go:1633] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24" Feb 13 19:19:13.934511 containerd[1572]: time="2025-02-13T19:19:13.934457794Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Feb 13 19:19:13.934718 kubelet[2047]: I0213 19:19:13.934567 2047 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24" Feb 13 19:19:13.944322 kubelet[2047]: E0213 19:19:13.944298 2047 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:19:13.944322 kubelet[2047]: I0213 19:19:13.944325 2047 apiserver.go:52] "Watching apiserver" Feb 13 19:19:13.955758 kubelet[2047]: E0213 19:19:13.955356 2047 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-lgcrw" podUID="c894b613-f774-4e5f-a65e-f4bdf203df3f" Feb 13 19:19:13.958715 systemd[1]: Created slice kubepods-besteffort-pod08f6b914_7a0a_4f0a_b6a9_1a8f9d6b17e2.slice - libcontainer container kubepods-besteffort-pod08f6b914_7a0a_4f0a_b6a9_1a8f9d6b17e2.slice. Feb 13 19:19:13.972604 kubelet[2047]: I0213 19:19:13.972542 2047 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Feb 13 19:19:13.990466 systemd[1]: Created slice kubepods-besteffort-pod6f845088_99eb_48de_99d0_e2e44e202772.slice - libcontainer container kubepods-besteffort-pod6f845088_99eb_48de_99d0_e2e44e202772.slice. 
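The kubepods slices created above embed each pod UID with its dashes replaced by underscores, because `-` is systemd's slice-hierarchy separator: UID `08f6b914-7a0a-4f0a-b6a9-1a8f9d6b17e2` becomes `kubepods-besteffort-pod08f6b914_7a0a_4f0a_b6a9_1a8f9d6b17e2.slice`. A sketch of that naming rule (the helper name is mine, not the kubelet's):

```python
def besteffort_pod_slice(pod_uid: str) -> str:
    """Build the cgroup slice name for a BestEffort pod, as seen in the log.

    Dashes in the UID become underscores so they are not interpreted as
    systemd slice nesting separators."""
    return f"kubepods-besteffort-pod{pod_uid.replace('-', '_')}.slice"

print(besteffort_pod_slice("08f6b914-7a0a-4f0a-b6a9-1a8f9d6b17e2"))
# -> kubepods-besteffort-pod08f6b914_7a0a_4f0a_b6a9_1a8f9d6b17e2.slice
```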
Feb 13 19:19:14.072070 kubelet[2047]: I0213 19:19:14.072012 2047 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/c894b613-f774-4e5f-a65e-f4bdf203df3f-registration-dir\") pod \"csi-node-driver-lgcrw\" (UID: \"c894b613-f774-4e5f-a65e-f4bdf203df3f\") " pod="calico-system/csi-node-driver-lgcrw" Feb 13 19:19:14.072384 kubelet[2047]: I0213 19:19:14.072124 2047 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6f845088-99eb-48de-99d0-e2e44e202772-xtables-lock\") pod \"kube-proxy-ks85q\" (UID: \"6f845088-99eb-48de-99d0-e2e44e202772\") " pod="kube-system/kube-proxy-ks85q" Feb 13 19:19:14.072384 kubelet[2047]: I0213 19:19:14.072139 2047 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/08f6b914-7a0a-4f0a-b6a9-1a8f9d6b17e2-policysync\") pod \"calico-node-2658l\" (UID: \"08f6b914-7a0a-4f0a-b6a9-1a8f9d6b17e2\") " pod="calico-system/calico-node-2658l" Feb 13 19:19:14.072384 kubelet[2047]: I0213 19:19:14.072148 2047 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/08f6b914-7a0a-4f0a-b6a9-1a8f9d6b17e2-tigera-ca-bundle\") pod \"calico-node-2658l\" (UID: \"08f6b914-7a0a-4f0a-b6a9-1a8f9d6b17e2\") " pod="calico-system/calico-node-2658l" Feb 13 19:19:14.072384 kubelet[2047]: I0213 19:19:14.072156 2047 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/08f6b914-7a0a-4f0a-b6a9-1a8f9d6b17e2-var-lib-calico\") pod \"calico-node-2658l\" (UID: \"08f6b914-7a0a-4f0a-b6a9-1a8f9d6b17e2\") " pod="calico-system/calico-node-2658l" Feb 13 19:19:14.072384 kubelet[2047]: I0213 19:19:14.072167 2047 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/08f6b914-7a0a-4f0a-b6a9-1a8f9d6b17e2-flexvol-driver-host\") pod \"calico-node-2658l\" (UID: \"08f6b914-7a0a-4f0a-b6a9-1a8f9d6b17e2\") " pod="calico-system/calico-node-2658l" Feb 13 19:19:14.072479 kubelet[2047]: I0213 19:19:14.072175 2047 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/c894b613-f774-4e5f-a65e-f4bdf203df3f-socket-dir\") pod \"csi-node-driver-lgcrw\" (UID: \"c894b613-f774-4e5f-a65e-f4bdf203df3f\") " pod="calico-system/csi-node-driver-lgcrw" Feb 13 19:19:14.072479 kubelet[2047]: I0213 19:19:14.072183 2047 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/08f6b914-7a0a-4f0a-b6a9-1a8f9d6b17e2-cni-bin-dir\") pod \"calico-node-2658l\" (UID: \"08f6b914-7a0a-4f0a-b6a9-1a8f9d6b17e2\") " pod="calico-system/calico-node-2658l" Feb 13 19:19:14.072479 kubelet[2047]: I0213 19:19:14.072191 2047 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6f845088-99eb-48de-99d0-e2e44e202772-lib-modules\") pod \"kube-proxy-ks85q\" (UID: \"6f845088-99eb-48de-99d0-e2e44e202772\") " pod="kube-system/kube-proxy-ks85q" Feb 13 19:19:14.072479 kubelet[2047]: I0213 19:19:14.072206 2047 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5xqst\" (UniqueName: \"kubernetes.io/projected/c894b613-f774-4e5f-a65e-f4bdf203df3f-kube-api-access-5xqst\") pod \"csi-node-driver-lgcrw\" (UID: \"c894b613-f774-4e5f-a65e-f4bdf203df3f\") " pod="calico-system/csi-node-driver-lgcrw" Feb 13 19:19:14.072479 kubelet[2047]: I0213 19:19:14.072215 2047 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mm5ds\" (UniqueName: \"kubernetes.io/projected/6f845088-99eb-48de-99d0-e2e44e202772-kube-api-access-mm5ds\") pod \"kube-proxy-ks85q\" (UID: \"6f845088-99eb-48de-99d0-e2e44e202772\") " pod="kube-system/kube-proxy-ks85q" Feb 13 19:19:14.072595 kubelet[2047]: I0213 19:19:14.072224 2047 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/08f6b914-7a0a-4f0a-b6a9-1a8f9d6b17e2-lib-modules\") pod \"calico-node-2658l\" (UID: \"08f6b914-7a0a-4f0a-b6a9-1a8f9d6b17e2\") " pod="calico-system/calico-node-2658l" Feb 13 19:19:14.072595 kubelet[2047]: I0213 19:19:14.072233 2047 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/08f6b914-7a0a-4f0a-b6a9-1a8f9d6b17e2-xtables-lock\") pod \"calico-node-2658l\" (UID: \"08f6b914-7a0a-4f0a-b6a9-1a8f9d6b17e2\") " pod="calico-system/calico-node-2658l" Feb 13 19:19:14.072595 kubelet[2047]: I0213 19:19:14.072241 2047 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/08f6b914-7a0a-4f0a-b6a9-1a8f9d6b17e2-cni-net-dir\") pod \"calico-node-2658l\" (UID: \"08f6b914-7a0a-4f0a-b6a9-1a8f9d6b17e2\") " pod="calico-system/calico-node-2658l" Feb 13 19:19:14.072595 kubelet[2047]: I0213 19:19:14.072250 2047 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8sr5v\" (UniqueName: \"kubernetes.io/projected/08f6b914-7a0a-4f0a-b6a9-1a8f9d6b17e2-kube-api-access-8sr5v\") pod \"calico-node-2658l\" (UID: \"08f6b914-7a0a-4f0a-b6a9-1a8f9d6b17e2\") " pod="calico-system/calico-node-2658l" Feb 13 19:19:14.072595 kubelet[2047]: I0213 19:19:14.072258 2047 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/c894b613-f774-4e5f-a65e-f4bdf203df3f-varrun\") pod \"csi-node-driver-lgcrw\" (UID: \"c894b613-f774-4e5f-a65e-f4bdf203df3f\") " pod="calico-system/csi-node-driver-lgcrw" Feb 13 19:19:14.072669 kubelet[2047]: I0213 19:19:14.072265 2047 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/6f845088-99eb-48de-99d0-e2e44e202772-kube-proxy\") pod \"kube-proxy-ks85q\" (UID: \"6f845088-99eb-48de-99d0-e2e44e202772\") " pod="kube-system/kube-proxy-ks85q" Feb 13 19:19:14.072669 kubelet[2047]: I0213 19:19:14.072274 2047 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/08f6b914-7a0a-4f0a-b6a9-1a8f9d6b17e2-node-certs\") pod \"calico-node-2658l\" (UID: \"08f6b914-7a0a-4f0a-b6a9-1a8f9d6b17e2\") " pod="calico-system/calico-node-2658l" Feb 13 19:19:14.072669 kubelet[2047]: I0213 19:19:14.072284 2047 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/08f6b914-7a0a-4f0a-b6a9-1a8f9d6b17e2-var-run-calico\") pod \"calico-node-2658l\" (UID: \"08f6b914-7a0a-4f0a-b6a9-1a8f9d6b17e2\") " pod="calico-system/calico-node-2658l" Feb 13 19:19:14.072669 kubelet[2047]: I0213 19:19:14.072306 2047 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/08f6b914-7a0a-4f0a-b6a9-1a8f9d6b17e2-cni-log-dir\") pod \"calico-node-2658l\" (UID: \"08f6b914-7a0a-4f0a-b6a9-1a8f9d6b17e2\") " pod="calico-system/calico-node-2658l" Feb 13 19:19:14.072669 kubelet[2047]: I0213 19:19:14.072323 2047 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: 
\"kubernetes.io/host-path/c894b613-f774-4e5f-a65e-f4bdf203df3f-kubelet-dir\") pod \"csi-node-driver-lgcrw\" (UID: \"c894b613-f774-4e5f-a65e-f4bdf203df3f\") " pod="calico-system/csi-node-driver-lgcrw" Feb 13 19:19:14.175835 kubelet[2047]: E0213 19:19:14.175731 2047 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:19:14.175835 kubelet[2047]: W0213 19:19:14.175747 2047 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:19:14.175835 kubelet[2047]: E0213 19:19:14.175761 2047 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:19:14.180710 kubelet[2047]: E0213 19:19:14.180653 2047 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:19:14.180710 kubelet[2047]: W0213 19:19:14.180667 2047 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:19:14.180710 kubelet[2047]: E0213 19:19:14.180680 2047 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 19:19:14.181339 kubelet[2047]: E0213 19:19:14.181223 2047 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:19:14.181339 kubelet[2047]: W0213 19:19:14.181232 2047 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:19:14.181339 kubelet[2047]: E0213 19:19:14.181240 2047 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:19:14.183565 kubelet[2047]: E0213 19:19:14.183520 2047 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 19:19:14.183565 kubelet[2047]: W0213 19:19:14.183530 2047 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 19:19:14.183565 kubelet[2047]: E0213 19:19:14.183542 2047 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 19:19:14.290802 containerd[1572]: time="2025-02-13T19:19:14.290642773Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-2658l,Uid:08f6b914-7a0a-4f0a-b6a9-1a8f9d6b17e2,Namespace:calico-system,Attempt:0,}" Feb 13 19:19:14.291981 containerd[1572]: time="2025-02-13T19:19:14.291963267Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-ks85q,Uid:6f845088-99eb-48de-99d0-e2e44e202772,Namespace:kube-system,Attempt:0,}" Feb 13 19:19:14.822045 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1047609249.mount: Deactivated successfully. 
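The repeated FlexVolume failures above all stem from one condition: the `nodeagent~uds/uds` driver binary is absent, so the `init` call returns empty output, and unmarshalling an empty string as JSON fails with `unexpected end of JSON input`. The same failure mode reproduced in Python for illustration (Go's encoding/json and Python's json raise analogous errors on empty input):

```python
import json

driver_output = ""  # what the kubelet got back: the executable was not found

try:
    json.loads(driver_output)
except json.JSONDecodeError as e:
    # Python's analogue of Go's "unexpected end of JSON input"
    print(f"failed to unmarshal driver output: {e}")
```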
Feb 13 19:19:14.825343 containerd[1572]: time="2025-02-13T19:19:14.825299535Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Feb 13 19:19:14.826223 containerd[1572]: time="2025-02-13T19:19:14.826110493Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Feb 13 19:19:14.826223 containerd[1572]: time="2025-02-13T19:19:14.826140709Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056"
Feb 13 19:19:14.827509 containerd[1572]: time="2025-02-13T19:19:14.827471245Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Feb 13 19:19:14.828058 containerd[1572]: time="2025-02-13T19:19:14.827572540Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Feb 13 19:19:14.830133 containerd[1572]: time="2025-02-13T19:19:14.829858587Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 537.850612ms"
Feb 13 19:19:14.830878 containerd[1572]: time="2025-02-13T19:19:14.830853675Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Feb 13 19:19:14.831799 containerd[1572]: time="2025-02-13T19:19:14.831782583Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 541.055827ms"
Feb 13 19:19:14.919446 containerd[1572]: time="2025-02-13T19:19:14.919344421Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 13 19:19:14.919446 containerd[1572]: time="2025-02-13T19:19:14.919414853Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 13 19:19:14.919446 containerd[1572]: time="2025-02-13T19:19:14.919425314Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 19:19:14.919635 containerd[1572]: time="2025-02-13T19:19:14.919483168Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 19:19:14.922444 containerd[1572]: time="2025-02-13T19:19:14.922261420Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 13 19:19:14.922444 containerd[1572]: time="2025-02-13T19:19:14.922299373Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 13 19:19:14.922444 containerd[1572]: time="2025-02-13T19:19:14.922328368Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 19:19:14.923101 containerd[1572]: time="2025-02-13T19:19:14.923029019Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 19:19:14.945193 kubelet[2047]: E0213 19:19:14.945167 2047 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:19:14.984260 systemd[1]: Started cri-containerd-09e1d4c744eeb67c50ca0e742142075004fb2ffbbe4e8963239fbbbb9ad67042.scope - libcontainer container 09e1d4c744eeb67c50ca0e742142075004fb2ffbbe4e8963239fbbbb9ad67042.
Feb 13 19:19:14.985375 systemd[1]: Started cri-containerd-f87901ebfa2157787a55594e5f5663bf855f06964cad8b6ba5f8abe45dba6ceb.scope - libcontainer container f87901ebfa2157787a55594e5f5663bf855f06964cad8b6ba5f8abe45dba6ceb.
Feb 13 19:19:15.003301 containerd[1572]: time="2025-02-13T19:19:15.003022302Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-2658l,Uid:08f6b914-7a0a-4f0a-b6a9-1a8f9d6b17e2,Namespace:calico-system,Attempt:0,} returns sandbox id \"f87901ebfa2157787a55594e5f5663bf855f06964cad8b6ba5f8abe45dba6ceb\""
Feb 13 19:19:15.004967 containerd[1572]: time="2025-02-13T19:19:15.004765374Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-ks85q,Uid:6f845088-99eb-48de-99d0-e2e44e202772,Namespace:kube-system,Attempt:0,} returns sandbox id \"09e1d4c744eeb67c50ca0e742142075004fb2ffbbe4e8963239fbbbb9ad67042\""
Feb 13 19:19:15.005693 containerd[1572]: time="2025-02-13T19:19:15.005537732Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\""
Feb 13 19:19:15.945571 kubelet[2047]: E0213 19:19:15.945535 2047 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:19:16.018186 kubelet[2047]: E0213 19:19:16.018155 2047 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-lgcrw" podUID="c894b613-f774-4e5f-a65e-f4bdf203df3f"
Feb 13 19:19:16.163191 update_engine[1556]: I20250213 19:19:16.163141 1556 update_attempter.cc:509] Updating boot flags...
Feb 13 19:19:16.185142 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 39 scanned by (udev-worker) (2190)
Feb 13 19:19:16.230216 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 39 scanned by (udev-worker) (2186)
Feb 13 19:19:16.620460 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3692581846.mount: Deactivated successfully.
Feb 13 19:19:16.671970 containerd[1572]: time="2025-02-13T19:19:16.671860662Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 19:19:16.672427 containerd[1572]: time="2025-02-13T19:19:16.672339588Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1: active requests=0, bytes read=6855343"
Feb 13 19:19:16.672769 containerd[1572]: time="2025-02-13T19:19:16.672688334Z" level=info msg="ImageCreate event name:\"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 19:19:16.673904 containerd[1572]: time="2025-02-13T19:19:16.673890035Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 19:19:16.674202 containerd[1572]: time="2025-02-13T19:19:16.674191732Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" with image id \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\", size \"6855165\" in 1.668634002s"
Feb 13 19:19:16.674330 containerd[1572]: time="2025-02-13T19:19:16.674254057Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" returns image reference \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\""
Feb 13 19:19:16.674934 containerd[1572]: time="2025-02-13T19:19:16.674869270Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.6\""
Feb 13 19:19:16.675515 containerd[1572]: time="2025-02-13T19:19:16.675501243Z" level=info msg="CreateContainer within sandbox \"f87901ebfa2157787a55594e5f5663bf855f06964cad8b6ba5f8abe45dba6ceb\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}"
Feb 13 19:19:16.693802 containerd[1572]: time="2025-02-13T19:19:16.693777136Z" level=info msg="CreateContainer within sandbox \"f87901ebfa2157787a55594e5f5663bf855f06964cad8b6ba5f8abe45dba6ceb\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"04462a14860e6eda56788c10e6201d8f51fa82d50d685f19d02de861030895a5\""
Feb 13 19:19:16.695094 containerd[1572]: time="2025-02-13T19:19:16.694327320Z" level=info msg="StartContainer for \"04462a14860e6eda56788c10e6201d8f51fa82d50d685f19d02de861030895a5\""
Feb 13 19:19:16.716204 systemd[1]: Started cri-containerd-04462a14860e6eda56788c10e6201d8f51fa82d50d685f19d02de861030895a5.scope - libcontainer container 04462a14860e6eda56788c10e6201d8f51fa82d50d685f19d02de861030895a5.
Feb 13 19:19:16.734671 containerd[1572]: time="2025-02-13T19:19:16.734647860Z" level=info msg="StartContainer for \"04462a14860e6eda56788c10e6201d8f51fa82d50d685f19d02de861030895a5\" returns successfully"
Feb 13 19:19:16.740077 systemd[1]: cri-containerd-04462a14860e6eda56788c10e6201d8f51fa82d50d685f19d02de861030895a5.scope: Deactivated successfully.
Feb 13 19:19:16.835433 containerd[1572]: time="2025-02-13T19:19:16.835388959Z" level=info msg="shim disconnected" id=04462a14860e6eda56788c10e6201d8f51fa82d50d685f19d02de861030895a5 namespace=k8s.io
Feb 13 19:19:16.835433 containerd[1572]: time="2025-02-13T19:19:16.835427408Z" level=warning msg="cleaning up after shim disconnected" id=04462a14860e6eda56788c10e6201d8f51fa82d50d685f19d02de861030895a5 namespace=k8s.io
Feb 13 19:19:16.835433 containerd[1572]: time="2025-02-13T19:19:16.835434737Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 19:19:16.946401 kubelet[2047]: E0213 19:19:16.945981 2047 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:19:17.595975 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-04462a14860e6eda56788c10e6201d8f51fa82d50d685f19d02de861030895a5-rootfs.mount: Deactivated successfully.
Feb 13 19:19:17.728447 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3238921639.mount: Deactivated successfully.
Feb 13 19:19:17.946806 kubelet[2047]: E0213 19:19:17.946609 2047 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:19:18.017923 kubelet[2047]: E0213 19:19:18.017892 2047 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-lgcrw" podUID="c894b613-f774-4e5f-a65e-f4bdf203df3f"
Feb 13 19:19:18.038654 containerd[1572]: time="2025-02-13T19:19:18.038599019Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 19:19:18.039078 containerd[1572]: time="2025-02-13T19:19:18.039058880Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.6: active requests=0, bytes read=30229108"
Feb 13 19:19:18.039284 containerd[1572]: time="2025-02-13T19:19:18.039271001Z" level=info msg="ImageCreate event name:\"sha256:d2448f015605e48efb6b06ceaba0cb6d48bfd82e5d30ba357a9bd78c8566348a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 19:19:18.044196 containerd[1572]: time="2025-02-13T19:19:18.044167784Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:e72a4bc769f10b56ffdfe2cdb21d84d49d9bc194b3658648207998a5bd924b72\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 19:19:18.044472 containerd[1572]: time="2025-02-13T19:19:18.044455822Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.6\" with image id \"sha256:d2448f015605e48efb6b06ceaba0cb6d48bfd82e5d30ba357a9bd78c8566348a\", repo tag \"registry.k8s.io/kube-proxy:v1.31.6\", repo digest \"registry.k8s.io/kube-proxy@sha256:e72a4bc769f10b56ffdfe2cdb21d84d49d9bc194b3658648207998a5bd924b72\", size \"30228127\" in 1.369571543s"
Feb 13 19:19:18.044837 containerd[1572]: time="2025-02-13T19:19:18.044475345Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.6\" returns image reference \"sha256:d2448f015605e48efb6b06ceaba0cb6d48bfd82e5d30ba357a9bd78c8566348a\""
Feb 13 19:19:18.045159 containerd[1572]: time="2025-02-13T19:19:18.045146296Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\""
Feb 13 19:19:18.045909 containerd[1572]: time="2025-02-13T19:19:18.045892232Z" level=info msg="CreateContainer within sandbox \"09e1d4c744eeb67c50ca0e742142075004fb2ffbbe4e8963239fbbbb9ad67042\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Feb 13 19:19:18.090646 containerd[1572]: time="2025-02-13T19:19:18.090624066Z" level=info msg="CreateContainer within sandbox \"09e1d4c744eeb67c50ca0e742142075004fb2ffbbe4e8963239fbbbb9ad67042\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"84193663d7107b0cb9df0e949421bd5b836e1dbe98267829ffa4d3853029c6d4\""
Feb 13 19:19:18.091105 containerd[1572]: time="2025-02-13T19:19:18.091087087Z" level=info msg="StartContainer for \"84193663d7107b0cb9df0e949421bd5b836e1dbe98267829ffa4d3853029c6d4\""
Feb 13 19:19:18.120270 systemd[1]: Started cri-containerd-84193663d7107b0cb9df0e949421bd5b836e1dbe98267829ffa4d3853029c6d4.scope - libcontainer container 84193663d7107b0cb9df0e949421bd5b836e1dbe98267829ffa4d3853029c6d4.
Feb 13 19:19:18.140001 containerd[1572]: time="2025-02-13T19:19:18.139967873Z" level=info msg="StartContainer for \"84193663d7107b0cb9df0e949421bd5b836e1dbe98267829ffa4d3853029c6d4\" returns successfully"
Feb 13 19:19:18.946830 kubelet[2047]: E0213 19:19:18.946790 2047 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:19:19.039147 kubelet[2047]: I0213 19:19:19.039025 2047 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-ks85q" podStartSLOduration=2.999758888 podStartE2EDuration="6.039007486s" podCreationTimestamp="2025-02-13 19:19:13 +0000 UTC" firstStartedPulling="2025-02-13 19:19:15.005797305 +0000 UTC m=+2.456939662" lastFinishedPulling="2025-02-13 19:19:18.045045903 +0000 UTC m=+5.496188260" observedRunningTime="2025-02-13 19:19:19.037976703 +0000 UTC m=+6.489119077" watchObservedRunningTime="2025-02-13 19:19:19.039007486 +0000 UTC m=+6.490149851"
Feb 13 19:19:19.947057 kubelet[2047]: E0213 19:19:19.947020 2047 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:19:20.017690 kubelet[2047]: E0213 19:19:20.017499 2047 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-lgcrw" podUID="c894b613-f774-4e5f-a65e-f4bdf203df3f"
Feb 13 19:19:20.947262 kubelet[2047]: E0213 19:19:20.947229 2047 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:19:21.504887 containerd[1572]: time="2025-02-13T19:19:21.504848605Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 19:19:21.509634 containerd[1572]: time="2025-02-13T19:19:21.509587792Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.1: active requests=0, bytes read=96154154"
Feb 13 19:19:21.514669 containerd[1572]: time="2025-02-13T19:19:21.514611618Z" level=info msg="ImageCreate event name:\"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 19:19:21.523054 containerd[1572]: time="2025-02-13T19:19:21.523006729Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 19:19:21.523501 containerd[1572]: time="2025-02-13T19:19:21.523370202Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.1\" with image id \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\", size \"97647238\" in 3.478207563s"
Feb 13 19:19:21.523501 containerd[1572]: time="2025-02-13T19:19:21.523385687Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\" returns image reference \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\""
Feb 13 19:19:21.524566 containerd[1572]: time="2025-02-13T19:19:21.524547503Z" level=info msg="CreateContainer within sandbox \"f87901ebfa2157787a55594e5f5663bf855f06964cad8b6ba5f8abe45dba6ceb\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}"
Feb 13 19:19:21.587069 containerd[1572]: time="2025-02-13T19:19:21.587014968Z" level=info msg="CreateContainer within sandbox \"f87901ebfa2157787a55594e5f5663bf855f06964cad8b6ba5f8abe45dba6ceb\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"f51f71772d368fb10e61a92845ddeba7e53d378a4dc4746fae62e87c20d31549\""
Feb 13 19:19:21.588143 containerd[1572]: time="2025-02-13T19:19:21.587362358Z" level=info msg="StartContainer for \"f51f71772d368fb10e61a92845ddeba7e53d378a4dc4746fae62e87c20d31549\""
Feb 13 19:19:21.606205 systemd[1]: Started cri-containerd-f51f71772d368fb10e61a92845ddeba7e53d378a4dc4746fae62e87c20d31549.scope - libcontainer container f51f71772d368fb10e61a92845ddeba7e53d378a4dc4746fae62e87c20d31549.
Feb 13 19:19:21.631474 containerd[1572]: time="2025-02-13T19:19:21.631444380Z" level=info msg="StartContainer for \"f51f71772d368fb10e61a92845ddeba7e53d378a4dc4746fae62e87c20d31549\" returns successfully"
Feb 13 19:19:21.948279 kubelet[2047]: E0213 19:19:21.948242 2047 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:19:22.018376 kubelet[2047]: E0213 19:19:22.018340 2047 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-lgcrw" podUID="c894b613-f774-4e5f-a65e-f4bdf203df3f"
Feb 13 19:19:22.753926 containerd[1572]: time="2025-02-13T19:19:22.753806659Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Feb 13 19:19:22.755049 systemd[1]: cri-containerd-f51f71772d368fb10e61a92845ddeba7e53d378a4dc4746fae62e87c20d31549.scope: Deactivated successfully.
Feb 13 19:19:22.755234 systemd[1]: cri-containerd-f51f71772d368fb10e61a92845ddeba7e53d378a4dc4746fae62e87c20d31549.scope: Consumed 255ms CPU time, 170.4M memory peak, 151M written to disk.
Feb 13 19:19:22.759683 kubelet[2047]: I0213 19:19:22.759454 2047 kubelet_node_status.go:488] "Fast updating node status as it just became ready"
Feb 13 19:19:22.768698 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f51f71772d368fb10e61a92845ddeba7e53d378a4dc4746fae62e87c20d31549-rootfs.mount: Deactivated successfully.
Feb 13 19:19:22.949341 kubelet[2047]: E0213 19:19:22.949296 2047 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:19:23.108331 containerd[1572]: time="2025-02-13T19:19:23.108264208Z" level=info msg="shim disconnected" id=f51f71772d368fb10e61a92845ddeba7e53d378a4dc4746fae62e87c20d31549 namespace=k8s.io
Feb 13 19:19:23.108331 containerd[1572]: time="2025-02-13T19:19:23.108328358Z" level=warning msg="cleaning up after shim disconnected" id=f51f71772d368fb10e61a92845ddeba7e53d378a4dc4746fae62e87c20d31549 namespace=k8s.io
Feb 13 19:19:23.108331 containerd[1572]: time="2025-02-13T19:19:23.108337982Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 19:19:23.950370 kubelet[2047]: E0213 19:19:23.950347 2047 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:19:24.021054 systemd[1]: Created slice kubepods-besteffort-podc894b613_f774_4e5f_a65e_f4bdf203df3f.slice - libcontainer container kubepods-besteffort-podc894b613_f774_4e5f_a65e_f4bdf203df3f.slice.
Feb 13 19:19:24.022455 containerd[1572]: time="2025-02-13T19:19:24.022429394Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-lgcrw,Uid:c894b613-f774-4e5f-a65e-f4bdf203df3f,Namespace:calico-system,Attempt:0,}"
Feb 13 19:19:24.039544 containerd[1572]: time="2025-02-13T19:19:24.039415645Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\""
Feb 13 19:19:24.060322 containerd[1572]: time="2025-02-13T19:19:24.060289050Z" level=error msg="Failed to destroy network for sandbox \"4bfb03cd633058d9b8bb5a68065f94a4b34888367f9264caad1d5cfc41831600\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 19:19:24.061107 containerd[1572]: time="2025-02-13T19:19:24.061086247Z" level=error msg="encountered an error cleaning up failed sandbox \"4bfb03cd633058d9b8bb5a68065f94a4b34888367f9264caad1d5cfc41831600\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 19:19:24.061163 containerd[1572]: time="2025-02-13T19:19:24.061141203Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-lgcrw,Uid:c894b613-f774-4e5f-a65e-f4bdf203df3f,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"4bfb03cd633058d9b8bb5a68065f94a4b34888367f9264caad1d5cfc41831600\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 19:19:24.061792 kubelet[2047]: E0213 19:19:24.061563 2047 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4bfb03cd633058d9b8bb5a68065f94a4b34888367f9264caad1d5cfc41831600\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 19:19:24.061792 kubelet[2047]: E0213 19:19:24.061613 2047 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4bfb03cd633058d9b8bb5a68065f94a4b34888367f9264caad1d5cfc41831600\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-lgcrw"
Feb 13 19:19:24.061792 kubelet[2047]: E0213 19:19:24.061628 2047 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4bfb03cd633058d9b8bb5a68065f94a4b34888367f9264caad1d5cfc41831600\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-lgcrw"
Feb 13 19:19:24.061751 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-4bfb03cd633058d9b8bb5a68065f94a4b34888367f9264caad1d5cfc41831600-shm.mount: Deactivated successfully.
Feb 13 19:19:24.061937 kubelet[2047]: E0213 19:19:24.061655 2047 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-lgcrw_calico-system(c894b613-f774-4e5f-a65e-f4bdf203df3f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-lgcrw_calico-system(c894b613-f774-4e5f-a65e-f4bdf203df3f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4bfb03cd633058d9b8bb5a68065f94a4b34888367f9264caad1d5cfc41831600\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-lgcrw" podUID="c894b613-f774-4e5f-a65e-f4bdf203df3f"
Feb 13 19:19:24.951396 kubelet[2047]: E0213 19:19:24.951358 2047 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:19:25.040364 kubelet[2047]: I0213 19:19:25.040301 2047 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4bfb03cd633058d9b8bb5a68065f94a4b34888367f9264caad1d5cfc41831600"
Feb 13 19:19:25.042065 containerd[1572]: time="2025-02-13T19:19:25.040679259Z" level=info msg="StopPodSandbox for \"4bfb03cd633058d9b8bb5a68065f94a4b34888367f9264caad1d5cfc41831600\""
Feb 13 19:19:25.042065 containerd[1572]: time="2025-02-13T19:19:25.040835373Z" level=info msg="Ensure that sandbox 4bfb03cd633058d9b8bb5a68065f94a4b34888367f9264caad1d5cfc41831600 in task-service has been cleanup successfully"
Feb 13 19:19:25.042130 systemd[1]: run-netns-cni\x2d41ea3337\x2dec75\x2d0a8a\x2d0efe\x2d62008a3ab790.mount: Deactivated successfully.
Feb 13 19:19:25.043028 containerd[1572]: time="2025-02-13T19:19:25.042770258Z" level=info msg="TearDown network for sandbox \"4bfb03cd633058d9b8bb5a68065f94a4b34888367f9264caad1d5cfc41831600\" successfully"
Feb 13 19:19:25.043028 containerd[1572]: time="2025-02-13T19:19:25.042795821Z" level=info msg="StopPodSandbox for \"4bfb03cd633058d9b8bb5a68065f94a4b34888367f9264caad1d5cfc41831600\" returns successfully"
Feb 13 19:19:25.043470 containerd[1572]: time="2025-02-13T19:19:25.043184272Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-lgcrw,Uid:c894b613-f774-4e5f-a65e-f4bdf203df3f,Namespace:calico-system,Attempt:1,}"
Feb 13 19:19:25.076305 containerd[1572]: time="2025-02-13T19:19:25.076280165Z" level=error msg="Failed to destroy network for sandbox \"8f70f7ae6f7bb909467842173be97f96ca0dd02d0a22d110e932cc8eac17b02a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 19:19:25.077456 containerd[1572]: time="2025-02-13T19:19:25.076572549Z" level=error msg="encountered an error cleaning up failed sandbox \"8f70f7ae6f7bb909467842173be97f96ca0dd02d0a22d110e932cc8eac17b02a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 19:19:25.077456 containerd[1572]: time="2025-02-13T19:19:25.077412010Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-lgcrw,Uid:c894b613-f774-4e5f-a65e-f4bdf203df3f,Namespace:calico-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"8f70f7ae6f7bb909467842173be97f96ca0dd02d0a22d110e932cc8eac17b02a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 19:19:25.077447 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-8f70f7ae6f7bb909467842173be97f96ca0dd02d0a22d110e932cc8eac17b02a-shm.mount: Deactivated successfully.
Feb 13 19:19:25.077585 kubelet[2047]: E0213 19:19:25.077569 2047 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8f70f7ae6f7bb909467842173be97f96ca0dd02d0a22d110e932cc8eac17b02a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 19:19:25.077626 kubelet[2047]: E0213 19:19:25.077608 2047 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8f70f7ae6f7bb909467842173be97f96ca0dd02d0a22d110e932cc8eac17b02a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-lgcrw"
Feb 13 19:19:25.077651 kubelet[2047]: E0213 19:19:25.077623 2047 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8f70f7ae6f7bb909467842173be97f96ca0dd02d0a22d110e932cc8eac17b02a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-lgcrw"
Feb 13 19:19:25.077670 kubelet[2047]: E0213 19:19:25.077649 2047 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-lgcrw_calico-system(c894b613-f774-4e5f-a65e-f4bdf203df3f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-lgcrw_calico-system(c894b613-f774-4e5f-a65e-f4bdf203df3f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8f70f7ae6f7bb909467842173be97f96ca0dd02d0a22d110e932cc8eac17b02a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-lgcrw" podUID="c894b613-f774-4e5f-a65e-f4bdf203df3f"
Feb 13 19:19:25.643140 systemd[1]: Created slice kubepods-besteffort-pod9056c3eb_23f5_4ba5_a512_998dfd6e4910.slice - libcontainer container kubepods-besteffort-pod9056c3eb_23f5_4ba5_a512_998dfd6e4910.slice.
Feb 13 19:19:25.835014 kubelet[2047]: I0213 19:19:25.834902 2047 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-75dmp\" (UniqueName: \"kubernetes.io/projected/9056c3eb-23f5-4ba5-a512-998dfd6e4910-kube-api-access-75dmp\") pod \"nginx-deployment-8587fbcb89-vhbmc\" (UID: \"9056c3eb-23f5-4ba5-a512-998dfd6e4910\") " pod="default/nginx-deployment-8587fbcb89-vhbmc"
Feb 13 19:19:25.947809 containerd[1572]: time="2025-02-13T19:19:25.947360506Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-vhbmc,Uid:9056c3eb-23f5-4ba5-a512-998dfd6e4910,Namespace:default,Attempt:0,}"
Feb 13 19:19:25.952014 kubelet[2047]: E0213 19:19:25.951971 2047 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 19:19:26.036199 containerd[1572]: time="2025-02-13T19:19:26.036166092Z" level=error msg="Failed to destroy network for sandbox \"956b936b7687be0188926c86e0ea3ede21fb5f0cb267509c96dd2d0b2e40f83c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 19:19:26.036478 containerd[1572]: time="2025-02-13T19:19:26.036463730Z" level=error msg="encountered an error cleaning up failed sandbox \"956b936b7687be0188926c86e0ea3ede21fb5f0cb267509c96dd2d0b2e40f83c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 19:19:26.036549 containerd[1572]: time="2025-02-13T19:19:26.036536962Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-vhbmc,Uid:9056c3eb-23f5-4ba5-a512-998dfd6e4910,Namespace:default,Attempt:0,} failed, error" error="failed to setup network for sandbox \"956b936b7687be0188926c86e0ea3ede21fb5f0cb267509c96dd2d0b2e40f83c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 19:19:26.036952 kubelet[2047]: E0213 19:19:26.036718 2047 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"956b936b7687be0188926c86e0ea3ede21fb5f0cb267509c96dd2d0b2e40f83c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 19:19:26.036952 kubelet[2047]: E0213 19:19:26.036753 2047 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"956b936b7687be0188926c86e0ea3ede21fb5f0cb267509c96dd2d0b2e40f83c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-8587fbcb89-vhbmc"
Feb 13 19:19:26.036952 kubelet[2047]: E0213 19:19:26.036766 2047 kuberuntime_manager.go:1168] "CreatePodSandbox
for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"956b936b7687be0188926c86e0ea3ede21fb5f0cb267509c96dd2d0b2e40f83c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-8587fbcb89-vhbmc" Feb 13 19:19:26.037063 kubelet[2047]: E0213 19:19:26.036789 2047 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-8587fbcb89-vhbmc_default(9056c3eb-23f5-4ba5-a512-998dfd6e4910)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-8587fbcb89-vhbmc_default(9056c3eb-23f5-4ba5-a512-998dfd6e4910)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"956b936b7687be0188926c86e0ea3ede21fb5f0cb267509c96dd2d0b2e40f83c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-8587fbcb89-vhbmc" podUID="9056c3eb-23f5-4ba5-a512-998dfd6e4910" Feb 13 19:19:26.044914 kubelet[2047]: I0213 19:19:26.044899 2047 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="956b936b7687be0188926c86e0ea3ede21fb5f0cb267509c96dd2d0b2e40f83c" Feb 13 19:19:26.045842 containerd[1572]: time="2025-02-13T19:19:26.045496326Z" level=info msg="StopPodSandbox for \"956b936b7687be0188926c86e0ea3ede21fb5f0cb267509c96dd2d0b2e40f83c\"" Feb 13 19:19:26.045842 containerd[1572]: time="2025-02-13T19:19:26.045670527Z" level=info msg="Ensure that sandbox 956b936b7687be0188926c86e0ea3ede21fb5f0cb267509c96dd2d0b2e40f83c in task-service has been cleanup successfully" Feb 13 19:19:26.047817 containerd[1572]: time="2025-02-13T19:19:26.047466262Z" level=info msg="TearDown network for sandbox 
\"956b936b7687be0188926c86e0ea3ede21fb5f0cb267509c96dd2d0b2e40f83c\" successfully" Feb 13 19:19:26.047817 containerd[1572]: time="2025-02-13T19:19:26.047485337Z" level=info msg="StopPodSandbox for \"956b936b7687be0188926c86e0ea3ede21fb5f0cb267509c96dd2d0b2e40f83c\" returns successfully" Feb 13 19:19:26.049376 systemd[1]: run-netns-cni\x2dbee0fac8\x2db365\x2dbd8d\x2da499\x2d42b2c24c66aa.mount: Deactivated successfully. Feb 13 19:19:26.050423 containerd[1572]: time="2025-02-13T19:19:26.049985123Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-vhbmc,Uid:9056c3eb-23f5-4ba5-a512-998dfd6e4910,Namespace:default,Attempt:1,}" Feb 13 19:19:26.050485 kubelet[2047]: I0213 19:19:26.050071 2047 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8f70f7ae6f7bb909467842173be97f96ca0dd02d0a22d110e932cc8eac17b02a" Feb 13 19:19:26.050930 containerd[1572]: time="2025-02-13T19:19:26.050757108Z" level=info msg="StopPodSandbox for \"8f70f7ae6f7bb909467842173be97f96ca0dd02d0a22d110e932cc8eac17b02a\"" Feb 13 19:19:26.051199 containerd[1572]: time="2025-02-13T19:19:26.051141862Z" level=info msg="Ensure that sandbox 8f70f7ae6f7bb909467842173be97f96ca0dd02d0a22d110e932cc8eac17b02a in task-service has been cleanup successfully" Feb 13 19:19:26.051346 containerd[1572]: time="2025-02-13T19:19:26.051336132Z" level=info msg="TearDown network for sandbox \"8f70f7ae6f7bb909467842173be97f96ca0dd02d0a22d110e932cc8eac17b02a\" successfully" Feb 13 19:19:26.051408 containerd[1572]: time="2025-02-13T19:19:26.051379608Z" level=info msg="StopPodSandbox for \"8f70f7ae6f7bb909467842173be97f96ca0dd02d0a22d110e932cc8eac17b02a\" returns successfully" Feb 13 19:19:26.052243 systemd[1]: run-netns-cni\x2de7fc7d95\x2dab4d\x2de7e4\x2d322f\x2d97e7b0f1d213.mount: Deactivated successfully. 
Feb 13 19:19:26.052354 containerd[1572]: time="2025-02-13T19:19:26.052334185Z" level=info msg="StopPodSandbox for \"4bfb03cd633058d9b8bb5a68065f94a4b34888367f9264caad1d5cfc41831600\"" Feb 13 19:19:26.052383 containerd[1572]: time="2025-02-13T19:19:26.052378139Z" level=info msg="TearDown network for sandbox \"4bfb03cd633058d9b8bb5a68065f94a4b34888367f9264caad1d5cfc41831600\" successfully" Feb 13 19:19:26.052405 containerd[1572]: time="2025-02-13T19:19:26.052384536Z" level=info msg="StopPodSandbox for \"4bfb03cd633058d9b8bb5a68065f94a4b34888367f9264caad1d5cfc41831600\" returns successfully" Feb 13 19:19:26.052961 containerd[1572]: time="2025-02-13T19:19:26.052759343Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-lgcrw,Uid:c894b613-f774-4e5f-a65e-f4bdf203df3f,Namespace:calico-system,Attempt:2,}" Feb 13 19:19:26.952439 kubelet[2047]: E0213 19:19:26.952394 2047 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:19:26.991189 containerd[1572]: time="2025-02-13T19:19:26.990337281Z" level=error msg="Failed to destroy network for sandbox \"4a19d66c3127313c5bdf2f4e04ac8eba11a814c3bf79cb06f92e5b84eabaa538\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:19:26.992274 containerd[1572]: time="2025-02-13T19:19:26.991829131Z" level=error msg="encountered an error cleaning up failed sandbox \"4a19d66c3127313c5bdf2f4e04ac8eba11a814c3bf79cb06f92e5b84eabaa538\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:19:26.992274 containerd[1572]: time="2025-02-13T19:19:26.991885011Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:csi-node-driver-lgcrw,Uid:c894b613-f774-4e5f-a65e-f4bdf203df3f,Namespace:calico-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"4a19d66c3127313c5bdf2f4e04ac8eba11a814c3bf79cb06f92e5b84eabaa538\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:19:26.992507 kubelet[2047]: E0213 19:19:26.992482 2047 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4a19d66c3127313c5bdf2f4e04ac8eba11a814c3bf79cb06f92e5b84eabaa538\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:19:26.992591 kubelet[2047]: E0213 19:19:26.992580 2047 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4a19d66c3127313c5bdf2f4e04ac8eba11a814c3bf79cb06f92e5b84eabaa538\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-lgcrw" Feb 13 19:19:26.992638 kubelet[2047]: E0213 19:19:26.992626 2047 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4a19d66c3127313c5bdf2f4e04ac8eba11a814c3bf79cb06f92e5b84eabaa538\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-lgcrw" Feb 13 19:19:26.992708 kubelet[2047]: E0213 19:19:26.992695 2047 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"CreatePodSandbox\" for \"csi-node-driver-lgcrw_calico-system(c894b613-f774-4e5f-a65e-f4bdf203df3f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-lgcrw_calico-system(c894b613-f774-4e5f-a65e-f4bdf203df3f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4a19d66c3127313c5bdf2f4e04ac8eba11a814c3bf79cb06f92e5b84eabaa538\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-lgcrw" podUID="c894b613-f774-4e5f-a65e-f4bdf203df3f" Feb 13 19:19:27.004911 containerd[1572]: time="2025-02-13T19:19:27.004875933Z" level=error msg="Failed to destroy network for sandbox \"4eb407035bdc57cf836442f492eeca1785f44abfad2e0f6741eb7c0cc55a4872\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:19:27.005094 containerd[1572]: time="2025-02-13T19:19:27.005078005Z" level=error msg="encountered an error cleaning up failed sandbox \"4eb407035bdc57cf836442f492eeca1785f44abfad2e0f6741eb7c0cc55a4872\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:19:27.005144 containerd[1572]: time="2025-02-13T19:19:27.005129256Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-vhbmc,Uid:9056c3eb-23f5-4ba5-a512-998dfd6e4910,Namespace:default,Attempt:1,} failed, error" error="failed to setup network for sandbox \"4eb407035bdc57cf836442f492eeca1785f44abfad2e0f6741eb7c0cc55a4872\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/" Feb 13 19:19:27.005331 kubelet[2047]: E0213 19:19:27.005311 2047 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4eb407035bdc57cf836442f492eeca1785f44abfad2e0f6741eb7c0cc55a4872\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:19:27.005432 kubelet[2047]: E0213 19:19:27.005420 2047 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4eb407035bdc57cf836442f492eeca1785f44abfad2e0f6741eb7c0cc55a4872\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-8587fbcb89-vhbmc" Feb 13 19:19:27.005493 kubelet[2047]: E0213 19:19:27.005485 2047 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4eb407035bdc57cf836442f492eeca1785f44abfad2e0f6741eb7c0cc55a4872\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-8587fbcb89-vhbmc" Feb 13 19:19:27.005659 kubelet[2047]: E0213 19:19:27.005614 2047 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-8587fbcb89-vhbmc_default(9056c3eb-23f5-4ba5-a512-998dfd6e4910)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-8587fbcb89-vhbmc_default(9056c3eb-23f5-4ba5-a512-998dfd6e4910)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"4eb407035bdc57cf836442f492eeca1785f44abfad2e0f6741eb7c0cc55a4872\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-8587fbcb89-vhbmc" podUID="9056c3eb-23f5-4ba5-a512-998dfd6e4910" Feb 13 19:19:27.042496 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-4a19d66c3127313c5bdf2f4e04ac8eba11a814c3bf79cb06f92e5b84eabaa538-shm.mount: Deactivated successfully. Feb 13 19:19:27.052847 kubelet[2047]: I0213 19:19:27.052823 2047 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4a19d66c3127313c5bdf2f4e04ac8eba11a814c3bf79cb06f92e5b84eabaa538" Feb 13 19:19:27.053537 containerd[1572]: time="2025-02-13T19:19:27.053260262Z" level=info msg="StopPodSandbox for \"4a19d66c3127313c5bdf2f4e04ac8eba11a814c3bf79cb06f92e5b84eabaa538\"" Feb 13 19:19:27.053537 containerd[1572]: time="2025-02-13T19:19:27.053443104Z" level=info msg="Ensure that sandbox 4a19d66c3127313c5bdf2f4e04ac8eba11a814c3bf79cb06f92e5b84eabaa538 in task-service has been cleanup successfully" Feb 13 19:19:27.054566 systemd[1]: run-netns-cni\x2dbc1c1606\x2db3a3\x2d52ba\x2d734e\x2de2b0180744dc.mount: Deactivated successfully. 
Feb 13 19:19:27.055575 containerd[1572]: time="2025-02-13T19:19:27.054923891Z" level=info msg="TearDown network for sandbox \"4a19d66c3127313c5bdf2f4e04ac8eba11a814c3bf79cb06f92e5b84eabaa538\" successfully" Feb 13 19:19:27.055575 containerd[1572]: time="2025-02-13T19:19:27.054941607Z" level=info msg="StopPodSandbox for \"4a19d66c3127313c5bdf2f4e04ac8eba11a814c3bf79cb06f92e5b84eabaa538\" returns successfully" Feb 13 19:19:27.056443 containerd[1572]: time="2025-02-13T19:19:27.056357963Z" level=info msg="StopPodSandbox for \"8f70f7ae6f7bb909467842173be97f96ca0dd02d0a22d110e932cc8eac17b02a\"" Feb 13 19:19:27.056521 kubelet[2047]: I0213 19:19:27.056504 2047 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4eb407035bdc57cf836442f492eeca1785f44abfad2e0f6741eb7c0cc55a4872" Feb 13 19:19:27.057065 containerd[1572]: time="2025-02-13T19:19:27.056797244Z" level=info msg="StopPodSandbox for \"4eb407035bdc57cf836442f492eeca1785f44abfad2e0f6741eb7c0cc55a4872\"" Feb 13 19:19:27.057065 containerd[1572]: time="2025-02-13T19:19:27.056806888Z" level=info msg="TearDown network for sandbox \"8f70f7ae6f7bb909467842173be97f96ca0dd02d0a22d110e932cc8eac17b02a\" successfully" Feb 13 19:19:27.057065 containerd[1572]: time="2025-02-13T19:19:27.056944406Z" level=info msg="StopPodSandbox for \"8f70f7ae6f7bb909467842173be97f96ca0dd02d0a22d110e932cc8eac17b02a\" returns successfully" Feb 13 19:19:27.057065 containerd[1572]: time="2025-02-13T19:19:27.056970978Z" level=info msg="Ensure that sandbox 4eb407035bdc57cf836442f492eeca1785f44abfad2e0f6741eb7c0cc55a4872 in task-service has been cleanup successfully" Feb 13 19:19:27.057974 systemd[1]: run-netns-cni\x2db02157c5\x2dfed1\x2d2560\x2dd5d5\x2d13eda1de3d3f.mount: Deactivated successfully. 
Feb 13 19:19:27.058165 containerd[1572]: time="2025-02-13T19:19:27.058122390Z" level=info msg="StopPodSandbox for \"4bfb03cd633058d9b8bb5a68065f94a4b34888367f9264caad1d5cfc41831600\"" Feb 13 19:19:27.058205 containerd[1572]: time="2025-02-13T19:19:27.058165025Z" level=info msg="TearDown network for sandbox \"4bfb03cd633058d9b8bb5a68065f94a4b34888367f9264caad1d5cfc41831600\" successfully" Feb 13 19:19:27.058205 containerd[1572]: time="2025-02-13T19:19:27.058171524Z" level=info msg="StopPodSandbox for \"4bfb03cd633058d9b8bb5a68065f94a4b34888367f9264caad1d5cfc41831600\" returns successfully" Feb 13 19:19:27.059039 containerd[1572]: time="2025-02-13T19:19:27.058539819Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-lgcrw,Uid:c894b613-f774-4e5f-a65e-f4bdf203df3f,Namespace:calico-system,Attempt:3,}" Feb 13 19:19:27.059485 containerd[1572]: time="2025-02-13T19:19:27.059427713Z" level=info msg="TearDown network for sandbox \"4eb407035bdc57cf836442f492eeca1785f44abfad2e0f6741eb7c0cc55a4872\" successfully" Feb 13 19:19:27.059485 containerd[1572]: time="2025-02-13T19:19:27.059442648Z" level=info msg="StopPodSandbox for \"4eb407035bdc57cf836442f492eeca1785f44abfad2e0f6741eb7c0cc55a4872\" returns successfully" Feb 13 19:19:27.060017 containerd[1572]: time="2025-02-13T19:19:27.060007169Z" level=info msg="StopPodSandbox for \"956b936b7687be0188926c86e0ea3ede21fb5f0cb267509c96dd2d0b2e40f83c\"" Feb 13 19:19:27.060363 containerd[1572]: time="2025-02-13T19:19:27.060344724Z" level=info msg="TearDown network for sandbox \"956b936b7687be0188926c86e0ea3ede21fb5f0cb267509c96dd2d0b2e40f83c\" successfully" Feb 13 19:19:27.060363 containerd[1572]: time="2025-02-13T19:19:27.060357706Z" level=info msg="StopPodSandbox for \"956b936b7687be0188926c86e0ea3ede21fb5f0cb267509c96dd2d0b2e40f83c\" returns successfully" Feb 13 19:19:27.061351 containerd[1572]: time="2025-02-13T19:19:27.061336098Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-vhbmc,Uid:9056c3eb-23f5-4ba5-a512-998dfd6e4910,Namespace:default,Attempt:2,}" Feb 13 19:19:27.144367 containerd[1572]: time="2025-02-13T19:19:27.144334369Z" level=error msg="Failed to destroy network for sandbox \"7ae6139e6ea7638fc5c30c6b8e2eebdbc3c59441538354d9238f742c74fefba5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:19:27.145106 containerd[1572]: time="2025-02-13T19:19:27.145004944Z" level=error msg="encountered an error cleaning up failed sandbox \"7ae6139e6ea7638fc5c30c6b8e2eebdbc3c59441538354d9238f742c74fefba5\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:19:27.145106 containerd[1572]: time="2025-02-13T19:19:27.145053328Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-lgcrw,Uid:c894b613-f774-4e5f-a65e-f4bdf203df3f,Namespace:calico-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"7ae6139e6ea7638fc5c30c6b8e2eebdbc3c59441538354d9238f742c74fefba5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:19:27.145451 kubelet[2047]: E0213 19:19:27.145198 2047 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7ae6139e6ea7638fc5c30c6b8e2eebdbc3c59441538354d9238f742c74fefba5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:19:27.145451 kubelet[2047]: E0213 
19:19:27.145239 2047 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7ae6139e6ea7638fc5c30c6b8e2eebdbc3c59441538354d9238f742c74fefba5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-lgcrw" Feb 13 19:19:27.145451 kubelet[2047]: E0213 19:19:27.145252 2047 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7ae6139e6ea7638fc5c30c6b8e2eebdbc3c59441538354d9238f742c74fefba5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-lgcrw" Feb 13 19:19:27.145514 kubelet[2047]: E0213 19:19:27.145287 2047 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-lgcrw_calico-system(c894b613-f774-4e5f-a65e-f4bdf203df3f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-lgcrw_calico-system(c894b613-f774-4e5f-a65e-f4bdf203df3f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7ae6139e6ea7638fc5c30c6b8e2eebdbc3c59441538354d9238f742c74fefba5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-lgcrw" podUID="c894b613-f774-4e5f-a65e-f4bdf203df3f" Feb 13 19:19:27.151130 containerd[1572]: time="2025-02-13T19:19:27.151004289Z" level=error msg="Failed to destroy network for sandbox \"c5c40403fb0af3392c7a6aabe3e433a6770fc0e2712d756db073ec04c2b94400\"" error="plugin type=\"calico\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:19:27.151563 containerd[1572]: time="2025-02-13T19:19:27.151475442Z" level=error msg="encountered an error cleaning up failed sandbox \"c5c40403fb0af3392c7a6aabe3e433a6770fc0e2712d756db073ec04c2b94400\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:19:27.151563 containerd[1572]: time="2025-02-13T19:19:27.151512315Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-vhbmc,Uid:9056c3eb-23f5-4ba5-a512-998dfd6e4910,Namespace:default,Attempt:2,} failed, error" error="failed to setup network for sandbox \"c5c40403fb0af3392c7a6aabe3e433a6770fc0e2712d756db073ec04c2b94400\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:19:27.151801 kubelet[2047]: E0213 19:19:27.151683 2047 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c5c40403fb0af3392c7a6aabe3e433a6770fc0e2712d756db073ec04c2b94400\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:19:27.151801 kubelet[2047]: E0213 19:19:27.151726 2047 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c5c40403fb0af3392c7a6aabe3e433a6770fc0e2712d756db073ec04c2b94400\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" pod="default/nginx-deployment-8587fbcb89-vhbmc" Feb 13 19:19:27.151801 kubelet[2047]: E0213 19:19:27.151739 2047 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c5c40403fb0af3392c7a6aabe3e433a6770fc0e2712d756db073ec04c2b94400\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-8587fbcb89-vhbmc" Feb 13 19:19:27.151873 kubelet[2047]: E0213 19:19:27.151764 2047 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-8587fbcb89-vhbmc_default(9056c3eb-23f5-4ba5-a512-998dfd6e4910)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-8587fbcb89-vhbmc_default(9056c3eb-23f5-4ba5-a512-998dfd6e4910)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c5c40403fb0af3392c7a6aabe3e433a6770fc0e2712d756db073ec04c2b94400\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-8587fbcb89-vhbmc" podUID="9056c3eb-23f5-4ba5-a512-998dfd6e4910" Feb 13 19:19:27.953397 kubelet[2047]: E0213 19:19:27.953366 2047 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:19:28.042354 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-7ae6139e6ea7638fc5c30c6b8e2eebdbc3c59441538354d9238f742c74fefba5-shm.mount: Deactivated successfully. 
Feb 13 19:19:28.058653 kubelet[2047]: I0213 19:19:28.058634 2047 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7ae6139e6ea7638fc5c30c6b8e2eebdbc3c59441538354d9238f742c74fefba5" Feb 13 19:19:28.059204 containerd[1572]: time="2025-02-13T19:19:28.059080164Z" level=info msg="StopPodSandbox for \"7ae6139e6ea7638fc5c30c6b8e2eebdbc3c59441538354d9238f742c74fefba5\"" Feb 13 19:19:28.064124 kubelet[2047]: I0213 19:19:28.064001 2047 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c5c40403fb0af3392c7a6aabe3e433a6770fc0e2712d756db073ec04c2b94400" Feb 13 19:19:28.065237 containerd[1572]: time="2025-02-13T19:19:28.065179720Z" level=info msg="StopPodSandbox for \"c5c40403fb0af3392c7a6aabe3e433a6770fc0e2712d756db073ec04c2b94400\"" Feb 13 19:19:28.071270 containerd[1572]: time="2025-02-13T19:19:28.071240819Z" level=info msg="Ensure that sandbox 7ae6139e6ea7638fc5c30c6b8e2eebdbc3c59441538354d9238f742c74fefba5 in task-service has been cleanup successfully" Feb 13 19:19:28.072769 containerd[1572]: time="2025-02-13T19:19:28.071452411Z" level=info msg="Ensure that sandbox c5c40403fb0af3392c7a6aabe3e433a6770fc0e2712d756db073ec04c2b94400 in task-service has been cleanup successfully" Feb 13 19:19:28.072447 systemd[1]: run-netns-cni\x2dc8dbf02e\x2d734e\x2d69a1\x2dcff8\x2d9454659cad27.mount: Deactivated successfully. 
Feb 13 19:19:28.073025 containerd[1572]: time="2025-02-13T19:19:28.072803523Z" level=info msg="TearDown network for sandbox \"7ae6139e6ea7638fc5c30c6b8e2eebdbc3c59441538354d9238f742c74fefba5\" successfully" Feb 13 19:19:28.073025 containerd[1572]: time="2025-02-13T19:19:28.072815587Z" level=info msg="StopPodSandbox for \"7ae6139e6ea7638fc5c30c6b8e2eebdbc3c59441538354d9238f742c74fefba5\" returns successfully" Feb 13 19:19:28.073338 containerd[1572]: time="2025-02-13T19:19:28.073146438Z" level=info msg="StopPodSandbox for \"4a19d66c3127313c5bdf2f4e04ac8eba11a814c3bf79cb06f92e5b84eabaa538\"" Feb 13 19:19:28.073338 containerd[1572]: time="2025-02-13T19:19:28.073188469Z" level=info msg="TearDown network for sandbox \"4a19d66c3127313c5bdf2f4e04ac8eba11a814c3bf79cb06f92e5b84eabaa538\" successfully" Feb 13 19:19:28.073338 containerd[1572]: time="2025-02-13T19:19:28.073194774Z" level=info msg="StopPodSandbox for \"4a19d66c3127313c5bdf2f4e04ac8eba11a814c3bf79cb06f92e5b84eabaa538\" returns successfully" Feb 13 19:19:28.073338 containerd[1572]: time="2025-02-13T19:19:28.073150242Z" level=info msg="TearDown network for sandbox \"c5c40403fb0af3392c7a6aabe3e433a6770fc0e2712d756db073ec04c2b94400\" successfully" Feb 13 19:19:28.073338 containerd[1572]: time="2025-02-13T19:19:28.073214173Z" level=info msg="StopPodSandbox for \"c5c40403fb0af3392c7a6aabe3e433a6770fc0e2712d756db073ec04c2b94400\" returns successfully" Feb 13 19:19:28.074226 containerd[1572]: time="2025-02-13T19:19:28.074207068Z" level=info msg="StopPodSandbox for \"8f70f7ae6f7bb909467842173be97f96ca0dd02d0a22d110e932cc8eac17b02a\"" Feb 13 19:19:28.074277 containerd[1572]: time="2025-02-13T19:19:28.074245547Z" level=info msg="TearDown network for sandbox \"8f70f7ae6f7bb909467842173be97f96ca0dd02d0a22d110e932cc8eac17b02a\" successfully" Feb 13 19:19:28.074306 containerd[1572]: time="2025-02-13T19:19:28.074274941Z" level=info msg="StopPodSandbox for \"8f70f7ae6f7bb909467842173be97f96ca0dd02d0a22d110e932cc8eac17b02a\" 
returns successfully" Feb 13 19:19:28.074452 containerd[1572]: time="2025-02-13T19:19:28.074441447Z" level=info msg="StopPodSandbox for \"4eb407035bdc57cf836442f492eeca1785f44abfad2e0f6741eb7c0cc55a4872\"" Feb 13 19:19:28.074802 containerd[1572]: time="2025-02-13T19:19:28.074769533Z" level=info msg="TearDown network for sandbox \"4eb407035bdc57cf836442f492eeca1785f44abfad2e0f6741eb7c0cc55a4872\" successfully" Feb 13 19:19:28.074802 containerd[1572]: time="2025-02-13T19:19:28.074778890Z" level=info msg="StopPodSandbox for \"4eb407035bdc57cf836442f492eeca1785f44abfad2e0f6741eb7c0cc55a4872\" returns successfully" Feb 13 19:19:28.074875 systemd[1]: run-netns-cni\x2d5ab0fb19\x2d5723\x2defa5\x2d984f\x2d1dcbe5c9cd97.mount: Deactivated successfully. Feb 13 19:19:28.075599 containerd[1572]: time="2025-02-13T19:19:28.075506057Z" level=info msg="StopPodSandbox for \"956b936b7687be0188926c86e0ea3ede21fb5f0cb267509c96dd2d0b2e40f83c\"" Feb 13 19:19:28.075599 containerd[1572]: time="2025-02-13T19:19:28.075559189Z" level=info msg="TearDown network for sandbox \"956b936b7687be0188926c86e0ea3ede21fb5f0cb267509c96dd2d0b2e40f83c\" successfully" Feb 13 19:19:28.075599 containerd[1572]: time="2025-02-13T19:19:28.075566785Z" level=info msg="StopPodSandbox for \"956b936b7687be0188926c86e0ea3ede21fb5f0cb267509c96dd2d0b2e40f83c\" returns successfully" Feb 13 19:19:28.075721 containerd[1572]: time="2025-02-13T19:19:28.075713339Z" level=info msg="StopPodSandbox for \"4bfb03cd633058d9b8bb5a68065f94a4b34888367f9264caad1d5cfc41831600\"" Feb 13 19:19:28.075829 containerd[1572]: time="2025-02-13T19:19:28.075820782Z" level=info msg="TearDown network for sandbox \"4bfb03cd633058d9b8bb5a68065f94a4b34888367f9264caad1d5cfc41831600\" successfully" Feb 13 19:19:28.075894 containerd[1572]: time="2025-02-13T19:19:28.075855002Z" level=info msg="StopPodSandbox for \"4bfb03cd633058d9b8bb5a68065f94a4b34888367f9264caad1d5cfc41831600\" returns successfully" Feb 13 19:19:28.076178 containerd[1572]: 
time="2025-02-13T19:19:28.076164146Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-lgcrw,Uid:c894b613-f774-4e5f-a65e-f4bdf203df3f,Namespace:calico-system,Attempt:4,}" Feb 13 19:19:28.076271 containerd[1572]: time="2025-02-13T19:19:28.076232348Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-vhbmc,Uid:9056c3eb-23f5-4ba5-a512-998dfd6e4910,Namespace:default,Attempt:3,}" Feb 13 19:19:28.685004 containerd[1572]: time="2025-02-13T19:19:28.684662929Z" level=error msg="Failed to destroy network for sandbox \"5d33d9203d01eb50880e5696d0ec55e4e34dcc9c3d135ae71af2db049375b8fc\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:19:28.685004 containerd[1572]: time="2025-02-13T19:19:28.684880503Z" level=error msg="encountered an error cleaning up failed sandbox \"5d33d9203d01eb50880e5696d0ec55e4e34dcc9c3d135ae71af2db049375b8fc\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:19:28.685004 containerd[1572]: time="2025-02-13T19:19:28.684920899Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-lgcrw,Uid:c894b613-f774-4e5f-a65e-f4bdf203df3f,Namespace:calico-system,Attempt:4,} failed, error" error="failed to setup network for sandbox \"5d33d9203d01eb50880e5696d0ec55e4e34dcc9c3d135ae71af2db049375b8fc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:19:28.685253 kubelet[2047]: E0213 19:19:28.685217 2047 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network 
for sandbox \"5d33d9203d01eb50880e5696d0ec55e4e34dcc9c3d135ae71af2db049375b8fc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:19:28.685299 kubelet[2047]: E0213 19:19:28.685258 2047 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5d33d9203d01eb50880e5696d0ec55e4e34dcc9c3d135ae71af2db049375b8fc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-lgcrw" Feb 13 19:19:28.685299 kubelet[2047]: E0213 19:19:28.685272 2047 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5d33d9203d01eb50880e5696d0ec55e4e34dcc9c3d135ae71af2db049375b8fc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-lgcrw" Feb 13 19:19:28.685336 kubelet[2047]: E0213 19:19:28.685298 2047 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-lgcrw_calico-system(c894b613-f774-4e5f-a65e-f4bdf203df3f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-lgcrw_calico-system(c894b613-f774-4e5f-a65e-f4bdf203df3f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5d33d9203d01eb50880e5696d0ec55e4e34dcc9c3d135ae71af2db049375b8fc\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-lgcrw" 
podUID="c894b613-f774-4e5f-a65e-f4bdf203df3f" Feb 13 19:19:28.715189 containerd[1572]: time="2025-02-13T19:19:28.715164039Z" level=error msg="Failed to destroy network for sandbox \"8243b0cbfceb93ac9151ed48cfb3467e8c17f0c40223a3303634ef3c89609905\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:19:28.715491 containerd[1572]: time="2025-02-13T19:19:28.715466635Z" level=error msg="encountered an error cleaning up failed sandbox \"8243b0cbfceb93ac9151ed48cfb3467e8c17f0c40223a3303634ef3c89609905\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:19:28.715603 containerd[1572]: time="2025-02-13T19:19:28.715546707Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-vhbmc,Uid:9056c3eb-23f5-4ba5-a512-998dfd6e4910,Namespace:default,Attempt:3,} failed, error" error="failed to setup network for sandbox \"8243b0cbfceb93ac9151ed48cfb3467e8c17f0c40223a3303634ef3c89609905\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:19:28.715796 kubelet[2047]: E0213 19:19:28.715772 2047 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8243b0cbfceb93ac9151ed48cfb3467e8c17f0c40223a3303634ef3c89609905\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:19:28.715858 kubelet[2047]: E0213 19:19:28.715810 2047 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" 
err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8243b0cbfceb93ac9151ed48cfb3467e8c17f0c40223a3303634ef3c89609905\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-8587fbcb89-vhbmc" Feb 13 19:19:28.715858 kubelet[2047]: E0213 19:19:28.715824 2047 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8243b0cbfceb93ac9151ed48cfb3467e8c17f0c40223a3303634ef3c89609905\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-8587fbcb89-vhbmc" Feb 13 19:19:28.715858 kubelet[2047]: E0213 19:19:28.715851 2047 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-8587fbcb89-vhbmc_default(9056c3eb-23f5-4ba5-a512-998dfd6e4910)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-8587fbcb89-vhbmc_default(9056c3eb-23f5-4ba5-a512-998dfd6e4910)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8243b0cbfceb93ac9151ed48cfb3467e8c17f0c40223a3303634ef3c89609905\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-8587fbcb89-vhbmc" podUID="9056c3eb-23f5-4ba5-a512-998dfd6e4910" Feb 13 19:19:28.953887 kubelet[2047]: E0213 19:19:28.953782 2047 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:19:29.042424 systemd[1]: 
run-containerd-io.containerd.grpc.v1.cri-sandboxes-8243b0cbfceb93ac9151ed48cfb3467e8c17f0c40223a3303634ef3c89609905-shm.mount: Deactivated successfully. Feb 13 19:19:29.044186 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-5d33d9203d01eb50880e5696d0ec55e4e34dcc9c3d135ae71af2db049375b8fc-shm.mount: Deactivated successfully. Feb 13 19:19:29.044229 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2598653179.mount: Deactivated successfully. Feb 13 19:19:29.066528 kubelet[2047]: I0213 19:19:29.066510 2047 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5d33d9203d01eb50880e5696d0ec55e4e34dcc9c3d135ae71af2db049375b8fc" Feb 13 19:19:29.067336 containerd[1572]: time="2025-02-13T19:19:29.066923104Z" level=info msg="StopPodSandbox for \"5d33d9203d01eb50880e5696d0ec55e4e34dcc9c3d135ae71af2db049375b8fc\"" Feb 13 19:19:29.067336 containerd[1572]: time="2025-02-13T19:19:29.067040254Z" level=info msg="Ensure that sandbox 5d33d9203d01eb50880e5696d0ec55e4e34dcc9c3d135ae71af2db049375b8fc in task-service has been cleanup successfully" Feb 13 19:19:29.068200 containerd[1572]: time="2025-02-13T19:19:29.068183137Z" level=info msg="TearDown network for sandbox \"5d33d9203d01eb50880e5696d0ec55e4e34dcc9c3d135ae71af2db049375b8fc\" successfully" Feb 13 19:19:29.068200 containerd[1572]: time="2025-02-13T19:19:29.068196853Z" level=info msg="StopPodSandbox for \"5d33d9203d01eb50880e5696d0ec55e4e34dcc9c3d135ae71af2db049375b8fc\" returns successfully" Feb 13 19:19:29.068484 systemd[1]: run-netns-cni\x2da4270952\x2dc6b7\x2d12b9\x2d9e32\x2d63ee8162a7b4.mount: Deactivated successfully. 
Feb 13 19:19:29.068562 containerd[1572]: time="2025-02-13T19:19:29.068481389Z" level=info msg="StopPodSandbox for \"7ae6139e6ea7638fc5c30c6b8e2eebdbc3c59441538354d9238f742c74fefba5\"" Feb 13 19:19:29.068562 containerd[1572]: time="2025-02-13T19:19:29.068520106Z" level=info msg="TearDown network for sandbox \"7ae6139e6ea7638fc5c30c6b8e2eebdbc3c59441538354d9238f742c74fefba5\" successfully" Feb 13 19:19:29.068562 containerd[1572]: time="2025-02-13T19:19:29.068525870Z" level=info msg="StopPodSandbox for \"7ae6139e6ea7638fc5c30c6b8e2eebdbc3c59441538354d9238f742c74fefba5\" returns successfully" Feb 13 19:19:29.068713 containerd[1572]: time="2025-02-13T19:19:29.068700071Z" level=info msg="StopPodSandbox for \"4a19d66c3127313c5bdf2f4e04ac8eba11a814c3bf79cb06f92e5b84eabaa538\"" Feb 13 19:19:29.068747 containerd[1572]: time="2025-02-13T19:19:29.068736315Z" level=info msg="TearDown network for sandbox \"4a19d66c3127313c5bdf2f4e04ac8eba11a814c3bf79cb06f92e5b84eabaa538\" successfully" Feb 13 19:19:29.068747 containerd[1572]: time="2025-02-13T19:19:29.068745494Z" level=info msg="StopPodSandbox for \"4a19d66c3127313c5bdf2f4e04ac8eba11a814c3bf79cb06f92e5b84eabaa538\" returns successfully" Feb 13 19:19:29.069110 containerd[1572]: time="2025-02-13T19:19:29.068886762Z" level=info msg="StopPodSandbox for \"8f70f7ae6f7bb909467842173be97f96ca0dd02d0a22d110e932cc8eac17b02a\"" Feb 13 19:19:29.069110 containerd[1572]: time="2025-02-13T19:19:29.068919312Z" level=info msg="TearDown network for sandbox \"8f70f7ae6f7bb909467842173be97f96ca0dd02d0a22d110e932cc8eac17b02a\" successfully" Feb 13 19:19:29.069110 containerd[1572]: time="2025-02-13T19:19:29.068925163Z" level=info msg="StopPodSandbox for \"8f70f7ae6f7bb909467842173be97f96ca0dd02d0a22d110e932cc8eac17b02a\" returns successfully" Feb 13 19:19:29.069730 containerd[1572]: time="2025-02-13T19:19:29.069593839Z" level=info msg="StopPodSandbox for \"4bfb03cd633058d9b8bb5a68065f94a4b34888367f9264caad1d5cfc41831600\"" Feb 13 19:19:29.069730 
containerd[1572]: time="2025-02-13T19:19:29.069630699Z" level=info msg="TearDown network for sandbox \"4bfb03cd633058d9b8bb5a68065f94a4b34888367f9264caad1d5cfc41831600\" successfully" Feb 13 19:19:29.069730 containerd[1572]: time="2025-02-13T19:19:29.069636913Z" level=info msg="StopPodSandbox for \"4bfb03cd633058d9b8bb5a68065f94a4b34888367f9264caad1d5cfc41831600\" returns successfully" Feb 13 19:19:29.070056 containerd[1572]: time="2025-02-13T19:19:29.069899854Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-lgcrw,Uid:c894b613-f774-4e5f-a65e-f4bdf203df3f,Namespace:calico-system,Attempt:5,}" Feb 13 19:19:29.070433 kubelet[2047]: I0213 19:19:29.070267 2047 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8243b0cbfceb93ac9151ed48cfb3467e8c17f0c40223a3303634ef3c89609905" Feb 13 19:19:29.071725 containerd[1572]: time="2025-02-13T19:19:29.070512605Z" level=info msg="StopPodSandbox for \"8243b0cbfceb93ac9151ed48cfb3467e8c17f0c40223a3303634ef3c89609905\"" Feb 13 19:19:29.071725 containerd[1572]: time="2025-02-13T19:19:29.070603438Z" level=info msg="Ensure that sandbox 8243b0cbfceb93ac9151ed48cfb3467e8c17f0c40223a3303634ef3c89609905 in task-service has been cleanup successfully" Feb 13 19:19:29.071491 systemd[1]: run-netns-cni\x2dca0749f7\x2ddf24\x2d5183\x2dccaf\x2d9960f0f82feb.mount: Deactivated successfully. 
Feb 13 19:19:29.071894 containerd[1572]: time="2025-02-13T19:19:29.071878689Z" level=info msg="TearDown network for sandbox \"8243b0cbfceb93ac9151ed48cfb3467e8c17f0c40223a3303634ef3c89609905\" successfully" Feb 13 19:19:29.071894 containerd[1572]: time="2025-02-13T19:19:29.071890079Z" level=info msg="StopPodSandbox for \"8243b0cbfceb93ac9151ed48cfb3467e8c17f0c40223a3303634ef3c89609905\" returns successfully" Feb 13 19:19:29.072051 containerd[1572]: time="2025-02-13T19:19:29.072034247Z" level=info msg="StopPodSandbox for \"c5c40403fb0af3392c7a6aabe3e433a6770fc0e2712d756db073ec04c2b94400\"" Feb 13 19:19:29.072243 containerd[1572]: time="2025-02-13T19:19:29.072073435Z" level=info msg="TearDown network for sandbox \"c5c40403fb0af3392c7a6aabe3e433a6770fc0e2712d756db073ec04c2b94400\" successfully" Feb 13 19:19:29.072243 containerd[1572]: time="2025-02-13T19:19:29.072079110Z" level=info msg="StopPodSandbox for \"c5c40403fb0af3392c7a6aabe3e433a6770fc0e2712d756db073ec04c2b94400\" returns successfully" Feb 13 19:19:29.072443 containerd[1572]: time="2025-02-13T19:19:29.072371883Z" level=info msg="StopPodSandbox for \"4eb407035bdc57cf836442f492eeca1785f44abfad2e0f6741eb7c0cc55a4872\"" Feb 13 19:19:29.072443 containerd[1572]: time="2025-02-13T19:19:29.072409204Z" level=info msg="TearDown network for sandbox \"4eb407035bdc57cf836442f492eeca1785f44abfad2e0f6741eb7c0cc55a4872\" successfully" Feb 13 19:19:29.072443 containerd[1572]: time="2025-02-13T19:19:29.072416450Z" level=info msg="StopPodSandbox for \"4eb407035bdc57cf836442f492eeca1785f44abfad2e0f6741eb7c0cc55a4872\" returns successfully" Feb 13 19:19:29.072893 containerd[1572]: time="2025-02-13T19:19:29.072878469Z" level=info msg="StopPodSandbox for \"956b936b7687be0188926c86e0ea3ede21fb5f0cb267509c96dd2d0b2e40f83c\"" Feb 13 19:19:29.072925 containerd[1572]: time="2025-02-13T19:19:29.072917067Z" level=info msg="TearDown network for sandbox \"956b936b7687be0188926c86e0ea3ede21fb5f0cb267509c96dd2d0b2e40f83c\" successfully" Feb 
13 19:19:29.072925 containerd[1572]: time="2025-02-13T19:19:29.072922746Z" level=info msg="StopPodSandbox for \"956b936b7687be0188926c86e0ea3ede21fb5f0cb267509c96dd2d0b2e40f83c\" returns successfully" Feb 13 19:19:29.073271 containerd[1572]: time="2025-02-13T19:19:29.073254951Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-vhbmc,Uid:9056c3eb-23f5-4ba5-a512-998dfd6e4910,Namespace:default,Attempt:4,}" Feb 13 19:19:29.091565 containerd[1572]: time="2025-02-13T19:19:29.091534716Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:19:29.093112 containerd[1572]: time="2025-02-13T19:19:29.092939054Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.1: active requests=0, bytes read=142742010" Feb 13 19:19:29.093112 containerd[1572]: time="2025-02-13T19:19:29.092992133Z" level=info msg="ImageCreate event name:\"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:19:29.095162 containerd[1572]: time="2025-02-13T19:19:29.095091080Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:19:29.095516 containerd[1572]: time="2025-02-13T19:19:29.095500618Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.1\" with image id \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\", size \"142741872\" in 5.056063677s" Feb 13 19:19:29.095554 containerd[1572]: time="2025-02-13T19:19:29.095517501Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\" returns image 
reference \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\"" Feb 13 19:19:29.119555 containerd[1572]: time="2025-02-13T19:19:29.119453882Z" level=info msg="CreateContainer within sandbox \"f87901ebfa2157787a55594e5f5663bf855f06964cad8b6ba5f8abe45dba6ceb\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Feb 13 19:19:29.130374 containerd[1572]: time="2025-02-13T19:19:29.130331119Z" level=info msg="CreateContainer within sandbox \"f87901ebfa2157787a55594e5f5663bf855f06964cad8b6ba5f8abe45dba6ceb\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"e47997b0607d57e83943c7cfb88245d325dce91cdeee935f5c8fa0428c031563\"" Feb 13 19:19:29.130962 containerd[1572]: time="2025-02-13T19:19:29.130929778Z" level=info msg="StartContainer for \"e47997b0607d57e83943c7cfb88245d325dce91cdeee935f5c8fa0428c031563\"" Feb 13 19:19:29.147324 containerd[1572]: time="2025-02-13T19:19:29.146917681Z" level=error msg="Failed to destroy network for sandbox \"dd142845f4a99e691b3e8f95f02a7ecbe8095bcceb254bbc1b4947708e08f271\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:19:29.147324 containerd[1572]: time="2025-02-13T19:19:29.147218733Z" level=error msg="encountered an error cleaning up failed sandbox \"dd142845f4a99e691b3e8f95f02a7ecbe8095bcceb254bbc1b4947708e08f271\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:19:29.147324 containerd[1572]: time="2025-02-13T19:19:29.147254475Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-vhbmc,Uid:9056c3eb-23f5-4ba5-a512-998dfd6e4910,Namespace:default,Attempt:4,} failed, error" error="failed to setup network for 
sandbox \"dd142845f4a99e691b3e8f95f02a7ecbe8095bcceb254bbc1b4947708e08f271\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:19:29.148096 kubelet[2047]: E0213 19:19:29.147625 2047 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dd142845f4a99e691b3e8f95f02a7ecbe8095bcceb254bbc1b4947708e08f271\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:19:29.148096 kubelet[2047]: E0213 19:19:29.147672 2047 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dd142845f4a99e691b3e8f95f02a7ecbe8095bcceb254bbc1b4947708e08f271\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-8587fbcb89-vhbmc" Feb 13 19:19:29.148096 kubelet[2047]: E0213 19:19:29.147686 2047 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dd142845f4a99e691b3e8f95f02a7ecbe8095bcceb254bbc1b4947708e08f271\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-8587fbcb89-vhbmc" Feb 13 19:19:29.148229 kubelet[2047]: E0213 19:19:29.147714 2047 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-8587fbcb89-vhbmc_default(9056c3eb-23f5-4ba5-a512-998dfd6e4910)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"nginx-deployment-8587fbcb89-vhbmc_default(9056c3eb-23f5-4ba5-a512-998dfd6e4910)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"dd142845f4a99e691b3e8f95f02a7ecbe8095bcceb254bbc1b4947708e08f271\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-8587fbcb89-vhbmc" podUID="9056c3eb-23f5-4ba5-a512-998dfd6e4910" Feb 13 19:19:29.162217 containerd[1572]: time="2025-02-13T19:19:29.162186598Z" level=error msg="Failed to destroy network for sandbox \"44a5973e2c1b0ddeadfebdf9d4c9acf587acda04eab4ac63722b13f0401d0ebe\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:19:29.162508 containerd[1572]: time="2025-02-13T19:19:29.162492067Z" level=error msg="encountered an error cleaning up failed sandbox \"44a5973e2c1b0ddeadfebdf9d4c9acf587acda04eab4ac63722b13f0401d0ebe\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:19:29.162620 containerd[1572]: time="2025-02-13T19:19:29.162592564Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-lgcrw,Uid:c894b613-f774-4e5f-a65e-f4bdf203df3f,Namespace:calico-system,Attempt:5,} failed, error" error="failed to setup network for sandbox \"44a5973e2c1b0ddeadfebdf9d4c9acf587acda04eab4ac63722b13f0401d0ebe\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:19:29.163016 kubelet[2047]: E0213 19:19:29.162775 2047 log.go:32] "RunPodSandbox from runtime service 
failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"44a5973e2c1b0ddeadfebdf9d4c9acf587acda04eab4ac63722b13f0401d0ebe\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 19:19:29.163016 kubelet[2047]: E0213 19:19:29.162816 2047 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"44a5973e2c1b0ddeadfebdf9d4c9acf587acda04eab4ac63722b13f0401d0ebe\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-lgcrw" Feb 13 19:19:29.163016 kubelet[2047]: E0213 19:19:29.162829 2047 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"44a5973e2c1b0ddeadfebdf9d4c9acf587acda04eab4ac63722b13f0401d0ebe\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-lgcrw" Feb 13 19:19:29.163102 kubelet[2047]: E0213 19:19:29.162862 2047 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-lgcrw_calico-system(c894b613-f774-4e5f-a65e-f4bdf203df3f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-lgcrw_calico-system(c894b613-f774-4e5f-a65e-f4bdf203df3f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"44a5973e2c1b0ddeadfebdf9d4c9acf587acda04eab4ac63722b13f0401d0ebe\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/\"" pod="calico-system/csi-node-driver-lgcrw" podUID="c894b613-f774-4e5f-a65e-f4bdf203df3f" Feb 13 19:19:29.192222 systemd[1]: Started cri-containerd-e47997b0607d57e83943c7cfb88245d325dce91cdeee935f5c8fa0428c031563.scope - libcontainer container e47997b0607d57e83943c7cfb88245d325dce91cdeee935f5c8fa0428c031563. Feb 13 19:19:29.211903 containerd[1572]: time="2025-02-13T19:19:29.211767661Z" level=info msg="StartContainer for \"e47997b0607d57e83943c7cfb88245d325dce91cdeee935f5c8fa0428c031563\" returns successfully" Feb 13 19:19:29.256270 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Feb 13 19:19:29.256352 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Feb 13 19:19:29.953948 kubelet[2047]: E0213 19:19:29.953901 2047 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:19:30.045555 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-44a5973e2c1b0ddeadfebdf9d4c9acf587acda04eab4ac63722b13f0401d0ebe-shm.mount: Deactivated successfully. Feb 13 19:19:30.045643 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-dd142845f4a99e691b3e8f95f02a7ecbe8095bcceb254bbc1b4947708e08f271-shm.mount: Deactivated successfully. 
Feb 13 19:19:30.072953 kubelet[2047]: I0213 19:19:30.072867 2047 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="44a5973e2c1b0ddeadfebdf9d4c9acf587acda04eab4ac63722b13f0401d0ebe" Feb 13 19:19:30.073451 containerd[1572]: time="2025-02-13T19:19:30.073324084Z" level=info msg="StopPodSandbox for \"44a5973e2c1b0ddeadfebdf9d4c9acf587acda04eab4ac63722b13f0401d0ebe\"" Feb 13 19:19:30.074388 containerd[1572]: time="2025-02-13T19:19:30.073652500Z" level=info msg="Ensure that sandbox 44a5973e2c1b0ddeadfebdf9d4c9acf587acda04eab4ac63722b13f0401d0ebe in task-service has been cleanup successfully" Feb 13 19:19:30.075325 systemd[1]: run-netns-cni\x2d43cb9c65\x2d73a4\x2d6126\x2d01a3\x2d1bff534d6a93.mount: Deactivated successfully. Feb 13 19:19:30.076209 containerd[1572]: time="2025-02-13T19:19:30.075840653Z" level=info msg="TearDown network for sandbox \"44a5973e2c1b0ddeadfebdf9d4c9acf587acda04eab4ac63722b13f0401d0ebe\" successfully" Feb 13 19:19:30.076209 containerd[1572]: time="2025-02-13T19:19:30.075856658Z" level=info msg="StopPodSandbox for \"44a5973e2c1b0ddeadfebdf9d4c9acf587acda04eab4ac63722b13f0401d0ebe\" returns successfully" Feb 13 19:19:30.076368 containerd[1572]: time="2025-02-13T19:19:30.076345448Z" level=info msg="StopPodSandbox for \"5d33d9203d01eb50880e5696d0ec55e4e34dcc9c3d135ae71af2db049375b8fc\"" Feb 13 19:19:30.076472 containerd[1572]: time="2025-02-13T19:19:30.076408322Z" level=info msg="TearDown network for sandbox \"5d33d9203d01eb50880e5696d0ec55e4e34dcc9c3d135ae71af2db049375b8fc\" successfully" Feb 13 19:19:30.076472 containerd[1572]: time="2025-02-13T19:19:30.076416930Z" level=info msg="StopPodSandbox for \"5d33d9203d01eb50880e5696d0ec55e4e34dcc9c3d135ae71af2db049375b8fc\" returns successfully" Feb 13 19:19:30.076601 containerd[1572]: time="2025-02-13T19:19:30.076576394Z" level=info msg="StopPodSandbox for \"7ae6139e6ea7638fc5c30c6b8e2eebdbc3c59441538354d9238f742c74fefba5\"" Feb 13 19:19:30.076620 containerd[1572]: 
time="2025-02-13T19:19:30.076608397Z" level=info msg="TearDown network for sandbox \"7ae6139e6ea7638fc5c30c6b8e2eebdbc3c59441538354d9238f742c74fefba5\" successfully" Feb 13 19:19:30.076620 containerd[1572]: time="2025-02-13T19:19:30.076613932Z" level=info msg="StopPodSandbox for \"7ae6139e6ea7638fc5c30c6b8e2eebdbc3c59441538354d9238f742c74fefba5\" returns successfully" Feb 13 19:19:30.077272 containerd[1572]: time="2025-02-13T19:19:30.077258747Z" level=info msg="StopPodSandbox for \"4a19d66c3127313c5bdf2f4e04ac8eba11a814c3bf79cb06f92e5b84eabaa538\"" Feb 13 19:19:30.077309 containerd[1572]: time="2025-02-13T19:19:30.077295698Z" level=info msg="TearDown network for sandbox \"4a19d66c3127313c5bdf2f4e04ac8eba11a814c3bf79cb06f92e5b84eabaa538\" successfully" Feb 13 19:19:30.077309 containerd[1572]: time="2025-02-13T19:19:30.077301856Z" level=info msg="StopPodSandbox for \"4a19d66c3127313c5bdf2f4e04ac8eba11a814c3bf79cb06f92e5b84eabaa538\" returns successfully" Feb 13 19:19:30.077637 kubelet[2047]: I0213 19:19:30.077444 2047 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dd142845f4a99e691b3e8f95f02a7ecbe8095bcceb254bbc1b4947708e08f271" Feb 13 19:19:30.077932 containerd[1572]: time="2025-02-13T19:19:30.077738570Z" level=info msg="StopPodSandbox for \"dd142845f4a99e691b3e8f95f02a7ecbe8095bcceb254bbc1b4947708e08f271\"" Feb 13 19:19:30.077932 containerd[1572]: time="2025-02-13T19:19:30.077775932Z" level=info msg="StopPodSandbox for \"8f70f7ae6f7bb909467842173be97f96ca0dd02d0a22d110e932cc8eac17b02a\"" Feb 13 19:19:30.077932 containerd[1572]: time="2025-02-13T19:19:30.077811809Z" level=info msg="TearDown network for sandbox \"8f70f7ae6f7bb909467842173be97f96ca0dd02d0a22d110e932cc8eac17b02a\" successfully" Feb 13 19:19:30.077932 containerd[1572]: time="2025-02-13T19:19:30.077817266Z" level=info msg="StopPodSandbox for \"8f70f7ae6f7bb909467842173be97f96ca0dd02d0a22d110e932cc8eac17b02a\" returns successfully" Feb 13 19:19:30.077932 
containerd[1572]: time="2025-02-13T19:19:30.077855493Z" level=info msg="Ensure that sandbox dd142845f4a99e691b3e8f95f02a7ecbe8095bcceb254bbc1b4947708e08f271 in task-service has been cleanup successfully" Feb 13 19:19:30.079275 containerd[1572]: time="2025-02-13T19:19:30.078302709Z" level=info msg="StopPodSandbox for \"4bfb03cd633058d9b8bb5a68065f94a4b34888367f9264caad1d5cfc41831600\"" Feb 13 19:19:30.079275 containerd[1572]: time="2025-02-13T19:19:30.078341912Z" level=info msg="TearDown network for sandbox \"4bfb03cd633058d9b8bb5a68065f94a4b34888367f9264caad1d5cfc41831600\" successfully" Feb 13 19:19:30.079275 containerd[1572]: time="2025-02-13T19:19:30.078348821Z" level=info msg="StopPodSandbox for \"4bfb03cd633058d9b8bb5a68065f94a4b34888367f9264caad1d5cfc41831600\" returns successfully" Feb 13 19:19:30.079275 containerd[1572]: time="2025-02-13T19:19:30.078371901Z" level=info msg="TearDown network for sandbox \"dd142845f4a99e691b3e8f95f02a7ecbe8095bcceb254bbc1b4947708e08f271\" successfully" Feb 13 19:19:30.079275 containerd[1572]: time="2025-02-13T19:19:30.078377698Z" level=info msg="StopPodSandbox for \"dd142845f4a99e691b3e8f95f02a7ecbe8095bcceb254bbc1b4947708e08f271\" returns successfully" Feb 13 19:19:30.079104 systemd[1]: run-netns-cni\x2dda93fe5e\x2ded5e\x2de119\x2d0137\x2df03be9514820.mount: Deactivated successfully. 
Feb 13 19:19:30.080391 containerd[1572]: time="2025-02-13T19:19:30.080372503Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-lgcrw,Uid:c894b613-f774-4e5f-a65e-f4bdf203df3f,Namespace:calico-system,Attempt:6,}" Feb 13 19:19:30.080582 containerd[1572]: time="2025-02-13T19:19:30.080374663Z" level=info msg="StopPodSandbox for \"8243b0cbfceb93ac9151ed48cfb3467e8c17f0c40223a3303634ef3c89609905\"" Feb 13 19:19:30.080693 containerd[1572]: time="2025-02-13T19:19:30.080680790Z" level=info msg="TearDown network for sandbox \"8243b0cbfceb93ac9151ed48cfb3467e8c17f0c40223a3303634ef3c89609905\" successfully" Feb 13 19:19:30.080773 containerd[1572]: time="2025-02-13T19:19:30.080761813Z" level=info msg="StopPodSandbox for \"8243b0cbfceb93ac9151ed48cfb3467e8c17f0c40223a3303634ef3c89609905\" returns successfully" Feb 13 19:19:30.080963 containerd[1572]: time="2025-02-13T19:19:30.080953067Z" level=info msg="StopPodSandbox for \"c5c40403fb0af3392c7a6aabe3e433a6770fc0e2712d756db073ec04c2b94400\"" Feb 13 19:19:30.081053 containerd[1572]: time="2025-02-13T19:19:30.081044602Z" level=info msg="TearDown network for sandbox \"c5c40403fb0af3392c7a6aabe3e433a6770fc0e2712d756db073ec04c2b94400\" successfully" Feb 13 19:19:30.081137 containerd[1572]: time="2025-02-13T19:19:30.081095988Z" level=info msg="StopPodSandbox for \"c5c40403fb0af3392c7a6aabe3e433a6770fc0e2712d756db073ec04c2b94400\" returns successfully" Feb 13 19:19:30.081476 containerd[1572]: time="2025-02-13T19:19:30.081411437Z" level=info msg="StopPodSandbox for \"4eb407035bdc57cf836442f492eeca1785f44abfad2e0f6741eb7c0cc55a4872\"" Feb 13 19:19:30.081476 containerd[1572]: time="2025-02-13T19:19:30.081449271Z" level=info msg="TearDown network for sandbox \"4eb407035bdc57cf836442f492eeca1785f44abfad2e0f6741eb7c0cc55a4872\" successfully" Feb 13 19:19:30.081476 containerd[1572]: time="2025-02-13T19:19:30.081455850Z" level=info msg="StopPodSandbox for 
\"4eb407035bdc57cf836442f492eeca1785f44abfad2e0f6741eb7c0cc55a4872\" returns successfully" Feb 13 19:19:30.081958 containerd[1572]: time="2025-02-13T19:19:30.081852670Z" level=info msg="StopPodSandbox for \"956b936b7687be0188926c86e0ea3ede21fb5f0cb267509c96dd2d0b2e40f83c\"" Feb 13 19:19:30.081958 containerd[1572]: time="2025-02-13T19:19:30.081921728Z" level=info msg="TearDown network for sandbox \"956b936b7687be0188926c86e0ea3ede21fb5f0cb267509c96dd2d0b2e40f83c\" successfully" Feb 13 19:19:30.081958 containerd[1572]: time="2025-02-13T19:19:30.081929301Z" level=info msg="StopPodSandbox for \"956b936b7687be0188926c86e0ea3ede21fb5f0cb267509c96dd2d0b2e40f83c\" returns successfully" Feb 13 19:19:30.082499 containerd[1572]: time="2025-02-13T19:19:30.082277660Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-vhbmc,Uid:9056c3eb-23f5-4ba5-a512-998dfd6e4910,Namespace:default,Attempt:5,}" Feb 13 19:19:30.110500 kubelet[2047]: I0213 19:19:30.110466 2047 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-2658l" podStartSLOduration=3.016047761 podStartE2EDuration="17.110455055s" podCreationTimestamp="2025-02-13 19:19:13 +0000 UTC" firstStartedPulling="2025-02-13 19:19:15.005184388 +0000 UTC m=+2.456326744" lastFinishedPulling="2025-02-13 19:19:29.099591682 +0000 UTC m=+16.550734038" observedRunningTime="2025-02-13 19:19:30.110384304 +0000 UTC m=+17.561526670" watchObservedRunningTime="2025-02-13 19:19:30.110455055 +0000 UTC m=+17.561597415" Feb 13 19:19:30.533181 kernel: bpftool[3042]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Feb 13 19:19:30.713504 systemd-networkd[1260]: vxlan.calico: Link UP Feb 13 19:19:30.713511 systemd-networkd[1260]: vxlan.calico: Gained carrier Feb 13 19:19:30.954473 kubelet[2047]: E0213 19:19:30.954393 2047 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:19:31.471800 
systemd-networkd[1260]: calib80f72cb17c: Link UP Feb 13 19:19:31.472052 systemd-networkd[1260]: calib80f72cb17c: Gained carrier Feb 13 19:19:31.480942 containerd[1572]: 2025-02-13 19:19:30.271 [INFO][2921] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Feb 13 19:19:31.480942 containerd[1572]: 2025-02-13 19:19:30.562 [INFO][2921] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.67.124.142-k8s-csi--node--driver--lgcrw-eth0 csi-node-driver- calico-system c894b613-f774-4e5f-a65e-f4bdf203df3f 1003 0 2025-02-13 19:19:13 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:56747c9949 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s 10.67.124.142 csi-node-driver-lgcrw eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calib80f72cb17c [] []}} ContainerID="dc786c6e0a3276a3406411b252c5c0d95d55bf303131f57ee3ba9f11a17488de" Namespace="calico-system" Pod="csi-node-driver-lgcrw" WorkloadEndpoint="10.67.124.142-k8s-csi--node--driver--lgcrw-" Feb 13 19:19:31.480942 containerd[1572]: 2025-02-13 19:19:30.562 [INFO][2921] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="dc786c6e0a3276a3406411b252c5c0d95d55bf303131f57ee3ba9f11a17488de" Namespace="calico-system" Pod="csi-node-driver-lgcrw" WorkloadEndpoint="10.67.124.142-k8s-csi--node--driver--lgcrw-eth0" Feb 13 19:19:31.480942 containerd[1572]: 2025-02-13 19:19:31.331 [INFO][3045] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="dc786c6e0a3276a3406411b252c5c0d95d55bf303131f57ee3ba9f11a17488de" HandleID="k8s-pod-network.dc786c6e0a3276a3406411b252c5c0d95d55bf303131f57ee3ba9f11a17488de" Workload="10.67.124.142-k8s-csi--node--driver--lgcrw-eth0" Feb 13 19:19:31.480942 containerd[1572]: 
2025-02-13 19:19:31.444 [INFO][3045] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="dc786c6e0a3276a3406411b252c5c0d95d55bf303131f57ee3ba9f11a17488de" HandleID="k8s-pod-network.dc786c6e0a3276a3406411b252c5c0d95d55bf303131f57ee3ba9f11a17488de" Workload="10.67.124.142-k8s-csi--node--driver--lgcrw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000406f60), Attrs:map[string]string{"namespace":"calico-system", "node":"10.67.124.142", "pod":"csi-node-driver-lgcrw", "timestamp":"2025-02-13 19:19:31.331417163 +0000 UTC"}, Hostname:"10.67.124.142", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 13 19:19:31.480942 containerd[1572]: 2025-02-13 19:19:31.444 [INFO][3045] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 19:19:31.480942 containerd[1572]: 2025-02-13 19:19:31.444 [INFO][3045] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Feb 13 19:19:31.480942 containerd[1572]: 2025-02-13 19:19:31.444 [INFO][3045] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.67.124.142' Feb 13 19:19:31.480942 containerd[1572]: 2025-02-13 19:19:31.446 [INFO][3045] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.dc786c6e0a3276a3406411b252c5c0d95d55bf303131f57ee3ba9f11a17488de" host="10.67.124.142" Feb 13 19:19:31.480942 containerd[1572]: 2025-02-13 19:19:31.452 [INFO][3045] ipam/ipam.go 372: Looking up existing affinities for host host="10.67.124.142" Feb 13 19:19:31.480942 containerd[1572]: 2025-02-13 19:19:31.455 [INFO][3045] ipam/ipam.go 489: Trying affinity for 192.168.47.0/26 host="10.67.124.142" Feb 13 19:19:31.480942 containerd[1572]: 2025-02-13 19:19:31.456 [INFO][3045] ipam/ipam.go 155: Attempting to load block cidr=192.168.47.0/26 host="10.67.124.142" Feb 13 19:19:31.480942 containerd[1572]: 2025-02-13 19:19:31.458 [INFO][3045] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.47.0/26 host="10.67.124.142" Feb 13 19:19:31.480942 containerd[1572]: 2025-02-13 19:19:31.458 [INFO][3045] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.47.0/26 handle="k8s-pod-network.dc786c6e0a3276a3406411b252c5c0d95d55bf303131f57ee3ba9f11a17488de" host="10.67.124.142" Feb 13 19:19:31.480942 containerd[1572]: 2025-02-13 19:19:31.459 [INFO][3045] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.dc786c6e0a3276a3406411b252c5c0d95d55bf303131f57ee3ba9f11a17488de Feb 13 19:19:31.480942 containerd[1572]: 2025-02-13 19:19:31.462 [INFO][3045] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.47.0/26 handle="k8s-pod-network.dc786c6e0a3276a3406411b252c5c0d95d55bf303131f57ee3ba9f11a17488de" host="10.67.124.142" Feb 13 19:19:31.480942 containerd[1572]: 2025-02-13 19:19:31.466 [INFO][3045] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.47.1/26] block=192.168.47.0/26 
handle="k8s-pod-network.dc786c6e0a3276a3406411b252c5c0d95d55bf303131f57ee3ba9f11a17488de" host="10.67.124.142" Feb 13 19:19:31.480942 containerd[1572]: 2025-02-13 19:19:31.466 [INFO][3045] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.47.1/26] handle="k8s-pod-network.dc786c6e0a3276a3406411b252c5c0d95d55bf303131f57ee3ba9f11a17488de" host="10.67.124.142" Feb 13 19:19:31.480942 containerd[1572]: 2025-02-13 19:19:31.466 [INFO][3045] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 19:19:31.480942 containerd[1572]: 2025-02-13 19:19:31.466 [INFO][3045] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.47.1/26] IPv6=[] ContainerID="dc786c6e0a3276a3406411b252c5c0d95d55bf303131f57ee3ba9f11a17488de" HandleID="k8s-pod-network.dc786c6e0a3276a3406411b252c5c0d95d55bf303131f57ee3ba9f11a17488de" Workload="10.67.124.142-k8s-csi--node--driver--lgcrw-eth0" Feb 13 19:19:31.481994 containerd[1572]: 2025-02-13 19:19:31.467 [INFO][2921] cni-plugin/k8s.go 386: Populated endpoint ContainerID="dc786c6e0a3276a3406411b252c5c0d95d55bf303131f57ee3ba9f11a17488de" Namespace="calico-system" Pod="csi-node-driver-lgcrw" WorkloadEndpoint="10.67.124.142-k8s-csi--node--driver--lgcrw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.67.124.142-k8s-csi--node--driver--lgcrw-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"c894b613-f774-4e5f-a65e-f4bdf203df3f", ResourceVersion:"1003", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 19, 19, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"56747c9949", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.67.124.142", ContainerID:"", Pod:"csi-node-driver-lgcrw", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.47.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calib80f72cb17c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 19:19:31.481994 containerd[1572]: 2025-02-13 19:19:31.467 [INFO][2921] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.47.1/32] ContainerID="dc786c6e0a3276a3406411b252c5c0d95d55bf303131f57ee3ba9f11a17488de" Namespace="calico-system" Pod="csi-node-driver-lgcrw" WorkloadEndpoint="10.67.124.142-k8s-csi--node--driver--lgcrw-eth0" Feb 13 19:19:31.481994 containerd[1572]: 2025-02-13 19:19:31.467 [INFO][2921] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib80f72cb17c ContainerID="dc786c6e0a3276a3406411b252c5c0d95d55bf303131f57ee3ba9f11a17488de" Namespace="calico-system" Pod="csi-node-driver-lgcrw" WorkloadEndpoint="10.67.124.142-k8s-csi--node--driver--lgcrw-eth0" Feb 13 19:19:31.481994 containerd[1572]: 2025-02-13 19:19:31.472 [INFO][2921] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="dc786c6e0a3276a3406411b252c5c0d95d55bf303131f57ee3ba9f11a17488de" Namespace="calico-system" Pod="csi-node-driver-lgcrw" WorkloadEndpoint="10.67.124.142-k8s-csi--node--driver--lgcrw-eth0" Feb 13 19:19:31.481994 containerd[1572]: 2025-02-13 19:19:31.472 [INFO][2921] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="dc786c6e0a3276a3406411b252c5c0d95d55bf303131f57ee3ba9f11a17488de" Namespace="calico-system" 
Pod="csi-node-driver-lgcrw" WorkloadEndpoint="10.67.124.142-k8s-csi--node--driver--lgcrw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.67.124.142-k8s-csi--node--driver--lgcrw-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"c894b613-f774-4e5f-a65e-f4bdf203df3f", ResourceVersion:"1003", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 19, 19, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"56747c9949", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.67.124.142", ContainerID:"dc786c6e0a3276a3406411b252c5c0d95d55bf303131f57ee3ba9f11a17488de", Pod:"csi-node-driver-lgcrw", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.47.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calib80f72cb17c", MAC:"b2:4a:9f:4a:bb:26", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 19:19:31.481994 containerd[1572]: 2025-02-13 19:19:31.479 [INFO][2921] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="dc786c6e0a3276a3406411b252c5c0d95d55bf303131f57ee3ba9f11a17488de" Namespace="calico-system" Pod="csi-node-driver-lgcrw" WorkloadEndpoint="10.67.124.142-k8s-csi--node--driver--lgcrw-eth0" Feb 13 19:19:31.497197 
containerd[1572]: time="2025-02-13T19:19:31.497072230Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:19:31.497328 containerd[1572]: time="2025-02-13T19:19:31.497110779Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:19:31.497328 containerd[1572]: time="2025-02-13T19:19:31.497192437Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:19:31.497328 containerd[1572]: time="2025-02-13T19:19:31.497256961Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:19:31.512304 systemd[1]: Started cri-containerd-dc786c6e0a3276a3406411b252c5c0d95d55bf303131f57ee3ba9f11a17488de.scope - libcontainer container dc786c6e0a3276a3406411b252c5c0d95d55bf303131f57ee3ba9f11a17488de. 
Feb 13 19:19:31.520086 systemd-resolved[1487]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Feb 13 19:19:31.526788 containerd[1572]: time="2025-02-13T19:19:31.526761130Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-lgcrw,Uid:c894b613-f774-4e5f-a65e-f4bdf203df3f,Namespace:calico-system,Attempt:6,} returns sandbox id \"dc786c6e0a3276a3406411b252c5c0d95d55bf303131f57ee3ba9f11a17488de\"" Feb 13 19:19:31.527632 containerd[1572]: time="2025-02-13T19:19:31.527590622Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\"" Feb 13 19:19:31.570982 systemd-networkd[1260]: calie8cd9dd2685: Link UP Feb 13 19:19:31.571539 systemd-networkd[1260]: calie8cd9dd2685: Gained carrier Feb 13 19:19:31.578322 containerd[1572]: 2025-02-13 19:19:30.285 [INFO][2930] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Feb 13 19:19:31.578322 containerd[1572]: 2025-02-13 19:19:30.564 [INFO][2930] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.67.124.142-k8s-nginx--deployment--8587fbcb89--vhbmc-eth0 nginx-deployment-8587fbcb89- default 9056c3eb-23f5-4ba5-a512-998dfd6e4910 1077 0 2025-02-13 19:19:25 +0000 UTC map[app:nginx pod-template-hash:8587fbcb89 projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s 10.67.124.142 nginx-deployment-8587fbcb89-vhbmc eth0 default [] [] [kns.default ksa.default.default] calie8cd9dd2685 [] []}} ContainerID="a5a3304cb2585817a3ae177548a71ac1afe05bb2c6a3292bd7e8ec461d429fe8" Namespace="default" Pod="nginx-deployment-8587fbcb89-vhbmc" WorkloadEndpoint="10.67.124.142-k8s-nginx--deployment--8587fbcb89--vhbmc-" Feb 13 19:19:31.578322 containerd[1572]: 2025-02-13 19:19:30.564 [INFO][2930] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="a5a3304cb2585817a3ae177548a71ac1afe05bb2c6a3292bd7e8ec461d429fe8" 
Namespace="default" Pod="nginx-deployment-8587fbcb89-vhbmc" WorkloadEndpoint="10.67.124.142-k8s-nginx--deployment--8587fbcb89--vhbmc-eth0" Feb 13 19:19:31.578322 containerd[1572]: 2025-02-13 19:19:31.331 [INFO][3046] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a5a3304cb2585817a3ae177548a71ac1afe05bb2c6a3292bd7e8ec461d429fe8" HandleID="k8s-pod-network.a5a3304cb2585817a3ae177548a71ac1afe05bb2c6a3292bd7e8ec461d429fe8" Workload="10.67.124.142-k8s-nginx--deployment--8587fbcb89--vhbmc-eth0" Feb 13 19:19:31.578322 containerd[1572]: 2025-02-13 19:19:31.446 [INFO][3046] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="a5a3304cb2585817a3ae177548a71ac1afe05bb2c6a3292bd7e8ec461d429fe8" HandleID="k8s-pod-network.a5a3304cb2585817a3ae177548a71ac1afe05bb2c6a3292bd7e8ec461d429fe8" Workload="10.67.124.142-k8s-nginx--deployment--8587fbcb89--vhbmc-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000051120), Attrs:map[string]string{"namespace":"default", "node":"10.67.124.142", "pod":"nginx-deployment-8587fbcb89-vhbmc", "timestamp":"2025-02-13 19:19:31.331366466 +0000 UTC"}, Hostname:"10.67.124.142", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 13 19:19:31.578322 containerd[1572]: 2025-02-13 19:19:31.446 [INFO][3046] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 19:19:31.578322 containerd[1572]: 2025-02-13 19:19:31.466 [INFO][3046] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Feb 13 19:19:31.578322 containerd[1572]: 2025-02-13 19:19:31.466 [INFO][3046] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.67.124.142' Feb 13 19:19:31.578322 containerd[1572]: 2025-02-13 19:19:31.547 [INFO][3046] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.a5a3304cb2585817a3ae177548a71ac1afe05bb2c6a3292bd7e8ec461d429fe8" host="10.67.124.142" Feb 13 19:19:31.578322 containerd[1572]: 2025-02-13 19:19:31.551 [INFO][3046] ipam/ipam.go 372: Looking up existing affinities for host host="10.67.124.142" Feb 13 19:19:31.578322 containerd[1572]: 2025-02-13 19:19:31.557 [INFO][3046] ipam/ipam.go 489: Trying affinity for 192.168.47.0/26 host="10.67.124.142" Feb 13 19:19:31.578322 containerd[1572]: 2025-02-13 19:19:31.559 [INFO][3046] ipam/ipam.go 155: Attempting to load block cidr=192.168.47.0/26 host="10.67.124.142" Feb 13 19:19:31.578322 containerd[1572]: 2025-02-13 19:19:31.560 [INFO][3046] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.47.0/26 host="10.67.124.142" Feb 13 19:19:31.578322 containerd[1572]: 2025-02-13 19:19:31.560 [INFO][3046] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.47.0/26 handle="k8s-pod-network.a5a3304cb2585817a3ae177548a71ac1afe05bb2c6a3292bd7e8ec461d429fe8" host="10.67.124.142" Feb 13 19:19:31.578322 containerd[1572]: 2025-02-13 19:19:31.561 [INFO][3046] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.a5a3304cb2585817a3ae177548a71ac1afe05bb2c6a3292bd7e8ec461d429fe8 Feb 13 19:19:31.578322 containerd[1572]: 2025-02-13 19:19:31.564 [INFO][3046] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.47.0/26 handle="k8s-pod-network.a5a3304cb2585817a3ae177548a71ac1afe05bb2c6a3292bd7e8ec461d429fe8" host="10.67.124.142" Feb 13 19:19:31.578322 containerd[1572]: 2025-02-13 19:19:31.567 [INFO][3046] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.47.2/26] block=192.168.47.0/26 
handle="k8s-pod-network.a5a3304cb2585817a3ae177548a71ac1afe05bb2c6a3292bd7e8ec461d429fe8" host="10.67.124.142" Feb 13 19:19:31.578322 containerd[1572]: 2025-02-13 19:19:31.567 [INFO][3046] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.47.2/26] handle="k8s-pod-network.a5a3304cb2585817a3ae177548a71ac1afe05bb2c6a3292bd7e8ec461d429fe8" host="10.67.124.142" Feb 13 19:19:31.578322 containerd[1572]: 2025-02-13 19:19:31.567 [INFO][3046] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 19:19:31.578322 containerd[1572]: 2025-02-13 19:19:31.567 [INFO][3046] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.47.2/26] IPv6=[] ContainerID="a5a3304cb2585817a3ae177548a71ac1afe05bb2c6a3292bd7e8ec461d429fe8" HandleID="k8s-pod-network.a5a3304cb2585817a3ae177548a71ac1afe05bb2c6a3292bd7e8ec461d429fe8" Workload="10.67.124.142-k8s-nginx--deployment--8587fbcb89--vhbmc-eth0" Feb 13 19:19:31.578773 containerd[1572]: 2025-02-13 19:19:31.569 [INFO][2930] cni-plugin/k8s.go 386: Populated endpoint ContainerID="a5a3304cb2585817a3ae177548a71ac1afe05bb2c6a3292bd7e8ec461d429fe8" Namespace="default" Pod="nginx-deployment-8587fbcb89-vhbmc" WorkloadEndpoint="10.67.124.142-k8s-nginx--deployment--8587fbcb89--vhbmc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.67.124.142-k8s-nginx--deployment--8587fbcb89--vhbmc-eth0", GenerateName:"nginx-deployment-8587fbcb89-", Namespace:"default", SelfLink:"", UID:"9056c3eb-23f5-4ba5-a512-998dfd6e4910", ResourceVersion:"1077", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 19, 19, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"8587fbcb89", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.67.124.142", ContainerID:"", Pod:"nginx-deployment-8587fbcb89-vhbmc", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.47.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"calie8cd9dd2685", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 19:19:31.578773 containerd[1572]: 2025-02-13 19:19:31.569 [INFO][2930] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.47.2/32] ContainerID="a5a3304cb2585817a3ae177548a71ac1afe05bb2c6a3292bd7e8ec461d429fe8" Namespace="default" Pod="nginx-deployment-8587fbcb89-vhbmc" WorkloadEndpoint="10.67.124.142-k8s-nginx--deployment--8587fbcb89--vhbmc-eth0" Feb 13 19:19:31.578773 containerd[1572]: 2025-02-13 19:19:31.569 [INFO][2930] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie8cd9dd2685 ContainerID="a5a3304cb2585817a3ae177548a71ac1afe05bb2c6a3292bd7e8ec461d429fe8" Namespace="default" Pod="nginx-deployment-8587fbcb89-vhbmc" WorkloadEndpoint="10.67.124.142-k8s-nginx--deployment--8587fbcb89--vhbmc-eth0" Feb 13 19:19:31.578773 containerd[1572]: 2025-02-13 19:19:31.571 [INFO][2930] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a5a3304cb2585817a3ae177548a71ac1afe05bb2c6a3292bd7e8ec461d429fe8" Namespace="default" Pod="nginx-deployment-8587fbcb89-vhbmc" WorkloadEndpoint="10.67.124.142-k8s-nginx--deployment--8587fbcb89--vhbmc-eth0" Feb 13 19:19:31.578773 containerd[1572]: 2025-02-13 19:19:31.572 [INFO][2930] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="a5a3304cb2585817a3ae177548a71ac1afe05bb2c6a3292bd7e8ec461d429fe8" Namespace="default" Pod="nginx-deployment-8587fbcb89-vhbmc" 
WorkloadEndpoint="10.67.124.142-k8s-nginx--deployment--8587fbcb89--vhbmc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.67.124.142-k8s-nginx--deployment--8587fbcb89--vhbmc-eth0", GenerateName:"nginx-deployment-8587fbcb89-", Namespace:"default", SelfLink:"", UID:"9056c3eb-23f5-4ba5-a512-998dfd6e4910", ResourceVersion:"1077", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 19, 19, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"8587fbcb89", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.67.124.142", ContainerID:"a5a3304cb2585817a3ae177548a71ac1afe05bb2c6a3292bd7e8ec461d429fe8", Pod:"nginx-deployment-8587fbcb89-vhbmc", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.47.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"calie8cd9dd2685", MAC:"d2:b9:6f:4b:4a:7a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 19:19:31.578773 containerd[1572]: 2025-02-13 19:19:31.577 [INFO][2930] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="a5a3304cb2585817a3ae177548a71ac1afe05bb2c6a3292bd7e8ec461d429fe8" Namespace="default" Pod="nginx-deployment-8587fbcb89-vhbmc" WorkloadEndpoint="10.67.124.142-k8s-nginx--deployment--8587fbcb89--vhbmc-eth0" Feb 13 19:19:31.593832 containerd[1572]: time="2025-02-13T19:19:31.593654005Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:19:31.593832 containerd[1572]: time="2025-02-13T19:19:31.593711126Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:19:31.593832 containerd[1572]: time="2025-02-13T19:19:31.593722100Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:19:31.594108 containerd[1572]: time="2025-02-13T19:19:31.594063577Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:19:31.616281 systemd[1]: Started cri-containerd-a5a3304cb2585817a3ae177548a71ac1afe05bb2c6a3292bd7e8ec461d429fe8.scope - libcontainer container a5a3304cb2585817a3ae177548a71ac1afe05bb2c6a3292bd7e8ec461d429fe8. Feb 13 19:19:31.624730 systemd-resolved[1487]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Feb 13 19:19:31.646375 containerd[1572]: time="2025-02-13T19:19:31.646351733Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-vhbmc,Uid:9056c3eb-23f5-4ba5-a512-998dfd6e4910,Namespace:default,Attempt:5,} returns sandbox id \"a5a3304cb2585817a3ae177548a71ac1afe05bb2c6a3292bd7e8ec461d429fe8\"" Feb 13 19:19:31.949345 systemd-networkd[1260]: vxlan.calico: Gained IPv6LL Feb 13 19:19:31.955338 kubelet[2047]: E0213 19:19:31.955309 2047 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:19:32.589265 systemd-networkd[1260]: calib80f72cb17c: Gained IPv6LL Feb 13 19:19:32.937015 kubelet[2047]: E0213 19:19:32.936932 2047 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:19:32.955433 kubelet[2047]: E0213 19:19:32.955402 2047 file_linux.go:61] "Unable to read config path" 
err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:19:33.321507 containerd[1572]: time="2025-02-13T19:19:33.321477197Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:19:33.325028 containerd[1572]: time="2025-02-13T19:19:33.324991209Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.1: active requests=0, bytes read=7902632" Feb 13 19:19:33.341415 containerd[1572]: time="2025-02-13T19:19:33.341364793Z" level=info msg="ImageCreate event name:\"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:19:33.342435 containerd[1572]: time="2025-02-13T19:19:33.342410897Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:19:33.342880 containerd[1572]: time="2025-02-13T19:19:33.342790725Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.1\" with image id \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\", size \"9395716\" in 1.815178252s" Feb 13 19:19:33.342880 containerd[1572]: time="2025-02-13T19:19:33.342808379Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\" returns image reference \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\"" Feb 13 19:19:33.343885 containerd[1572]: time="2025-02-13T19:19:33.343862985Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Feb 13 19:19:33.346239 containerd[1572]: time="2025-02-13T19:19:33.345895327Z" level=info msg="CreateContainer within sandbox 
\"dc786c6e0a3276a3406411b252c5c0d95d55bf303131f57ee3ba9f11a17488de\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Feb 13 19:19:33.356948 containerd[1572]: time="2025-02-13T19:19:33.356921982Z" level=info msg="CreateContainer within sandbox \"dc786c6e0a3276a3406411b252c5c0d95d55bf303131f57ee3ba9f11a17488de\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"9ccfa0912590012d4bdc549da28ed609b5fc920db47c3576638b9d7c214efda5\"" Feb 13 19:19:33.357742 containerd[1572]: time="2025-02-13T19:19:33.357729071Z" level=info msg="StartContainer for \"9ccfa0912590012d4bdc549da28ed609b5fc920db47c3576638b9d7c214efda5\"" Feb 13 19:19:33.381294 systemd[1]: Started cri-containerd-9ccfa0912590012d4bdc549da28ed609b5fc920db47c3576638b9d7c214efda5.scope - libcontainer container 9ccfa0912590012d4bdc549da28ed609b5fc920db47c3576638b9d7c214efda5. Feb 13 19:19:33.397929 containerd[1572]: time="2025-02-13T19:19:33.397857695Z" level=info msg="StartContainer for \"9ccfa0912590012d4bdc549da28ed609b5fc920db47c3576638b9d7c214efda5\" returns successfully" Feb 13 19:19:33.421319 systemd-networkd[1260]: calie8cd9dd2685: Gained IPv6LL Feb 13 19:19:33.956197 kubelet[2047]: E0213 19:19:33.956167 2047 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:19:34.956629 kubelet[2047]: E0213 19:19:34.956595 2047 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:19:35.855622 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1028500325.mount: Deactivated successfully. 
Feb 13 19:19:35.957489 kubelet[2047]: E0213 19:19:35.957319 2047 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:19:36.598133 containerd[1572]: time="2025-02-13T19:19:36.597942728Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:19:36.599053 containerd[1572]: time="2025-02-13T19:19:36.598693085Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=73054493" Feb 13 19:19:36.599103 containerd[1572]: time="2025-02-13T19:19:36.599089791Z" level=info msg="ImageCreate event name:\"sha256:fe94eb5f0c9c8d0ca277aa8cd5940f1faf5970175bf373932babc578545deda8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:19:36.600580 containerd[1572]: time="2025-02-13T19:19:36.600562227Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx@sha256:d9bc3da999da9f147f1277c7b18292486847e8f39f95fcf81d914d0c22815faf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:19:36.601541 containerd[1572]: time="2025-02-13T19:19:36.601523746Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:fe94eb5f0c9c8d0ca277aa8cd5940f1faf5970175bf373932babc578545deda8\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:d9bc3da999da9f147f1277c7b18292486847e8f39f95fcf81d914d0c22815faf\", size \"73054371\" in 3.257641399s" Feb 13 19:19:36.601581 containerd[1572]: time="2025-02-13T19:19:36.601541997Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:fe94eb5f0c9c8d0ca277aa8cd5940f1faf5970175bf373932babc578545deda8\"" Feb 13 19:19:36.602635 containerd[1572]: time="2025-02-13T19:19:36.602618439Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\"" Feb 13 19:19:36.607419 containerd[1572]: 
time="2025-02-13T19:19:36.607397954Z" level=info msg="CreateContainer within sandbox \"a5a3304cb2585817a3ae177548a71ac1afe05bb2c6a3292bd7e8ec461d429fe8\" for container &ContainerMetadata{Name:nginx,Attempt:0,}" Feb 13 19:19:36.630454 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1334501235.mount: Deactivated successfully. Feb 13 19:19:36.646203 containerd[1572]: time="2025-02-13T19:19:36.646172907Z" level=info msg="CreateContainer within sandbox \"a5a3304cb2585817a3ae177548a71ac1afe05bb2c6a3292bd7e8ec461d429fe8\" for &ContainerMetadata{Name:nginx,Attempt:0,} returns container id \"1cfc4816f0027594b98947a2bf3ecbf25d5454537646c177c9021fa5db2c39b4\"" Feb 13 19:19:36.662527 containerd[1572]: time="2025-02-13T19:19:36.662485312Z" level=info msg="StartContainer for \"1cfc4816f0027594b98947a2bf3ecbf25d5454537646c177c9021fa5db2c39b4\"" Feb 13 19:19:36.685251 systemd[1]: Started cri-containerd-1cfc4816f0027594b98947a2bf3ecbf25d5454537646c177c9021fa5db2c39b4.scope - libcontainer container 1cfc4816f0027594b98947a2bf3ecbf25d5454537646c177c9021fa5db2c39b4. 
Feb 13 19:19:36.699501 containerd[1572]: time="2025-02-13T19:19:36.699481421Z" level=info msg="StartContainer for \"1cfc4816f0027594b98947a2bf3ecbf25d5454537646c177c9021fa5db2c39b4\" returns successfully" Feb 13 19:19:36.958264 kubelet[2047]: E0213 19:19:36.958166 2047 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:19:37.958999 kubelet[2047]: E0213 19:19:37.958961 2047 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:19:38.492943 containerd[1572]: time="2025-02-13T19:19:38.492902971Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:19:38.503432 containerd[1572]: time="2025-02-13T19:19:38.503389531Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1: active requests=0, bytes read=10501081" Feb 13 19:19:38.513704 containerd[1572]: time="2025-02-13T19:19:38.513669937Z" level=info msg="ImageCreate event name:\"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:19:38.525420 containerd[1572]: time="2025-02-13T19:19:38.525027989Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:19:38.525770 containerd[1572]: time="2025-02-13T19:19:38.525742367Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" with image id \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\", repo digest 
\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\", size \"11994117\" in 1.923105658s" Feb 13 19:19:38.525817 containerd[1572]: time="2025-02-13T19:19:38.525769450Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" returns image reference \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\"" Feb 13 19:19:38.527165 containerd[1572]: time="2025-02-13T19:19:38.527142683Z" level=info msg="CreateContainer within sandbox \"dc786c6e0a3276a3406411b252c5c0d95d55bf303131f57ee3ba9f11a17488de\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Feb 13 19:19:38.601092 containerd[1572]: time="2025-02-13T19:19:38.601030258Z" level=info msg="CreateContainer within sandbox \"dc786c6e0a3276a3406411b252c5c0d95d55bf303131f57ee3ba9f11a17488de\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"70610786b74eae496127816179ca15f8f6b56d064d914168b25d1715244fc546\"" Feb 13 19:19:38.601502 containerd[1572]: time="2025-02-13T19:19:38.601420821Z" level=info msg="StartContainer for \"70610786b74eae496127816179ca15f8f6b56d064d914168b25d1715244fc546\"" Feb 13 19:19:38.633286 systemd[1]: Started cri-containerd-70610786b74eae496127816179ca15f8f6b56d064d914168b25d1715244fc546.scope - libcontainer container 70610786b74eae496127816179ca15f8f6b56d064d914168b25d1715244fc546. 
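The kubelet error that recurs roughly once per second throughout this log ("Unable to read config path … /etc/kubernetes/manifests") is harmless when no static pods are in use, and it stops once the configured staticPodPath exists, on the file source's next poll. A minimal sketch, run against a throwaway root so it needs no privileges (on a real node the path would be `/etc/kubernetes/manifests`, kubelet's conventional default):

```python
import os
import tempfile

# Demo root so this runs unprivileged; substitute "/" on a real node, where
# the watched path is /etc/kubernetes/manifests (the assumed staticPodPath).
root = tempfile.mkdtemp()
manifests = os.path.join(root, "etc/kubernetes/manifests")
os.makedirs(manifests, exist_ok=True)  # an empty directory is enough
print(os.path.isdir(manifests))        # True
```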
Feb 13 19:19:38.636331 kubelet[2047]: I0213 19:19:38.635307 2047 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 13 19:19:38.670384 containerd[1572]: time="2025-02-13T19:19:38.670363072Z" level=info msg="StartContainer for \"70610786b74eae496127816179ca15f8f6b56d064d914168b25d1715244fc546\" returns successfully" Feb 13 19:19:38.887068 kubelet[2047]: I0213 19:19:38.887031 2047 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nginx-deployment-8587fbcb89-vhbmc" podStartSLOduration=8.932197438 podStartE2EDuration="13.887018505s" podCreationTimestamp="2025-02-13 19:19:25 +0000 UTC" firstStartedPulling="2025-02-13 19:19:31.647259688 +0000 UTC m=+19.098402045" lastFinishedPulling="2025-02-13 19:19:36.602080755 +0000 UTC m=+24.053223112" observedRunningTime="2025-02-13 19:19:37.110185978 +0000 UTC m=+24.561328344" watchObservedRunningTime="2025-02-13 19:19:38.887018505 +0000 UTC m=+26.338160861" Feb 13 19:19:38.959818 kubelet[2047]: E0213 19:19:38.959784 2047 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:19:39.024462 kubelet[2047]: I0213 19:19:39.024444 2047 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Feb 13 19:19:39.024462 kubelet[2047]: I0213 19:19:39.024462 2047 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Feb 13 19:19:39.960192 kubelet[2047]: E0213 19:19:39.960156 2047 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:19:40.960915 kubelet[2047]: E0213 19:19:40.960880 2047 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:19:41.961304 kubelet[2047]: E0213 
19:19:41.961273 2047 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:19:42.967938 kubelet[2047]: E0213 19:19:42.967909 2047 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:19:43.968041 kubelet[2047]: E0213 19:19:43.967989 2047 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:19:44.662439 kubelet[2047]: I0213 19:19:44.662371 2047 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-lgcrw" podStartSLOduration=24.663612806 podStartE2EDuration="31.662359943s" podCreationTimestamp="2025-02-13 19:19:13 +0000 UTC" firstStartedPulling="2025-02-13 19:19:31.527476124 +0000 UTC m=+18.978618482" lastFinishedPulling="2025-02-13 19:19:38.526223258 +0000 UTC m=+25.977365619" observedRunningTime="2025-02-13 19:19:39.123145828 +0000 UTC m=+26.574288188" watchObservedRunningTime="2025-02-13 19:19:44.662359943 +0000 UTC m=+32.113502308" Feb 13 19:19:44.667841 systemd[1]: Created slice kubepods-besteffort-podea8c6480_6a95_4392_849b_b1bc1364da0d.slice - libcontainer container kubepods-besteffort-podea8c6480_6a95_4392_849b_b1bc1364da0d.slice. 
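The pod_startup_latency_tracker record above encodes a simple relation: podStartE2EDuration is observed-running-time minus creation time, and podStartSLOduration additionally subtracts the image-pull window (lastFinishedPulling minus firstStartedPulling), since pull time does not count against the startup SLO. Checking it against the csi-node-driver-lgcrw figures, using the monotonic `m=+…` offsets copied from that record:

```python
# Monotonic offsets (m=+...) from the csi-node-driver-lgcrw record above.
first_pull = 18.978618482
last_pull  = 25.977365619
e2e        = 31.662359943             # podStartE2EDuration

slo = e2e - (last_pull - first_pull)  # image pull time is excluded from the SLO
print(f"{slo:.9f}s")                  # matches podStartSLOduration=24.663612806s
```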
Feb 13 19:19:44.818809 kubelet[2047]: I0213 19:19:44.818739 2047 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cdnsj\" (UniqueName: \"kubernetes.io/projected/ea8c6480-6a95-4392-849b-b1bc1364da0d-kube-api-access-cdnsj\") pod \"nfs-server-provisioner-0\" (UID: \"ea8c6480-6a95-4392-849b-b1bc1364da0d\") " pod="default/nfs-server-provisioner-0" Feb 13 19:19:44.818809 kubelet[2047]: I0213 19:19:44.818771 2047 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/ea8c6480-6a95-4392-849b-b1bc1364da0d-data\") pod \"nfs-server-provisioner-0\" (UID: \"ea8c6480-6a95-4392-849b-b1bc1364da0d\") " pod="default/nfs-server-provisioner-0" Feb 13 19:19:44.968214 kubelet[2047]: E0213 19:19:44.968127 2047 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:19:44.970986 containerd[1572]: time="2025-02-13T19:19:44.970963412Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:ea8c6480-6a95-4392-849b-b1bc1364da0d,Namespace:default,Attempt:0,}" Feb 13 19:19:45.055623 systemd-networkd[1260]: cali60e51b789ff: Link UP Feb 13 19:19:45.056256 systemd-networkd[1260]: cali60e51b789ff: Gained carrier Feb 13 19:19:45.064793 containerd[1572]: 2025-02-13 19:19:45.005 [INFO][3500] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.67.124.142-k8s-nfs--server--provisioner--0-eth0 nfs-server-provisioner- default ea8c6480-6a95-4392-849b-b1bc1364da0d 1198 0 2025-02-13 19:19:44 +0000 UTC map[app:nfs-server-provisioner apps.kubernetes.io/pod-index:0 chart:nfs-server-provisioner-1.8.0 controller-revision-hash:nfs-server-provisioner-d5cbb7f57 heritage:Helm projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:nfs-server-provisioner 
release:nfs-server-provisioner statefulset.kubernetes.io/pod-name:nfs-server-provisioner-0] map[] [] [] []} {k8s 10.67.124.142 nfs-server-provisioner-0 eth0 nfs-server-provisioner [] [] [kns.default ksa.default.nfs-server-provisioner] cali60e51b789ff [{nfs TCP 2049 0 } {nfs-udp UDP 2049 0 } {nlockmgr TCP 32803 0 } {nlockmgr-udp UDP 32803 0 } {mountd TCP 20048 0 } {mountd-udp UDP 20048 0 } {rquotad TCP 875 0 } {rquotad-udp UDP 875 0 } {rpcbind TCP 111 0 } {rpcbind-udp UDP 111 0 } {statd TCP 662 0 } {statd-udp UDP 662 0 }] []}} ContainerID="e21b5a5f250c22da78dd0906a086a7df5c0e90be468cf6e5b0c1202188b959de" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.67.124.142-k8s-nfs--server--provisioner--0-" Feb 13 19:19:45.064793 containerd[1572]: 2025-02-13 19:19:45.005 [INFO][3500] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="e21b5a5f250c22da78dd0906a086a7df5c0e90be468cf6e5b0c1202188b959de" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.67.124.142-k8s-nfs--server--provisioner--0-eth0" Feb 13 19:19:45.064793 containerd[1572]: 2025-02-13 19:19:45.028 [INFO][3511] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e21b5a5f250c22da78dd0906a086a7df5c0e90be468cf6e5b0c1202188b959de" HandleID="k8s-pod-network.e21b5a5f250c22da78dd0906a086a7df5c0e90be468cf6e5b0c1202188b959de" Workload="10.67.124.142-k8s-nfs--server--provisioner--0-eth0" Feb 13 19:19:45.064793 containerd[1572]: 2025-02-13 19:19:45.035 [INFO][3511] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="e21b5a5f250c22da78dd0906a086a7df5c0e90be468cf6e5b0c1202188b959de" HandleID="k8s-pod-network.e21b5a5f250c22da78dd0906a086a7df5c0e90be468cf6e5b0c1202188b959de" Workload="10.67.124.142-k8s-nfs--server--provisioner--0-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000290830), Attrs:map[string]string{"namespace":"default", "node":"10.67.124.142", "pod":"nfs-server-provisioner-0", 
"timestamp":"2025-02-13 19:19:45.028712397 +0000 UTC"}, Hostname:"10.67.124.142", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 13 19:19:45.064793 containerd[1572]: 2025-02-13 19:19:45.035 [INFO][3511] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 19:19:45.064793 containerd[1572]: 2025-02-13 19:19:45.035 [INFO][3511] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 19:19:45.064793 containerd[1572]: 2025-02-13 19:19:45.035 [INFO][3511] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.67.124.142' Feb 13 19:19:45.064793 containerd[1572]: 2025-02-13 19:19:45.036 [INFO][3511] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.e21b5a5f250c22da78dd0906a086a7df5c0e90be468cf6e5b0c1202188b959de" host="10.67.124.142" Feb 13 19:19:45.064793 containerd[1572]: 2025-02-13 19:19:45.038 [INFO][3511] ipam/ipam.go 372: Looking up existing affinities for host host="10.67.124.142" Feb 13 19:19:45.064793 containerd[1572]: 2025-02-13 19:19:45.040 [INFO][3511] ipam/ipam.go 489: Trying affinity for 192.168.47.0/26 host="10.67.124.142" Feb 13 19:19:45.064793 containerd[1572]: 2025-02-13 19:19:45.042 [INFO][3511] ipam/ipam.go 155: Attempting to load block cidr=192.168.47.0/26 host="10.67.124.142" Feb 13 19:19:45.064793 containerd[1572]: 2025-02-13 19:19:45.043 [INFO][3511] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.47.0/26 host="10.67.124.142" Feb 13 19:19:45.064793 containerd[1572]: 2025-02-13 19:19:45.043 [INFO][3511] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.47.0/26 handle="k8s-pod-network.e21b5a5f250c22da78dd0906a086a7df5c0e90be468cf6e5b0c1202188b959de" host="10.67.124.142" Feb 13 19:19:45.064793 containerd[1572]: 2025-02-13 19:19:45.044 [INFO][3511] ipam/ipam.go 1685: Creating new 
handle: k8s-pod-network.e21b5a5f250c22da78dd0906a086a7df5c0e90be468cf6e5b0c1202188b959de Feb 13 19:19:45.064793 containerd[1572]: 2025-02-13 19:19:45.046 [INFO][3511] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.47.0/26 handle="k8s-pod-network.e21b5a5f250c22da78dd0906a086a7df5c0e90be468cf6e5b0c1202188b959de" host="10.67.124.142" Feb 13 19:19:45.064793 containerd[1572]: 2025-02-13 19:19:45.052 [INFO][3511] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.47.3/26] block=192.168.47.0/26 handle="k8s-pod-network.e21b5a5f250c22da78dd0906a086a7df5c0e90be468cf6e5b0c1202188b959de" host="10.67.124.142" Feb 13 19:19:45.064793 containerd[1572]: 2025-02-13 19:19:45.052 [INFO][3511] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.47.3/26] handle="k8s-pod-network.e21b5a5f250c22da78dd0906a086a7df5c0e90be468cf6e5b0c1202188b959de" host="10.67.124.142" Feb 13 19:19:45.064793 containerd[1572]: 2025-02-13 19:19:45.052 [INFO][3511] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
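The IPAM sequence above (affinity lookup, block load, claim) ends with 192.168.47.3 assigned out of the affine block 192.168.47.0/26. The containment claim is easy to sanity-check with the standard library:

```python
import ipaddress

block = ipaddress.ip_network("192.168.47.0/26")   # the block Calico loaded
assigned = ipaddress.ip_address("192.168.47.3")   # the address it claimed

assert assigned in block
# The endpoint later records the pod address as a host route in that block:
assert ipaddress.ip_network("192.168.47.3/32").subnet_of(block)
print(block.num_addresses)  # a /26 block holds 64 addresses
```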
Feb 13 19:19:45.064793 containerd[1572]: 2025-02-13 19:19:45.052 [INFO][3511] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.47.3/26] IPv6=[] ContainerID="e21b5a5f250c22da78dd0906a086a7df5c0e90be468cf6e5b0c1202188b959de" HandleID="k8s-pod-network.e21b5a5f250c22da78dd0906a086a7df5c0e90be468cf6e5b0c1202188b959de" Workload="10.67.124.142-k8s-nfs--server--provisioner--0-eth0" Feb 13 19:19:45.065407 containerd[1572]: 2025-02-13 19:19:45.054 [INFO][3500] cni-plugin/k8s.go 386: Populated endpoint ContainerID="e21b5a5f250c22da78dd0906a086a7df5c0e90be468cf6e5b0c1202188b959de" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.67.124.142-k8s-nfs--server--provisioner--0-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.67.124.142-k8s-nfs--server--provisioner--0-eth0", GenerateName:"nfs-server-provisioner-", Namespace:"default", SelfLink:"", UID:"ea8c6480-6a95-4392-849b-b1bc1364da0d", ResourceVersion:"1198", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 19, 19, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nfs-server-provisioner", "apps.kubernetes.io/pod-index":"0", "chart":"nfs-server-provisioner-1.8.0", "controller-revision-hash":"nfs-server-provisioner-d5cbb7f57", "heritage":"Helm", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"nfs-server-provisioner", "release":"nfs-server-provisioner", "statefulset.kubernetes.io/pod-name":"nfs-server-provisioner-0"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.67.124.142", ContainerID:"", Pod:"nfs-server-provisioner-0", Endpoint:"eth0", 
ServiceAccountName:"nfs-server-provisioner", IPNetworks:[]string{"192.168.47.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.nfs-server-provisioner"}, InterfaceName:"cali60e51b789ff", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"nfs", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nfs-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x296, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x296, 
HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 19:19:45.065407 containerd[1572]: 2025-02-13 19:19:45.054 [INFO][3500] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.47.3/32] ContainerID="e21b5a5f250c22da78dd0906a086a7df5c0e90be468cf6e5b0c1202188b959de" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.67.124.142-k8s-nfs--server--provisioner--0-eth0" Feb 13 19:19:45.065407 containerd[1572]: 2025-02-13 19:19:45.054 [INFO][3500] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali60e51b789ff ContainerID="e21b5a5f250c22da78dd0906a086a7df5c0e90be468cf6e5b0c1202188b959de" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.67.124.142-k8s-nfs--server--provisioner--0-eth0" Feb 13 19:19:45.065407 containerd[1572]: 2025-02-13 19:19:45.056 [INFO][3500] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e21b5a5f250c22da78dd0906a086a7df5c0e90be468cf6e5b0c1202188b959de" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.67.124.142-k8s-nfs--server--provisioner--0-eth0" Feb 13 19:19:45.065522 containerd[1572]: 2025-02-13 19:19:45.057 [INFO][3500] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="e21b5a5f250c22da78dd0906a086a7df5c0e90be468cf6e5b0c1202188b959de" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.67.124.142-k8s-nfs--server--provisioner--0-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.67.124.142-k8s-nfs--server--provisioner--0-eth0", GenerateName:"nfs-server-provisioner-", Namespace:"default", SelfLink:"", UID:"ea8c6480-6a95-4392-849b-b1bc1364da0d", ResourceVersion:"1198", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 19, 19, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), 
Labels:map[string]string{"app":"nfs-server-provisioner", "apps.kubernetes.io/pod-index":"0", "chart":"nfs-server-provisioner-1.8.0", "controller-revision-hash":"nfs-server-provisioner-d5cbb7f57", "heritage":"Helm", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"nfs-server-provisioner", "release":"nfs-server-provisioner", "statefulset.kubernetes.io/pod-name":"nfs-server-provisioner-0"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.67.124.142", ContainerID:"e21b5a5f250c22da78dd0906a086a7df5c0e90be468cf6e5b0c1202188b959de", Pod:"nfs-server-provisioner-0", Endpoint:"eth0", ServiceAccountName:"nfs-server-provisioner", IPNetworks:[]string{"192.168.47.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.nfs-server-provisioner"}, InterfaceName:"cali60e51b789ff", MAC:"76:7b:dc:07:be:4d", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"nfs", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nfs-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, 
v3.WorkloadEndpointPort{Name:"rquotad", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x296, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x296, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 19:19:45.065522 containerd[1572]: 2025-02-13 19:19:45.063 [INFO][3500] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="e21b5a5f250c22da78dd0906a086a7df5c0e90be468cf6e5b0c1202188b959de" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.67.124.142-k8s-nfs--server--provisioner--0-eth0" Feb 13 19:19:45.080362 containerd[1572]: time="2025-02-13T19:19:45.079680058Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:19:45.080362 containerd[1572]: time="2025-02-13T19:19:45.079750155Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:19:45.080362 containerd[1572]: time="2025-02-13T19:19:45.079768025Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:19:45.080362 containerd[1572]: time="2025-02-13T19:19:45.079837110Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:19:45.105295 systemd[1]: Started cri-containerd-e21b5a5f250c22da78dd0906a086a7df5c0e90be468cf6e5b0c1202188b959de.scope - libcontainer container e21b5a5f250c22da78dd0906a086a7df5c0e90be468cf6e5b0c1202188b959de. Feb 13 19:19:45.114161 systemd-resolved[1487]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Feb 13 19:19:45.137712 containerd[1572]: time="2025-02-13T19:19:45.137688318Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:ea8c6480-6a95-4392-849b-b1bc1364da0d,Namespace:default,Attempt:0,} returns sandbox id \"e21b5a5f250c22da78dd0906a086a7df5c0e90be468cf6e5b0c1202188b959de\"" Feb 13 19:19:45.139080 containerd[1572]: time="2025-02-13T19:19:45.139054460Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\"" Feb 13 19:19:45.968663 kubelet[2047]: E0213 19:19:45.968639 2047 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:19:46.970849 kubelet[2047]: E0213 19:19:46.970809 2047 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:19:46.989205 systemd-networkd[1260]: cali60e51b789ff: Gained IPv6LL Feb 13 19:19:47.195698 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount125735748.mount: Deactivated successfully. 
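The WorkloadEndpointPort entries in the endpoint dumps above print port numbers in hex (`Port:0x801` and so on, from Go's struct formatting). Decoded, they are exactly the NFS service ports named alongside them:

```python
# Hex values from the v3.WorkloadEndpoint dump, with their decimal meaning.
ports = {
    "nfs":      0x801,   # 2049
    "nlockmgr": 0x8023,  # 32803
    "mountd":   0x4e50,  # 20048
    "rquotad":  0x36b,   # 875
    "rpcbind":  0x6f,    # 111
    "statd":    0x296,   # 662
}
for name, p in ports.items():
    print(f"{name:<8} {p}")
```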
Feb 13 19:19:47.971743 kubelet[2047]: E0213 19:19:47.971633 2047 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:19:48.972548 kubelet[2047]: E0213 19:19:48.972435 2047 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:19:49.161299 containerd[1572]: time="2025-02-13T19:19:49.161224993Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:19:49.212787 containerd[1572]: time="2025-02-13T19:19:49.212735288Z" level=info msg="stop pulling image registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8: active requests=0, bytes read=91039406" Feb 13 19:19:49.243923 containerd[1572]: time="2025-02-13T19:19:49.243642909Z" level=info msg="ImageCreate event name:\"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:19:49.262094 containerd[1572]: time="2025-02-13T19:19:49.262059006Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:19:49.263054 containerd[1572]: time="2025-02-13T19:19:49.263028923Z" level=info msg="Pulled image \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" with image id \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\", repo tag \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\", repo digest \"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\", size \"91036984\" in 4.123950104s" Feb 13 19:19:49.263054 containerd[1572]: time="2025-02-13T19:19:49.263052366Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" 
returns image reference \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\"" Feb 13 19:19:49.264941 containerd[1572]: time="2025-02-13T19:19:49.264909585Z" level=info msg="CreateContainer within sandbox \"e21b5a5f250c22da78dd0906a086a7df5c0e90be468cf6e5b0c1202188b959de\" for container &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,}" Feb 13 19:19:49.393204 containerd[1572]: time="2025-02-13T19:19:49.393168233Z" level=info msg="CreateContainer within sandbox \"e21b5a5f250c22da78dd0906a086a7df5c0e90be468cf6e5b0c1202188b959de\" for &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,} returns container id \"ee549123193540df989a3aebd3c5652a1756a1e745bcb1d1d3899083b2b5cf53\"" Feb 13 19:19:49.393878 containerd[1572]: time="2025-02-13T19:19:49.393664254Z" level=info msg="StartContainer for \"ee549123193540df989a3aebd3c5652a1756a1e745bcb1d1d3899083b2b5cf53\"" Feb 13 19:19:49.423282 systemd[1]: Started cri-containerd-ee549123193540df989a3aebd3c5652a1756a1e745bcb1d1d3899083b2b5cf53.scope - libcontainer container ee549123193540df989a3aebd3c5652a1756a1e745bcb1d1d3899083b2b5cf53. 
Feb 13 19:19:49.453515 containerd[1572]: time="2025-02-13T19:19:49.453109124Z" level=info msg="StartContainer for \"ee549123193540df989a3aebd3c5652a1756a1e745bcb1d1d3899083b2b5cf53\" returns successfully" Feb 13 19:19:49.972720 kubelet[2047]: E0213 19:19:49.972678 2047 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:19:50.973321 kubelet[2047]: E0213 19:19:50.973287 2047 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:19:51.973724 kubelet[2047]: E0213 19:19:51.973689 2047 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:19:52.937333 kubelet[2047]: E0213 19:19:52.937302 2047 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:19:52.973793 kubelet[2047]: E0213 19:19:52.973759 2047 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:19:53.974203 kubelet[2047]: E0213 19:19:53.974167 2047 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:19:54.974406 kubelet[2047]: E0213 19:19:54.974373 2047 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:19:55.975127 kubelet[2047]: E0213 19:19:55.975068 2047 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:19:56.976138 kubelet[2047]: E0213 19:19:56.976088 2047 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:19:57.976311 kubelet[2047]: E0213 19:19:57.976243 2047 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" 
path="/etc/kubernetes/manifests" Feb 13 19:19:58.730609 kubelet[2047]: I0213 19:19:58.730546 2047 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nfs-server-provisioner-0" podStartSLOduration=10.605690756 podStartE2EDuration="14.730531932s" podCreationTimestamp="2025-02-13 19:19:44 +0000 UTC" firstStartedPulling="2025-02-13 19:19:45.138701386 +0000 UTC m=+32.589843747" lastFinishedPulling="2025-02-13 19:19:49.263542567 +0000 UTC m=+36.714684923" observedRunningTime="2025-02-13 19:19:50.165701504 +0000 UTC m=+37.616843868" watchObservedRunningTime="2025-02-13 19:19:58.730531932 +0000 UTC m=+46.181674299" Feb 13 19:19:58.737982 systemd[1]: Created slice kubepods-besteffort-podeb0a01cf_c99d_4eeb_8b01_218565d96674.slice - libcontainer container kubepods-besteffort-podeb0a01cf_c99d_4eeb_8b01_218565d96674.slice. Feb 13 19:19:58.794212 kubelet[2047]: I0213 19:19:58.794187 2047 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-613edc72-14c0-4eef-8c50-ce1d18f99b2f\" (UniqueName: \"kubernetes.io/nfs/eb0a01cf-c99d-4eeb-8b01-218565d96674-pvc-613edc72-14c0-4eef-8c50-ce1d18f99b2f\") pod \"test-pod-1\" (UID: \"eb0a01cf-c99d-4eeb-8b01-218565d96674\") " pod="default/test-pod-1" Feb 13 19:19:58.794212 kubelet[2047]: I0213 19:19:58.794215 2047 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zbgj4\" (UniqueName: \"kubernetes.io/projected/eb0a01cf-c99d-4eeb-8b01-218565d96674-kube-api-access-zbgj4\") pod \"test-pod-1\" (UID: \"eb0a01cf-c99d-4eeb-8b01-218565d96674\") " pod="default/test-pod-1" Feb 13 19:19:58.920138 kernel: FS-Cache: Loaded Feb 13 19:19:58.958225 kernel: RPC: Registered named UNIX socket transport module. Feb 13 19:19:58.958301 kernel: RPC: Registered udp transport module. Feb 13 19:19:58.959106 kernel: RPC: Registered tcp transport module. Feb 13 19:19:58.959144 kernel: RPC: Registered tcp-with-tls transport module. 
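The kubelet timestamps above carry both a wall-clock stamp and a monotonic offset (the `m=+…` suffix). Comparing the startup records at 19:19:58 and 19:19:44 shows the two clocks advance in lockstep, i.e. every `m=+` offset shares one origin (the kubelet's start, roughly 19:19:12.55 here). A small check of that consistency:

```python
from decimal import Decimal

def wall_seconds(hms: str) -> Decimal:
    """Convert an HH:MM:SS.fraction stamp to seconds since midnight, exactly."""
    h, m, s = hms.split(":")
    return Decimal(h) * 3600 + Decimal(m) * 60 + Decimal(s)

# watchObservedRunningTime stamps and their m=+ offsets, from the two
# pod_startup_latency_tracker records above (same day, same node).
wall_a, mono_a = wall_seconds("19:19:58.730531932"), Decimal("46.181674299")
wall_b, mono_b = wall_seconds("19:19:44.662359943"), Decimal("32.113502308")

# Wall-clock delta and monotonic delta agree to within nanoseconds.
drift = (wall_a - wall_b) - (mono_a - mono_b)
assert abs(drift) < Decimal("1e-6")
```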
Feb 13 19:19:58.959267 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module. Feb 13 19:19:58.976460 kubelet[2047]: E0213 19:19:58.976439 2047 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:19:59.092320 kernel: NFS: Registering the id_resolver key type Feb 13 19:19:59.092408 kernel: Key type id_resolver registered Feb 13 19:19:59.092435 kernel: Key type id_legacy registered Feb 13 19:19:59.108061 nfsidmap[3701]: nss_getpwnam: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'localdomain' Feb 13 19:19:59.110257 nfsidmap[3702]: nss_name_to_gid: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'localdomain' Feb 13 19:19:59.340907 containerd[1572]: time="2025-02-13T19:19:59.340882969Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:eb0a01cf-c99d-4eeb-8b01-218565d96674,Namespace:default,Attempt:0,}" Feb 13 19:19:59.409254 systemd-networkd[1260]: cali5ec59c6bf6e: Link UP Feb 13 19:19:59.410126 systemd-networkd[1260]: cali5ec59c6bf6e: Gained carrier Feb 13 19:19:59.416197 containerd[1572]: 2025-02-13 19:19:59.367 [INFO][3705] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.67.124.142-k8s-test--pod--1-eth0 default eb0a01cf-c99d-4eeb-8b01-218565d96674 1257 0 2025-02-13 19:19:44 +0000 UTC map[projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s 10.67.124.142 test-pod-1 eth0 default [] [] [kns.default ksa.default.default] cali5ec59c6bf6e [] []}} ContainerID="a74628f85648e5c141df77aa60eaaf81af5e11a8a0cf7f8a39eadf27379d12d2" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.67.124.142-k8s-test--pod--1-" Feb 13 19:19:59.416197 containerd[1572]: 2025-02-13 19:19:59.367 [INFO][3705] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s 
ContainerID="a74628f85648e5c141df77aa60eaaf81af5e11a8a0cf7f8a39eadf27379d12d2" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.67.124.142-k8s-test--pod--1-eth0" Feb 13 19:19:59.416197 containerd[1572]: 2025-02-13 19:19:59.387 [INFO][3715] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a74628f85648e5c141df77aa60eaaf81af5e11a8a0cf7f8a39eadf27379d12d2" HandleID="k8s-pod-network.a74628f85648e5c141df77aa60eaaf81af5e11a8a0cf7f8a39eadf27379d12d2" Workload="10.67.124.142-k8s-test--pod--1-eth0" Feb 13 19:19:59.416197 containerd[1572]: 2025-02-13 19:19:59.393 [INFO][3715] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="a74628f85648e5c141df77aa60eaaf81af5e11a8a0cf7f8a39eadf27379d12d2" HandleID="k8s-pod-network.a74628f85648e5c141df77aa60eaaf81af5e11a8a0cf7f8a39eadf27379d12d2" Workload="10.67.124.142-k8s-test--pod--1-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003ae320), Attrs:map[string]string{"namespace":"default", "node":"10.67.124.142", "pod":"test-pod-1", "timestamp":"2025-02-13 19:19:59.387810217 +0000 UTC"}, Hostname:"10.67.124.142", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 13 19:19:59.416197 containerd[1572]: 2025-02-13 19:19:59.393 [INFO][3715] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 19:19:59.416197 containerd[1572]: 2025-02-13 19:19:59.393 [INFO][3715] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Feb 13 19:19:59.416197 containerd[1572]: 2025-02-13 19:19:59.393 [INFO][3715] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.67.124.142' Feb 13 19:19:59.416197 containerd[1572]: 2025-02-13 19:19:59.394 [INFO][3715] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.a74628f85648e5c141df77aa60eaaf81af5e11a8a0cf7f8a39eadf27379d12d2" host="10.67.124.142" Feb 13 19:19:59.416197 containerd[1572]: 2025-02-13 19:19:59.396 [INFO][3715] ipam/ipam.go 372: Looking up existing affinities for host host="10.67.124.142" Feb 13 19:19:59.416197 containerd[1572]: 2025-02-13 19:19:59.398 [INFO][3715] ipam/ipam.go 489: Trying affinity for 192.168.47.0/26 host="10.67.124.142" Feb 13 19:19:59.416197 containerd[1572]: 2025-02-13 19:19:59.399 [INFO][3715] ipam/ipam.go 155: Attempting to load block cidr=192.168.47.0/26 host="10.67.124.142" Feb 13 19:19:59.416197 containerd[1572]: 2025-02-13 19:19:59.400 [INFO][3715] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.47.0/26 host="10.67.124.142" Feb 13 19:19:59.416197 containerd[1572]: 2025-02-13 19:19:59.400 [INFO][3715] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.47.0/26 handle="k8s-pod-network.a74628f85648e5c141df77aa60eaaf81af5e11a8a0cf7f8a39eadf27379d12d2" host="10.67.124.142" Feb 13 19:19:59.416197 containerd[1572]: 2025-02-13 19:19:59.401 [INFO][3715] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.a74628f85648e5c141df77aa60eaaf81af5e11a8a0cf7f8a39eadf27379d12d2 Feb 13 19:19:59.416197 containerd[1572]: 2025-02-13 19:19:59.403 [INFO][3715] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.47.0/26 handle="k8s-pod-network.a74628f85648e5c141df77aa60eaaf81af5e11a8a0cf7f8a39eadf27379d12d2" host="10.67.124.142" Feb 13 19:19:59.416197 containerd[1572]: 2025-02-13 19:19:59.406 [INFO][3715] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.47.4/26] block=192.168.47.0/26 
handle="k8s-pod-network.a74628f85648e5c141df77aa60eaaf81af5e11a8a0cf7f8a39eadf27379d12d2" host="10.67.124.142" Feb 13 19:19:59.416197 containerd[1572]: 2025-02-13 19:19:59.406 [INFO][3715] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.47.4/26] handle="k8s-pod-network.a74628f85648e5c141df77aa60eaaf81af5e11a8a0cf7f8a39eadf27379d12d2" host="10.67.124.142" Feb 13 19:19:59.416197 containerd[1572]: 2025-02-13 19:19:59.406 [INFO][3715] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 19:19:59.416197 containerd[1572]: 2025-02-13 19:19:59.406 [INFO][3715] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.47.4/26] IPv6=[] ContainerID="a74628f85648e5c141df77aa60eaaf81af5e11a8a0cf7f8a39eadf27379d12d2" HandleID="k8s-pod-network.a74628f85648e5c141df77aa60eaaf81af5e11a8a0cf7f8a39eadf27379d12d2" Workload="10.67.124.142-k8s-test--pod--1-eth0" Feb 13 19:19:59.416759 containerd[1572]: 2025-02-13 19:19:59.407 [INFO][3705] cni-plugin/k8s.go 386: Populated endpoint ContainerID="a74628f85648e5c141df77aa60eaaf81af5e11a8a0cf7f8a39eadf27379d12d2" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.67.124.142-k8s-test--pod--1-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.67.124.142-k8s-test--pod--1-eth0", GenerateName:"", Namespace:"default", SelfLink:"", UID:"eb0a01cf-c99d-4eeb-8b01-218565d96674", ResourceVersion:"1257", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 19, 19, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", 
Node:"10.67.124.142", ContainerID:"", Pod:"test-pod-1", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.47.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali5ec59c6bf6e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 19:19:59.416759 containerd[1572]: 2025-02-13 19:19:59.407 [INFO][3705] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.47.4/32] ContainerID="a74628f85648e5c141df77aa60eaaf81af5e11a8a0cf7f8a39eadf27379d12d2" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.67.124.142-k8s-test--pod--1-eth0" Feb 13 19:19:59.416759 containerd[1572]: 2025-02-13 19:19:59.407 [INFO][3705] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5ec59c6bf6e ContainerID="a74628f85648e5c141df77aa60eaaf81af5e11a8a0cf7f8a39eadf27379d12d2" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.67.124.142-k8s-test--pod--1-eth0" Feb 13 19:19:59.416759 containerd[1572]: 2025-02-13 19:19:59.410 [INFO][3705] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a74628f85648e5c141df77aa60eaaf81af5e11a8a0cf7f8a39eadf27379d12d2" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.67.124.142-k8s-test--pod--1-eth0" Feb 13 19:19:59.416759 containerd[1572]: 2025-02-13 19:19:59.410 [INFO][3705] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="a74628f85648e5c141df77aa60eaaf81af5e11a8a0cf7f8a39eadf27379d12d2" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.67.124.142-k8s-test--pod--1-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.67.124.142-k8s-test--pod--1-eth0", GenerateName:"", Namespace:"default", SelfLink:"", UID:"eb0a01cf-c99d-4eeb-8b01-218565d96674", ResourceVersion:"1257", Generation:0, 
CreationTimestamp:time.Date(2025, time.February, 13, 19, 19, 44, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.67.124.142", ContainerID:"a74628f85648e5c141df77aa60eaaf81af5e11a8a0cf7f8a39eadf27379d12d2", Pod:"test-pod-1", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.47.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali5ec59c6bf6e", MAC:"22:c4:fe:6d:95:55", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 19:19:59.416759 containerd[1572]: 2025-02-13 19:19:59.415 [INFO][3705] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="a74628f85648e5c141df77aa60eaaf81af5e11a8a0cf7f8a39eadf27379d12d2" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.67.124.142-k8s-test--pod--1-eth0" Feb 13 19:19:59.430333 containerd[1572]: time="2025-02-13T19:19:59.430158481Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 19:19:59.430333 containerd[1572]: time="2025-02-13T19:19:59.430205234Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 19:19:59.430333 containerd[1572]: time="2025-02-13T19:19:59.430215810Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:19:59.430333 containerd[1572]: time="2025-02-13T19:19:59.430294360Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 19:19:59.445257 systemd[1]: Started cri-containerd-a74628f85648e5c141df77aa60eaaf81af5e11a8a0cf7f8a39eadf27379d12d2.scope - libcontainer container a74628f85648e5c141df77aa60eaaf81af5e11a8a0cf7f8a39eadf27379d12d2. Feb 13 19:19:59.453616 systemd-resolved[1487]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Feb 13 19:19:59.472559 containerd[1572]: time="2025-02-13T19:19:59.472511184Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:eb0a01cf-c99d-4eeb-8b01-218565d96674,Namespace:default,Attempt:0,} returns sandbox id \"a74628f85648e5c141df77aa60eaaf81af5e11a8a0cf7f8a39eadf27379d12d2\"" Feb 13 19:19:59.473824 containerd[1572]: time="2025-02-13T19:19:59.473754332Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Feb 13 19:19:59.921127 containerd[1572]: time="2025-02-13T19:19:59.921089956Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 19:19:59.922360 containerd[1572]: time="2025-02-13T19:19:59.922329849Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=61" Feb 13 19:19:59.923857 containerd[1572]: time="2025-02-13T19:19:59.923809419Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:fe94eb5f0c9c8d0ca277aa8cd5940f1faf5970175bf373932babc578545deda8\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:d9bc3da999da9f147f1277c7b18292486847e8f39f95fcf81d914d0c22815faf\", size \"73054371\" in 450.037481ms" Feb 13 19:19:59.923857 containerd[1572]: time="2025-02-13T19:19:59.923825736Z" level=info msg="PullImage 
\"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:fe94eb5f0c9c8d0ca277aa8cd5940f1faf5970175bf373932babc578545deda8\"" Feb 13 19:19:59.924988 containerd[1572]: time="2025-02-13T19:19:59.924972585Z" level=info msg="CreateContainer within sandbox \"a74628f85648e5c141df77aa60eaaf81af5e11a8a0cf7f8a39eadf27379d12d2\" for container &ContainerMetadata{Name:test,Attempt:0,}" Feb 13 19:19:59.934766 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2884564001.mount: Deactivated successfully. Feb 13 19:19:59.936768 containerd[1572]: time="2025-02-13T19:19:59.936746913Z" level=info msg="CreateContainer within sandbox \"a74628f85648e5c141df77aa60eaaf81af5e11a8a0cf7f8a39eadf27379d12d2\" for &ContainerMetadata{Name:test,Attempt:0,} returns container id \"40b236e0ca9ba2fa3b85b7b7c7aedc083f801cd30c0d601ce3e2f68b95706c79\"" Feb 13 19:19:59.937599 containerd[1572]: time="2025-02-13T19:19:59.937163618Z" level=info msg="StartContainer for \"40b236e0ca9ba2fa3b85b7b7c7aedc083f801cd30c0d601ce3e2f68b95706c79\"" Feb 13 19:19:59.957214 systemd[1]: Started cri-containerd-40b236e0ca9ba2fa3b85b7b7c7aedc083f801cd30c0d601ce3e2f68b95706c79.scope - libcontainer container 40b236e0ca9ba2fa3b85b7b7c7aedc083f801cd30c0d601ce3e2f68b95706c79. 
Feb 13 19:19:59.970544 containerd[1572]: time="2025-02-13T19:19:59.970519341Z" level=info msg="StartContainer for \"40b236e0ca9ba2fa3b85b7b7c7aedc083f801cd30c0d601ce3e2f68b95706c79\" returns successfully" Feb 13 19:19:59.977576 kubelet[2047]: E0213 19:19:59.977549 2047 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:20:00.168425 kubelet[2047]: I0213 19:20:00.168383 2047 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/test-pod-1" podStartSLOduration=15.717530429 podStartE2EDuration="16.168369033s" podCreationTimestamp="2025-02-13 19:19:44 +0000 UTC" firstStartedPulling="2025-02-13 19:19:59.473369027 +0000 UTC m=+46.924511385" lastFinishedPulling="2025-02-13 19:19:59.924207633 +0000 UTC m=+47.375349989" observedRunningTime="2025-02-13 19:20:00.168006941 +0000 UTC m=+47.619149314" watchObservedRunningTime="2025-02-13 19:20:00.168369033 +0000 UTC m=+47.619511400" Feb 13 19:20:00.685231 systemd-networkd[1260]: cali5ec59c6bf6e: Gained IPv6LL Feb 13 19:20:00.978021 kubelet[2047]: E0213 19:20:00.977907 2047 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:20:01.978847 kubelet[2047]: E0213 19:20:01.978808 2047 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 19:20:02.979702 kubelet[2047]: E0213 19:20:02.979673 2047 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"