Jan 30 13:53:59.735050 kernel: Linux version 6.6.74-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Wed Jan 29 10:09:32 -00 2025
Jan 30 13:53:59.735066 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=vmware flatcar.autologin verity.usrhash=befc9792b021bef43c896e00e1d5172b6224dbafc9b6c92b267e5e544378e681
Jan 30 13:53:59.735072 kernel: Disabled fast string operations
Jan 30 13:53:59.735076 kernel: BIOS-provided physical RAM map:
Jan 30 13:53:59.735080 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ebff] usable
Jan 30 13:53:59.735084 kernel: BIOS-e820: [mem 0x000000000009ec00-0x000000000009ffff] reserved
Jan 30 13:53:59.735090 kernel: BIOS-e820: [mem 0x00000000000dc000-0x00000000000fffff] reserved
Jan 30 13:53:59.735094 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007fedffff] usable
Jan 30 13:53:59.735099 kernel: BIOS-e820: [mem 0x000000007fee0000-0x000000007fefefff] ACPI data
Jan 30 13:53:59.735103 kernel: BIOS-e820: [mem 0x000000007feff000-0x000000007fefffff] ACPI NVS
Jan 30 13:53:59.735107 kernel: BIOS-e820: [mem 0x000000007ff00000-0x000000007fffffff] usable
Jan 30 13:53:59.735111 kernel: BIOS-e820: [mem 0x00000000f0000000-0x00000000f7ffffff] reserved
Jan 30 13:53:59.735115 kernel: BIOS-e820: [mem 0x00000000fec00000-0x00000000fec0ffff] reserved
Jan 30 13:53:59.735119 kernel: BIOS-e820: [mem 0x00000000fee00000-0x00000000fee00fff] reserved
Jan 30 13:53:59.735125 kernel: BIOS-e820: [mem 0x00000000fffe0000-0x00000000ffffffff] reserved
Jan 30 13:53:59.735130 kernel: NX (Execute Disable) protection: active
Jan 30 13:53:59.735135 kernel: APIC: Static calls initialized
Jan 30 13:53:59.735139 kernel: SMBIOS 2.7 present.
Jan 30 13:53:59.735144 kernel: DMI: VMware, Inc. VMware Virtual Platform/440BX Desktop Reference Platform, BIOS 6.00 05/28/2020
Jan 30 13:53:59.735148 kernel: vmware: hypercall mode: 0x00
Jan 30 13:53:59.735153 kernel: Hypervisor detected: VMware
Jan 30 13:53:59.735158 kernel: vmware: TSC freq read from hypervisor : 3408.000 MHz
Jan 30 13:53:59.735164 kernel: vmware: Host bus clock speed read from hypervisor : 66000000 Hz
Jan 30 13:53:59.735168 kernel: vmware: using clock offset of 2572950796 ns
Jan 30 13:53:59.735173 kernel: tsc: Detected 3408.000 MHz processor
Jan 30 13:53:59.735178 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jan 30 13:53:59.735183 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jan 30 13:53:59.735188 kernel: last_pfn = 0x80000 max_arch_pfn = 0x400000000
Jan 30 13:53:59.735192 kernel: total RAM covered: 3072M
Jan 30 13:53:59.735197 kernel: Found optimal setting for mtrr clean up
Jan 30 13:53:59.735202 kernel: gran_size: 64K chunk_size: 64K num_reg: 2 lose cover RAM: 0G
Jan 30 13:53:59.735208 kernel: MTRR map: 6 entries (5 fixed + 1 variable; max 21), built from 8 variable MTRRs
Jan 30 13:53:59.735213 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jan 30 13:53:59.735218 kernel: Using GB pages for direct mapping
Jan 30 13:53:59.735222 kernel: ACPI: Early table checksum verification disabled
Jan 30 13:53:59.735227 kernel: ACPI: RSDP 0x00000000000F6A00 000024 (v02 PTLTD )
Jan 30 13:53:59.735232 kernel: ACPI: XSDT 0x000000007FEE965B 00005C (v01 INTEL 440BX 06040000 VMW 01324272)
Jan 30 13:53:59.735236 kernel: ACPI: FACP 0x000000007FEFEE73 0000F4 (v04 INTEL 440BX 06040000 PTL 000F4240)
Jan 30 13:53:59.735241 kernel: ACPI: DSDT 0x000000007FEEAD55 01411E (v01 PTLTD Custom 06040000 MSFT 03000001)
Jan 30 13:53:59.735246 kernel: ACPI: FACS 0x000000007FEFFFC0 000040
Jan 30 13:53:59.735254 kernel: ACPI: FACS 0x000000007FEFFFC0 000040
Jan 30 13:53:59.735258 kernel: ACPI: BOOT 0x000000007FEEAD2D 000028 (v01 PTLTD $SBFTBL$ 06040000 LTP 00000001)
Jan 30 13:53:59.735264 kernel: ACPI: APIC 0x000000007FEEA5EB 000742 (v01 PTLTD ? APIC 06040000 LTP 00000000)
Jan 30 13:53:59.735269 kernel: ACPI: MCFG 0x000000007FEEA5AF 00003C (v01 PTLTD $PCITBL$ 06040000 LTP 00000001)
Jan 30 13:53:59.735274 kernel: ACPI: SRAT 0x000000007FEE9757 0008A8 (v02 VMWARE MEMPLUG 06040000 VMW 00000001)
Jan 30 13:53:59.735280 kernel: ACPI: HPET 0x000000007FEE971F 000038 (v01 VMWARE VMW HPET 06040000 VMW 00000001)
Jan 30 13:53:59.735285 kernel: ACPI: WAET 0x000000007FEE96F7 000028 (v01 VMWARE VMW WAET 06040000 VMW 00000001)
Jan 30 13:53:59.735290 kernel: ACPI: Reserving FACP table memory at [mem 0x7fefee73-0x7fefef66]
Jan 30 13:53:59.735295 kernel: ACPI: Reserving DSDT table memory at [mem 0x7feead55-0x7fefee72]
Jan 30 13:53:59.735300 kernel: ACPI: Reserving FACS table memory at [mem 0x7fefffc0-0x7fefffff]
Jan 30 13:53:59.735305 kernel: ACPI: Reserving FACS table memory at [mem 0x7fefffc0-0x7fefffff]
Jan 30 13:53:59.735310 kernel: ACPI: Reserving BOOT table memory at [mem 0x7feead2d-0x7feead54]
Jan 30 13:53:59.735315 kernel: ACPI: Reserving APIC table memory at [mem 0x7feea5eb-0x7feead2c]
Jan 30 13:53:59.735320 kernel: ACPI: Reserving MCFG table memory at [mem 0x7feea5af-0x7feea5ea]
Jan 30 13:53:59.735325 kernel: ACPI: Reserving SRAT table memory at [mem 0x7fee9757-0x7fee9ffe]
Jan 30 13:53:59.735331 kernel: ACPI: Reserving HPET table memory at [mem 0x7fee971f-0x7fee9756]
Jan 30 13:53:59.735336 kernel: ACPI: Reserving WAET table memory at [mem 0x7fee96f7-0x7fee971e]
Jan 30 13:53:59.735341 kernel: system APIC only can use physical flat
Jan 30 13:53:59.735345 kernel: APIC: Switched APIC routing to: physical flat
Jan 30 13:53:59.735350 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Jan 30 13:53:59.735356 kernel: SRAT: PXM 0 -> APIC 0x02 -> Node 0
Jan 30 13:53:59.735361 kernel: SRAT: PXM 0 -> APIC 0x04 -> Node 0
Jan 30 13:53:59.735365 kernel: SRAT: PXM 0 -> APIC 0x06 -> Node 0
Jan 30 13:53:59.735370 kernel: SRAT: PXM 0 -> APIC 0x08 -> Node 0
Jan 30 13:53:59.735376 kernel: SRAT: PXM 0 -> APIC 0x0a -> Node 0
Jan 30 13:53:59.735381 kernel: SRAT: PXM 0 -> APIC 0x0c -> Node 0
Jan 30 13:53:59.735386 kernel: SRAT: PXM 0 -> APIC 0x0e -> Node 0
Jan 30 13:53:59.735391 kernel: SRAT: PXM 0 -> APIC 0x10 -> Node 0
Jan 30 13:53:59.735396 kernel: SRAT: PXM 0 -> APIC 0x12 -> Node 0
Jan 30 13:53:59.735401 kernel: SRAT: PXM 0 -> APIC 0x14 -> Node 0
Jan 30 13:53:59.735406 kernel: SRAT: PXM 0 -> APIC 0x16 -> Node 0
Jan 30 13:53:59.735410 kernel: SRAT: PXM 0 -> APIC 0x18 -> Node 0
Jan 30 13:53:59.735415 kernel: SRAT: PXM 0 -> APIC 0x1a -> Node 0
Jan 30 13:53:59.735420 kernel: SRAT: PXM 0 -> APIC 0x1c -> Node 0
Jan 30 13:53:59.735426 kernel: SRAT: PXM 0 -> APIC 0x1e -> Node 0
Jan 30 13:53:59.735431 kernel: SRAT: PXM 0 -> APIC 0x20 -> Node 0
Jan 30 13:53:59.735436 kernel: SRAT: PXM 0 -> APIC 0x22 -> Node 0
Jan 30 13:53:59.735441 kernel: SRAT: PXM 0 -> APIC 0x24 -> Node 0
Jan 30 13:53:59.735445 kernel: SRAT: PXM 0 -> APIC 0x26 -> Node 0
Jan 30 13:53:59.735450 kernel: SRAT: PXM 0 -> APIC 0x28 -> Node 0
Jan 30 13:53:59.735455 kernel: SRAT: PXM 0 -> APIC 0x2a -> Node 0
Jan 30 13:53:59.735460 kernel: SRAT: PXM 0 -> APIC 0x2c -> Node 0
Jan 30 13:53:59.735465 kernel: SRAT: PXM 0 -> APIC 0x2e -> Node 0
Jan 30 13:53:59.735470 kernel: SRAT: PXM 0 -> APIC 0x30 -> Node 0
Jan 30 13:53:59.735475 kernel: SRAT: PXM 0 -> APIC 0x32 -> Node 0
Jan 30 13:53:59.735481 kernel: SRAT: PXM 0 -> APIC 0x34 -> Node 0
Jan 30 13:53:59.735486 kernel: SRAT: PXM 0 -> APIC 0x36 -> Node 0
Jan 30 13:53:59.735490 kernel: SRAT: PXM 0 -> APIC 0x38 -> Node 0
Jan 30 13:53:59.735495 kernel: SRAT: PXM 0 -> APIC 0x3a -> Node 0
Jan 30 13:53:59.735500 kernel: SRAT: PXM 0 -> APIC 0x3c -> Node 0
Jan 30 13:53:59.735505 kernel: SRAT: PXM 0 -> APIC 0x3e -> Node 0
Jan 30 13:53:59.735510 kernel: SRAT: PXM 0 -> APIC 0x40 -> Node 0
Jan 30 13:53:59.735515 kernel: SRAT: PXM 0 -> APIC 0x42 -> Node 0
Jan 30 13:53:59.735520 kernel: SRAT: PXM 0 -> APIC 0x44 -> Node 0
Jan 30 13:53:59.735524 kernel: SRAT: PXM 0 -> APIC 0x46 -> Node 0
Jan 30 13:53:59.735531 kernel: SRAT: PXM 0 -> APIC 0x48 -> Node 0
Jan 30 13:53:59.735536 kernel: SRAT: PXM 0 -> APIC 0x4a -> Node 0
Jan 30 13:53:59.735541 kernel: SRAT: PXM 0 -> APIC 0x4c -> Node 0
Jan 30 13:53:59.735545 kernel: SRAT: PXM 0 -> APIC 0x4e -> Node 0
Jan 30 13:53:59.735550 kernel: SRAT: PXM 0 -> APIC 0x50 -> Node 0
Jan 30 13:53:59.735555 kernel: SRAT: PXM 0 -> APIC 0x52 -> Node 0
Jan 30 13:53:59.735560 kernel: SRAT: PXM 0 -> APIC 0x54 -> Node 0
Jan 30 13:53:59.735565 kernel: SRAT: PXM 0 -> APIC 0x56 -> Node 0
Jan 30 13:53:59.735569 kernel: SRAT: PXM 0 -> APIC 0x58 -> Node 0
Jan 30 13:53:59.735574 kernel: SRAT: PXM 0 -> APIC 0x5a -> Node 0
Jan 30 13:53:59.735580 kernel: SRAT: PXM 0 -> APIC 0x5c -> Node 0
Jan 30 13:53:59.735585 kernel: SRAT: PXM 0 -> APIC 0x5e -> Node 0
Jan 30 13:53:59.735590 kernel: SRAT: PXM 0 -> APIC 0x60 -> Node 0
Jan 30 13:53:59.735595 kernel: SRAT: PXM 0 -> APIC 0x62 -> Node 0
Jan 30 13:53:59.735600 kernel: SRAT: PXM 0 -> APIC 0x64 -> Node 0
Jan 30 13:53:59.735605 kernel: SRAT: PXM 0 -> APIC 0x66 -> Node 0
Jan 30 13:53:59.735610 kernel: SRAT: PXM 0 -> APIC 0x68 -> Node 0
Jan 30 13:53:59.735615 kernel: SRAT: PXM 0 -> APIC 0x6a -> Node 0
Jan 30 13:53:59.735620 kernel: SRAT: PXM 0 -> APIC 0x6c -> Node 0
Jan 30 13:53:59.735624 kernel: SRAT: PXM 0 -> APIC 0x6e -> Node 0
Jan 30 13:53:59.735631 kernel: SRAT: PXM 0 -> APIC 0x70 -> Node 0
Jan 30 13:53:59.735635 kernel: SRAT: PXM 0 -> APIC 0x72 -> Node 0
Jan 30 13:53:59.735640 kernel: SRAT: PXM 0 -> APIC 0x74 -> Node 0
Jan 30 13:53:59.735649 kernel: SRAT: PXM 0 -> APIC 0x76 -> Node 0
Jan 30 13:53:59.735655 kernel: SRAT: PXM 0 -> APIC 0x78 -> Node 0
Jan 30 13:53:59.735660 kernel: SRAT: PXM 0 -> APIC 0x7a -> Node 0
Jan 30 13:53:59.735666 kernel: SRAT: PXM 0 -> APIC 0x7c -> Node 0
Jan 30 13:53:59.735671 kernel: SRAT: PXM 0 -> APIC 0x7e -> Node 0
Jan 30 13:53:59.735676 kernel: SRAT: PXM 0 -> APIC 0x80 -> Node 0
Jan 30 13:53:59.735683 kernel: SRAT: PXM 0 -> APIC 0x82 -> Node 0
Jan 30 13:53:59.735688 kernel: SRAT: PXM 0 -> APIC 0x84 -> Node 0
Jan 30 13:53:59.735693 kernel: SRAT: PXM 0 -> APIC 0x86 -> Node 0
Jan 30 13:53:59.735698 kernel: SRAT: PXM 0 -> APIC 0x88 -> Node 0
Jan 30 13:53:59.735704 kernel: SRAT: PXM 0 -> APIC 0x8a -> Node 0
Jan 30 13:53:59.735756 kernel: SRAT: PXM 0 -> APIC 0x8c -> Node 0
Jan 30 13:53:59.735762 kernel: SRAT: PXM 0 -> APIC 0x8e -> Node 0
Jan 30 13:53:59.735767 kernel: SRAT: PXM 0 -> APIC 0x90 -> Node 0
Jan 30 13:53:59.735772 kernel: SRAT: PXM 0 -> APIC 0x92 -> Node 0
Jan 30 13:53:59.735778 kernel: SRAT: PXM 0 -> APIC 0x94 -> Node 0
Jan 30 13:53:59.735785 kernel: SRAT: PXM 0 -> APIC 0x96 -> Node 0
Jan 30 13:53:59.735790 kernel: SRAT: PXM 0 -> APIC 0x98 -> Node 0
Jan 30 13:53:59.735796 kernel: SRAT: PXM 0 -> APIC 0x9a -> Node 0
Jan 30 13:53:59.735801 kernel: SRAT: PXM 0 -> APIC 0x9c -> Node 0
Jan 30 13:53:59.735807 kernel: SRAT: PXM 0 -> APIC 0x9e -> Node 0
Jan 30 13:53:59.735812 kernel: SRAT: PXM 0 -> APIC 0xa0 -> Node 0
Jan 30 13:53:59.735817 kernel: SRAT: PXM 0 -> APIC 0xa2 -> Node 0
Jan 30 13:53:59.735822 kernel: SRAT: PXM 0 -> APIC 0xa4 -> Node 0
Jan 30 13:53:59.735827 kernel: SRAT: PXM 0 -> APIC 0xa6 -> Node 0
Jan 30 13:53:59.735833 kernel: SRAT: PXM 0 -> APIC 0xa8 -> Node 0
Jan 30 13:53:59.735839 kernel: SRAT: PXM 0 -> APIC 0xaa -> Node 0
Jan 30 13:53:59.735844 kernel: SRAT: PXM 0 -> APIC 0xac -> Node 0
Jan 30 13:53:59.735850 kernel: SRAT: PXM 0 -> APIC 0xae -> Node 0
Jan 30 13:53:59.735855 kernel: SRAT: PXM 0 -> APIC 0xb0 -> Node 0
Jan 30 13:53:59.735860 kernel: SRAT: PXM 0 -> APIC 0xb2 -> Node 0
Jan 30 13:53:59.735865 kernel: SRAT: PXM 0 -> APIC 0xb4 -> Node 0
Jan 30 13:53:59.735870 kernel: SRAT: PXM 0 -> APIC 0xb6 -> Node 0
Jan 30 13:53:59.735875 kernel: SRAT: PXM 0 -> APIC 0xb8 -> Node 0
Jan 30 13:53:59.735881 kernel: SRAT: PXM 0 -> APIC 0xba -> Node 0
Jan 30 13:53:59.735886 kernel: SRAT: PXM 0 -> APIC 0xbc -> Node 0
Jan 30 13:53:59.735892 kernel: SRAT: PXM 0 -> APIC 0xbe -> Node 0
Jan 30 13:53:59.735898 kernel: SRAT: PXM 0 -> APIC 0xc0 -> Node 0
Jan 30 13:53:59.735903 kernel: SRAT: PXM 0 -> APIC 0xc2 -> Node 0
Jan 30 13:53:59.735908 kernel: SRAT: PXM 0 -> APIC 0xc4 -> Node 0
Jan 30 13:53:59.735913 kernel: SRAT: PXM 0 -> APIC 0xc6 -> Node 0
Jan 30 13:53:59.735919 kernel: SRAT: PXM 0 -> APIC 0xc8 -> Node 0
Jan 30 13:53:59.735924 kernel: SRAT: PXM 0 -> APIC 0xca -> Node 0
Jan 30 13:53:59.735929 kernel: SRAT: PXM 0 -> APIC 0xcc -> Node 0
Jan 30 13:53:59.735934 kernel: SRAT: PXM 0 -> APIC 0xce -> Node 0
Jan 30 13:53:59.735940 kernel: SRAT: PXM 0 -> APIC 0xd0 -> Node 0
Jan 30 13:53:59.735946 kernel: SRAT: PXM 0 -> APIC 0xd2 -> Node 0
Jan 30 13:53:59.735951 kernel: SRAT: PXM 0 -> APIC 0xd4 -> Node 0
Jan 30 13:53:59.735956 kernel: SRAT: PXM 0 -> APIC 0xd6 -> Node 0
Jan 30 13:53:59.735961 kernel: SRAT: PXM 0 -> APIC 0xd8 -> Node 0
Jan 30 13:53:59.735967 kernel: SRAT: PXM 0 -> APIC 0xda -> Node 0
Jan 30 13:53:59.735972 kernel: SRAT: PXM 0 -> APIC 0xdc -> Node 0
Jan 30 13:53:59.735977 kernel: SRAT: PXM 0 -> APIC 0xde -> Node 0
Jan 30 13:53:59.735982 kernel: SRAT: PXM 0 -> APIC 0xe0 -> Node 0
Jan 30 13:53:59.735987 kernel: SRAT: PXM 0 -> APIC 0xe2 -> Node 0
Jan 30 13:53:59.735992 kernel: SRAT: PXM 0 -> APIC 0xe4 -> Node 0
Jan 30 13:53:59.735998 kernel: SRAT: PXM 0 -> APIC 0xe6 -> Node 0
Jan 30 13:53:59.736004 kernel: SRAT: PXM 0 -> APIC 0xe8 -> Node 0
Jan 30 13:53:59.736009 kernel: SRAT: PXM 0 -> APIC 0xea -> Node 0
Jan 30 13:53:59.736015 kernel: SRAT: PXM 0 -> APIC 0xec -> Node 0
Jan 30 13:53:59.736020 kernel: SRAT: PXM 0 -> APIC 0xee -> Node 0
Jan 30 13:53:59.736025 kernel: SRAT: PXM 0 -> APIC 0xf0 -> Node 0
Jan 30 13:53:59.736030 kernel: SRAT: PXM 0 -> APIC 0xf2 -> Node 0
Jan 30 13:53:59.736035 kernel: SRAT: PXM 0 -> APIC 0xf4 -> Node 0
Jan 30 13:53:59.736041 kernel: SRAT: PXM 0 -> APIC 0xf6 -> Node 0
Jan 30 13:53:59.736046 kernel: SRAT: PXM 0 -> APIC 0xf8 -> Node 0
Jan 30 13:53:59.736051 kernel: SRAT: PXM 0 -> APIC 0xfa -> Node 0
Jan 30 13:53:59.736058 kernel: SRAT: PXM 0 -> APIC 0xfc -> Node 0
Jan 30 13:53:59.736063 kernel: SRAT: PXM 0 -> APIC 0xfe -> Node 0
Jan 30 13:53:59.736069 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff]
Jan 30 13:53:59.736074 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff]
Jan 30 13:53:59.736079 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000-0xbfffffff] hotplug
Jan 30 13:53:59.736085 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0x7fffffff] -> [mem 0x00000000-0x7fffffff]
Jan 30 13:53:59.736090 kernel: NODE_DATA(0) allocated [mem 0x7fffa000-0x7fffffff]
Jan 30 13:53:59.736096 kernel: Zone ranges:
Jan 30 13:53:59.736101 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jan 30 13:53:59.736108 kernel: DMA32 [mem 0x0000000001000000-0x000000007fffffff]
Jan 30 13:53:59.736113 kernel: Normal empty
Jan 30 13:53:59.736118 kernel: Movable zone start for each node
Jan 30 13:53:59.736124 kernel: Early memory node ranges
Jan 30 13:53:59.736129 kernel: node 0: [mem 0x0000000000001000-0x000000000009dfff]
Jan 30 13:53:59.736134 kernel: node 0: [mem 0x0000000000100000-0x000000007fedffff]
Jan 30 13:53:59.736140 kernel: node 0: [mem 0x000000007ff00000-0x000000007fffffff]
Jan 30 13:53:59.736145 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007fffffff]
Jan 30 13:53:59.736150 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 30 13:53:59.736155 kernel: On node 0, zone DMA: 98 pages in unavailable ranges
Jan 30 13:53:59.736162 kernel: On node 0, zone DMA32: 32 pages in unavailable ranges
Jan 30 13:53:59.736167 kernel: ACPI: PM-Timer IO Port: 0x1008
Jan 30 13:53:59.736173 kernel: system APIC only can use physical flat
Jan 30 13:53:59.736178 kernel: ACPI: LAPIC_NMI (acpi_id[0x00] high edge lint[0x1])
Jan 30 13:53:59.736184 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] high edge lint[0x1])
Jan 30 13:53:59.736189 kernel: ACPI: LAPIC_NMI (acpi_id[0x02] high edge lint[0x1])
Jan 30 13:53:59.736194 kernel: ACPI: LAPIC_NMI (acpi_id[0x03] high edge lint[0x1])
Jan 30 13:53:59.736200 kernel: ACPI: LAPIC_NMI (acpi_id[0x04] high edge lint[0x1])
Jan 30 13:53:59.736205 kernel: ACPI: LAPIC_NMI (acpi_id[0x05] high edge lint[0x1])
Jan 30 13:53:59.736212 kernel: ACPI: LAPIC_NMI (acpi_id[0x06] high edge lint[0x1])
Jan 30 13:53:59.736217 kernel: ACPI: LAPIC_NMI (acpi_id[0x07] high edge lint[0x1])
Jan 30 13:53:59.736222 kernel: ACPI: LAPIC_NMI (acpi_id[0x08] high edge lint[0x1])
Jan 30 13:53:59.736228 kernel: ACPI: LAPIC_NMI (acpi_id[0x09] high edge lint[0x1])
Jan 30 13:53:59.736233 kernel: ACPI: LAPIC_NMI (acpi_id[0x0a] high edge lint[0x1])
Jan 30 13:53:59.736238 kernel: ACPI: LAPIC_NMI (acpi_id[0x0b] high edge lint[0x1])
Jan 30 13:53:59.736243 kernel: ACPI: LAPIC_NMI (acpi_id[0x0c] high edge lint[0x1])
Jan 30 13:53:59.736248 kernel: ACPI: LAPIC_NMI (acpi_id[0x0d] high edge lint[0x1])
Jan 30 13:53:59.736254 kernel: ACPI: LAPIC_NMI (acpi_id[0x0e] high edge lint[0x1])
Jan 30 13:53:59.736259 kernel: ACPI: LAPIC_NMI (acpi_id[0x0f] high edge lint[0x1])
Jan 30 13:53:59.736265 kernel: ACPI: LAPIC_NMI (acpi_id[0x10] high edge lint[0x1])
Jan 30 13:53:59.736271 kernel: ACPI: LAPIC_NMI (acpi_id[0x11] high edge lint[0x1])
Jan 30 13:53:59.736276 kernel: ACPI: LAPIC_NMI (acpi_id[0x12] high edge lint[0x1])
Jan 30 13:53:59.736281 kernel: ACPI: LAPIC_NMI (acpi_id[0x13] high edge lint[0x1])
Jan 30 13:53:59.736286 kernel: ACPI: LAPIC_NMI (acpi_id[0x14] high edge lint[0x1])
Jan 30 13:53:59.736292 kernel: ACPI: LAPIC_NMI (acpi_id[0x15] high edge lint[0x1])
Jan 30 13:53:59.736297 kernel: ACPI: LAPIC_NMI (acpi_id[0x16] high edge lint[0x1])
Jan 30 13:53:59.736302 kernel: ACPI: LAPIC_NMI (acpi_id[0x17] high edge lint[0x1])
Jan 30 13:53:59.736308 kernel: ACPI: LAPIC_NMI (acpi_id[0x18] high edge lint[0x1])
Jan 30 13:53:59.736313 kernel: ACPI: LAPIC_NMI (acpi_id[0x19] high edge lint[0x1])
Jan 30 13:53:59.736319 kernel: ACPI: LAPIC_NMI (acpi_id[0x1a] high edge lint[0x1])
Jan 30 13:53:59.736325 kernel: ACPI: LAPIC_NMI (acpi_id[0x1b] high edge lint[0x1])
Jan 30 13:53:59.736330 kernel: ACPI: LAPIC_NMI (acpi_id[0x1c] high edge lint[0x1])
Jan 30 13:53:59.736335 kernel: ACPI: LAPIC_NMI (acpi_id[0x1d] high edge lint[0x1])
Jan 30 13:53:59.736340 kernel: ACPI: LAPIC_NMI (acpi_id[0x1e] high edge lint[0x1])
Jan 30 13:53:59.736346 kernel: ACPI: LAPIC_NMI (acpi_id[0x1f] high edge lint[0x1])
Jan 30 13:53:59.736351 kernel: ACPI: LAPIC_NMI (acpi_id[0x20] high edge lint[0x1])
Jan 30 13:53:59.736356 kernel: ACPI: LAPIC_NMI (acpi_id[0x21] high edge lint[0x1])
Jan 30 13:53:59.736361 kernel: ACPI: LAPIC_NMI (acpi_id[0x22] high edge lint[0x1])
Jan 30 13:53:59.736368 kernel: ACPI: LAPIC_NMI (acpi_id[0x23] high edge lint[0x1])
Jan 30 13:53:59.736373 kernel: ACPI: LAPIC_NMI (acpi_id[0x24] high edge lint[0x1])
Jan 30 13:53:59.736378 kernel: ACPI: LAPIC_NMI (acpi_id[0x25] high edge lint[0x1])
Jan 30 13:53:59.736384 kernel: ACPI: LAPIC_NMI (acpi_id[0x26] high edge lint[0x1])
Jan 30 13:53:59.736389 kernel: ACPI: LAPIC_NMI (acpi_id[0x27] high edge lint[0x1])
Jan 30 13:53:59.736394 kernel: ACPI: LAPIC_NMI (acpi_id[0x28] high edge lint[0x1])
Jan 30 13:53:59.736400 kernel: ACPI: LAPIC_NMI (acpi_id[0x29] high edge lint[0x1])
Jan 30 13:53:59.736405 kernel: ACPI: LAPIC_NMI (acpi_id[0x2a] high edge lint[0x1])
Jan 30 13:53:59.736410 kernel: ACPI: LAPIC_NMI (acpi_id[0x2b] high edge lint[0x1])
Jan 30 13:53:59.736416 kernel: ACPI: LAPIC_NMI (acpi_id[0x2c] high edge lint[0x1])
Jan 30 13:53:59.736422 kernel: ACPI: LAPIC_NMI (acpi_id[0x2d] high edge lint[0x1])
Jan 30 13:53:59.736427 kernel: ACPI: LAPIC_NMI (acpi_id[0x2e] high edge lint[0x1])
Jan 30 13:53:59.736433 kernel: ACPI: LAPIC_NMI (acpi_id[0x2f] high edge lint[0x1])
Jan 30 13:53:59.736438 kernel: ACPI: LAPIC_NMI (acpi_id[0x30] high edge lint[0x1])
Jan 30 13:53:59.736443 kernel: ACPI: LAPIC_NMI (acpi_id[0x31] high edge lint[0x1])
Jan 30 13:53:59.736449 kernel: ACPI: LAPIC_NMI (acpi_id[0x32] high edge lint[0x1])
Jan 30 13:53:59.736454 kernel: ACPI: LAPIC_NMI (acpi_id[0x33] high edge lint[0x1])
Jan 30 13:53:59.736460 kernel: ACPI: LAPIC_NMI (acpi_id[0x34] high edge lint[0x1])
Jan 30 13:53:59.736465 kernel: ACPI: LAPIC_NMI (acpi_id[0x35] high edge lint[0x1])
Jan 30 13:53:59.736470 kernel: ACPI: LAPIC_NMI (acpi_id[0x36] high edge lint[0x1])
Jan 30 13:53:59.736476 kernel: ACPI: LAPIC_NMI (acpi_id[0x37] high edge lint[0x1])
Jan 30 13:53:59.736482 kernel: ACPI: LAPIC_NMI (acpi_id[0x38] high edge lint[0x1])
Jan 30 13:53:59.736487 kernel: ACPI: LAPIC_NMI (acpi_id[0x39] high edge lint[0x1])
Jan 30 13:53:59.736492 kernel: ACPI: LAPIC_NMI (acpi_id[0x3a] high edge lint[0x1])
Jan 30 13:53:59.736497 kernel: ACPI: LAPIC_NMI (acpi_id[0x3b] high edge lint[0x1])
Jan 30 13:53:59.736503 kernel: ACPI: LAPIC_NMI (acpi_id[0x3c] high edge lint[0x1])
Jan 30 13:53:59.736508 kernel: ACPI: LAPIC_NMI (acpi_id[0x3d] high edge lint[0x1])
Jan 30 13:53:59.736513 kernel: ACPI: LAPIC_NMI (acpi_id[0x3e] high edge lint[0x1])
Jan 30 13:53:59.736518 kernel: ACPI: LAPIC_NMI (acpi_id[0x3f] high edge lint[0x1])
Jan 30 13:53:59.736523 kernel: ACPI: LAPIC_NMI (acpi_id[0x40] high edge lint[0x1])
Jan 30 13:53:59.736530 kernel: ACPI: LAPIC_NMI (acpi_id[0x41] high edge lint[0x1])
Jan 30 13:53:59.736535 kernel: ACPI: LAPIC_NMI (acpi_id[0x42] high edge lint[0x1])
Jan 30 13:53:59.736541 kernel: ACPI: LAPIC_NMI (acpi_id[0x43] high edge lint[0x1])
Jan 30 13:53:59.736546 kernel: ACPI: LAPIC_NMI (acpi_id[0x44] high edge lint[0x1])
Jan 30 13:53:59.736552 kernel: ACPI: LAPIC_NMI (acpi_id[0x45] high edge lint[0x1])
Jan 30 13:53:59.736557 kernel: ACPI: LAPIC_NMI (acpi_id[0x46] high edge lint[0x1])
Jan 30 13:53:59.736562 kernel: ACPI: LAPIC_NMI (acpi_id[0x47] high edge lint[0x1])
Jan 30 13:53:59.736568 kernel: ACPI: LAPIC_NMI (acpi_id[0x48] high edge lint[0x1])
Jan 30 13:53:59.736573 kernel: ACPI: LAPIC_NMI (acpi_id[0x49] high edge lint[0x1])
Jan 30 13:53:59.736579 kernel: ACPI: LAPIC_NMI (acpi_id[0x4a] high edge lint[0x1])
Jan 30 13:53:59.736585 kernel: ACPI: LAPIC_NMI (acpi_id[0x4b] high edge lint[0x1])
Jan 30 13:53:59.736590 kernel: ACPI: LAPIC_NMI (acpi_id[0x4c] high edge lint[0x1])
Jan 30 13:53:59.736595 kernel: ACPI: LAPIC_NMI (acpi_id[0x4d] high edge lint[0x1])
Jan 30 13:53:59.736601 kernel: ACPI: LAPIC_NMI (acpi_id[0x4e] high edge lint[0x1])
Jan 30 13:53:59.736606 kernel: ACPI: LAPIC_NMI (acpi_id[0x4f] high edge lint[0x1])
Jan 30 13:53:59.736611 kernel: ACPI: LAPIC_NMI (acpi_id[0x50] high edge lint[0x1])
Jan 30 13:53:59.736616 kernel: ACPI: LAPIC_NMI (acpi_id[0x51] high edge lint[0x1])
Jan 30 13:53:59.736622 kernel: ACPI: LAPIC_NMI (acpi_id[0x52] high edge lint[0x1])
Jan 30 13:53:59.736627 kernel: ACPI: LAPIC_NMI (acpi_id[0x53] high edge lint[0x1])
Jan 30 13:53:59.736634 kernel: ACPI: LAPIC_NMI (acpi_id[0x54] high edge lint[0x1])
Jan 30 13:53:59.736639 kernel: ACPI: LAPIC_NMI (acpi_id[0x55] high edge lint[0x1])
Jan 30 13:53:59.736644 kernel: ACPI: LAPIC_NMI (acpi_id[0x56] high edge lint[0x1])
Jan 30 13:53:59.736649 kernel: ACPI: LAPIC_NMI (acpi_id[0x57] high edge lint[0x1])
Jan 30 13:53:59.736654 kernel: ACPI: LAPIC_NMI (acpi_id[0x58] high edge lint[0x1])
Jan 30 13:53:59.736660 kernel: ACPI: LAPIC_NMI (acpi_id[0x59] high edge lint[0x1])
Jan 30 13:53:59.736665 kernel: ACPI: LAPIC_NMI (acpi_id[0x5a] high edge lint[0x1])
Jan 30 13:53:59.736671 kernel: ACPI: LAPIC_NMI (acpi_id[0x5b] high edge lint[0x1])
Jan 30 13:53:59.736676 kernel: ACPI: LAPIC_NMI (acpi_id[0x5c] high edge lint[0x1])
Jan 30 13:53:59.736681 kernel: ACPI: LAPIC_NMI (acpi_id[0x5d] high edge lint[0x1])
Jan 30 13:53:59.736687 kernel: ACPI: LAPIC_NMI (acpi_id[0x5e] high edge lint[0x1])
Jan 30 13:53:59.736693 kernel: ACPI: LAPIC_NMI (acpi_id[0x5f] high edge lint[0x1])
Jan 30 13:53:59.736698 kernel: ACPI: LAPIC_NMI (acpi_id[0x60] high edge lint[0x1])
Jan 30 13:53:59.736703 kernel: ACPI: LAPIC_NMI (acpi_id[0x61] high edge lint[0x1])
Jan 30 13:53:59.736723 kernel: ACPI: LAPIC_NMI (acpi_id[0x62] high edge lint[0x1])
Jan 30 13:53:59.736729 kernel: ACPI: LAPIC_NMI (acpi_id[0x63] high edge lint[0x1])
Jan 30 13:53:59.736735 kernel: ACPI: LAPIC_NMI (acpi_id[0x64] high edge lint[0x1])
Jan 30 13:53:59.736740 kernel: ACPI: LAPIC_NMI (acpi_id[0x65] high edge lint[0x1])
Jan 30 13:53:59.736745 kernel: ACPI: LAPIC_NMI (acpi_id[0x66] high edge lint[0x1])
Jan 30 13:53:59.736753 kernel: ACPI: LAPIC_NMI (acpi_id[0x67] high edge lint[0x1])
Jan 30 13:53:59.736759 kernel: ACPI: LAPIC_NMI (acpi_id[0x68] high edge lint[0x1])
Jan 30 13:53:59.736764 kernel: ACPI: LAPIC_NMI (acpi_id[0x69] high edge lint[0x1])
Jan 30 13:53:59.736769 kernel: ACPI: LAPIC_NMI (acpi_id[0x6a] high edge lint[0x1])
Jan 30 13:53:59.736774 kernel: ACPI: LAPIC_NMI (acpi_id[0x6b] high edge lint[0x1])
Jan 30 13:53:59.736779 kernel: ACPI: LAPIC_NMI (acpi_id[0x6c] high edge lint[0x1])
Jan 30 13:53:59.736785 kernel: ACPI: LAPIC_NMI (acpi_id[0x6d] high edge lint[0x1])
Jan 30 13:53:59.736790 kernel: ACPI: LAPIC_NMI (acpi_id[0x6e] high edge lint[0x1])
Jan 30 13:53:59.736795 kernel: ACPI: LAPIC_NMI (acpi_id[0x6f] high edge lint[0x1])
Jan 30 13:53:59.736801 kernel: ACPI: LAPIC_NMI (acpi_id[0x70] high edge lint[0x1])
Jan 30 13:53:59.736807 kernel: ACPI: LAPIC_NMI (acpi_id[0x71] high edge lint[0x1])
Jan 30 13:53:59.736812 kernel: ACPI: LAPIC_NMI (acpi_id[0x72] high edge lint[0x1])
Jan 30 13:53:59.736817 kernel: ACPI: LAPIC_NMI (acpi_id[0x73] high edge lint[0x1])
Jan 30 13:53:59.736823 kernel: ACPI: LAPIC_NMI (acpi_id[0x74] high edge lint[0x1])
Jan 30 13:53:59.736828 kernel: ACPI: LAPIC_NMI (acpi_id[0x75] high edge lint[0x1])
Jan 30 13:53:59.736833 kernel: ACPI: LAPIC_NMI (acpi_id[0x76] high edge lint[0x1])
Jan 30 13:53:59.736839 kernel: ACPI: LAPIC_NMI (acpi_id[0x77] high edge lint[0x1])
Jan 30 13:53:59.736844 kernel: ACPI: LAPIC_NMI (acpi_id[0x78] high edge lint[0x1])
Jan 30 13:53:59.736849 kernel: ACPI: LAPIC_NMI (acpi_id[0x79] high edge lint[0x1])
Jan 30 13:53:59.736856 kernel: ACPI: LAPIC_NMI (acpi_id[0x7a] high edge lint[0x1])
Jan 30 13:53:59.736861 kernel: ACPI: LAPIC_NMI (acpi_id[0x7b] high edge lint[0x1])
Jan 30 13:53:59.736866 kernel: ACPI: LAPIC_NMI (acpi_id[0x7c] high edge lint[0x1])
Jan 30 13:53:59.736872 kernel: ACPI: LAPIC_NMI (acpi_id[0x7d] high edge lint[0x1])
Jan 30 13:53:59.736877 kernel: ACPI: LAPIC_NMI (acpi_id[0x7e] high edge lint[0x1])
Jan 30 13:53:59.736882 kernel: ACPI: LAPIC_NMI (acpi_id[0x7f] high edge lint[0x1])
Jan 30 13:53:59.736887 kernel: IOAPIC[0]: apic_id 1, version 17, address 0xfec00000, GSI 0-23
Jan 30 13:53:59.736893 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 high edge)
Jan 30 13:53:59.736898 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jan 30 13:53:59.736904 kernel: ACPI: HPET id: 0x8086af01 base: 0xfed00000
Jan 30 13:53:59.736910 kernel: TSC deadline timer available
Jan 30 13:53:59.736915 kernel: smpboot: Allowing 128 CPUs, 126 hotplug CPUs
Jan 30 13:53:59.736921 kernel: [mem 0x80000000-0xefffffff] available for PCI devices
Jan 30 13:53:59.736926 kernel: Booting paravirtualized kernel on VMware hypervisor
Jan 30 13:53:59.736932 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jan 30 13:53:59.736937 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:128 nr_cpu_ids:128 nr_node_ids:1
Jan 30 13:53:59.736943 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u262144
Jan 30 13:53:59.736948 kernel: pcpu-alloc: s197032 r8192 d32344 u262144 alloc=1*2097152
Jan 30 13:53:59.736954 kernel: pcpu-alloc: [0] 000 001 002 003 004 005 006 007
Jan 30 13:53:59.736960 kernel: pcpu-alloc: [0] 008 009 010 011 012 013 014 015
Jan 30 13:53:59.736966 kernel: pcpu-alloc: [0] 016 017 018 019 020 021 022 023
Jan 30 13:53:59.736971 kernel: pcpu-alloc: [0] 024 025 026 027 028 029 030 031
Jan 30 13:53:59.736977 kernel: pcpu-alloc: [0] 032 033 034 035 036 037 038 039
Jan 30 13:53:59.736990 kernel: pcpu-alloc: [0] 040 041 042 043 044 045 046 047
Jan 30 13:53:59.736997 kernel: pcpu-alloc: [0] 048 049 050 051 052 053 054 055
Jan 30 13:53:59.737002 kernel: pcpu-alloc: [0] 056 057 058 059 060 061 062 063
Jan 30 13:53:59.737008 kernel: pcpu-alloc: [0] 064 065 066 067 068 069 070 071
Jan 30 13:53:59.737013 kernel: pcpu-alloc: [0] 072 073 074 075 076 077 078 079
Jan 30 13:53:59.737020 kernel: pcpu-alloc: [0] 080 081 082 083 084 085 086 087
Jan 30 13:53:59.737026 kernel: pcpu-alloc: [0] 088 089 090 091 092 093 094 095
Jan 30 13:53:59.737031 kernel: pcpu-alloc: [0] 096 097 098 099 100 101 102 103
Jan 30 13:53:59.737037 kernel: pcpu-alloc: [0] 104 105 106 107 108 109 110 111
Jan 30 13:53:59.737042 kernel: pcpu-alloc: [0] 112 113 114 115 116 117 118 119
Jan 30 13:53:59.737048 kernel: pcpu-alloc: [0] 120 121 122 123 124 125 126 127
Jan 30 13:53:59.737054 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=vmware flatcar.autologin verity.usrhash=befc9792b021bef43c896e00e1d5172b6224dbafc9b6c92b267e5e544378e681
Jan 30 13:53:59.737060 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jan 30 13:53:59.737067 kernel: random: crng init done
Jan 30 13:53:59.737073 kernel: printk: log_buf_len individual max cpu contribution: 4096 bytes
Jan 30 13:53:59.737078 kernel: printk: log_buf_len total cpu_extra contributions: 520192 bytes
Jan 30 13:53:59.737084 kernel: printk: log_buf_len min size: 262144 bytes
Jan 30 13:53:59.737090 kernel: printk: log_buf_len: 1048576 bytes
Jan 30 13:53:59.737095 kernel: printk: early log buf free: 239648(91%)
Jan 30 13:53:59.737101 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jan 30 13:53:59.737107 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Jan 30 13:53:59.737112 kernel: Fallback order for Node 0: 0
Jan 30 13:53:59.737119 kernel: Built 1 zonelists, mobility grouping on. Total pages: 515808
Jan 30 13:53:59.737125 kernel: Policy zone: DMA32
Jan 30 13:53:59.737131 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 30 13:53:59.737137 kernel: Memory: 1936388K/2096628K available (12288K kernel code, 2301K rwdata, 22728K rodata, 42844K init, 2348K bss, 159980K reserved, 0K cma-reserved)
Jan 30 13:53:59.737144 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=128, Nodes=1
Jan 30 13:53:59.737151 kernel: ftrace: allocating 37921 entries in 149 pages
Jan 30 13:53:59.737156 kernel: ftrace: allocated 149 pages with 4 groups
Jan 30 13:53:59.737162 kernel: Dynamic Preempt: voluntary
Jan 30 13:53:59.737168 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 30 13:53:59.737174 kernel: rcu: RCU event tracing is enabled.
Jan 30 13:53:59.737180 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=128.
Jan 30 13:53:59.737185 kernel: Trampoline variant of Tasks RCU enabled.
Jan 30 13:53:59.737191 kernel: Rude variant of Tasks RCU enabled.
Jan 30 13:53:59.737197 kernel: Tracing variant of Tasks RCU enabled.
Jan 30 13:53:59.737202 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 30 13:53:59.737213 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=128
Jan 30 13:53:59.737219 kernel: NR_IRQS: 33024, nr_irqs: 1448, preallocated irqs: 16
Jan 30 13:53:59.737225 kernel: rcu: srcu_init: Setting srcu_struct sizes to big.
Jan 30 13:53:59.737231 kernel: Console: colour VGA+ 80x25
Jan 30 13:53:59.737236 kernel: printk: console [tty0] enabled
Jan 30 13:53:59.737242 kernel: printk: console [ttyS0] enabled
Jan 30 13:53:59.737248 kernel: ACPI: Core revision 20230628
Jan 30 13:53:59.737253 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 133484882848 ns
Jan 30 13:53:59.737259 kernel: APIC: Switch to symmetric I/O mode setup
Jan 30 13:53:59.737266 kernel: x2apic enabled
Jan 30 13:53:59.737272 kernel: APIC: Switched APIC routing to: physical x2apic
Jan 30 13:53:59.737278 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Jan 30 13:53:59.737284 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x311fd3cd494, max_idle_ns: 440795223879 ns
Jan 30 13:53:59.737289 kernel: Calibrating delay loop (skipped) preset value.. 6816.00 BogoMIPS (lpj=3408000)
Jan 30 13:53:59.737295 kernel: Disabled fast string operations
Jan 30 13:53:59.737300 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8
Jan 30 13:53:59.737307 kernel: Last level dTLB entries: 4KB 64, 2MB 0, 4MB 0, 1GB 4
Jan 30 13:53:59.737313 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jan 30 13:53:59.737320 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on vm exit
Jan 30 13:53:59.737326 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall
Jan 30 13:53:59.737332 kernel: Spectre V2 : Mitigation: Enhanced / Automatic IBRS
Jan 30 13:53:59.737337 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Jan 30 13:53:59.737343 kernel: Spectre V2 : Spectre v2 / PBRSB-eIBRS: Retire a single CALL on VMEXIT
Jan 30 13:53:59.737349 kernel: RETBleed: Mitigation: Enhanced IBRS
Jan 30 13:53:59.737355 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Jan 30 13:53:59.737360 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Jan 30 13:53:59.737366 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Jan 30 13:53:59.737373 kernel: SRBDS: Unknown: Dependent on hypervisor status
Jan 30 13:53:59.737379 kernel: GDS: Unknown: Dependent on hypervisor status
Jan 30 13:53:59.737385 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jan 30 13:53:59.737390 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jan 30 13:53:59.737396 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jan 30 13:53:59.737402 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jan 30 13:53:59.737408 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Jan 30 13:53:59.737414 kernel: Freeing SMP alternatives memory: 32K
Jan 30 13:53:59.737419 kernel: pid_max: default: 131072 minimum: 1024
Jan 30 13:53:59.737426 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jan 30 13:53:59.737432 kernel: landlock: Up and running.
Jan 30 13:53:59.737438 kernel: SELinux: Initializing.
Jan 30 13:53:59.737444 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Jan 30 13:53:59.737450 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Jan 30 13:53:59.737455 kernel: smpboot: CPU0: Intel(R) Xeon(R) E-2278G CPU @ 3.40GHz (family: 0x6, model: 0x9e, stepping: 0xd)
Jan 30 13:53:59.737461 kernel: RCU Tasks: Setting shift to 7 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=128.
Jan 30 13:53:59.737467 kernel: RCU Tasks Rude: Setting shift to 7 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=128.
Jan 30 13:53:59.737473 kernel: RCU Tasks Trace: Setting shift to 7 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=128.
Jan 30 13:53:59.737479 kernel: Performance Events: Skylake events, core PMU driver.
Jan 30 13:53:59.737485 kernel: core: CPUID marked event: 'cpu cycles' unavailable
Jan 30 13:53:59.737491 kernel: core: CPUID marked event: 'instructions' unavailable
Jan 30 13:53:59.737496 kernel: core: CPUID marked event: 'bus cycles' unavailable
Jan 30 13:53:59.737502 kernel: core: CPUID marked event: 'cache references' unavailable
Jan 30 13:53:59.737507 kernel: core: CPUID marked event: 'cache misses' unavailable
Jan 30 13:53:59.737513 kernel: core: CPUID marked event: 'branch instructions' unavailable
Jan 30 13:53:59.737518 kernel: core: CPUID marked event: 'branch misses' unavailable
Jan 30 13:53:59.737525 kernel: ... version: 1
Jan 30 13:53:59.737531 kernel: ... bit width: 48
Jan 30 13:53:59.737537 kernel: ... generic registers: 4
Jan 30 13:53:59.737543 kernel: ... value mask: 0000ffffffffffff
Jan 30 13:53:59.737548 kernel: ... max period: 000000007fffffff
Jan 30 13:53:59.737554 kernel: ... fixed-purpose events: 0
Jan 30 13:53:59.737560 kernel: ... event mask: 000000000000000f
Jan 30 13:53:59.737565 kernel: signal: max sigframe size: 1776
Jan 30 13:53:59.737571 kernel: rcu: Hierarchical SRCU implementation.
Jan 30 13:53:59.737578 kernel: rcu: Max phase no-delay instances is 400.
Jan 30 13:53:59.737584 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Jan 30 13:53:59.737590 kernel: smp: Bringing up secondary CPUs ...
Jan 30 13:53:59.737595 kernel: smpboot: x86: Booting SMP configuration:
Jan 30 13:53:59.737620 kernel: .... node #0, CPUs: #1
Jan 30 13:53:59.737626 kernel: Disabled fast string operations
Jan 30 13:53:59.737631 kernel: smpboot: CPU 1 Converting physical 2 to logical package 1
Jan 30 13:53:59.737637 kernel: smpboot: CPU 1 Converting physical 0 to logical die 1
Jan 30 13:53:59.737667 kernel: smp: Brought up 1 node, 2 CPUs
Jan 30 13:53:59.737673 kernel: smpboot: Max logical packages: 128
Jan 30 13:53:59.737680 kernel: smpboot: Total of 2 processors activated (13632.00 BogoMIPS)
Jan 30 13:53:59.737686 kernel: devtmpfs: initialized
Jan 30 13:53:59.737691 kernel: x86/mm: Memory block size: 128MB
Jan 30 13:53:59.737698 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x7feff000-0x7fefffff] (4096 bytes)
Jan 30 13:53:59.737704 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 30 13:53:59.737725 kernel: futex hash table entries: 32768 (order: 9, 2097152 bytes, linear)
Jan 30 13:53:59.737731 kernel: pinctrl core: initialized pinctrl subsystem
Jan 30 13:53:59.737737 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 30 13:53:59.737743 kernel: audit: initializing netlink subsys (disabled)
Jan 30 13:53:59.737751 kernel: audit: type=2000 audit(1738245238.067:1): state=initialized audit_enabled=0 res=1
Jan 30 13:53:59.737756 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 30 13:53:59.737762 kernel: thermal_sys: Registered thermal governor 'user_space'
Jan 30 13:53:59.737767 kernel: cpuidle: using governor menu
Jan 30 13:53:59.737774 kernel: Simple Boot Flag at 0x36 set to 0x80
Jan 30 13:53:59.737780 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 30 13:53:59.737785 kernel: dca service started, version 1.12.1
Jan 30 13:53:59.737791 kernel: PCI: MMCONFIG for domain 0000 [bus 00-7f] at [mem 0xf0000000-0xf7ffffff] (base 0xf0000000)
Jan 30 13:53:59.737797 kernel: PCI: Using configuration type 1 for base access
Jan 30 13:53:59.737804 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jan 30 13:53:59.737809 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 30 13:53:59.737815 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jan 30 13:53:59.737821 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 30 13:53:59.737827 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jan 30 13:53:59.737832 kernel: ACPI: Added _OSI(Module Device)
Jan 30 13:53:59.737838 kernel: ACPI: Added _OSI(Processor Device)
Jan 30 13:53:59.737844 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Jan 30 13:53:59.737850 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 30 13:53:59.737856 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 30 13:53:59.737863 kernel: ACPI: [Firmware Bug]: BIOS _OSI(Linux) query ignored
Jan 30 13:53:59.737868 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Jan 30 13:53:59.737874 kernel: ACPI: Interpreter enabled
Jan 30 13:53:59.737880 kernel: ACPI: PM: (supports S0 S1 S5)
Jan 30 13:53:59.737886 kernel: ACPI: Using IOAPIC for interrupt routing
Jan 30 13:53:59.737892 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jan 30 13:53:59.737898 kernel: PCI: Using E820 reservations for host bridge windows
Jan 30 13:53:59.737903 kernel: ACPI: Enabled 4 GPEs in block 00 to 0F
Jan 30 13:53:59.737910 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-7f])
Jan 30 13:53:59.737993 kernel: acpi PNP0A03:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jan 30 13:53:59.738051 kernel: acpi PNP0A03:00: _OSC: platform does not support [AER LTR]
Jan 30 13:53:59.738100 kernel: acpi PNP0A03:00: _OSC: OS now controls [PCIeHotplug PME PCIeCapability]
Jan 30 13:53:59.738109 kernel: PCI host bridge to bus 0000:00
Jan 30 13:53:59.738158 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jan 30 13:53:59.738229 kernel: pci_bus 0000:00: root bus resource [mem 0x000cc000-0x000dbfff window]
Jan 30 13:53:59.738276 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Jan 30 13:53:59.738319 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jan 30 13:53:59.738364 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xfeff window]
Jan 30 13:53:59.738407 kernel: pci_bus 0000:00: root bus resource [bus 00-7f]
Jan 30 13:53:59.738468 kernel: pci 0000:00:00.0: [8086:7190] type 00 class 0x060000
Jan 30 13:53:59.738540 kernel: pci 0000:00:01.0: [8086:7191] type 01 class 0x060400
Jan 30 13:53:59.738646 kernel: pci 0000:00:07.0: [8086:7110] type 00 class 0x060100
Jan 30 13:53:59.739008 kernel: pci 0000:00:07.1: [8086:7111] type 00 class 0x01018a
Jan 30 13:53:59.739074 kernel: pci 0000:00:07.1: reg 0x20: [io 0x1060-0x106f]
Jan 30 13:53:59.739127 kernel: pci 0000:00:07.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7]
Jan 30 13:53:59.739179 kernel: pci 0000:00:07.1: legacy IDE quirk: reg 0x14: [io 0x03f6]
Jan 30 13:53:59.739231 kernel: pci 0000:00:07.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177]
Jan 30 13:53:59.739287 kernel: pci 0000:00:07.1: legacy IDE quirk: reg 0x1c: [io 0x0376]
Jan 30 13:53:59.739346 kernel: pci 0000:00:07.3: [8086:7113] type 00 class 0x068000
Jan 30 13:53:59.739399 kernel: pci 0000:00:07.3: quirk: [io 0x1000-0x103f] claimed by PIIX4 ACPI
Jan 30 13:53:59.739453 kernel: pci 0000:00:07.3: quirk: [io 0x1040-0x104f] claimed by PIIX4 SMB
Jan 30 13:53:59.739509 kernel: pci 0000:00:07.7: [15ad:0740] type 00 class 0x088000
Jan 30 13:53:59.739562 kernel: pci 0000:00:07.7: reg 0x10: [io 0x1080-0x10bf]
Jan 30 13:53:59.739616 kernel: pci 0000:00:07.7: reg 0x14: [mem 0xfebfe000-0xfebfffff 64bit]
Jan 30 13:53:59.739670 kernel: pci 0000:00:0f.0: [15ad:0405] type 00 class 0x030000
Jan 30 13:53:59.739915 kernel: pci 0000:00:0f.0: reg 0x10: [io 0x1070-0x107f]
Jan 30 13:53:59.739971 kernel: pci 0000:00:0f.0: reg 0x14: [mem 0xe8000000-0xefffffff pref]
Jan 30 13:53:59.740022 kernel: pci 0000:00:0f.0: reg 0x18: [mem 0xfe000000-0xfe7fffff]
Jan 30 13:53:59.740071 kernel: pci 0000:00:0f.0: reg 0x30: [mem 0x00000000-0x00007fff pref]
Jan 30 13:53:59.740121 kernel: pci 0000:00:0f.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jan 30 13:53:59.740177 kernel: pci 0000:00:11.0: [15ad:0790] type 01 class 0x060401
Jan 30 13:53:59.740239 kernel: pci 0000:00:15.0: [15ad:07a0] type 01 class 0x060400
Jan 30 13:53:59.740294 kernel: pci 0000:00:15.0: PME# supported from D0 D3hot D3cold
Jan 30 13:53:59.742913 kernel: pci 0000:00:15.1: [15ad:07a0] type 01 class 0x060400
Jan 30 13:53:59.743006 kernel: pci 0000:00:15.1: PME# supported from D0 D3hot D3cold
Jan 30 13:53:59.743068 kernel: pci 0000:00:15.2: [15ad:07a0] type 01 class 0x060400
Jan 30 13:53:59.743122 kernel: pci 0000:00:15.2: PME# supported from D0 D3hot D3cold
Jan 30 13:53:59.743186 kernel: pci 0000:00:15.3: [15ad:07a0] type 01 class 0x060400
Jan 30 13:53:59.743247 kernel: pci 0000:00:15.3: PME# supported from D0 D3hot D3cold
Jan 30 13:53:59.743306 kernel: pci 0000:00:15.4: [15ad:07a0] type 01 class 0x060400
Jan 30 13:53:59.743358 kernel: pci 0000:00:15.4: PME# supported from D0 D3hot D3cold
Jan 30 13:53:59.743413 kernel: pci 0000:00:15.5: [15ad:07a0] type 01 class 0x060400
Jan 30 13:53:59.743466 kernel: pci 0000:00:15.5: PME# supported from D0 D3hot D3cold
Jan 30 13:53:59.743526 kernel: pci 0000:00:15.6: [15ad:07a0] type 01 class 0x060400
Jan 30 13:53:59.743577 kernel: pci 0000:00:15.6: PME# supported from D0 D3hot D3cold
Jan 30 13:53:59.743632 kernel: pci 0000:00:15.7: [15ad:07a0] type 01 class 0x060400
Jan 30 13:53:59.743685 kernel: pci 0000:00:15.7: PME# supported from D0 D3hot D3cold
Jan 30 13:53:59.745861 kernel: pci 0000:00:16.0: [15ad:07a0] type 01 class 0x060400
Jan 30 13:53:59.745950 kernel: pci 0000:00:16.0: PME# supported from D0 D3hot D3cold
Jan 30 13:53:59.746047 kernel: pci 0000:00:16.1: [15ad:07a0] type 01 class 0x060400
Jan 30 13:53:59.746103 kernel: pci 0000:00:16.1: PME# supported from D0 D3hot D3cold
Jan 30 13:53:59.746161 kernel: pci 0000:00:16.2: [15ad:07a0] type 01 class 0x060400
Jan 30 13:53:59.746214 kernel: pci 0000:00:16.2: PME# supported from D0 D3hot D3cold
Jan 30 13:53:59.746272 kernel: pci 0000:00:16.3: [15ad:07a0] type 01 class 0x060400
Jan 30 13:53:59.746327 kernel: pci 0000:00:16.3: PME# supported from D0 D3hot D3cold
Jan 30 13:53:59.746382 kernel: pci 0000:00:16.4: [15ad:07a0] type 01 class 0x060400
Jan 30 13:53:59.746434 kernel: pci 0000:00:16.4: PME# supported from D0 D3hot D3cold
Jan 30 13:53:59.746490 kernel: pci 0000:00:16.5: [15ad:07a0] type 01 class 0x060400
Jan 30 13:53:59.746541 kernel: pci 0000:00:16.5: PME# supported from D0 D3hot D3cold
Jan 30 13:53:59.746597 kernel: pci 0000:00:16.6: [15ad:07a0] type 01 class 0x060400
Jan 30 13:53:59.746648 kernel: pci 0000:00:16.6: PME# supported from D0 D3hot D3cold
Jan 30 13:53:59.749724 kernel: pci 0000:00:16.7: [15ad:07a0] type 01 class 0x060400
Jan 30 13:53:59.749810 kernel: pci 0000:00:16.7: PME# supported from D0 D3hot D3cold
Jan 30 13:53:59.749872 kernel: pci 0000:00:17.0: [15ad:07a0] type 01 class 0x060400
Jan 30 13:53:59.749925 kernel: pci 0000:00:17.0: PME# supported from D0 D3hot D3cold
Jan 30 13:53:59.750003 kernel: pci 0000:00:17.1: [15ad:07a0] type 01 class 0x060400
Jan 30 13:53:59.750060 kernel: pci 0000:00:17.1: PME# supported from D0 D3hot D3cold
Jan 30 13:53:59.750118 kernel: pci 0000:00:17.2: [15ad:07a0] type 01 class 0x060400
Jan 30 13:53:59.750168 kernel: pci 0000:00:17.2: PME# supported from D0 D3hot D3cold
Jan 30 13:53:59.750242 kernel: pci 0000:00:17.3: [15ad:07a0] type 01 class 0x060400
Jan 30 13:53:59.750310 kernel: pci 0000:00:17.3: PME# supported from D0 D3hot D3cold
Jan 30 13:53:59.750364 kernel: pci 0000:00:17.4: [15ad:07a0] type 01 class 0x060400
Jan 30 13:53:59.750414 kernel: pci 0000:00:17.4: PME# supported from D0 D3hot D3cold
Jan 30 13:53:59.750471 kernel: pci 0000:00:17.5: [15ad:07a0] type 01 class 0x060400
Jan 30 13:53:59.750521 kernel: pci 0000:00:17.5: PME# supported from D0 D3hot D3cold
Jan 30 13:53:59.750575 kernel: pci 0000:00:17.6: [15ad:07a0] type 01 class 0x060400
Jan 30 13:53:59.750643 kernel: pci 0000:00:17.6: PME# supported from D0 D3hot D3cold
Jan 30 13:53:59.750698 kernel: pci 0000:00:17.7: [15ad:07a0] type 01 class 0x060400
Jan 30 13:53:59.750793 kernel: pci 0000:00:17.7: PME# supported from D0 D3hot D3cold
Jan 30 13:53:59.750854 kernel: pci 0000:00:18.0: [15ad:07a0] type 01 class 0x060400
Jan 30 13:53:59.750904 kernel: pci 0000:00:18.0: PME# supported from D0 D3hot D3cold
Jan 30 13:53:59.750960 kernel: pci 0000:00:18.1: [15ad:07a0] type 01 class 0x060400
Jan 30 13:53:59.751010 kernel: pci 0000:00:18.1: PME# supported from D0 D3hot D3cold
Jan 30 13:53:59.751100 kernel: pci 0000:00:18.2: [15ad:07a0] type 01 class 0x060400
Jan 30 13:53:59.751152 kernel: pci 0000:00:18.2: PME# supported from D0 D3hot D3cold
Jan 30 13:53:59.751207 kernel: pci 0000:00:18.3: [15ad:07a0] type 01 class 0x060400
Jan 30 13:53:59.751262 kernel: pci 0000:00:18.3: PME# supported from D0 D3hot D3cold
Jan 30 13:53:59.751316 kernel: pci 0000:00:18.4: [15ad:07a0] type 01 class 0x060400
Jan 30 13:53:59.751367 kernel: pci 0000:00:18.4: PME# supported from D0 D3hot D3cold
Jan 30 13:53:59.751422 kernel: pci 0000:00:18.5: [15ad:07a0] type 01 class 0x060400
Jan 30 13:53:59.751473 kernel: pci 0000:00:18.5: PME# supported from D0 D3hot D3cold
Jan 30 13:53:59.751527 kernel: pci 0000:00:18.6: [15ad:07a0] type 01 class 0x060400
Jan 30 13:53:59.751582 kernel: pci 0000:00:18.6: PME# supported from D0 D3hot D3cold
Jan 30 13:53:59.751637 kernel: pci 0000:00:18.7: [15ad:07a0] type 01 class 0x060400
Jan 30 13:53:59.751688 kernel: pci 0000:00:18.7: PME# supported from D0 D3hot D3cold
Jan 30 13:53:59.753129 kernel: pci_bus 0000:01: extended config space not accessible
Jan 30 13:53:59.753191 kernel: pci 0000:00:01.0: PCI bridge to [bus 01]
Jan 30 13:53:59.753253 kernel: pci_bus 0000:02: extended config space not accessible
Jan 30 13:53:59.753265 kernel: acpiphp: Slot [32] registered
Jan 30 13:53:59.753272 kernel: acpiphp: Slot [33] registered
Jan 30 13:53:59.753278 kernel: acpiphp: Slot [34] registered
Jan 30 13:53:59.753284 kernel: acpiphp: Slot [35] registered
Jan 30 13:53:59.753290 kernel: acpiphp: Slot [36] registered
Jan 30 13:53:59.753295 kernel: acpiphp: Slot [37] registered
Jan 30 13:53:59.753301 kernel: acpiphp: Slot [38] registered
Jan 30 13:53:59.753307 kernel: acpiphp: Slot [39] registered
Jan 30 13:53:59.753313 kernel: acpiphp: Slot [40] registered
Jan 30 13:53:59.753321 kernel: acpiphp: Slot [41] registered
Jan 30 13:53:59.753326 kernel: acpiphp: Slot [42] registered
Jan 30 13:53:59.753332 kernel: acpiphp: Slot [43] registered
Jan 30 13:53:59.753338 kernel: acpiphp: Slot [44] registered
Jan 30 13:53:59.753344 kernel: acpiphp: Slot [45] registered
Jan 30 13:53:59.753350 kernel: acpiphp: Slot [46] registered
Jan 30 13:53:59.753356 kernel: acpiphp: Slot [47] registered
Jan 30 13:53:59.753362 kernel: acpiphp: Slot [48] registered
Jan 30 13:53:59.753368 kernel: acpiphp: Slot [49] registered
Jan 30 13:53:59.753374 kernel: acpiphp: Slot [50] registered
Jan 30 13:53:59.753381 kernel: acpiphp: Slot [51] registered
Jan 30 13:53:59.753387 kernel: acpiphp: Slot [52] registered
Jan 30 13:53:59.753394 kernel: acpiphp: Slot [53] registered
Jan 30 13:53:59.753399 kernel: acpiphp: Slot [54] registered
Jan 30 13:53:59.753405 kernel: acpiphp: Slot [55] registered
Jan 30 13:53:59.753411 kernel: acpiphp: Slot [56] registered
Jan 30 13:53:59.753417 kernel: acpiphp: Slot [57] registered
Jan 30 13:53:59.753423 kernel: acpiphp: Slot [58] registered
Jan 30 13:53:59.753429 kernel: acpiphp: Slot [59] registered
Jan 30 13:53:59.753436 kernel: acpiphp: Slot [60] registered
Jan 30 13:53:59.753441 kernel: acpiphp: Slot [61] registered
Jan 30 13:53:59.753447 kernel: acpiphp: Slot [62] registered
Jan 30 13:53:59.753453 kernel: acpiphp: Slot [63] registered
Jan 30 13:53:59.753506 kernel: pci 0000:00:11.0: PCI bridge to [bus 02] (subtractive decode)
Jan 30 13:53:59.753557 kernel: pci 0000:00:11.0: bridge window [io 0x2000-0x3fff]
Jan 30 13:53:59.753608 kernel: pci 0000:00:11.0: bridge window [mem 0xfd600000-0xfdffffff]
Jan 30 13:53:59.753657 kernel: pci 0000:00:11.0: bridge window [mem 0xe7b00000-0xe7ffffff 64bit pref]
Jan 30 13:53:59.754751 kernel: pci 0000:00:11.0: bridge window [mem 0x000a0000-0x000bffff window] (subtractive decode)
Jan 30 13:53:59.754817 kernel: pci 0000:00:11.0: bridge window [mem 0x000cc000-0x000dbfff window] (subtractive decode)
Jan 30 13:53:59.754868 kernel: pci 0000:00:11.0: bridge window [mem 0xc0000000-0xfebfffff window] (subtractive decode)
Jan 30 13:53:59.754919 kernel: pci 0000:00:11.0: bridge window [io 0x0000-0x0cf7 window] (subtractive decode)
Jan 30 13:53:59.754992 kernel: pci 0000:00:11.0: bridge window [io 0x0d00-0xfeff window] (subtractive decode)
Jan 30 13:53:59.755078 kernel: pci 0000:03:00.0: [15ad:07c0] type 00 class 0x010700
Jan 30 13:53:59.755138 kernel: pci 0000:03:00.0: reg 0x10: [io 0x4000-0x4007]
Jan 30 13:53:59.755190 kernel: pci 0000:03:00.0: reg 0x14: [mem 0xfd5f8000-0xfd5fffff 64bit]
Jan 30 13:53:59.755246 kernel: pci 0000:03:00.0: reg 0x30: [mem 0x00000000-0x0000ffff pref]
Jan 30 13:53:59.755298 kernel: pci 0000:03:00.0: PME# supported from D0 D3hot D3cold
Jan 30 13:53:59.755351 kernel: pci 0000:03:00.0: disabling ASPM on pre-1.1 PCIe device. You can enable it with 'pcie_aspm=force'
Jan 30 13:53:59.755406 kernel: pci 0000:00:15.0: PCI bridge to [bus 03]
Jan 30 13:53:59.755457 kernel: pci 0000:00:15.0: bridge window [io 0x4000-0x4fff]
Jan 30 13:53:59.755507 kernel: pci 0000:00:15.0: bridge window [mem 0xfd500000-0xfd5fffff]
Jan 30 13:53:59.755561 kernel: pci 0000:00:15.1: PCI bridge to [bus 04]
Jan 30 13:53:59.755615 kernel: pci 0000:00:15.1: bridge window [io 0x8000-0x8fff]
Jan 30 13:53:59.755665 kernel: pci 0000:00:15.1: bridge window [mem 0xfd100000-0xfd1fffff]
Jan 30 13:53:59.756730 kernel: pci 0000:00:15.1: bridge window [mem 0xe7800000-0xe78fffff 64bit pref]
Jan 30 13:53:59.756790 kernel: pci 0000:00:15.2: PCI bridge to [bus 05]
Jan 30 13:53:59.756841 kernel: pci 0000:00:15.2: bridge window [io 0xc000-0xcfff]
Jan 30 13:53:59.756892 kernel: pci 0000:00:15.2: bridge window [mem 0xfcd00000-0xfcdfffff]
Jan 30 13:53:59.756943 kernel: pci 0000:00:15.2: bridge window [mem 0xe7400000-0xe74fffff 64bit pref]
Jan 30 13:53:59.756996 kernel: pci 0000:00:15.3: PCI bridge to [bus 06]
Jan 30 13:53:59.757052 kernel: pci 0000:00:15.3: bridge window [mem 0xfc900000-0xfc9fffff]
Jan 30 13:53:59.757102 kernel: pci 0000:00:15.3: bridge window [mem 0xe7000000-0xe70fffff 64bit pref]
Jan 30 13:53:59.757157 kernel: pci 0000:00:15.4: PCI bridge to [bus 07]
Jan 30 13:53:59.757212 kernel: pci 0000:00:15.4: bridge window [mem 0xfc500000-0xfc5fffff]
Jan 30 13:53:59.757263 kernel: pci 0000:00:15.4: bridge window [mem 0xe6c00000-0xe6cfffff 64bit pref]
Jan 30 13:53:59.757319 kernel: pci 0000:00:15.5: PCI bridge to [bus 08]
Jan 30 13:53:59.757370 kernel: pci 0000:00:15.5: bridge window [mem 0xfc100000-0xfc1fffff]
Jan 30 13:53:59.757420 kernel: pci 0000:00:15.5: bridge window [mem 0xe6800000-0xe68fffff 64bit pref]
Jan 30 13:53:59.757472 kernel: pci 0000:00:15.6: PCI bridge to [bus 09]
Jan 30 13:53:59.757522 kernel: pci 0000:00:15.6: bridge window [mem 0xfbd00000-0xfbdfffff]
Jan 30 13:53:59.757572 kernel: pci 0000:00:15.6: bridge window [mem 0xe6400000-0xe64fffff 64bit pref]
Jan 30 13:53:59.757625 kernel: pci 0000:00:15.7: PCI bridge to [bus 0a]
Jan 30 13:53:59.757679 kernel: pci 0000:00:15.7: bridge window [mem 0xfb900000-0xfb9fffff]
Jan 30 13:53:59.758754 kernel: pci 0000:00:15.7: bridge window [mem 0xe6000000-0xe60fffff 64bit pref]
Jan 30 13:53:59.758824 kernel: pci 0000:0b:00.0: [15ad:07b0] type 00 class 0x020000
Jan 30 13:53:59.758880 kernel: pci 0000:0b:00.0: reg 0x10: [mem 0xfd4fc000-0xfd4fcfff]
Jan 30 13:53:59.758932 kernel: pci 0000:0b:00.0: reg 0x14: [mem 0xfd4fd000-0xfd4fdfff]
Jan 30 13:53:59.758984 kernel: pci 0000:0b:00.0: reg 0x18: [mem 0xfd4fe000-0xfd4fffff]
Jan 30 13:53:59.759035 kernel: pci 0000:0b:00.0: reg 0x1c: [io 0x5000-0x500f]
Jan 30 13:53:59.759086 kernel: pci 0000:0b:00.0: reg 0x30: [mem 0x00000000-0x0000ffff pref]
Jan 30 13:53:59.759142 kernel: pci 0000:0b:00.0: supports D1 D2
Jan 30 13:53:59.759194 kernel: pci 0000:0b:00.0: PME# supported from D0 D1 D2 D3hot D3cold
Jan 30 13:53:59.759245 kernel: pci 0000:0b:00.0: disabling ASPM on pre-1.1 PCIe device. You can enable it with 'pcie_aspm=force'
Jan 30 13:53:59.759298 kernel: pci 0000:00:16.0: PCI bridge to [bus 0b]
Jan 30 13:53:59.759348 kernel: pci 0000:00:16.0: bridge window [io 0x5000-0x5fff]
Jan 30 13:53:59.759399 kernel: pci 0000:00:16.0: bridge window [mem 0xfd400000-0xfd4fffff]
Jan 30 13:53:59.759452 kernel: pci 0000:00:16.1: PCI bridge to [bus 0c]
Jan 30 13:53:59.759505 kernel: pci 0000:00:16.1: bridge window [io 0x9000-0x9fff]
Jan 30 13:53:59.759555 kernel: pci 0000:00:16.1: bridge window [mem 0xfd000000-0xfd0fffff]
Jan 30 13:53:59.759606 kernel: pci 0000:00:16.1: bridge window [mem 0xe7700000-0xe77fffff 64bit pref]
Jan 30 13:53:59.759659 kernel: pci 0000:00:16.2: PCI bridge to [bus 0d]
Jan 30 13:53:59.760731 kernel: pci 0000:00:16.2: bridge window [io 0xd000-0xdfff]
Jan 30 13:53:59.760794 kernel: pci 0000:00:16.2: bridge window [mem 0xfcc00000-0xfccfffff]
Jan 30 13:53:59.760847 kernel: pci 0000:00:16.2: bridge window [mem 0xe7300000-0xe73fffff 64bit pref]
Jan 30 13:53:59.760901 kernel: pci 0000:00:16.3: PCI bridge to [bus 0e]
Jan 30 13:53:59.760957 kernel: pci 0000:00:16.3: bridge window [mem 0xfc800000-0xfc8fffff]
Jan 30 13:53:59.761008 kernel: pci 0000:00:16.3: bridge window [mem 0xe6f00000-0xe6ffffff 64bit pref]
Jan 30 13:53:59.761062 kernel: pci 0000:00:16.4: PCI bridge to [bus 0f]
Jan 30 13:53:59.761113 kernel: pci 0000:00:16.4: bridge window [mem 0xfc400000-0xfc4fffff]
Jan 30 13:53:59.761163 kernel: pci 0000:00:16.4: bridge window [mem 0xe6b00000-0xe6bfffff 64bit pref]
Jan 30 13:53:59.761217 kernel: pci 0000:00:16.5: PCI bridge to [bus 10]
Jan 30 13:53:59.761268 kernel: pci 0000:00:16.5: bridge window [mem 0xfc000000-0xfc0fffff]
Jan 30 13:53:59.761319 kernel: pci 0000:00:16.5: bridge window [mem 0xe6700000-0xe67fffff 64bit pref]
Jan 30 13:53:59.761375 kernel: pci 0000:00:16.6: PCI bridge to [bus 11]
Jan 30 13:53:59.761426 kernel: pci 0000:00:16.6: bridge window [mem 0xfbc00000-0xfbcfffff]
Jan 30 13:53:59.761476 kernel: pci 0000:00:16.6: bridge window [mem 0xe6300000-0xe63fffff 64bit pref]
Jan 30 13:53:59.761528 kernel: pci 0000:00:16.7: PCI bridge to [bus 12]
Jan 30 13:53:59.761578 kernel: pci 0000:00:16.7: bridge window [mem 0xfb800000-0xfb8fffff]
Jan 30 13:53:59.761629 kernel: pci 0000:00:16.7: bridge window [mem 0xe5f00000-0xe5ffffff 64bit pref]
Jan 30 13:53:59.761681 kernel: pci 0000:00:17.0: PCI bridge to [bus 13]
Jan 30 13:53:59.762796 kernel: pci 0000:00:17.0: bridge window [io 0x6000-0x6fff]
Jan 30 13:53:59.762868 kernel: pci 0000:00:17.0: bridge window [mem 0xfd300000-0xfd3fffff]
Jan 30 13:53:59.762925 kernel: pci 0000:00:17.0: bridge window [mem 0xe7a00000-0xe7afffff 64bit pref]
Jan 30 13:53:59.762979 kernel: pci 0000:00:17.1: PCI bridge to [bus 14]
Jan 30 13:53:59.763030 kernel: pci 0000:00:17.1: bridge window [io 0xa000-0xafff]
Jan 30 13:53:59.763081 kernel: pci 0000:00:17.1: bridge window [mem 0xfcf00000-0xfcffffff]
Jan 30 13:53:59.763132 kernel: pci 0000:00:17.1: bridge window [mem 0xe7600000-0xe76fffff 64bit pref]
Jan 30 13:53:59.763186 kernel: pci 0000:00:17.2: PCI bridge to [bus 15]
Jan 30 13:53:59.763242 kernel: pci 0000:00:17.2: bridge window [io 0xe000-0xefff]
Jan 30 13:53:59.763293 kernel: pci 0000:00:17.2: bridge window [mem 0xfcb00000-0xfcbfffff]
Jan 30 13:53:59.763351 kernel: pci 0000:00:17.2: bridge window [mem 0xe7200000-0xe72fffff 64bit pref]
Jan 30 13:53:59.763405 kernel: pci 0000:00:17.3: PCI bridge to [bus 16]
Jan 30 13:53:59.763456 kernel: pci 0000:00:17.3: bridge window [mem 0xfc700000-0xfc7fffff]
Jan 30 13:53:59.763507 kernel: pci 0000:00:17.3: bridge window [mem 0xe6e00000-0xe6efffff 64bit pref]
Jan 30 13:53:59.763560 kernel: pci 0000:00:17.4: PCI bridge to [bus 17]
Jan 30 13:53:59.763611 kernel: pci 0000:00:17.4: bridge window [mem 0xfc300000-0xfc3fffff]
Jan 30 13:53:59.763667 kernel: pci 0000:00:17.4: bridge window [mem 0xe6a00000-0xe6afffff 64bit pref]
Jan 30 13:53:59.763816 kernel: pci 0000:00:17.5: PCI bridge to [bus 18]
Jan 30 13:53:59.763873 kernel: pci 0000:00:17.5: bridge window [mem 0xfbf00000-0xfbffffff]
Jan 30 13:53:59.763924 kernel: pci 0000:00:17.5: bridge window [mem 0xe6600000-0xe66fffff 64bit pref]
Jan 30 13:53:59.763979 kernel: pci 0000:00:17.6: PCI bridge to [bus 19]
Jan 30 13:53:59.764030 kernel: pci 0000:00:17.6: bridge window [mem 0xfbb00000-0xfbbfffff]
Jan 30 13:53:59.764081 kernel: pci 0000:00:17.6: bridge window [mem 0xe6200000-0xe62fffff 64bit pref]
Jan 30 13:53:59.764134 kernel: pci 0000:00:17.7: PCI bridge to [bus 1a]
Jan 30 13:53:59.764190 kernel: pci 0000:00:17.7: bridge window [mem 0xfb700000-0xfb7fffff]
Jan 30 13:53:59.764240 kernel: pci 0000:00:17.7: bridge window [mem 0xe5e00000-0xe5efffff 64bit pref]
Jan 30 13:53:59.764294 kernel: pci 0000:00:18.0: PCI bridge to [bus 1b]
Jan 30 13:53:59.764346 kernel: pci 0000:00:18.0: bridge window [io 0x7000-0x7fff]
Jan 30 13:53:59.764397 kernel: pci 0000:00:18.0: bridge window [mem 0xfd200000-0xfd2fffff]
Jan 30 13:53:59.764448 kernel: pci 0000:00:18.0: bridge window [mem 0xe7900000-0xe79fffff 64bit pref]
Jan 30 13:53:59.764502 kernel: pci 0000:00:18.1: PCI bridge to [bus 1c]
Jan 30 13:53:59.764554 kernel: pci 0000:00:18.1: bridge window [io 0xb000-0xbfff]
Jan 30 13:53:59.764608 kernel: pci 0000:00:18.1: bridge window [mem 0xfce00000-0xfcefffff]
Jan 30 13:53:59.764659 kernel: pci 0000:00:18.1: bridge window [mem 0xe7500000-0xe75fffff 64bit pref]
Jan 30 13:53:59.764767 kernel: pci 0000:00:18.2: PCI bridge to [bus 1d]
Jan 30 13:53:59.764823 kernel: pci 0000:00:18.2: bridge window [mem 0xfca00000-0xfcafffff]
Jan 30 13:53:59.764873 kernel: pci 0000:00:18.2: bridge window [mem 0xe7100000-0xe71fffff 64bit pref]
Jan 30 13:53:59.764926 kernel: pci 0000:00:18.3: PCI bridge to [bus 1e]
Jan 30 13:53:59.764977 kernel: pci 0000:00:18.3: bridge window [mem 0xfc600000-0xfc6fffff]
Jan 30 13:53:59.765027 kernel: pci 0000:00:18.3: bridge window [mem 0xe6d00000-0xe6dfffff 64bit pref]
Jan 30 13:53:59.765084 kernel: pci 0000:00:18.4: PCI bridge to [bus 1f]
Jan 30 13:53:59.765135 kernel: pci 0000:00:18.4: bridge window [mem 0xfc200000-0xfc2fffff]
Jan 30 13:53:59.765185 kernel: pci 0000:00:18.4: bridge window [mem 0xe6900000-0xe69fffff 64bit pref]
Jan 30 13:53:59.765244 kernel: pci 0000:00:18.5: PCI bridge to [bus 20]
Jan 30 13:53:59.765295 kernel: pci 0000:00:18.5: bridge window [mem 0xfbe00000-0xfbefffff]
Jan 30 13:53:59.765346 kernel: pci 0000:00:18.5: bridge window [mem 0xe6500000-0xe65fffff 64bit pref]
Jan 30 13:53:59.765399 kernel: pci 0000:00:18.6: PCI bridge to [bus 21]
Jan 30 13:53:59.765451 kernel: pci 0000:00:18.6: bridge window [mem 0xfba00000-0xfbafffff]
Jan 30 13:53:59.765504 kernel: pci 0000:00:18.6: bridge window [mem 0xe6100000-0xe61fffff 64bit pref]
Jan 30 13:53:59.765557 kernel: pci 0000:00:18.7: PCI bridge to [bus 22]
Jan 30 13:53:59.765608 kernel: pci 0000:00:18.7: bridge window [mem 0xfb600000-0xfb6fffff]
Jan 30 13:53:59.765658 kernel: pci 0000:00:18.7: bridge window [mem 0xe5d00000-0xe5dfffff 64bit pref]
Jan 30 13:53:59.765667 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 9
Jan 30 13:53:59.765673 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 0
Jan 30 13:53:59.765680 kernel: ACPI: PCI: Interrupt link LNKB disabled
Jan 30 13:53:59.765686 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jan 30 13:53:59.765694 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 10
Jan 30 13:53:59.765700 kernel: iommu: Default domain type: Translated
Jan 30 13:53:59.765713 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jan 30 13:53:59.765721 kernel: PCI: Using ACPI for IRQ routing
Jan 30 13:53:59.765728 kernel: PCI: pci_cache_line_size set to 64 bytes
Jan 30 13:53:59.765734 kernel: e820: reserve RAM buffer [mem 0x0009ec00-0x0009ffff]
Jan 30 13:53:59.765739 kernel: e820: reserve RAM buffer [mem 0x7fee0000-0x7fffffff]
Jan 30 13:53:59.765796 kernel: pci 0000:00:0f.0: vgaarb: setting as boot VGA device
Jan 30 13:53:59.765850 kernel: pci 0000:00:0f.0: vgaarb: bridge control possible
Jan 30 13:53:59.765905 kernel: pci 0000:00:0f.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jan 30 13:53:59.765914 kernel: vgaarb: loaded
Jan 30 13:53:59.765921 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0
Jan 30 13:53:59.765927 kernel: hpet0: 16 comparators, 64-bit 14.318180 MHz counter
Jan 30 13:53:59.765933 kernel: clocksource: Switched to clocksource tsc-early
Jan 30 13:53:59.765939 kernel: VFS: Disk quotas dquot_6.6.0
Jan 30 13:53:59.765946 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 30 13:53:59.765952 kernel: pnp: PnP ACPI init
Jan 30 13:53:59.766007 kernel: system 00:00: [io 0x1000-0x103f] has been reserved
Jan 30 13:53:59.766076 kernel: system 00:00: [io 0x1040-0x104f] has been reserved
Jan 30 13:53:59.766124 kernel: system 00:00: [io 0x0cf0-0x0cf1] has been reserved
Jan 30 13:53:59.766175 kernel: system 00:04: [mem 0xfed00000-0xfed003ff] has been reserved
Jan 30 13:53:59.766226 kernel: pnp 00:06: [dma 2]
Jan 30 13:53:59.766276 kernel: system 00:07: [io 0xfce0-0xfcff] has been reserved
Jan 30 13:53:59.766323 kernel: system 00:07: [mem 0xf0000000-0xf7ffffff] has been reserved
Jan 30 13:53:59.766370 kernel: system 00:07: [mem 0xfe800000-0xfe9fffff] has been reserved
Jan 30 13:53:59.766379 kernel: pnp: PnP ACPI: found 8 devices
Jan 30 13:53:59.766385 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jan 30 13:53:59.766391 kernel: NET: Registered PF_INET protocol family
Jan 30 13:53:59.766397 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jan 30 13:53:59.766403 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Jan 30 13:53:59.766409 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 30 13:53:59.766416 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Jan 30 13:53:59.766422 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
Jan 30 13:53:59.766430 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Jan 30 13:53:59.766435 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Jan 30 13:53:59.766441 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Jan 30 13:53:59.766447 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 30 13:53:59.766454 kernel: NET: Registered PF_XDP protocol family
Jan 30 13:53:59.766508 kernel: pci 0000:00:15.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 03] add_size 200000 add_align 100000
Jan 30 13:53:59.766562 kernel: pci
0000:00:15.3: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000 Jan 30 13:53:59.766619 kernel: pci 0000:00:15.4: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000 Jan 30 13:53:59.766673 kernel: pci 0000:00:15.5: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000 Jan 30 13:53:59.766838 kernel: pci 0000:00:15.6: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000 Jan 30 13:53:59.766893 kernel: pci 0000:00:15.7: bridge window [io 0x1000-0x0fff] to [bus 0a] add_size 1000 Jan 30 13:53:59.766946 kernel: pci 0000:00:16.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 0b] add_size 200000 add_align 100000 Jan 30 13:53:59.766999 kernel: pci 0000:00:16.3: bridge window [io 0x1000-0x0fff] to [bus 0e] add_size 1000 Jan 30 13:53:59.767054 kernel: pci 0000:00:16.4: bridge window [io 0x1000-0x0fff] to [bus 0f] add_size 1000 Jan 30 13:53:59.767106 kernel: pci 0000:00:16.5: bridge window [io 0x1000-0x0fff] to [bus 10] add_size 1000 Jan 30 13:53:59.767159 kernel: pci 0000:00:16.6: bridge window [io 0x1000-0x0fff] to [bus 11] add_size 1000 Jan 30 13:53:59.767210 kernel: pci 0000:00:16.7: bridge window [io 0x1000-0x0fff] to [bus 12] add_size 1000 Jan 30 13:53:59.767262 kernel: pci 0000:00:17.3: bridge window [io 0x1000-0x0fff] to [bus 16] add_size 1000 Jan 30 13:53:59.767315 kernel: pci 0000:00:17.4: bridge window [io 0x1000-0x0fff] to [bus 17] add_size 1000 Jan 30 13:53:59.767370 kernel: pci 0000:00:17.5: bridge window [io 0x1000-0x0fff] to [bus 18] add_size 1000 Jan 30 13:53:59.767422 kernel: pci 0000:00:17.6: bridge window [io 0x1000-0x0fff] to [bus 19] add_size 1000 Jan 30 13:53:59.767473 kernel: pci 0000:00:17.7: bridge window [io 0x1000-0x0fff] to [bus 1a] add_size 1000 Jan 30 13:53:59.767540 kernel: pci 0000:00:18.2: bridge window [io 0x1000-0x0fff] to [bus 1d] add_size 1000 Jan 30 13:53:59.767593 kernel: pci 0000:00:18.3: bridge window [io 0x1000-0x0fff] to [bus 1e] add_size 1000 Jan 30 13:53:59.767645 kernel: pci 0000:00:18.4: bridge window [io 0x1000-0x0fff] to [bus 1f] add_size 1000 Jan 30 13:53:59.767700 kernel: pci 0000:00:18.5: bridge window [io 0x1000-0x0fff] to [bus 20] add_size 1000 Jan 30 13:53:59.767768 kernel: pci 0000:00:18.6: bridge window [io 0x1000-0x0fff] to [bus 21] add_size 1000 Jan 30 13:53:59.767820 kernel: pci 0000:00:18.7: bridge window [io 0x1000-0x0fff] to [bus 22] add_size 1000 Jan 30 13:53:59.767871 kernel: pci 0000:00:15.0: BAR 15: assigned [mem 0xc0000000-0xc01fffff 64bit pref] Jan 30 13:53:59.767922 kernel: pci 0000:00:16.0: BAR 15: assigned [mem 0xc0200000-0xc03fffff 64bit pref] Jan 30 13:53:59.767973 kernel: pci 0000:00:15.3: BAR 13: no space for [io size 0x1000] Jan 30 13:53:59.768026 kernel: pci 0000:00:15.3: BAR 13: failed to assign [io size 0x1000] Jan 30 13:53:59.768077 kernel: pci 0000:00:15.4: BAR 13: no space for [io size 0x1000] Jan 30 13:53:59.768129 kernel: pci 0000:00:15.4: BAR 13: failed to assign [io size 0x1000] Jan 30 13:53:59.768180 kernel: pci 0000:00:15.5: BAR 13: no space for [io size 0x1000] Jan 30 13:53:59.768241 kernel: pci 0000:00:15.5: BAR 13: failed to assign [io size 0x1000] Jan 30 13:53:59.768293 kernel: pci 0000:00:15.6: BAR 13: no space for [io size 0x1000] Jan 30 13:53:59.768343 kernel: pci 0000:00:15.6: BAR 13: failed to assign [io size 0x1000] Jan 30 13:53:59.768394 kernel: pci 0000:00:15.7: BAR 13: no space for [io size 0x1000] Jan 30 13:53:59.768448 kernel: pci 0000:00:15.7: BAR 13: failed to assign [io size 0x1000] Jan 30 13:53:59.768499 kernel: pci 0000:00:16.3: BAR 13: no space for [io 
size 0x1000] Jan 30 13:53:59.768549 kernel: pci 0000:00:16.3: BAR 13: failed to assign [io size 0x1000] Jan 30 13:53:59.768601 kernel: pci 0000:00:16.4: BAR 13: no space for [io size 0x1000] Jan 30 13:53:59.768651 kernel: pci 0000:00:16.4: BAR 13: failed to assign [io size 0x1000] Jan 30 13:53:59.768702 kernel: pci 0000:00:16.5: BAR 13: no space for [io size 0x1000] Jan 30 13:53:59.768792 kernel: pci 0000:00:16.5: BAR 13: failed to assign [io size 0x1000] Jan 30 13:53:59.768842 kernel: pci 0000:00:16.6: BAR 13: no space for [io size 0x1000] Jan 30 13:53:59.768896 kernel: pci 0000:00:16.6: BAR 13: failed to assign [io size 0x1000] Jan 30 13:53:59.768947 kernel: pci 0000:00:16.7: BAR 13: no space for [io size 0x1000] Jan 30 13:53:59.768996 kernel: pci 0000:00:16.7: BAR 13: failed to assign [io size 0x1000] Jan 30 13:53:59.769047 kernel: pci 0000:00:17.3: BAR 13: no space for [io size 0x1000] Jan 30 13:53:59.769097 kernel: pci 0000:00:17.3: BAR 13: failed to assign [io size 0x1000] Jan 30 13:53:59.769148 kernel: pci 0000:00:17.4: BAR 13: no space for [io size 0x1000] Jan 30 13:53:59.769198 kernel: pci 0000:00:17.4: BAR 13: failed to assign [io size 0x1000] Jan 30 13:53:59.769250 kernel: pci 0000:00:17.5: BAR 13: no space for [io size 0x1000] Jan 30 13:53:59.769300 kernel: pci 0000:00:17.5: BAR 13: failed to assign [io size 0x1000] Jan 30 13:53:59.769355 kernel: pci 0000:00:17.6: BAR 13: no space for [io size 0x1000] Jan 30 13:53:59.769407 kernel: pci 0000:00:17.6: BAR 13: failed to assign [io size 0x1000] Jan 30 13:53:59.769459 kernel: pci 0000:00:17.7: BAR 13: no space for [io size 0x1000] Jan 30 13:53:59.769509 kernel: pci 0000:00:17.7: BAR 13: failed to assign [io size 0x1000] Jan 30 13:53:59.769562 kernel: pci 0000:00:18.2: BAR 13: no space for [io size 0x1000] Jan 30 13:53:59.769613 kernel: pci 0000:00:18.2: BAR 13: failed to assign [io size 0x1000] Jan 30 13:53:59.769665 kernel: pci 0000:00:18.3: BAR 13: no space for [io size 0x1000] Jan 30 13:53:59.769758 kernel: pci 0000:00:18.3: BAR 13: failed to assign [io size 0x1000] Jan 30 13:53:59.769816 kernel: pci 0000:00:18.4: BAR 13: no space for [io size 0x1000] Jan 30 13:53:59.769868 kernel: pci 0000:00:18.4: BAR 13: failed to assign [io size 0x1000] Jan 30 13:53:59.769919 kernel: pci 0000:00:18.5: BAR 13: no space for [io size 0x1000] Jan 30 13:53:59.769970 kernel: pci 0000:00:18.5: BAR 13: failed to assign [io size 0x1000] Jan 30 13:53:59.770021 kernel: pci 0000:00:18.6: BAR 13: no space for [io size 0x1000] Jan 30 13:53:59.770071 kernel: pci 0000:00:18.6: BAR 13: failed to assign [io size 0x1000] Jan 30 13:53:59.770122 kernel: pci 0000:00:18.7: BAR 13: no space for [io size 0x1000] Jan 30 13:53:59.770173 kernel: pci 0000:00:18.7: BAR 13: failed to assign [io size 0x1000] Jan 30 13:53:59.770231 kernel: pci 0000:00:18.7: BAR 13: no space for [io size 0x1000] Jan 30 13:53:59.770282 kernel: pci 0000:00:18.7: BAR 13: failed to assign [io size 0x1000] Jan 30 13:53:59.770333 kernel: pci 0000:00:18.6: BAR 13: no space for [io size 0x1000] Jan 30 13:53:59.770383 kernel: pci 0000:00:18.6: BAR 13: failed to assign [io size 0x1000] Jan 30 13:53:59.770434 kernel: pci 0000:00:18.5: BAR 13: no space for [io size 0x1000] Jan 30 13:53:59.770485 kernel: pci 0000:00:18.5: BAR 13: failed to assign [io size 0x1000] Jan 30 13:53:59.770535 kernel: pci 0000:00:18.4: BAR 13: no space for [io size 0x1000] Jan 30 13:53:59.770585 kernel: pci 0000:00:18.4: BAR 13: failed to assign [io size 0x1000] Jan 30 13:53:59.770636 kernel: pci 0000:00:18.3: BAR 13: no space 
for [io size 0x1000] Jan 30 13:53:59.770696 kernel: pci 0000:00:18.3: BAR 13: failed to assign [io size 0x1000] Jan 30 13:53:59.770782 kernel: pci 0000:00:18.2: BAR 13: no space for [io size 0x1000] Jan 30 13:53:59.770833 kernel: pci 0000:00:18.2: BAR 13: failed to assign [io size 0x1000] Jan 30 13:53:59.770883 kernel: pci 0000:00:17.7: BAR 13: no space for [io size 0x1000] Jan 30 13:53:59.770934 kernel: pci 0000:00:17.7: BAR 13: failed to assign [io size 0x1000] Jan 30 13:53:59.770984 kernel: pci 0000:00:17.6: BAR 13: no space for [io size 0x1000] Jan 30 13:53:59.771033 kernel: pci 0000:00:17.6: BAR 13: failed to assign [io size 0x1000] Jan 30 13:53:59.771083 kernel: pci 0000:00:17.5: BAR 13: no space for [io size 0x1000] Jan 30 13:53:59.771133 kernel: pci 0000:00:17.5: BAR 13: failed to assign [io size 0x1000] Jan 30 13:53:59.771186 kernel: pci 0000:00:17.4: BAR 13: no space for [io size 0x1000] Jan 30 13:53:59.771241 kernel: pci 0000:00:17.4: BAR 13: failed to assign [io size 0x1000] Jan 30 13:53:59.771291 kernel: pci 0000:00:17.3: BAR 13: no space for [io size 0x1000] Jan 30 13:53:59.771341 kernel: pci 0000:00:17.3: BAR 13: failed to assign [io size 0x1000] Jan 30 13:53:59.771392 kernel: pci 0000:00:16.7: BAR 13: no space for [io size 0x1000] Jan 30 13:53:59.771442 kernel: pci 0000:00:16.7: BAR 13: failed to assign [io size 0x1000] Jan 30 13:53:59.771492 kernel: pci 0000:00:16.6: BAR 13: no space for [io size 0x1000] Jan 30 13:53:59.771542 kernel: pci 0000:00:16.6: BAR 13: failed to assign [io size 0x1000] Jan 30 13:53:59.771593 kernel: pci 0000:00:16.5: BAR 13: no space for [io size 0x1000] Jan 30 13:53:59.771644 kernel: pci 0000:00:16.5: BAR 13: failed to assign [io size 0x1000] Jan 30 13:53:59.771698 kernel: pci 0000:00:16.4: BAR 13: no space for [io size 0x1000] Jan 30 13:53:59.771770 kernel: pci 0000:00:16.4: BAR 13: failed to assign [io size 0x1000] Jan 30 13:53:59.771822 kernel: pci 0000:00:16.3: BAR 13: no space for [io size 0x1000] Jan 30 13:53:59.771873 kernel: pci 0000:00:16.3: BAR 13: failed to assign [io size 0x1000] Jan 30 13:53:59.771923 kernel: pci 0000:00:15.7: BAR 13: no space for [io size 0x1000] Jan 30 13:53:59.771973 kernel: pci 0000:00:15.7: BAR 13: failed to assign [io size 0x1000] Jan 30 13:53:59.772023 kernel: pci 0000:00:15.6: BAR 13: no space for [io size 0x1000] Jan 30 13:53:59.772073 kernel: pci 0000:00:15.6: BAR 13: failed to assign [io size 0x1000] Jan 30 13:53:59.772125 kernel: pci 0000:00:15.5: BAR 13: no space for [io size 0x1000] Jan 30 13:53:59.772178 kernel: pci 0000:00:15.5: BAR 13: failed to assign [io size 0x1000] Jan 30 13:53:59.772229 kernel: pci 0000:00:15.4: BAR 13: no space for [io size 0x1000] Jan 30 13:53:59.772280 kernel: pci 0000:00:15.4: BAR 13: failed to assign [io size 0x1000] Jan 30 13:53:59.772332 kernel: pci 0000:00:15.3: BAR 13: no space for [io size 0x1000] Jan 30 13:53:59.772383 kernel: pci 0000:00:15.3: BAR 13: failed to assign [io size 0x1000] Jan 30 13:53:59.772436 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] Jan 30 13:53:59.772488 kernel: pci 0000:00:11.0: PCI bridge to [bus 02] Jan 30 13:53:59.772539 kernel: pci 0000:00:11.0: bridge window [io 0x2000-0x3fff] Jan 30 13:53:59.772590 kernel: pci 0000:00:11.0: bridge window [mem 0xfd600000-0xfdffffff] Jan 30 13:53:59.772640 kernel: pci 0000:00:11.0: bridge window [mem 0xe7b00000-0xe7ffffff 64bit pref] Jan 30 13:53:59.772699 kernel: pci 0000:03:00.0: BAR 6: assigned [mem 0xfd500000-0xfd50ffff pref] Jan 30 13:53:59.772819 kernel: pci 0000:00:15.0: PCI bridge to [bus 03] Jan 
30 13:53:59.772871 kernel: pci 0000:00:15.0: bridge window [io 0x4000-0x4fff] Jan 30 13:53:59.772922 kernel: pci 0000:00:15.0: bridge window [mem 0xfd500000-0xfd5fffff] Jan 30 13:53:59.772972 kernel: pci 0000:00:15.0: bridge window [mem 0xc0000000-0xc01fffff 64bit pref] Jan 30 13:53:59.773025 kernel: pci 0000:00:15.1: PCI bridge to [bus 04] Jan 30 13:53:59.773075 kernel: pci 0000:00:15.1: bridge window [io 0x8000-0x8fff] Jan 30 13:53:59.773126 kernel: pci 0000:00:15.1: bridge window [mem 0xfd100000-0xfd1fffff] Jan 30 13:53:59.773179 kernel: pci 0000:00:15.1: bridge window [mem 0xe7800000-0xe78fffff 64bit pref] Jan 30 13:53:59.773232 kernel: pci 0000:00:15.2: PCI bridge to [bus 05] Jan 30 13:53:59.773282 kernel: pci 0000:00:15.2: bridge window [io 0xc000-0xcfff] Jan 30 13:53:59.773333 kernel: pci 0000:00:15.2: bridge window [mem 0xfcd00000-0xfcdfffff] Jan 30 13:53:59.773382 kernel: pci 0000:00:15.2: bridge window [mem 0xe7400000-0xe74fffff 64bit pref] Jan 30 13:53:59.773434 kernel: pci 0000:00:15.3: PCI bridge to [bus 06] Jan 30 13:53:59.773484 kernel: pci 0000:00:15.3: bridge window [mem 0xfc900000-0xfc9fffff] Jan 30 13:53:59.773534 kernel: pci 0000:00:15.3: bridge window [mem 0xe7000000-0xe70fffff 64bit pref] Jan 30 13:53:59.773584 kernel: pci 0000:00:15.4: PCI bridge to [bus 07] Jan 30 13:53:59.773637 kernel: pci 0000:00:15.4: bridge window [mem 0xfc500000-0xfc5fffff] Jan 30 13:53:59.773687 kernel: pci 0000:00:15.4: bridge window [mem 0xe6c00000-0xe6cfffff 64bit pref] Jan 30 13:53:59.773761 kernel: pci 0000:00:15.5: PCI bridge to [bus 08] Jan 30 13:53:59.773814 kernel: pci 0000:00:15.5: bridge window [mem 0xfc100000-0xfc1fffff] Jan 30 13:53:59.773864 kernel: pci 0000:00:15.5: bridge window [mem 0xe6800000-0xe68fffff 64bit pref] Jan 30 13:53:59.773915 kernel: pci 0000:00:15.6: PCI bridge to [bus 09] Jan 30 13:53:59.773969 kernel: pci 0000:00:15.6: bridge window [mem 0xfbd00000-0xfbdfffff] Jan 30 13:53:59.774020 kernel: pci 0000:00:15.6: bridge window [mem 0xe6400000-0xe64fffff 64bit pref] Jan 30 13:53:59.774072 kernel: pci 0000:00:15.7: PCI bridge to [bus 0a] Jan 30 13:53:59.774122 kernel: pci 0000:00:15.7: bridge window [mem 0xfb900000-0xfb9fffff] Jan 30 13:53:59.774173 kernel: pci 0000:00:15.7: bridge window [mem 0xe6000000-0xe60fffff 64bit pref] Jan 30 13:53:59.774232 kernel: pci 0000:0b:00.0: BAR 6: assigned [mem 0xfd400000-0xfd40ffff pref] Jan 30 13:53:59.774285 kernel: pci 0000:00:16.0: PCI bridge to [bus 0b] Jan 30 13:53:59.774335 kernel: pci 0000:00:16.0: bridge window [io 0x5000-0x5fff] Jan 30 13:53:59.774386 kernel: pci 0000:00:16.0: bridge window [mem 0xfd400000-0xfd4fffff] Jan 30 13:53:59.774439 kernel: pci 0000:00:16.0: bridge window [mem 0xc0200000-0xc03fffff 64bit pref] Jan 30 13:53:59.774491 kernel: pci 0000:00:16.1: PCI bridge to [bus 0c] Jan 30 13:53:59.774541 kernel: pci 0000:00:16.1: bridge window [io 0x9000-0x9fff] Jan 30 13:53:59.774591 kernel: pci 0000:00:16.1: bridge window [mem 0xfd000000-0xfd0fffff] Jan 30 13:53:59.774642 kernel: pci 0000:00:16.1: bridge window [mem 0xe7700000-0xe77fffff 64bit pref] Jan 30 13:53:59.774693 kernel: pci 0000:00:16.2: PCI bridge to [bus 0d] Jan 30 13:53:59.774865 kernel: pci 0000:00:16.2: bridge window [io 0xd000-0xdfff] Jan 30 13:53:59.774916 kernel: pci 0000:00:16.2: bridge window [mem 0xfcc00000-0xfccfffff] Jan 30 13:53:59.774966 kernel: pci 0000:00:16.2: bridge window [mem 0xe7300000-0xe73fffff 64bit pref] Jan 30 13:53:59.775017 kernel: pci 0000:00:16.3: PCI bridge to [bus 0e] Jan 30 13:53:59.775071 kernel: pci 0000:00:16.3: 
bridge window [mem 0xfc800000-0xfc8fffff] Jan 30 13:53:59.775120 kernel: pci 0000:00:16.3: bridge window [mem 0xe6f00000-0xe6ffffff 64bit pref] Jan 30 13:53:59.775171 kernel: pci 0000:00:16.4: PCI bridge to [bus 0f] Jan 30 13:53:59.775221 kernel: pci 0000:00:16.4: bridge window [mem 0xfc400000-0xfc4fffff] Jan 30 13:53:59.775270 kernel: pci 0000:00:16.4: bridge window [mem 0xe6b00000-0xe6bfffff 64bit pref] Jan 30 13:53:59.775320 kernel: pci 0000:00:16.5: PCI bridge to [bus 10] Jan 30 13:53:59.775370 kernel: pci 0000:00:16.5: bridge window [mem 0xfc000000-0xfc0fffff] Jan 30 13:53:59.775419 kernel: pci 0000:00:16.5: bridge window [mem 0xe6700000-0xe67fffff 64bit pref] Jan 30 13:53:59.775470 kernel: pci 0000:00:16.6: PCI bridge to [bus 11] Jan 30 13:53:59.775522 kernel: pci 0000:00:16.6: bridge window [mem 0xfbc00000-0xfbcfffff] Jan 30 13:53:59.775571 kernel: pci 0000:00:16.6: bridge window [mem 0xe6300000-0xe63fffff 64bit pref] Jan 30 13:53:59.775621 kernel: pci 0000:00:16.7: PCI bridge to [bus 12] Jan 30 13:53:59.775672 kernel: pci 0000:00:16.7: bridge window [mem 0xfb800000-0xfb8fffff] Jan 30 13:53:59.775730 kernel: pci 0000:00:16.7: bridge window [mem 0xe5f00000-0xe5ffffff 64bit pref] Jan 30 13:53:59.775783 kernel: pci 0000:00:17.0: PCI bridge to [bus 13] Jan 30 13:53:59.775833 kernel: pci 0000:00:17.0: bridge window [io 0x6000-0x6fff] Jan 30 13:53:59.775883 kernel: pci 0000:00:17.0: bridge window [mem 0xfd300000-0xfd3fffff] Jan 30 13:53:59.775933 kernel: pci 0000:00:17.0: bridge window [mem 0xe7a00000-0xe7afffff 64bit pref] Jan 30 13:53:59.775984 kernel: pci 0000:00:17.1: PCI bridge to [bus 14] Jan 30 13:53:59.776037 kernel: pci 0000:00:17.1: bridge window [io 0xa000-0xafff] Jan 30 13:53:59.776087 kernel: pci 0000:00:17.1: bridge window [mem 0xfcf00000-0xfcffffff] Jan 30 13:53:59.776137 kernel: pci 0000:00:17.1: bridge window [mem 0xe7600000-0xe76fffff 64bit pref] Jan 30 13:53:59.776189 kernel: pci 0000:00:17.2: PCI bridge to [bus 15] Jan 30 13:53:59.776239 kernel: pci 0000:00:17.2: bridge window [io 0xe000-0xefff] Jan 30 13:53:59.776288 kernel: pci 0000:00:17.2: bridge window [mem 0xfcb00000-0xfcbfffff] Jan 30 13:53:59.776337 kernel: pci 0000:00:17.2: bridge window [mem 0xe7200000-0xe72fffff 64bit pref] Jan 30 13:53:59.776388 kernel: pci 0000:00:17.3: PCI bridge to [bus 16] Jan 30 13:53:59.776438 kernel: pci 0000:00:17.3: bridge window [mem 0xfc700000-0xfc7fffff] Jan 30 13:53:59.776490 kernel: pci 0000:00:17.3: bridge window [mem 0xe6e00000-0xe6efffff 64bit pref] Jan 30 13:53:59.776541 kernel: pci 0000:00:17.4: PCI bridge to [bus 17] Jan 30 13:53:59.776591 kernel: pci 0000:00:17.4: bridge window [mem 0xfc300000-0xfc3fffff] Jan 30 13:53:59.776641 kernel: pci 0000:00:17.4: bridge window [mem 0xe6a00000-0xe6afffff 64bit pref] Jan 30 13:53:59.776692 kernel: pci 0000:00:17.5: PCI bridge to [bus 18] Jan 30 13:53:59.776778 kernel: pci 0000:00:17.5: bridge window [mem 0xfbf00000-0xfbffffff] Jan 30 13:53:59.776830 kernel: pci 0000:00:17.5: bridge window [mem 0xe6600000-0xe66fffff 64bit pref] Jan 30 13:53:59.776882 kernel: pci 0000:00:17.6: PCI bridge to [bus 19] Jan 30 13:53:59.776933 kernel: pci 0000:00:17.6: bridge window [mem 0xfbb00000-0xfbbfffff] Jan 30 13:53:59.776982 kernel: pci 0000:00:17.6: bridge window [mem 0xe6200000-0xe62fffff 64bit pref] Jan 30 13:53:59.777037 kernel: pci 0000:00:17.7: PCI bridge to [bus 1a] Jan 30 13:53:59.777088 kernel: pci 0000:00:17.7: bridge window [mem 0xfb700000-0xfb7fffff] Jan 30 13:53:59.777137 kernel: pci 0000:00:17.7: bridge window [mem 
0xe5e00000-0xe5efffff 64bit pref] Jan 30 13:53:59.777190 kernel: pci 0000:00:18.0: PCI bridge to [bus 1b] Jan 30 13:53:59.777245 kernel: pci 0000:00:18.0: bridge window [io 0x7000-0x7fff] Jan 30 13:53:59.777296 kernel: pci 0000:00:18.0: bridge window [mem 0xfd200000-0xfd2fffff] Jan 30 13:53:59.777347 kernel: pci 0000:00:18.0: bridge window [mem 0xe7900000-0xe79fffff 64bit pref] Jan 30 13:53:59.777400 kernel: pci 0000:00:18.1: PCI bridge to [bus 1c] Jan 30 13:53:59.777451 kernel: pci 0000:00:18.1: bridge window [io 0xb000-0xbfff] Jan 30 13:53:59.777505 kernel: pci 0000:00:18.1: bridge window [mem 0xfce00000-0xfcefffff] Jan 30 13:53:59.777555 kernel: pci 0000:00:18.1: bridge window [mem 0xe7500000-0xe75fffff 64bit pref] Jan 30 13:53:59.777607 kernel: pci 0000:00:18.2: PCI bridge to [bus 1d] Jan 30 13:53:59.777657 kernel: pci 0000:00:18.2: bridge window [mem 0xfca00000-0xfcafffff] Jan 30 13:53:59.777714 kernel: pci 0000:00:18.2: bridge window [mem 0xe7100000-0xe71fffff 64bit pref] Jan 30 13:53:59.777769 kernel: pci 0000:00:18.3: PCI bridge to [bus 1e] Jan 30 13:53:59.777820 kernel: pci 0000:00:18.3: bridge window [mem 0xfc600000-0xfc6fffff] Jan 30 13:53:59.777870 kernel: pci 0000:00:18.3: bridge window [mem 0xe6d00000-0xe6dfffff 64bit pref] Jan 30 13:53:59.777921 kernel: pci 0000:00:18.4: PCI bridge to [bus 1f] Jan 30 13:53:59.777971 kernel: pci 0000:00:18.4: bridge window [mem 0xfc200000-0xfc2fffff] Jan 30 13:53:59.778025 kernel: pci 0000:00:18.4: bridge window [mem 0xe6900000-0xe69fffff 64bit pref] Jan 30 13:53:59.778076 kernel: pci 0000:00:18.5: PCI bridge to [bus 20] Jan 30 13:53:59.778126 kernel: pci 0000:00:18.5: bridge window [mem 0xfbe00000-0xfbefffff] Jan 30 13:53:59.778176 kernel: pci 0000:00:18.5: bridge window [mem 0xe6500000-0xe65fffff 64bit pref] Jan 30 13:53:59.778238 kernel: pci 0000:00:18.6: PCI bridge to [bus 21] Jan 30 13:53:59.778291 kernel: pci 0000:00:18.6: bridge window [mem 0xfba00000-0xfbafffff] Jan 30 13:53:59.778342 kernel: pci 0000:00:18.6: bridge window [mem 0xe6100000-0xe61fffff 64bit pref] Jan 30 13:53:59.778394 kernel: pci 0000:00:18.7: PCI bridge to [bus 22] Jan 30 13:53:59.778445 kernel: pci 0000:00:18.7: bridge window [mem 0xfb600000-0xfb6fffff] Jan 30 13:53:59.778499 kernel: pci 0000:00:18.7: bridge window [mem 0xe5d00000-0xe5dfffff 64bit pref] Jan 30 13:53:59.778549 kernel: pci_bus 0000:00: resource 4 [mem 0x000a0000-0x000bffff window] Jan 30 13:53:59.778596 kernel: pci_bus 0000:00: resource 5 [mem 0x000cc000-0x000dbfff window] Jan 30 13:53:59.778641 kernel: pci_bus 0000:00: resource 6 [mem 0xc0000000-0xfebfffff window] Jan 30 13:53:59.778686 kernel: pci_bus 0000:00: resource 7 [io 0x0000-0x0cf7 window] Jan 30 13:53:59.778834 kernel: pci_bus 0000:00: resource 8 [io 0x0d00-0xfeff window] Jan 30 13:53:59.778886 kernel: pci_bus 0000:02: resource 0 [io 0x2000-0x3fff] Jan 30 13:53:59.778933 kernel: pci_bus 0000:02: resource 1 [mem 0xfd600000-0xfdffffff] Jan 30 13:53:59.778983 kernel: pci_bus 0000:02: resource 2 [mem 0xe7b00000-0xe7ffffff 64bit pref] Jan 30 13:53:59.779029 kernel: pci_bus 0000:02: resource 4 [mem 0x000a0000-0x000bffff window] Jan 30 13:53:59.779074 kernel: pci_bus 0000:02: resource 5 [mem 0x000cc000-0x000dbfff window] Jan 30 13:53:59.779120 kernel: pci_bus 0000:02: resource 6 [mem 0xc0000000-0xfebfffff window] Jan 30 13:53:59.779166 kernel: pci_bus 0000:02: resource 7 [io 0x0000-0x0cf7 window] Jan 30 13:53:59.779211 kernel: pci_bus 0000:02: resource 8 [io 0x0d00-0xfeff window] Jan 30 13:53:59.779285 kernel: pci_bus 0000:03: resource 0 [io 
0x4000-0x4fff] Jan 30 13:53:59.779340 kernel: pci_bus 0000:03: resource 1 [mem 0xfd500000-0xfd5fffff] Jan 30 13:53:59.779387 kernel: pci_bus 0000:03: resource 2 [mem 0xc0000000-0xc01fffff 64bit pref] Jan 30 13:53:59.779438 kernel: pci_bus 0000:04: resource 0 [io 0x8000-0x8fff] Jan 30 13:53:59.779485 kernel: pci_bus 0000:04: resource 1 [mem 0xfd100000-0xfd1fffff] Jan 30 13:53:59.779531 kernel: pci_bus 0000:04: resource 2 [mem 0xe7800000-0xe78fffff 64bit pref] Jan 30 13:53:59.779580 kernel: pci_bus 0000:05: resource 0 [io 0xc000-0xcfff] Jan 30 13:53:59.779627 kernel: pci_bus 0000:05: resource 1 [mem 0xfcd00000-0xfcdfffff] Jan 30 13:53:59.779676 kernel: pci_bus 0000:05: resource 2 [mem 0xe7400000-0xe74fffff 64bit pref] Jan 30 13:53:59.779735 kernel: pci_bus 0000:06: resource 1 [mem 0xfc900000-0xfc9fffff] Jan 30 13:53:59.779783 kernel: pci_bus 0000:06: resource 2 [mem 0xe7000000-0xe70fffff 64bit pref] Jan 30 13:53:59.779834 kernel: pci_bus 0000:07: resource 1 [mem 0xfc500000-0xfc5fffff] Jan 30 13:53:59.779880 kernel: pci_bus 0000:07: resource 2 [mem 0xe6c00000-0xe6cfffff 64bit pref] Jan 30 13:53:59.779931 kernel: pci_bus 0000:08: resource 1 [mem 0xfc100000-0xfc1fffff] Jan 30 13:53:59.779981 kernel: pci_bus 0000:08: resource 2 [mem 0xe6800000-0xe68fffff 64bit pref] Jan 30 13:53:59.780032 kernel: pci_bus 0000:09: resource 1 [mem 0xfbd00000-0xfbdfffff] Jan 30 13:53:59.780079 kernel: pci_bus 0000:09: resource 2 [mem 0xe6400000-0xe64fffff 64bit pref] Jan 30 13:53:59.780133 kernel: pci_bus 0000:0a: resource 1 [mem 0xfb900000-0xfb9fffff] Jan 30 13:53:59.780188 kernel: pci_bus 0000:0a: resource 2 [mem 0xe6000000-0xe60fffff 64bit pref] Jan 30 13:53:59.780241 kernel: pci_bus 0000:0b: resource 0 [io 0x5000-0x5fff] Jan 30 13:53:59.780290 kernel: pci_bus 0000:0b: resource 1 [mem 0xfd400000-0xfd4fffff] Jan 30 13:53:59.780336 kernel: pci_bus 0000:0b: resource 2 [mem 0xc0200000-0xc03fffff 64bit pref] Jan 30 13:53:59.780403 kernel: pci_bus 0000:0c: resource 0 [io 0x9000-0x9fff] Jan 30 13:53:59.780450 kernel: pci_bus 0000:0c: resource 1 [mem 0xfd000000-0xfd0fffff] Jan 30 13:53:59.780497 kernel: pci_bus 0000:0c: resource 2 [mem 0xe7700000-0xe77fffff 64bit pref] Jan 30 13:53:59.780547 kernel: pci_bus 0000:0d: resource 0 [io 0xd000-0xdfff] Jan 30 13:53:59.780595 kernel: pci_bus 0000:0d: resource 1 [mem 0xfcc00000-0xfccfffff] Jan 30 13:53:59.780648 kernel: pci_bus 0000:0d: resource 2 [mem 0xe7300000-0xe73fffff 64bit pref] Jan 30 13:53:59.780699 kernel: pci_bus 0000:0e: resource 1 [mem 0xfc800000-0xfc8fffff] Jan 30 13:53:59.782225 kernel: pci_bus 0000:0e: resource 2 [mem 0xe6f00000-0xe6ffffff 64bit pref] Jan 30 13:53:59.782282 kernel: pci_bus 0000:0f: resource 1 [mem 0xfc400000-0xfc4fffff] Jan 30 13:53:59.782331 kernel: pci_bus 0000:0f: resource 2 [mem 0xe6b00000-0xe6bfffff 64bit pref] Jan 30 13:53:59.782382 kernel: pci_bus 0000:10: resource 1 [mem 0xfc000000-0xfc0fffff] Jan 30 13:53:59.782432 kernel: pci_bus 0000:10: resource 2 [mem 0xe6700000-0xe67fffff 64bit pref] Jan 30 13:53:59.782483 kernel: pci_bus 0000:11: resource 1 [mem 0xfbc00000-0xfbcfffff] Jan 30 13:53:59.782529 kernel: pci_bus 0000:11: resource 2 [mem 0xe6300000-0xe63fffff 64bit pref] Jan 30 13:53:59.782579 kernel: pci_bus 0000:12: resource 1 [mem 0xfb800000-0xfb8fffff] Jan 30 13:53:59.782627 kernel: pci_bus 0000:12: resource 2 [mem 0xe5f00000-0xe5ffffff 64bit pref] Jan 30 13:53:59.782677 kernel: pci_bus 0000:13: resource 0 [io 0x6000-0x6fff] Jan 30 13:53:59.782743 kernel: pci_bus 0000:13: resource 1 [mem 0xfd300000-0xfd3fffff] Jan 30 13:53:59.782793 
kernel: pci_bus 0000:13: resource 2 [mem 0xe7a00000-0xe7afffff 64bit pref] Jan 30 13:53:59.782862 kernel: pci_bus 0000:14: resource 0 [io 0xa000-0xafff] Jan 30 13:53:59.782911 kernel: pci_bus 0000:14: resource 1 [mem 0xfcf00000-0xfcffffff] Jan 30 13:53:59.782958 kernel: pci_bus 0000:14: resource 2 [mem 0xe7600000-0xe76fffff 64bit pref] Jan 30 13:53:59.783010 kernel: pci_bus 0000:15: resource 0 [io 0xe000-0xefff] Jan 30 13:53:59.783057 kernel: pci_bus 0000:15: resource 1 [mem 0xfcb00000-0xfcbfffff] Jan 30 13:53:59.783107 kernel: pci_bus 0000:15: resource 2 [mem 0xe7200000-0xe72fffff 64bit pref] Jan 30 13:53:59.783158 kernel: pci_bus 0000:16: resource 1 [mem 0xfc700000-0xfc7fffff] Jan 30 13:53:59.783205 kernel: pci_bus 0000:16: resource 2 [mem 0xe6e00000-0xe6efffff 64bit pref] Jan 30 13:53:59.783256 kernel: pci_bus 0000:17: resource 1 [mem 0xfc300000-0xfc3fffff] Jan 30 13:53:59.783303 kernel: pci_bus 0000:17: resource 2 [mem 0xe6a00000-0xe6afffff 64bit pref] Jan 30 13:53:59.783353 kernel: pci_bus 0000:18: resource 1 [mem 0xfbf00000-0xfbffffff] Jan 30 13:53:59.783404 kernel: pci_bus 0000:18: resource 2 [mem 0xe6600000-0xe66fffff 64bit pref] Jan 30 13:53:59.783455 kernel: pci_bus 0000:19: resource 1 [mem 0xfbb00000-0xfbbfffff] Jan 30 13:53:59.783502 kernel: pci_bus 0000:19: resource 2 [mem 0xe6200000-0xe62fffff 64bit pref] Jan 30 13:53:59.783552 kernel: pci_bus 0000:1a: resource 1 [mem 0xfb700000-0xfb7fffff] Jan 30 13:53:59.783599 kernel: pci_bus 0000:1a: resource 2 [mem 0xe5e00000-0xe5efffff 64bit pref] Jan 30 13:53:59.783655 kernel: pci_bus 0000:1b: resource 0 [io 0x7000-0x7fff] Jan 30 13:53:59.783705 kernel: pci_bus 0000:1b: resource 1 [mem 0xfd200000-0xfd2fffff] Jan 30 13:53:59.783761 kernel: pci_bus 0000:1b: resource 2 [mem 0xe7900000-0xe79fffff 64bit pref] Jan 30 13:53:59.783827 kernel: pci_bus 0000:1c: resource 0 [io 0xb000-0xbfff] Jan 30 13:53:59.783876 kernel: pci_bus 0000:1c: resource 1 [mem 0xfce00000-0xfcefffff] Jan 30 13:53:59.783922 kernel: pci_bus 0000:1c: resource 2 [mem 0xe7500000-0xe75fffff 64bit pref] Jan 30 13:53:59.783973 kernel: pci_bus 0000:1d: resource 1 [mem 0xfca00000-0xfcafffff] Jan 30 13:53:59.784023 kernel: pci_bus 0000:1d: resource 2 [mem 0xe7100000-0xe71fffff 64bit pref] Jan 30 13:53:59.784074 kernel: pci_bus 0000:1e: resource 1 [mem 0xfc600000-0xfc6fffff] Jan 30 13:53:59.784122 kernel: pci_bus 0000:1e: resource 2 [mem 0xe6d00000-0xe6dfffff 64bit pref] Jan 30 13:53:59.784173 kernel: pci_bus 0000:1f: resource 1 [mem 0xfc200000-0xfc2fffff] Jan 30 13:53:59.784221 kernel: pci_bus 0000:1f: resource 2 [mem 0xe6900000-0xe69fffff 64bit pref] Jan 30 13:53:59.784272 kernel: pci_bus 0000:20: resource 1 [mem 0xfbe00000-0xfbefffff] Jan 30 13:53:59.784319 kernel: pci_bus 0000:20: resource 2 [mem 0xe6500000-0xe65fffff 64bit pref] Jan 30 13:53:59.784374 kernel: pci_bus 0000:21: resource 1 [mem 0xfba00000-0xfbafffff] Jan 30 13:53:59.784421 kernel: pci_bus 0000:21: resource 2 [mem 0xe6100000-0xe61fffff 64bit pref] Jan 30 13:53:59.784472 kernel: pci_bus 0000:22: resource 1 [mem 0xfb600000-0xfb6fffff] Jan 30 13:53:59.784519 kernel: pci_bus 0000:22: resource 2 [mem 0xe5d00000-0xe5dfffff 64bit pref] Jan 30 13:53:59.784576 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers Jan 30 13:53:59.784586 kernel: PCI: CLS 32 bytes, default 64 Jan 30 13:53:59.784594 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Jan 30 13:53:59.784601 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x311fd3cd494, max_idle_ns: 440795223879 ns Jan 30 
13:53:59.784607 kernel: clocksource: Switched to clocksource tsc Jan 30 13:53:59.784613 kernel: Initialise system trusted keyrings Jan 30 13:53:59.784620 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Jan 30 13:53:59.784626 kernel: Key type asymmetric registered Jan 30 13:53:59.784633 kernel: Asymmetric key parser 'x509' registered Jan 30 13:53:59.784639 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Jan 30 13:53:59.784646 kernel: io scheduler mq-deadline registered Jan 30 13:53:59.784653 kernel: io scheduler kyber registered Jan 30 13:53:59.784659 kernel: io scheduler bfq registered Jan 30 13:53:59.785074 kernel: pcieport 0000:00:15.0: PME: Signaling with IRQ 24 Jan 30 13:53:59.785140 kernel: pcieport 0000:00:15.0: pciehp: Slot #160 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jan 30 13:53:59.785196 kernel: pcieport 0000:00:15.1: PME: Signaling with IRQ 25 Jan 30 13:53:59.785250 kernel: pcieport 0000:00:15.1: pciehp: Slot #161 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jan 30 13:53:59.785635 kernel: pcieport 0000:00:15.2: PME: Signaling with IRQ 26 Jan 30 13:53:59.785694 kernel: pcieport 0000:00:15.2: pciehp: Slot #162 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jan 30 13:53:59.786136 kernel: pcieport 0000:00:15.3: PME: Signaling with IRQ 27 Jan 30 13:53:59.786196 kernel: pcieport 0000:00:15.3: pciehp: Slot #163 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jan 30 13:53:59.786263 kernel: pcieport 0000:00:15.4: PME: Signaling with IRQ 28 Jan 30 13:53:59.786317 kernel: pcieport 0000:00:15.4: pciehp: Slot #164 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jan 30 13:53:59.786369 kernel: pcieport 0000:00:15.5: PME: Signaling with IRQ 29 Jan 30 13:53:59.786420 kernel: pcieport 0000:00:15.5: pciehp: Slot #165 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jan 30 13:53:59.786477 kernel: pcieport 0000:00:15.6: PME: Signaling with IRQ 30 Jan 30 13:53:59.786528 kernel: pcieport 0000:00:15.6: pciehp: Slot #166 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jan 30 13:53:59.786581 kernel: pcieport 0000:00:15.7: PME: Signaling with IRQ 31 Jan 30 13:53:59.786632 kernel: pcieport 0000:00:15.7: pciehp: Slot #167 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jan 30 13:53:59.786684 kernel: pcieport 0000:00:16.0: PME: Signaling with IRQ 32 Jan 30 13:53:59.786756 kernel: pcieport 0000:00:16.0: pciehp: Slot #192 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jan 30 13:53:59.786810 kernel: pcieport 0000:00:16.1: PME: Signaling with IRQ 33 Jan 30 13:53:59.786861 kernel: pcieport 0000:00:16.1: pciehp: Slot #193 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jan 30 13:53:59.786914 kernel: pcieport 0000:00:16.2: PME: Signaling with IRQ 34 Jan 30 13:53:59.786966 kernel: pcieport 0000:00:16.2: pciehp: Slot #194 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jan 30 13:53:59.787019 kernel: pcieport 0000:00:16.3: PME: Signaling with IRQ 35 Jan 30 13:53:59.787071 kernel: pcieport 
0000:00:16.3: pciehp: Slot #195 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jan 30 13:53:59.787129 kernel: pcieport 0000:00:16.4: PME: Signaling with IRQ 36 Jan 30 13:53:59.787180 kernel: pcieport 0000:00:16.4: pciehp: Slot #196 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jan 30 13:53:59.787232 kernel: pcieport 0000:00:16.5: PME: Signaling with IRQ 37 Jan 30 13:53:59.787284 kernel: pcieport 0000:00:16.5: pciehp: Slot #197 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jan 30 13:53:59.787337 kernel: pcieport 0000:00:16.6: PME: Signaling with IRQ 38 Jan 30 13:53:59.787391 kernel: pcieport 0000:00:16.6: pciehp: Slot #198 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jan 30 13:53:59.787445 kernel: pcieport 0000:00:16.7: PME: Signaling with IRQ 39 Jan 30 13:53:59.787498 kernel: pcieport 0000:00:16.7: pciehp: Slot #199 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jan 30 13:53:59.787551 kernel: pcieport 0000:00:17.0: PME: Signaling with IRQ 40 Jan 30 13:53:59.787603 kernel: pcieport 0000:00:17.0: pciehp: Slot #224 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jan 30 13:53:59.787655 kernel: pcieport 0000:00:17.1: PME: Signaling with IRQ 41 Jan 30 13:53:59.789117 kernel: pcieport 0000:00:17.1: pciehp: Slot #225 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jan 30 13:53:59.789187 kernel: pcieport 0000:00:17.2: PME: Signaling with IRQ 42 Jan 30 13:53:59.789245 kernel: pcieport 0000:00:17.2: pciehp: Slot #226 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jan 30 13:53:59.789300 kernel: pcieport 0000:00:17.3: PME: Signaling with IRQ 43 Jan 30 13:53:59.789354 kernel: pcieport 0000:00:17.3: pciehp: Slot #227 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jan 30 13:53:59.789408 kernel: pcieport 0000:00:17.4: PME: Signaling with IRQ 44 Jan 30 13:53:59.789464 kernel: pcieport 0000:00:17.4: pciehp: Slot #228 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jan 30 13:53:59.789517 kernel: pcieport 0000:00:17.5: PME: Signaling with IRQ 45 Jan 30 13:53:59.789568 kernel: pcieport 0000:00:17.5: pciehp: Slot #229 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jan 30 13:53:59.789622 kernel: pcieport 0000:00:17.6: PME: Signaling with IRQ 46 Jan 30 13:53:59.789674 kernel: pcieport 0000:00:17.6: pciehp: Slot #230 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jan 30 13:53:59.789986 kernel: pcieport 0000:00:17.7: PME: Signaling with IRQ 47 Jan 30 13:53:59.790048 kernel: pcieport 0000:00:17.7: pciehp: Slot #231 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jan 30 13:53:59.790102 kernel: pcieport 0000:00:18.0: PME: Signaling with IRQ 48 Jan 30 13:53:59.790154 kernel: pcieport 0000:00:18.0: pciehp: Slot #256 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jan 30 13:53:59.790207 kernel: pcieport 0000:00:18.1: PME: Signaling with IRQ 49 Jan 30 13:53:59.790257 kernel: pcieport 
0000:00:18.1: pciehp: Slot #257 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jan 30 13:53:59.790310 kernel: pcieport 0000:00:18.2: PME: Signaling with IRQ 50 Jan 30 13:53:59.790365 kernel: pcieport 0000:00:18.2: pciehp: Slot #258 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jan 30 13:53:59.790416 kernel: pcieport 0000:00:18.3: PME: Signaling with IRQ 51 Jan 30 13:53:59.790467 kernel: pcieport 0000:00:18.3: pciehp: Slot #259 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jan 30 13:53:59.790518 kernel: pcieport 0000:00:18.4: PME: Signaling with IRQ 52 Jan 30 13:53:59.790570 kernel: pcieport 0000:00:18.4: pciehp: Slot #260 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jan 30 13:53:59.790625 kernel: pcieport 0000:00:18.5: PME: Signaling with IRQ 53 Jan 30 13:53:59.790677 kernel: pcieport 0000:00:18.5: pciehp: Slot #261 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jan 30 13:53:59.790781 kernel: pcieport 0000:00:18.6: PME: Signaling with IRQ 54 Jan 30 13:53:59.790834 kernel: pcieport 0000:00:18.6: pciehp: Slot #262 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jan 30 13:53:59.790887 kernel: pcieport 0000:00:18.7: PME: Signaling with IRQ 55 Jan 30 13:53:59.790938 kernel: pcieport 0000:00:18.7: pciehp: Slot #263 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jan 30 13:53:59.790950 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Jan 30 13:53:59.790957 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jan 30 13:53:59.790963 kernel: 00:05: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Jan 30 13:53:59.790970 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBC,PNP0f13:MOUS] at 0x60,0x64 irq 1,12 Jan 30 13:53:59.790977 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Jan 30 13:53:59.790984 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Jan 30 13:53:59.791036 kernel: rtc_cmos 00:01: registered as rtc0 Jan 30 13:53:59.791089 kernel: rtc_cmos 00:01: setting system clock to 2025-01-30T13:53:59 UTC (1738245239) Jan 30 13:53:59.791136 kernel: rtc_cmos 00:01: alarms up to one month, y3k, 114 bytes nvram Jan 30 13:53:59.791145 kernel: intel_pstate: CPU model not supported Jan 30 13:53:59.791151 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Jan 30 13:53:59.791158 kernel: NET: Registered PF_INET6 protocol family Jan 30 13:53:59.791164 kernel: Segment Routing with IPv6 Jan 30 13:53:59.791171 kernel: In-situ OAM (IOAM) with IPv6 Jan 30 13:53:59.791177 kernel: NET: Registered PF_PACKET protocol family Jan 30 13:53:59.791183 kernel: Key type dns_resolver registered Jan 30 13:53:59.791192 kernel: IPI shorthand broadcast: enabled Jan 30 13:53:59.791198 kernel: sched_clock: Marking stable (926108151, 227436044)->(1216052068, -62507873) Jan 30 13:53:59.791209 kernel: registered taskstats version 1 Jan 30 13:53:59.791215 kernel: Loading compiled-in X.509 certificates Jan 30 13:53:59.791222 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.74-flatcar: 1efdcbe72fc44d29e4e6411cf9a3e64046be4375' Jan 30 13:53:59.791228 kernel: Key type .fscrypt registered Jan 30 13:53:59.791234 kernel: Key type fscrypt-provisioning registered Jan 30 13:53:59.791240 
kernel: ima: No TPM chip found, activating TPM-bypass! Jan 30 13:53:59.791247 kernel: ima: Allocated hash algorithm: sha1 Jan 30 13:53:59.791254 kernel: ima: No architecture policies found Jan 30 13:53:59.791261 kernel: clk: Disabling unused clocks Jan 30 13:53:59.791267 kernel: Freeing unused kernel image (initmem) memory: 42844K Jan 30 13:53:59.791274 kernel: Write protecting the kernel read-only data: 36864k Jan 30 13:53:59.791280 kernel: Freeing unused kernel image (rodata/data gap) memory: 1848K Jan 30 13:53:59.791287 kernel: Run /init as init process Jan 30 13:53:59.791293 kernel: with arguments: Jan 30 13:53:59.791299 kernel: /init Jan 30 13:53:59.791306 kernel: with environment: Jan 30 13:53:59.791313 kernel: HOME=/ Jan 30 13:53:59.791319 kernel: TERM=linux Jan 30 13:53:59.791325 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jan 30 13:53:59.791333 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 30 13:53:59.791341 systemd[1]: Detected virtualization vmware. Jan 30 13:53:59.791348 systemd[1]: Detected architecture x86-64. Jan 30 13:53:59.791354 systemd[1]: Running in initrd. Jan 30 13:53:59.791360 systemd[1]: No hostname configured, using default hostname. Jan 30 13:53:59.791368 systemd[1]: Hostname set to . Jan 30 13:53:59.791375 systemd[1]: Initializing machine ID from random generator. Jan 30 13:53:59.791381 systemd[1]: Queued start job for default target initrd.target. Jan 30 13:53:59.791387 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 30 13:53:59.791394 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 30 13:53:59.791401 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jan 30 13:53:59.791408 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 30 13:53:59.791415 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jan 30 13:53:59.791423 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jan 30 13:53:59.791430 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jan 30 13:53:59.791437 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jan 30 13:53:59.791444 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 30 13:53:59.791450 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 30 13:53:59.791457 systemd[1]: Reached target paths.target - Path Units. Jan 30 13:53:59.791464 systemd[1]: Reached target slices.target - Slice Units. Jan 30 13:53:59.791471 systemd[1]: Reached target swap.target - Swaps. Jan 30 13:53:59.791477 systemd[1]: Reached target timers.target - Timer Units. Jan 30 13:53:59.791484 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jan 30 13:53:59.791490 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 30 13:53:59.791497 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). 
Jan 30 13:53:59.791504 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Jan 30 13:53:59.791510 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 30 13:53:59.791517 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 30 13:53:59.791524 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 30 13:53:59.791531 systemd[1]: Reached target sockets.target - Socket Units. Jan 30 13:53:59.791538 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jan 30 13:53:59.791545 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 30 13:53:59.791551 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jan 30 13:53:59.791558 systemd[1]: Starting systemd-fsck-usr.service... Jan 30 13:53:59.791564 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 30 13:53:59.791571 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 30 13:53:59.791577 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 30 13:53:59.791585 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jan 30 13:53:59.791592 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 30 13:53:59.791611 systemd-journald[214]: Collecting audit messages is disabled. Jan 30 13:53:59.791627 systemd[1]: Finished systemd-fsck-usr.service. Jan 30 13:53:59.791636 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 30 13:53:59.791643 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jan 30 13:53:59.791650 kernel: Bridge firewalling registered Jan 30 13:53:59.791656 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 30 13:53:59.791664 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 30 13:53:59.791671 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 30 13:53:59.791677 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 30 13:53:59.791684 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 30 13:53:59.791690 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 30 13:53:59.791698 systemd-journald[214]: Journal started Jan 30 13:53:59.791720 systemd-journald[214]: Runtime Journal (/run/log/journal/b6863ceb212a40e38a912c9136e96819) is 4.8M, max 38.6M, 33.8M free. Jan 30 13:53:59.741314 systemd-modules-load[215]: Inserted module 'overlay' Jan 30 13:53:59.761726 systemd-modules-load[215]: Inserted module 'br_netfilter' Jan 30 13:53:59.794270 systemd[1]: Started systemd-journald.service - Journal Service. Jan 30 13:53:59.793908 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 30 13:53:59.794880 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 30 13:53:59.795372 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 30 13:53:59.798790 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jan 30 13:53:59.799822 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... 
Jan 30 13:53:59.806391 dracut-cmdline[245]: dracut-dracut-053 Jan 30 13:53:59.805757 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 30 13:53:59.808754 dracut-cmdline[245]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=vmware flatcar.autologin verity.usrhash=befc9792b021bef43c896e00e1d5172b6224dbafc9b6c92b267e5e544378e681 Jan 30 13:53:59.811846 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 30 13:53:59.828865 systemd-resolved[257]: Positive Trust Anchors: Jan 30 13:53:59.828874 systemd-resolved[257]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 30 13:53:59.828895 systemd-resolved[257]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 30 13:53:59.830519 systemd-resolved[257]: Defaulting to hostname 'linux'. Jan 30 13:53:59.832268 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 30 13:53:59.832442 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 30 13:53:59.855729 kernel: SCSI subsystem initialized Jan 30 13:53:59.861719 kernel: Loading iSCSI transport class v2.0-870. Jan 30 13:53:59.868721 kernel: iscsi: registered transport (tcp) Jan 30 13:53:59.880722 kernel: iscsi: registered transport (qla4xxx) Jan 30 13:53:59.880755 kernel: QLogic iSCSI HBA Driver Jan 30 13:53:59.901472 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jan 30 13:53:59.905819 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jan 30 13:53:59.920328 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jan 30 13:53:59.920356 kernel: device-mapper: uevent: version 1.0.3 Jan 30 13:53:59.921383 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Jan 30 13:53:59.951755 kernel: raid6: avx2x4 gen() 52167 MB/s Jan 30 13:53:59.968733 kernel: raid6: avx2x2 gen() 52299 MB/s Jan 30 13:53:59.986025 kernel: raid6: avx2x1 gen() 44093 MB/s Jan 30 13:53:59.986086 kernel: raid6: using algorithm avx2x2 gen() 52299 MB/s Jan 30 13:54:00.003933 kernel: raid6: .... xor() 30358 MB/s, rmw enabled Jan 30 13:54:00.003983 kernel: raid6: using avx2x2 recovery algorithm Jan 30 13:54:00.017721 kernel: xor: automatically using best checksumming function avx Jan 30 13:54:00.119739 kernel: Btrfs loaded, zoned=no, fsverity=no Jan 30 13:54:00.125252 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jan 30 13:54:00.128827 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 30 13:54:00.136865 systemd-udevd[433]: Using default interface naming scheme 'v255'. 
Jan 30 13:54:00.139391 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 30 13:54:00.152876 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jan 30 13:54:00.160295 dracut-pre-trigger[438]: rd.md=0: removing MD RAID activation Jan 30 13:54:00.177602 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jan 30 13:54:00.181814 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 30 13:54:00.251490 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 30 13:54:00.257083 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jan 30 13:54:00.268768 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jan 30 13:54:00.269952 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jan 30 13:54:00.270513 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 30 13:54:00.270886 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 30 13:54:00.275829 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jan 30 13:54:00.282982 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jan 30 13:54:00.316718 kernel: VMware PVSCSI driver - version 1.0.7.0-k Jan 30 13:54:00.327763 kernel: vmw_pvscsi: using 64bit dma Jan 30 13:54:00.330419 kernel: VMware vmxnet3 virtual NIC driver - version 1.7.0.0-k-NAPI Jan 30 13:54:00.330439 kernel: vmw_pvscsi: max_id: 16 Jan 30 13:54:00.330447 kernel: vmw_pvscsi: setting ring_pages to 8 Jan 30 13:54:00.331670 kernel: vmxnet3 0000:0b:00.0: # of Tx queues : 2, # of Rx queues : 2 Jan 30 13:54:00.340632 kernel: vmw_pvscsi: enabling reqCallThreshold Jan 30 13:54:00.340642 kernel: vmw_pvscsi: driver-based request coalescing enabled Jan 30 13:54:00.340650 kernel: vmw_pvscsi: using MSI-X Jan 30 13:54:00.340660 kernel: vmxnet3 0000:0b:00.0 eth0: NIC Link is Up 10000 Mbps Jan 30 13:54:00.340778 kernel: scsi host0: VMware PVSCSI storage adapter rev 2, req/cmp/msg rings: 8/8/1 pages, cmd_per_lun=254 Jan 30 13:54:00.340852 kernel: vmw_pvscsi 0000:03:00.0: VMware PVSCSI rev 2 host #0 Jan 30 13:54:00.352580 kernel: libata version 3.00 loaded. Jan 30 13:54:00.352591 kernel: vmxnet3 0000:0b:00.0 ens192: renamed from eth0 Jan 30 13:54:00.352665 kernel: ata_piix 0000:00:07.1: version 2.13 Jan 30 13:54:00.356518 kernel: cryptd: max_cpu_qlen set to 1000 Jan 30 13:54:00.357666 kernel: scsi 0:0:0:0: Direct-Access VMware Virtual disk 2.0 PQ: 0 ANSI: 6 Jan 30 13:54:00.357775 kernel: scsi host1: ata_piix Jan 30 13:54:00.357849 kernel: scsi host2: ata_piix Jan 30 13:54:00.357909 kernel: ata1: PATA max UDMA/33 cmd 0x1f0 ctl 0x3f6 bmdma 0x1060 irq 14 Jan 30 13:54:00.357918 kernel: ata2: PATA max UDMA/33 cmd 0x170 ctl 0x376 bmdma 0x1068 irq 15 Jan 30 13:54:00.359454 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 30 13:54:00.359524 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 30 13:54:00.359806 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 30 13:54:00.359901 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 30 13:54:00.359989 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 30 13:54:00.360093 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... 
Jan 30 13:54:00.365033 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 30 13:54:00.375487 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 30 13:54:00.379796 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 30 13:54:00.390102 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 30 13:54:00.524724 kernel: ata2.00: ATAPI: VMware Virtual IDE CDROM Drive, 00000001, max UDMA/33 Jan 30 13:54:00.529730 kernel: scsi 2:0:0:0: CD-ROM NECVMWar VMware IDE CDR10 1.00 PQ: 0 ANSI: 5 Jan 30 13:54:00.537732 kernel: AVX2 version of gcm_enc/dec engaged. Jan 30 13:54:00.537758 kernel: AES CTR mode by8 optimization enabled Jan 30 13:54:00.549355 kernel: sd 0:0:0:0: [sda] 17805312 512-byte logical blocks: (9.12 GB/8.49 GiB) Jan 30 13:54:00.554115 kernel: sd 0:0:0:0: [sda] Write Protect is off Jan 30 13:54:00.554190 kernel: sd 0:0:0:0: [sda] Mode Sense: 31 00 00 00 Jan 30 13:54:00.554253 kernel: sd 0:0:0:0: [sda] Cache data unavailable Jan 30 13:54:00.554312 kernel: sd 0:0:0:0: [sda] Assuming drive cache: write through Jan 30 13:54:00.554371 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 30 13:54:00.554380 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Jan 30 13:54:00.557194 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 1x/1x writer dvd-ram cd/rw xa/form2 cdda tray Jan 30 13:54:00.572838 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Jan 30 13:54:00.572858 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Jan 30 13:54:00.602769 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by (udev-worker) (487) Jan 30 13:54:00.603762 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_disk EFI-SYSTEM. Jan 30 13:54:00.610730 kernel: BTRFS: device fsid 64bb5b5a-85cc-41cc-a02b-2cfaa3e93b0a devid 1 transid 38 /dev/sda3 scanned by (udev-worker) (479) Jan 30 13:54:00.608916 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_disk ROOT. Jan 30 13:54:00.611656 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_disk OEM. Jan 30 13:54:00.614153 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_disk USR-A. Jan 30 13:54:00.614423 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_disk USR-A. Jan 30 13:54:00.619880 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jan 30 13:54:00.649065 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 30 13:54:00.654340 kernel: GPT:disk_guids don't match. Jan 30 13:54:00.654372 kernel: GPT: Use GNU Parted to correct GPT errors. Jan 30 13:54:00.654384 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 30 13:54:01.658687 disk-uuid[588]: The operation has completed successfully. Jan 30 13:54:01.658918 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 30 13:54:01.717218 systemd[1]: disk-uuid.service: Deactivated successfully. Jan 30 13:54:01.717277 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jan 30 13:54:01.721898 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jan 30 13:54:01.724032 sh[608]: Success Jan 30 13:54:01.732728 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Jan 30 13:54:01.770183 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jan 30 13:54:01.774784 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... 
Jan 30 13:54:01.775856 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jan 30 13:54:01.816112 kernel: BTRFS info (device dm-0): first mount of filesystem 64bb5b5a-85cc-41cc-a02b-2cfaa3e93b0a Jan 30 13:54:01.816162 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jan 30 13:54:01.816176 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jan 30 13:54:01.817363 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 30 13:54:01.818281 kernel: BTRFS info (device dm-0): using free space tree Jan 30 13:54:01.826735 kernel: BTRFS info (device dm-0): enabling ssd optimizations Jan 30 13:54:01.828787 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jan 30 13:54:01.838901 systemd[1]: Starting afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments... Jan 30 13:54:01.840475 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jan 30 13:54:01.864987 kernel: BTRFS info (device sda6): first mount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03 Jan 30 13:54:01.865032 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jan 30 13:54:01.865043 kernel: BTRFS info (device sda6): using free space tree Jan 30 13:54:01.894729 kernel: BTRFS info (device sda6): enabling ssd optimizations Jan 30 13:54:01.903773 systemd[1]: mnt-oem.mount: Deactivated successfully. Jan 30 13:54:01.904742 kernel: BTRFS info (device sda6): last unmount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03 Jan 30 13:54:01.907999 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jan 30 13:54:01.913928 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jan 30 13:54:01.951672 systemd[1]: Finished afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments. Jan 30 13:54:01.955824 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jan 30 13:54:02.013280 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 30 13:54:02.018830 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 30 13:54:02.024726 ignition[669]: Ignition 2.19.0 Jan 30 13:54:02.024735 ignition[669]: Stage: fetch-offline Jan 30 13:54:02.024768 ignition[669]: no configs at "/usr/lib/ignition/base.d" Jan 30 13:54:02.024777 ignition[669]: no config dir at "/usr/lib/ignition/base.platform.d/vmware" Jan 30 13:54:02.024835 ignition[669]: parsed url from cmdline: "" Jan 30 13:54:02.024837 ignition[669]: no config URL provided Jan 30 13:54:02.024840 ignition[669]: reading system config file "/usr/lib/ignition/user.ign" Jan 30 13:54:02.024845 ignition[669]: no config at "/usr/lib/ignition/user.ign" Jan 30 13:54:02.025214 ignition[669]: config successfully fetched Jan 30 13:54:02.025235 ignition[669]: parsing config with SHA512: 4af82fa843ada1bd79c80ead3cd5ad6f1b880800c9e1f6be5d89f2bce69dc9e350aba162a2bb03cfe5607c48e84446756e836545ad771f530b3c915107bbc6ed Jan 30 13:54:02.029041 unknown[669]: fetched base config from "system" Jan 30 13:54:02.029285 ignition[669]: fetch-offline: fetch-offline passed Jan 30 13:54:02.029048 unknown[669]: fetched user config from "vmware" Jan 30 13:54:02.029321 ignition[669]: Ignition finished successfully Jan 30 13:54:02.030908 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). 
Jan 30 13:54:02.035317 systemd-networkd[802]: lo: Link UP Jan 30 13:54:02.035324 systemd-networkd[802]: lo: Gained carrier Jan 30 13:54:02.036059 systemd-networkd[802]: Enumeration completed Jan 30 13:54:02.036334 systemd-networkd[802]: ens192: Configuring with /etc/systemd/network/10-dracut-cmdline-99.network. Jan 30 13:54:02.036517 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 30 13:54:02.036675 systemd[1]: Reached target network.target - Network. Jan 30 13:54:02.036944 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Jan 30 13:54:02.040353 kernel: vmxnet3 0000:0b:00.0 ens192: intr type 3, mode 0, 3 vectors allocated Jan 30 13:54:02.040496 kernel: vmxnet3 0000:0b:00.0 ens192: NIC Link is Up 10000 Mbps Jan 30 13:54:02.041060 systemd-networkd[802]: ens192: Link UP Jan 30 13:54:02.041065 systemd-networkd[802]: ens192: Gained carrier Jan 30 13:54:02.046791 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jan 30 13:54:02.055035 ignition[805]: Ignition 2.19.0 Jan 30 13:54:02.055041 ignition[805]: Stage: kargs Jan 30 13:54:02.055411 ignition[805]: no configs at "/usr/lib/ignition/base.d" Jan 30 13:54:02.055420 ignition[805]: no config dir at "/usr/lib/ignition/base.platform.d/vmware" Jan 30 13:54:02.056027 ignition[805]: kargs: kargs passed Jan 30 13:54:02.056053 ignition[805]: Ignition finished successfully Jan 30 13:54:02.057256 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jan 30 13:54:02.061863 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jan 30 13:54:02.069885 ignition[812]: Ignition 2.19.0 Jan 30 13:54:02.069892 ignition[812]: Stage: disks Jan 30 13:54:02.069994 ignition[812]: no configs at "/usr/lib/ignition/base.d" Jan 30 13:54:02.070000 ignition[812]: no config dir at "/usr/lib/ignition/base.platform.d/vmware" Jan 30 13:54:02.070554 ignition[812]: disks: disks passed Jan 30 13:54:02.070579 ignition[812]: Ignition finished successfully Jan 30 13:54:02.071316 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jan 30 13:54:02.071839 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jan 30 13:54:02.072099 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 30 13:54:02.072336 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 30 13:54:02.072558 systemd[1]: Reached target sysinit.target - System Initialization. Jan 30 13:54:02.072775 systemd[1]: Reached target basic.target - Basic System. Jan 30 13:54:02.076790 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jan 30 13:54:02.090166 systemd-fsck[820]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks Jan 30 13:54:02.091686 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jan 30 13:54:02.096803 systemd[1]: Mounting sysroot.mount - /sysroot... Jan 30 13:54:02.160328 systemd[1]: Mounted sysroot.mount - /sysroot. Jan 30 13:54:02.160776 kernel: EXT4-fs (sda9): mounted filesystem 9f41abed-fd12-4e57-bcd4-5c0ef7f8a1bf r/w with ordered data mode. Quota mode: none. Jan 30 13:54:02.160722 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jan 30 13:54:02.165777 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 30 13:54:02.167310 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... 
Jan 30 13:54:02.167748 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Jan 30 13:54:02.167789 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jan 30 13:54:02.167807 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jan 30 13:54:02.171685 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jan 30 13:54:02.172908 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jan 30 13:54:02.175847 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/sda6 scanned by mount (828) Jan 30 13:54:02.178992 kernel: BTRFS info (device sda6): first mount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03 Jan 30 13:54:02.179023 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jan 30 13:54:02.179032 kernel: BTRFS info (device sda6): using free space tree Jan 30 13:54:02.183734 kernel: BTRFS info (device sda6): enabling ssd optimizations Jan 30 13:54:02.185611 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 30 13:54:02.262076 initrd-setup-root[852]: cut: /sysroot/etc/passwd: No such file or directory Jan 30 13:54:02.265046 initrd-setup-root[859]: cut: /sysroot/etc/group: No such file or directory Jan 30 13:54:02.267519 initrd-setup-root[866]: cut: /sysroot/etc/shadow: No such file or directory Jan 30 13:54:02.270285 initrd-setup-root[873]: cut: /sysroot/etc/gshadow: No such file or directory Jan 30 13:54:02.366012 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 30 13:54:02.370809 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 30 13:54:02.373504 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jan 30 13:54:02.378724 kernel: BTRFS info (device sda6): last unmount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03 Jan 30 13:54:02.396653 ignition[940]: INFO : Ignition 2.19.0 Jan 30 13:54:02.396653 ignition[940]: INFO : Stage: mount Jan 30 13:54:02.397036 ignition[940]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 30 13:54:02.397036 ignition[940]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/vmware" Jan 30 13:54:02.397299 ignition[940]: INFO : mount: mount passed Jan 30 13:54:02.397434 ignition[940]: INFO : Ignition finished successfully Jan 30 13:54:02.398004 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 30 13:54:02.402829 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 30 13:54:02.424800 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jan 30 13:54:02.814269 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jan 30 13:54:02.819865 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 30 13:54:02.830727 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by mount (952) Jan 30 13:54:02.842467 kernel: BTRFS info (device sda6): first mount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03 Jan 30 13:54:02.842507 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jan 30 13:54:02.842516 kernel: BTRFS info (device sda6): using free space tree Jan 30 13:54:02.859722 kernel: BTRFS info (device sda6): enabling ssd optimizations Jan 30 13:54:02.860235 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jan 30 13:54:02.873640 ignition[969]: INFO : Ignition 2.19.0 Jan 30 13:54:02.873640 ignition[969]: INFO : Stage: files Jan 30 13:54:02.874021 ignition[969]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 30 13:54:02.874021 ignition[969]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/vmware" Jan 30 13:54:02.874431 ignition[969]: DEBUG : files: compiled without relabeling support, skipping Jan 30 13:54:02.875156 ignition[969]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 30 13:54:02.875156 ignition[969]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 30 13:54:02.877504 ignition[969]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 30 13:54:02.877676 ignition[969]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 30 13:54:02.877819 ignition[969]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 30 13:54:02.877790 unknown[969]: wrote ssh authorized keys file for user: core Jan 30 13:54:02.879522 ignition[969]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jan 30 13:54:02.879821 ignition[969]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Jan 30 13:54:03.445979 systemd-networkd[802]: ens192: Gained IPv6LL Jan 30 13:54:07.915217 ignition[969]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jan 30 13:54:08.008362 ignition[969]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jan 30 13:54:08.008362 ignition[969]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Jan 30 13:54:08.008778 ignition[969]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Jan 30 13:54:08.008778 ignition[969]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Jan 30 13:54:08.008778 ignition[969]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Jan 30 13:54:08.008778 ignition[969]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 30 13:54:08.008778 ignition[969]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 30 13:54:08.008778 ignition[969]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 30 13:54:08.008778 ignition[969]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 30 13:54:08.008778 ignition[969]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 30 13:54:08.009973 ignition[969]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 30 13:54:08.009973 ignition[969]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Jan 30 13:54:08.009973 ignition[969]: INFO : files: 
createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Jan 30 13:54:08.009973 ignition[969]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Jan 30 13:54:08.009973 ignition[969]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw: attempt #1 Jan 30 13:54:08.503412 ignition[969]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Jan 30 13:54:08.699394 ignition[969]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Jan 30 13:54:08.699756 ignition[969]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/etc/systemd/network/00-vmware.network" Jan 30 13:54:08.699756 ignition[969]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/etc/systemd/network/00-vmware.network" Jan 30 13:54:08.699756 ignition[969]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Jan 30 13:54:08.708929 ignition[969]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 30 13:54:08.709158 ignition[969]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 30 13:54:08.709158 ignition[969]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Jan 30 13:54:08.709158 ignition[969]: INFO : files: op(e): [started] processing unit "coreos-metadata.service" Jan 30 13:54:08.709158 ignition[969]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jan 30 13:54:08.709158 ignition[969]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jan 30 13:54:08.709158 ignition[969]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service" Jan 30 13:54:08.709158 ignition[969]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service" Jan 30 13:54:09.007268 ignition[969]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service" Jan 30 13:54:09.010361 ignition[969]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service" Jan 30 13:54:09.010361 ignition[969]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service" Jan 30 13:54:09.010361 ignition[969]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service" Jan 30 13:54:09.010361 ignition[969]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service" Jan 30 13:54:09.010361 ignition[969]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 30 13:54:09.010361 ignition[969]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 30 13:54:09.010361 ignition[969]: INFO : files: files passed Jan 30 13:54:09.010361 ignition[969]: INFO : Ignition finished successfully Jan 30 
13:54:09.011660 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 30 13:54:09.015850 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 30 13:54:09.017152 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 30 13:54:09.022172 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 30 13:54:09.022382 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jan 30 13:54:09.025174 initrd-setup-root-after-ignition[999]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 30 13:54:09.025548 initrd-setup-root-after-ignition[999]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 30 13:54:09.026327 initrd-setup-root-after-ignition[1003]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 30 13:54:09.027177 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 30 13:54:09.027683 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 30 13:54:09.039929 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 30 13:54:09.052870 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 30 13:54:09.052934 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 30 13:54:09.053394 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 30 13:54:09.053500 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 30 13:54:09.053760 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 30 13:54:09.054227 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 30 13:54:09.065561 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 30 13:54:09.070858 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 30 13:54:09.076803 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 30 13:54:09.077099 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 30 13:54:09.077268 systemd[1]: Stopped target timers.target - Timer Units. Jan 30 13:54:09.077397 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 30 13:54:09.077473 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 30 13:54:09.077703 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 30 13:54:09.077925 systemd[1]: Stopped target basic.target - Basic System. Jan 30 13:54:09.078100 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 30 13:54:09.078313 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 30 13:54:09.078526 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 30 13:54:09.078738 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 30 13:54:09.078958 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 30 13:54:09.079331 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 30 13:54:09.079509 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 30 13:54:09.079701 systemd[1]: Stopped target swap.target - Swaps. Jan 30 13:54:09.079936 systemd[1]: dracut-pre-mount.service: Deactivated successfully. 
Jan 30 13:54:09.080008 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 30 13:54:09.080291 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 30 13:54:09.080540 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 30 13:54:09.080748 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 30 13:54:09.080796 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 30 13:54:09.080941 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 30 13:54:09.081005 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 30 13:54:09.081251 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 30 13:54:09.081314 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 30 13:54:09.081569 systemd[1]: Stopped target paths.target - Path Units. Jan 30 13:54:09.081688 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 30 13:54:09.086775 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 30 13:54:09.086955 systemd[1]: Stopped target slices.target - Slice Units. Jan 30 13:54:09.087152 systemd[1]: Stopped target sockets.target - Socket Units. Jan 30 13:54:09.087336 systemd[1]: iscsid.socket: Deactivated successfully. Jan 30 13:54:09.087405 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 30 13:54:09.087614 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 30 13:54:09.087658 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 30 13:54:09.087930 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 30 13:54:09.088013 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 30 13:54:09.088283 systemd[1]: ignition-files.service: Deactivated successfully. Jan 30 13:54:09.088359 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 30 13:54:09.092881 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 30 13:54:09.094892 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 30 13:54:09.095154 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 30 13:54:09.095279 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 30 13:54:09.095538 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 30 13:54:09.095596 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 30 13:54:09.098538 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 30 13:54:09.098602 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 30 13:54:09.102820 ignition[1024]: INFO : Ignition 2.19.0 Jan 30 13:54:09.105383 ignition[1024]: INFO : Stage: umount Jan 30 13:54:09.105383 ignition[1024]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 30 13:54:09.105383 ignition[1024]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/vmware" Jan 30 13:54:09.105383 ignition[1024]: INFO : umount: umount passed Jan 30 13:54:09.105383 ignition[1024]: INFO : Ignition finished successfully Jan 30 13:54:09.104255 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 30 13:54:09.104323 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 30 13:54:09.104729 systemd[1]: Stopped target network.target - Network. 
Jan 30 13:54:09.104912 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 30 13:54:09.104943 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 30 13:54:09.105115 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 30 13:54:09.105145 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 30 13:54:09.105373 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 30 13:54:09.105445 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 30 13:54:09.105615 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 30 13:54:09.105673 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 30 13:54:09.106050 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 30 13:54:09.106377 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 30 13:54:09.110016 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 30 13:54:09.110091 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 30 13:54:09.110745 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 30 13:54:09.110768 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 30 13:54:09.113819 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 30 13:54:09.114048 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 30 13:54:09.114084 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 30 13:54:09.114338 systemd[1]: afterburn-network-kargs.service: Deactivated successfully. Jan 30 13:54:09.114360 systemd[1]: Stopped afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments. Jan 30 13:54:09.115356 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 30 13:54:09.117109 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 30 13:54:09.117329 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 30 13:54:09.119145 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 30 13:54:09.119468 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 30 13:54:09.120077 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 30 13:54:09.120326 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 30 13:54:09.120591 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 30 13:54:09.120761 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 30 13:54:09.120979 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 30 13:54:09.121002 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 30 13:54:09.121414 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 30 13:54:09.121437 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 30 13:54:09.121698 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 30 13:54:09.121728 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 30 13:54:09.128816 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 30 13:54:09.129072 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 30 13:54:09.129104 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. 
Jan 30 13:54:09.129364 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 30 13:54:09.129386 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 30 13:54:09.129635 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 30 13:54:09.129657 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 30 13:54:09.130042 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Jan 30 13:54:09.130064 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 30 13:54:09.130605 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 30 13:54:09.130627 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 30 13:54:09.130891 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 30 13:54:09.130912 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 30 13:54:09.131188 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 30 13:54:09.131216 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 30 13:54:09.136057 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 30 13:54:09.137322 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 30 13:54:09.137532 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 30 13:54:09.138254 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 30 13:54:09.138449 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 30 13:54:09.369864 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 30 13:54:09.369928 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 30 13:54:09.370400 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 30 13:54:09.370532 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 30 13:54:09.370562 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 30 13:54:09.373818 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 30 13:54:09.390409 systemd[1]: Switching root. 
Jan 30 13:54:09.429557 systemd-journald[214]: Journal stopped
6816.00 BogoMIPS (lpj=3408000) Jan 30 13:53:59.737295 kernel: Disabled fast string operations Jan 30 13:53:59.737300 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8 Jan 30 13:53:59.737307 kernel: Last level dTLB entries: 4KB 64, 2MB 0, 4MB 0, 1GB 4 Jan 30 13:53:59.737313 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Jan 30 13:53:59.737320 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on vm exit Jan 30 13:53:59.737326 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall Jan 30 13:53:59.737332 kernel: Spectre V2 : Mitigation: Enhanced / Automatic IBRS Jan 30 13:53:59.737337 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Jan 30 13:53:59.737343 kernel: Spectre V2 : Spectre v2 / PBRSB-eIBRS: Retire a single CALL on VMEXIT Jan 30 13:53:59.737349 kernel: RETBleed: Mitigation: Enhanced IBRS Jan 30 13:53:59.737355 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Jan 30 13:53:59.737360 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Jan 30 13:53:59.737366 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Jan 30 13:53:59.737373 kernel: SRBDS: Unknown: Dependent on hypervisor status Jan 30 13:53:59.737379 kernel: GDS: Unknown: Dependent on hypervisor status Jan 30 13:53:59.737385 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Jan 30 13:53:59.737390 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Jan 30 13:53:59.737396 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Jan 30 13:53:59.737402 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Jan 30 13:53:59.737408 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format. Jan 30 13:53:59.737414 kernel: Freeing SMP alternatives memory: 32K Jan 30 13:53:59.737419 kernel: pid_max: default: 131072 minimum: 1024 Jan 30 13:53:59.737426 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Jan 30 13:53:59.737432 kernel: landlock: Up and running. Jan 30 13:53:59.737438 kernel: SELinux: Initializing. Jan 30 13:53:59.737444 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Jan 30 13:53:59.737450 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Jan 30 13:53:59.737455 kernel: smpboot: CPU0: Intel(R) Xeon(R) E-2278G CPU @ 3.40GHz (family: 0x6, model: 0x9e, stepping: 0xd) Jan 30 13:53:59.737461 kernel: RCU Tasks: Setting shift to 7 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=128. Jan 30 13:53:59.737467 kernel: RCU Tasks Rude: Setting shift to 7 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=128. Jan 30 13:53:59.737473 kernel: RCU Tasks Trace: Setting shift to 7 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=128. Jan 30 13:53:59.737479 kernel: Performance Events: Skylake events, core PMU driver. 
Jan 30 13:53:59.737485 kernel: core: CPUID marked event: 'cpu cycles' unavailable Jan 30 13:53:59.737491 kernel: core: CPUID marked event: 'instructions' unavailable Jan 30 13:53:59.737496 kernel: core: CPUID marked event: 'bus cycles' unavailable Jan 30 13:53:59.737502 kernel: core: CPUID marked event: 'cache references' unavailable Jan 30 13:53:59.737507 kernel: core: CPUID marked event: 'cache misses' unavailable Jan 30 13:53:59.737513 kernel: core: CPUID marked event: 'branch instructions' unavailable Jan 30 13:53:59.737518 kernel: core: CPUID marked event: 'branch misses' unavailable Jan 30 13:53:59.737525 kernel: ... version: 1 Jan 30 13:53:59.737531 kernel: ... bit width: 48 Jan 30 13:53:59.737537 kernel: ... generic registers: 4 Jan 30 13:53:59.737543 kernel: ... value mask: 0000ffffffffffff Jan 30 13:53:59.737548 kernel: ... max period: 000000007fffffff Jan 30 13:53:59.737554 kernel: ... fixed-purpose events: 0 Jan 30 13:53:59.737560 kernel: ... event mask: 000000000000000f Jan 30 13:53:59.737565 kernel: signal: max sigframe size: 1776 Jan 30 13:53:59.737571 kernel: rcu: Hierarchical SRCU implementation. Jan 30 13:53:59.737578 kernel: rcu: Max phase no-delay instances is 400. Jan 30 13:53:59.737584 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Jan 30 13:53:59.737590 kernel: smp: Bringing up secondary CPUs ... Jan 30 13:53:59.737595 kernel: smpboot: x86: Booting SMP configuration: Jan 30 13:53:59.737620 kernel: .... node #0, CPUs: #1 Jan 30 13:53:59.737626 kernel: Disabled fast string operations Jan 30 13:53:59.737631 kernel: smpboot: CPU 1 Converting physical 2 to logical package 1 Jan 30 13:53:59.737637 kernel: smpboot: CPU 1 Converting physical 0 to logical die 1 Jan 30 13:53:59.737667 kernel: smp: Brought up 1 node, 2 CPUs Jan 30 13:53:59.737673 kernel: smpboot: Max logical packages: 128 Jan 30 13:53:59.737680 kernel: smpboot: Total of 2 processors activated (13632.00 BogoMIPS) Jan 30 13:53:59.737686 kernel: devtmpfs: initialized Jan 30 13:53:59.737691 kernel: x86/mm: Memory block size: 128MB Jan 30 13:53:59.737698 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x7feff000-0x7fefffff] (4096 bytes) Jan 30 13:53:59.737704 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jan 30 13:53:59.737725 kernel: futex hash table entries: 32768 (order: 9, 2097152 bytes, linear) Jan 30 13:53:59.737731 kernel: pinctrl core: initialized pinctrl subsystem Jan 30 13:53:59.737737 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jan 30 13:53:59.737743 kernel: audit: initializing netlink subsys (disabled) Jan 30 13:53:59.737751 kernel: audit: type=2000 audit(1738245238.067:1): state=initialized audit_enabled=0 res=1 Jan 30 13:53:59.737756 kernel: thermal_sys: Registered thermal governor 'step_wise' Jan 30 13:53:59.737762 kernel: thermal_sys: Registered thermal governor 'user_space' Jan 30 13:53:59.737767 kernel: cpuidle: using governor menu Jan 30 13:53:59.737774 kernel: Simple Boot Flag at 0x36 set to 0x80 Jan 30 13:53:59.737780 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jan 30 13:53:59.737785 kernel: dca service started, version 1.12.1 Jan 30 13:53:59.737791 kernel: PCI: MMCONFIG for domain 0000 [bus 00-7f] at [mem 0xf0000000-0xf7ffffff] (base 0xf0000000) Jan 30 13:53:59.737797 kernel: PCI: Using configuration type 1 for base access Jan 30 13:53:59.737804 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Jan 30 13:53:59.737809 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jan 30 13:53:59.737815 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Jan 30 13:53:59.737821 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jan 30 13:53:59.737827 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Jan 30 13:53:59.737832 kernel: ACPI: Added _OSI(Module Device) Jan 30 13:53:59.737838 kernel: ACPI: Added _OSI(Processor Device) Jan 30 13:53:59.737844 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Jan 30 13:53:59.737850 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jan 30 13:53:59.737856 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jan 30 13:53:59.737863 kernel: ACPI: [Firmware Bug]: BIOS _OSI(Linux) query ignored Jan 30 13:53:59.737868 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Jan 30 13:53:59.737874 kernel: ACPI: Interpreter enabled Jan 30 13:53:59.737880 kernel: ACPI: PM: (supports S0 S1 S5) Jan 30 13:53:59.737886 kernel: ACPI: Using IOAPIC for interrupt routing Jan 30 13:53:59.737892 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Jan 30 13:53:59.737898 kernel: PCI: Using E820 reservations for host bridge windows Jan 30 13:53:59.737903 kernel: ACPI: Enabled 4 GPEs in block 00 to 0F Jan 30 13:53:59.737910 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-7f]) Jan 30 13:53:59.737993 kernel: acpi PNP0A03:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Jan 30 13:53:59.738051 kernel: acpi PNP0A03:00: _OSC: platform does not support [AER LTR] Jan 30 13:53:59.738100 kernel: acpi PNP0A03:00: _OSC: OS now controls [PCIeHotplug PME PCIeCapability] Jan 30 13:53:59.738109 kernel: PCI host bridge to bus 0000:00 Jan 30 13:53:59.738158 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Jan 30 13:53:59.738229 kernel: pci_bus 0000:00: root bus resource [mem 0x000cc000-0x000dbfff window] Jan 30 13:53:59.738276 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] Jan 30 13:53:59.738319 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Jan 30 13:53:59.738364 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xfeff window] Jan 30 13:53:59.738407 kernel: pci_bus 0000:00: root bus resource [bus 00-7f] Jan 30 13:53:59.738468 kernel: pci 0000:00:00.0: [8086:7190] type 00 class 0x060000 Jan 30 13:53:59.738540 kernel: pci 0000:00:01.0: [8086:7191] type 01 class 0x060400 Jan 30 13:53:59.738646 kernel: pci 0000:00:07.0: [8086:7110] type 00 class 0x060100 Jan 30 13:53:59.739008 kernel: pci 0000:00:07.1: [8086:7111] type 00 class 0x01018a Jan 30 13:53:59.739074 kernel: pci 0000:00:07.1: reg 0x20: [io 0x1060-0x106f] Jan 30 13:53:59.739127 kernel: pci 0000:00:07.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7] Jan 30 13:53:59.739179 kernel: pci 0000:00:07.1: legacy IDE quirk: reg 0x14: [io 0x03f6] Jan 30 13:53:59.739231 kernel: pci 0000:00:07.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177] Jan 30 13:53:59.739287 kernel: pci 0000:00:07.1: legacy IDE quirk: reg 0x1c: [io 0x0376] Jan 30 13:53:59.739346 kernel: pci 0000:00:07.3: [8086:7113] type 00 class 0x068000 Jan 30 13:53:59.739399 kernel: pci 0000:00:07.3: quirk: [io 0x1000-0x103f] claimed by PIIX4 ACPI Jan 30 13:53:59.739453 kernel: pci 0000:00:07.3: quirk: [io 0x1040-0x104f] claimed by PIIX4 SMB Jan 30 13:53:59.739509 kernel: pci 0000:00:07.7: [15ad:0740] type 00 class 0x088000 Jan 30 13:53:59.739562 
kernel: pci 0000:00:07.7: reg 0x10: [io 0x1080-0x10bf] Jan 30 13:53:59.739616 kernel: pci 0000:00:07.7: reg 0x14: [mem 0xfebfe000-0xfebfffff 64bit] Jan 30 13:53:59.739670 kernel: pci 0000:00:0f.0: [15ad:0405] type 00 class 0x030000 Jan 30 13:53:59.739915 kernel: pci 0000:00:0f.0: reg 0x10: [io 0x1070-0x107f] Jan 30 13:53:59.739971 kernel: pci 0000:00:0f.0: reg 0x14: [mem 0xe8000000-0xefffffff pref] Jan 30 13:53:59.740022 kernel: pci 0000:00:0f.0: reg 0x18: [mem 0xfe000000-0xfe7fffff] Jan 30 13:53:59.740071 kernel: pci 0000:00:0f.0: reg 0x30: [mem 0x00000000-0x00007fff pref] Jan 30 13:53:59.740121 kernel: pci 0000:00:0f.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Jan 30 13:53:59.740177 kernel: pci 0000:00:11.0: [15ad:0790] type 01 class 0x060401 Jan 30 13:53:59.740239 kernel: pci 0000:00:15.0: [15ad:07a0] type 01 class 0x060400 Jan 30 13:53:59.740294 kernel: pci 0000:00:15.0: PME# supported from D0 D3hot D3cold Jan 30 13:53:59.742913 kernel: pci 0000:00:15.1: [15ad:07a0] type 01 class 0x060400 Jan 30 13:53:59.743006 kernel: pci 0000:00:15.1: PME# supported from D0 D3hot D3cold Jan 30 13:53:59.743068 kernel: pci 0000:00:15.2: [15ad:07a0] type 01 class 0x060400 Jan 30 13:53:59.743122 kernel: pci 0000:00:15.2: PME# supported from D0 D3hot D3cold Jan 30 13:53:59.743186 kernel: pci 0000:00:15.3: [15ad:07a0] type 01 class 0x060400 Jan 30 13:53:59.743247 kernel: pci 0000:00:15.3: PME# supported from D0 D3hot D3cold Jan 30 13:53:59.743306 kernel: pci 0000:00:15.4: [15ad:07a0] type 01 class 0x060400 Jan 30 13:53:59.743358 kernel: pci 0000:00:15.4: PME# supported from D0 D3hot D3cold Jan 30 13:53:59.743413 kernel: pci 0000:00:15.5: [15ad:07a0] type 01 class 0x060400 Jan 30 13:53:59.743466 kernel: pci 0000:00:15.5: PME# supported from D0 D3hot D3cold Jan 30 13:53:59.743526 kernel: pci 0000:00:15.6: [15ad:07a0] type 01 class 0x060400 Jan 30 13:53:59.743577 kernel: pci 0000:00:15.6: PME# supported from D0 D3hot D3cold Jan 30 13:53:59.743632 kernel: pci 0000:00:15.7: [15ad:07a0] type 01 class 0x060400 Jan 30 13:53:59.743685 kernel: pci 0000:00:15.7: PME# supported from D0 D3hot D3cold Jan 30 13:53:59.745861 kernel: pci 0000:00:16.0: [15ad:07a0] type 01 class 0x060400 Jan 30 13:53:59.745950 kernel: pci 0000:00:16.0: PME# supported from D0 D3hot D3cold Jan 30 13:53:59.746047 kernel: pci 0000:00:16.1: [15ad:07a0] type 01 class 0x060400 Jan 30 13:53:59.746103 kernel: pci 0000:00:16.1: PME# supported from D0 D3hot D3cold Jan 30 13:53:59.746161 kernel: pci 0000:00:16.2: [15ad:07a0] type 01 class 0x060400 Jan 30 13:53:59.746214 kernel: pci 0000:00:16.2: PME# supported from D0 D3hot D3cold Jan 30 13:53:59.746272 kernel: pci 0000:00:16.3: [15ad:07a0] type 01 class 0x060400 Jan 30 13:53:59.746327 kernel: pci 0000:00:16.3: PME# supported from D0 D3hot D3cold Jan 30 13:53:59.746382 kernel: pci 0000:00:16.4: [15ad:07a0] type 01 class 0x060400 Jan 30 13:53:59.746434 kernel: pci 0000:00:16.4: PME# supported from D0 D3hot D3cold Jan 30 13:53:59.746490 kernel: pci 0000:00:16.5: [15ad:07a0] type 01 class 0x060400 Jan 30 13:53:59.746541 kernel: pci 0000:00:16.5: PME# supported from D0 D3hot D3cold Jan 30 13:53:59.746597 kernel: pci 0000:00:16.6: [15ad:07a0] type 01 class 0x060400 Jan 30 13:53:59.746648 kernel: pci 0000:00:16.6: PME# supported from D0 D3hot D3cold Jan 30 13:53:59.749724 kernel: pci 0000:00:16.7: [15ad:07a0] type 01 class 0x060400 Jan 30 13:53:59.749810 kernel: pci 0000:00:16.7: PME# supported from D0 D3hot D3cold Jan 30 13:53:59.749872 kernel: pci 0000:00:17.0: [15ad:07a0] type 01 class 
0x060400 Jan 30 13:53:59.749925 kernel: pci 0000:00:17.0: PME# supported from D0 D3hot D3cold Jan 30 13:53:59.750003 kernel: pci 0000:00:17.1: [15ad:07a0] type 01 class 0x060400 Jan 30 13:53:59.750060 kernel: pci 0000:00:17.1: PME# supported from D0 D3hot D3cold Jan 30 13:53:59.750118 kernel: pci 0000:00:17.2: [15ad:07a0] type 01 class 0x060400 Jan 30 13:53:59.750168 kernel: pci 0000:00:17.2: PME# supported from D0 D3hot D3cold Jan 30 13:53:59.750242 kernel: pci 0000:00:17.3: [15ad:07a0] type 01 class 0x060400 Jan 30 13:53:59.750310 kernel: pci 0000:00:17.3: PME# supported from D0 D3hot D3cold Jan 30 13:53:59.750364 kernel: pci 0000:00:17.4: [15ad:07a0] type 01 class 0x060400 Jan 30 13:53:59.750414 kernel: pci 0000:00:17.4: PME# supported from D0 D3hot D3cold Jan 30 13:53:59.750471 kernel: pci 0000:00:17.5: [15ad:07a0] type 01 class 0x060400 Jan 30 13:53:59.750521 kernel: pci 0000:00:17.5: PME# supported from D0 D3hot D3cold Jan 30 13:53:59.750575 kernel: pci 0000:00:17.6: [15ad:07a0] type 01 class 0x060400 Jan 30 13:53:59.750643 kernel: pci 0000:00:17.6: PME# supported from D0 D3hot D3cold Jan 30 13:53:59.750698 kernel: pci 0000:00:17.7: [15ad:07a0] type 01 class 0x060400 Jan 30 13:53:59.750793 kernel: pci 0000:00:17.7: PME# supported from D0 D3hot D3cold Jan 30 13:53:59.750854 kernel: pci 0000:00:18.0: [15ad:07a0] type 01 class 0x060400 Jan 30 13:53:59.750904 kernel: pci 0000:00:18.0: PME# supported from D0 D3hot D3cold Jan 30 13:53:59.750960 kernel: pci 0000:00:18.1: [15ad:07a0] type 01 class 0x060400 Jan 30 13:53:59.751010 kernel: pci 0000:00:18.1: PME# supported from D0 D3hot D3cold Jan 30 13:53:59.751100 kernel: pci 0000:00:18.2: [15ad:07a0] type 01 class 0x060400 Jan 30 13:53:59.751152 kernel: pci 0000:00:18.2: PME# supported from D0 D3hot D3cold Jan 30 13:53:59.751207 kernel: pci 0000:00:18.3: [15ad:07a0] type 01 class 0x060400 Jan 30 13:53:59.751262 kernel: pci 0000:00:18.3: PME# supported from D0 D3hot D3cold Jan 30 13:53:59.751316 kernel: pci 0000:00:18.4: [15ad:07a0] type 01 class 0x060400 Jan 30 13:53:59.751367 kernel: pci 0000:00:18.4: PME# supported from D0 D3hot D3cold Jan 30 13:53:59.751422 kernel: pci 0000:00:18.5: [15ad:07a0] type 01 class 0x060400 Jan 30 13:53:59.751473 kernel: pci 0000:00:18.5: PME# supported from D0 D3hot D3cold Jan 30 13:53:59.751527 kernel: pci 0000:00:18.6: [15ad:07a0] type 01 class 0x060400 Jan 30 13:53:59.751582 kernel: pci 0000:00:18.6: PME# supported from D0 D3hot D3cold Jan 30 13:53:59.751637 kernel: pci 0000:00:18.7: [15ad:07a0] type 01 class 0x060400 Jan 30 13:53:59.751688 kernel: pci 0000:00:18.7: PME# supported from D0 D3hot D3cold Jan 30 13:53:59.753129 kernel: pci_bus 0000:01: extended config space not accessible Jan 30 13:53:59.753191 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] Jan 30 13:53:59.753253 kernel: pci_bus 0000:02: extended config space not accessible Jan 30 13:53:59.753265 kernel: acpiphp: Slot [32] registered Jan 30 13:53:59.753272 kernel: acpiphp: Slot [33] registered Jan 30 13:53:59.753278 kernel: acpiphp: Slot [34] registered Jan 30 13:53:59.753284 kernel: acpiphp: Slot [35] registered Jan 30 13:53:59.753290 kernel: acpiphp: Slot [36] registered Jan 30 13:53:59.753295 kernel: acpiphp: Slot [37] registered Jan 30 13:53:59.753301 kernel: acpiphp: Slot [38] registered Jan 30 13:53:59.753307 kernel: acpiphp: Slot [39] registered Jan 30 13:53:59.753313 kernel: acpiphp: Slot [40] registered Jan 30 13:53:59.753321 kernel: acpiphp: Slot [41] registered Jan 30 13:53:59.753326 kernel: acpiphp: Slot [42] registered Jan 30 
13:53:59.753332 kernel: acpiphp: Slot [43] registered Jan 30 13:53:59.753338 kernel: acpiphp: Slot [44] registered Jan 30 13:53:59.753344 kernel: acpiphp: Slot [45] registered Jan 30 13:53:59.753350 kernel: acpiphp: Slot [46] registered Jan 30 13:53:59.753356 kernel: acpiphp: Slot [47] registered Jan 30 13:53:59.753362 kernel: acpiphp: Slot [48] registered Jan 30 13:53:59.753368 kernel: acpiphp: Slot [49] registered Jan 30 13:53:59.753374 kernel: acpiphp: Slot [50] registered Jan 30 13:53:59.753381 kernel: acpiphp: Slot [51] registered Jan 30 13:53:59.753387 kernel: acpiphp: Slot [52] registered Jan 30 13:53:59.753394 kernel: acpiphp: Slot [53] registered Jan 30 13:53:59.753399 kernel: acpiphp: Slot [54] registered Jan 30 13:53:59.753405 kernel: acpiphp: Slot [55] registered Jan 30 13:53:59.753411 kernel: acpiphp: Slot [56] registered Jan 30 13:53:59.753417 kernel: acpiphp: Slot [57] registered Jan 30 13:53:59.753423 kernel: acpiphp: Slot [58] registered Jan 30 13:53:59.753429 kernel: acpiphp: Slot [59] registered Jan 30 13:53:59.753436 kernel: acpiphp: Slot [60] registered Jan 30 13:53:59.753441 kernel: acpiphp: Slot [61] registered Jan 30 13:53:59.753447 kernel: acpiphp: Slot [62] registered Jan 30 13:53:59.753453 kernel: acpiphp: Slot [63] registered Jan 30 13:53:59.753506 kernel: pci 0000:00:11.0: PCI bridge to [bus 02] (subtractive decode) Jan 30 13:53:59.753557 kernel: pci 0000:00:11.0: bridge window [io 0x2000-0x3fff] Jan 30 13:53:59.753608 kernel: pci 0000:00:11.0: bridge window [mem 0xfd600000-0xfdffffff] Jan 30 13:53:59.753657 kernel: pci 0000:00:11.0: bridge window [mem 0xe7b00000-0xe7ffffff 64bit pref] Jan 30 13:53:59.754751 kernel: pci 0000:00:11.0: bridge window [mem 0x000a0000-0x000bffff window] (subtractive decode) Jan 30 13:53:59.754817 kernel: pci 0000:00:11.0: bridge window [mem 0x000cc000-0x000dbfff window] (subtractive decode) Jan 30 13:53:59.754868 kernel: pci 0000:00:11.0: bridge window [mem 0xc0000000-0xfebfffff window] (subtractive decode) Jan 30 13:53:59.754919 kernel: pci 0000:00:11.0: bridge window [io 0x0000-0x0cf7 window] (subtractive decode) Jan 30 13:53:59.754992 kernel: pci 0000:00:11.0: bridge window [io 0x0d00-0xfeff window] (subtractive decode) Jan 30 13:53:59.755078 kernel: pci 0000:03:00.0: [15ad:07c0] type 00 class 0x010700 Jan 30 13:53:59.755138 kernel: pci 0000:03:00.0: reg 0x10: [io 0x4000-0x4007] Jan 30 13:53:59.755190 kernel: pci 0000:03:00.0: reg 0x14: [mem 0xfd5f8000-0xfd5fffff 64bit] Jan 30 13:53:59.755246 kernel: pci 0000:03:00.0: reg 0x30: [mem 0x00000000-0x0000ffff pref] Jan 30 13:53:59.755298 kernel: pci 0000:03:00.0: PME# supported from D0 D3hot D3cold Jan 30 13:53:59.755351 kernel: pci 0000:03:00.0: disabling ASPM on pre-1.1 PCIe device. 
You can enable it with 'pcie_aspm=force' Jan 30 13:53:59.755406 kernel: pci 0000:00:15.0: PCI bridge to [bus 03] Jan 30 13:53:59.755457 kernel: pci 0000:00:15.0: bridge window [io 0x4000-0x4fff] Jan 30 13:53:59.755507 kernel: pci 0000:00:15.0: bridge window [mem 0xfd500000-0xfd5fffff] Jan 30 13:53:59.755561 kernel: pci 0000:00:15.1: PCI bridge to [bus 04] Jan 30 13:53:59.755615 kernel: pci 0000:00:15.1: bridge window [io 0x8000-0x8fff] Jan 30 13:53:59.755665 kernel: pci 0000:00:15.1: bridge window [mem 0xfd100000-0xfd1fffff] Jan 30 13:53:59.756730 kernel: pci 0000:00:15.1: bridge window [mem 0xe7800000-0xe78fffff 64bit pref] Jan 30 13:53:59.756790 kernel: pci 0000:00:15.2: PCI bridge to [bus 05] Jan 30 13:53:59.756841 kernel: pci 0000:00:15.2: bridge window [io 0xc000-0xcfff] Jan 30 13:53:59.756892 kernel: pci 0000:00:15.2: bridge window [mem 0xfcd00000-0xfcdfffff] Jan 30 13:53:59.756943 kernel: pci 0000:00:15.2: bridge window [mem 0xe7400000-0xe74fffff 64bit pref] Jan 30 13:53:59.756996 kernel: pci 0000:00:15.3: PCI bridge to [bus 06] Jan 30 13:53:59.757052 kernel: pci 0000:00:15.3: bridge window [mem 0xfc900000-0xfc9fffff] Jan 30 13:53:59.757102 kernel: pci 0000:00:15.3: bridge window [mem 0xe7000000-0xe70fffff 64bit pref] Jan 30 13:53:59.757157 kernel: pci 0000:00:15.4: PCI bridge to [bus 07] Jan 30 13:53:59.757212 kernel: pci 0000:00:15.4: bridge window [mem 0xfc500000-0xfc5fffff] Jan 30 13:53:59.757263 kernel: pci 0000:00:15.4: bridge window [mem 0xe6c00000-0xe6cfffff 64bit pref] Jan 30 13:53:59.757319 kernel: pci 0000:00:15.5: PCI bridge to [bus 08] Jan 30 13:53:59.757370 kernel: pci 0000:00:15.5: bridge window [mem 0xfc100000-0xfc1fffff] Jan 30 13:53:59.757420 kernel: pci 0000:00:15.5: bridge window [mem 0xe6800000-0xe68fffff 64bit pref] Jan 30 13:53:59.757472 kernel: pci 0000:00:15.6: PCI bridge to [bus 09] Jan 30 13:53:59.757522 kernel: pci 0000:00:15.6: bridge window [mem 0xfbd00000-0xfbdfffff] Jan 30 13:53:59.757572 kernel: pci 0000:00:15.6: bridge window [mem 0xe6400000-0xe64fffff 64bit pref] Jan 30 13:53:59.757625 kernel: pci 0000:00:15.7: PCI bridge to [bus 0a] Jan 30 13:53:59.757679 kernel: pci 0000:00:15.7: bridge window [mem 0xfb900000-0xfb9fffff] Jan 30 13:53:59.758754 kernel: pci 0000:00:15.7: bridge window [mem 0xe6000000-0xe60fffff 64bit pref] Jan 30 13:53:59.758824 kernel: pci 0000:0b:00.0: [15ad:07b0] type 00 class 0x020000 Jan 30 13:53:59.758880 kernel: pci 0000:0b:00.0: reg 0x10: [mem 0xfd4fc000-0xfd4fcfff] Jan 30 13:53:59.758932 kernel: pci 0000:0b:00.0: reg 0x14: [mem 0xfd4fd000-0xfd4fdfff] Jan 30 13:53:59.758984 kernel: pci 0000:0b:00.0: reg 0x18: [mem 0xfd4fe000-0xfd4fffff] Jan 30 13:53:59.759035 kernel: pci 0000:0b:00.0: reg 0x1c: [io 0x5000-0x500f] Jan 30 13:53:59.759086 kernel: pci 0000:0b:00.0: reg 0x30: [mem 0x00000000-0x0000ffff pref] Jan 30 13:53:59.759142 kernel: pci 0000:0b:00.0: supports D1 D2 Jan 30 13:53:59.759194 kernel: pci 0000:0b:00.0: PME# supported from D0 D1 D2 D3hot D3cold Jan 30 13:53:59.759245 kernel: pci 0000:0b:00.0: disabling ASPM on pre-1.1 PCIe device. 
You can enable it with 'pcie_aspm=force' Jan 30 13:53:59.759298 kernel: pci 0000:00:16.0: PCI bridge to [bus 0b] Jan 30 13:53:59.759348 kernel: pci 0000:00:16.0: bridge window [io 0x5000-0x5fff] Jan 30 13:53:59.759399 kernel: pci 0000:00:16.0: bridge window [mem 0xfd400000-0xfd4fffff] Jan 30 13:53:59.759452 kernel: pci 0000:00:16.1: PCI bridge to [bus 0c] Jan 30 13:53:59.759505 kernel: pci 0000:00:16.1: bridge window [io 0x9000-0x9fff] Jan 30 13:53:59.759555 kernel: pci 0000:00:16.1: bridge window [mem 0xfd000000-0xfd0fffff] Jan 30 13:53:59.759606 kernel: pci 0000:00:16.1: bridge window [mem 0xe7700000-0xe77fffff 64bit pref] Jan 30 13:53:59.759659 kernel: pci 0000:00:16.2: PCI bridge to [bus 0d] Jan 30 13:53:59.760731 kernel: pci 0000:00:16.2: bridge window [io 0xd000-0xdfff] Jan 30 13:53:59.760794 kernel: pci 0000:00:16.2: bridge window [mem 0xfcc00000-0xfccfffff] Jan 30 13:53:59.760847 kernel: pci 0000:00:16.2: bridge window [mem 0xe7300000-0xe73fffff 64bit pref] Jan 30 13:53:59.760901 kernel: pci 0000:00:16.3: PCI bridge to [bus 0e] Jan 30 13:53:59.760957 kernel: pci 0000:00:16.3: bridge window [mem 0xfc800000-0xfc8fffff] Jan 30 13:53:59.761008 kernel: pci 0000:00:16.3: bridge window [mem 0xe6f00000-0xe6ffffff 64bit pref] Jan 30 13:53:59.761062 kernel: pci 0000:00:16.4: PCI bridge to [bus 0f] Jan 30 13:53:59.761113 kernel: pci 0000:00:16.4: bridge window [mem 0xfc400000-0xfc4fffff] Jan 30 13:53:59.761163 kernel: pci 0000:00:16.4: bridge window [mem 0xe6b00000-0xe6bfffff 64bit pref] Jan 30 13:53:59.761217 kernel: pci 0000:00:16.5: PCI bridge to [bus 10] Jan 30 13:53:59.761268 kernel: pci 0000:00:16.5: bridge window [mem 0xfc000000-0xfc0fffff] Jan 30 13:53:59.761319 kernel: pci 0000:00:16.5: bridge window [mem 0xe6700000-0xe67fffff 64bit pref] Jan 30 13:53:59.761375 kernel: pci 0000:00:16.6: PCI bridge to [bus 11] Jan 30 13:53:59.761426 kernel: pci 0000:00:16.6: bridge window [mem 0xfbc00000-0xfbcfffff] Jan 30 13:53:59.761476 kernel: pci 0000:00:16.6: bridge window [mem 0xe6300000-0xe63fffff 64bit pref] Jan 30 13:53:59.761528 kernel: pci 0000:00:16.7: PCI bridge to [bus 12] Jan 30 13:53:59.761578 kernel: pci 0000:00:16.7: bridge window [mem 0xfb800000-0xfb8fffff] Jan 30 13:53:59.761629 kernel: pci 0000:00:16.7: bridge window [mem 0xe5f00000-0xe5ffffff 64bit pref] Jan 30 13:53:59.761681 kernel: pci 0000:00:17.0: PCI bridge to [bus 13] Jan 30 13:53:59.762796 kernel: pci 0000:00:17.0: bridge window [io 0x6000-0x6fff] Jan 30 13:53:59.762868 kernel: pci 0000:00:17.0: bridge window [mem 0xfd300000-0xfd3fffff] Jan 30 13:53:59.762925 kernel: pci 0000:00:17.0: bridge window [mem 0xe7a00000-0xe7afffff 64bit pref] Jan 30 13:53:59.762979 kernel: pci 0000:00:17.1: PCI bridge to [bus 14] Jan 30 13:53:59.763030 kernel: pci 0000:00:17.1: bridge window [io 0xa000-0xafff] Jan 30 13:53:59.763081 kernel: pci 0000:00:17.1: bridge window [mem 0xfcf00000-0xfcffffff] Jan 30 13:53:59.763132 kernel: pci 0000:00:17.1: bridge window [mem 0xe7600000-0xe76fffff 64bit pref] Jan 30 13:53:59.763186 kernel: pci 0000:00:17.2: PCI bridge to [bus 15] Jan 30 13:53:59.763242 kernel: pci 0000:00:17.2: bridge window [io 0xe000-0xefff] Jan 30 13:53:59.763293 kernel: pci 0000:00:17.2: bridge window [mem 0xfcb00000-0xfcbfffff] Jan 30 13:53:59.763351 kernel: pci 0000:00:17.2: bridge window [mem 0xe7200000-0xe72fffff 64bit pref] Jan 30 13:53:59.763405 kernel: pci 0000:00:17.3: PCI bridge to [bus 16] Jan 30 13:53:59.763456 kernel: pci 0000:00:17.3: bridge window [mem 0xfc700000-0xfc7fffff] Jan 30 13:53:59.763507 kernel: pci 
0000:00:17.3: bridge window [mem 0xe6e00000-0xe6efffff 64bit pref] Jan 30 13:53:59.763560 kernel: pci 0000:00:17.4: PCI bridge to [bus 17] Jan 30 13:53:59.763611 kernel: pci 0000:00:17.4: bridge window [mem 0xfc300000-0xfc3fffff] Jan 30 13:53:59.763667 kernel: pci 0000:00:17.4: bridge window [mem 0xe6a00000-0xe6afffff 64bit pref] Jan 30 13:53:59.763816 kernel: pci 0000:00:17.5: PCI bridge to [bus 18] Jan 30 13:53:59.763873 kernel: pci 0000:00:17.5: bridge window [mem 0xfbf00000-0xfbffffff] Jan 30 13:53:59.763924 kernel: pci 0000:00:17.5: bridge window [mem 0xe6600000-0xe66fffff 64bit pref] Jan 30 13:53:59.763979 kernel: pci 0000:00:17.6: PCI bridge to [bus 19] Jan 30 13:53:59.764030 kernel: pci 0000:00:17.6: bridge window [mem 0xfbb00000-0xfbbfffff] Jan 30 13:53:59.764081 kernel: pci 0000:00:17.6: bridge window [mem 0xe6200000-0xe62fffff 64bit pref] Jan 30 13:53:59.764134 kernel: pci 0000:00:17.7: PCI bridge to [bus 1a] Jan 30 13:53:59.764190 kernel: pci 0000:00:17.7: bridge window [mem 0xfb700000-0xfb7fffff] Jan 30 13:53:59.764240 kernel: pci 0000:00:17.7: bridge window [mem 0xe5e00000-0xe5efffff 64bit pref] Jan 30 13:53:59.764294 kernel: pci 0000:00:18.0: PCI bridge to [bus 1b] Jan 30 13:53:59.764346 kernel: pci 0000:00:18.0: bridge window [io 0x7000-0x7fff] Jan 30 13:53:59.764397 kernel: pci 0000:00:18.0: bridge window [mem 0xfd200000-0xfd2fffff] Jan 30 13:53:59.764448 kernel: pci 0000:00:18.0: bridge window [mem 0xe7900000-0xe79fffff 64bit pref] Jan 30 13:53:59.764502 kernel: pci 0000:00:18.1: PCI bridge to [bus 1c] Jan 30 13:53:59.764554 kernel: pci 0000:00:18.1: bridge window [io 0xb000-0xbfff] Jan 30 13:53:59.764608 kernel: pci 0000:00:18.1: bridge window [mem 0xfce00000-0xfcefffff] Jan 30 13:53:59.764659 kernel: pci 0000:00:18.1: bridge window [mem 0xe7500000-0xe75fffff 64bit pref] Jan 30 13:53:59.764767 kernel: pci 0000:00:18.2: PCI bridge to [bus 1d] Jan 30 13:53:59.764823 kernel: pci 0000:00:18.2: bridge window [mem 0xfca00000-0xfcafffff] Jan 30 13:53:59.764873 kernel: pci 0000:00:18.2: bridge window [mem 0xe7100000-0xe71fffff 64bit pref] Jan 30 13:53:59.764926 kernel: pci 0000:00:18.3: PCI bridge to [bus 1e] Jan 30 13:53:59.764977 kernel: pci 0000:00:18.3: bridge window [mem 0xfc600000-0xfc6fffff] Jan 30 13:53:59.765027 kernel: pci 0000:00:18.3: bridge window [mem 0xe6d00000-0xe6dfffff 64bit pref] Jan 30 13:53:59.765084 kernel: pci 0000:00:18.4: PCI bridge to [bus 1f] Jan 30 13:53:59.765135 kernel: pci 0000:00:18.4: bridge window [mem 0xfc200000-0xfc2fffff] Jan 30 13:53:59.765185 kernel: pci 0000:00:18.4: bridge window [mem 0xe6900000-0xe69fffff 64bit pref] Jan 30 13:53:59.765244 kernel: pci 0000:00:18.5: PCI bridge to [bus 20] Jan 30 13:53:59.765295 kernel: pci 0000:00:18.5: bridge window [mem 0xfbe00000-0xfbefffff] Jan 30 13:53:59.765346 kernel: pci 0000:00:18.5: bridge window [mem 0xe6500000-0xe65fffff 64bit pref] Jan 30 13:53:59.765399 kernel: pci 0000:00:18.6: PCI bridge to [bus 21] Jan 30 13:53:59.765451 kernel: pci 0000:00:18.6: bridge window [mem 0xfba00000-0xfbafffff] Jan 30 13:53:59.765504 kernel: pci 0000:00:18.6: bridge window [mem 0xe6100000-0xe61fffff 64bit pref] Jan 30 13:53:59.765557 kernel: pci 0000:00:18.7: PCI bridge to [bus 22] Jan 30 13:53:59.765608 kernel: pci 0000:00:18.7: bridge window [mem 0xfb600000-0xfb6fffff] Jan 30 13:53:59.765658 kernel: pci 0000:00:18.7: bridge window [mem 0xe5d00000-0xe5dfffff 64bit pref] Jan 30 13:53:59.765667 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 9 Jan 30 13:53:59.765673 kernel: ACPI: PCI: Interrupt link 
LNKB configured for IRQ 0 Jan 30 13:53:59.765680 kernel: ACPI: PCI: Interrupt link LNKB disabled Jan 30 13:53:59.765686 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Jan 30 13:53:59.765694 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 10 Jan 30 13:53:59.765700 kernel: iommu: Default domain type: Translated Jan 30 13:53:59.765713 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Jan 30 13:53:59.765721 kernel: PCI: Using ACPI for IRQ routing Jan 30 13:53:59.765728 kernel: PCI: pci_cache_line_size set to 64 bytes Jan 30 13:53:59.765734 kernel: e820: reserve RAM buffer [mem 0x0009ec00-0x0009ffff] Jan 30 13:53:59.765739 kernel: e820: reserve RAM buffer [mem 0x7fee0000-0x7fffffff] Jan 30 13:53:59.765796 kernel: pci 0000:00:0f.0: vgaarb: setting as boot VGA device Jan 30 13:53:59.765850 kernel: pci 0000:00:0f.0: vgaarb: bridge control possible Jan 30 13:53:59.765905 kernel: pci 0000:00:0f.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Jan 30 13:53:59.765914 kernel: vgaarb: loaded Jan 30 13:53:59.765921 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 Jan 30 13:53:59.765927 kernel: hpet0: 16 comparators, 64-bit 14.318180 MHz counter Jan 30 13:53:59.765933 kernel: clocksource: Switched to clocksource tsc-early Jan 30 13:53:59.765939 kernel: VFS: Disk quotas dquot_6.6.0 Jan 30 13:53:59.765946 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jan 30 13:53:59.765952 kernel: pnp: PnP ACPI init Jan 30 13:53:59.766007 kernel: system 00:00: [io 0x1000-0x103f] has been reserved Jan 30 13:53:59.766076 kernel: system 00:00: [io 0x1040-0x104f] has been reserved Jan 30 13:53:59.766124 kernel: system 00:00: [io 0x0cf0-0x0cf1] has been reserved Jan 30 13:53:59.766175 kernel: system 00:04: [mem 0xfed00000-0xfed003ff] has been reserved Jan 30 13:53:59.766226 kernel: pnp 00:06: [dma 2] Jan 30 13:53:59.766276 kernel: system 00:07: [io 0xfce0-0xfcff] has been reserved Jan 30 13:53:59.766323 kernel: system 00:07: [mem 0xf0000000-0xf7ffffff] has been reserved Jan 30 13:53:59.766370 kernel: system 00:07: [mem 0xfe800000-0xfe9fffff] has been reserved Jan 30 13:53:59.766379 kernel: pnp: PnP ACPI: found 8 devices Jan 30 13:53:59.766385 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Jan 30 13:53:59.766391 kernel: NET: Registered PF_INET protocol family Jan 30 13:53:59.766397 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear) Jan 30 13:53:59.766403 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear) Jan 30 13:53:59.766409 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jan 30 13:53:59.766416 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear) Jan 30 13:53:59.766422 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear) Jan 30 13:53:59.766430 kernel: TCP: Hash tables configured (established 16384 bind 16384) Jan 30 13:53:59.766435 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear) Jan 30 13:53:59.766441 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear) Jan 30 13:53:59.766447 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jan 30 13:53:59.766454 kernel: NET: Registered PF_XDP protocol family Jan 30 13:53:59.766508 kernel: pci 0000:00:15.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 03] add_size 200000 add_align 100000 Jan 30 13:53:59.766562 kernel: pci 
0000:00:15.3: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000 Jan 30 13:53:59.766619 kernel: pci 0000:00:15.4: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000 Jan 30 13:53:59.766673 kernel: pci 0000:00:15.5: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000 Jan 30 13:53:59.766838 kernel: pci 0000:00:15.6: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000 Jan 30 13:53:59.766893 kernel: pci 0000:00:15.7: bridge window [io 0x1000-0x0fff] to [bus 0a] add_size 1000 Jan 30 13:53:59.766946 kernel: pci 0000:00:16.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 0b] add_size 200000 add_align 100000 Jan 30 13:53:59.766999 kernel: pci 0000:00:16.3: bridge window [io 0x1000-0x0fff] to [bus 0e] add_size 1000 Jan 30 13:53:59.767054 kernel: pci 0000:00:16.4: bridge window [io 0x1000-0x0fff] to [bus 0f] add_size 1000 Jan 30 13:53:59.767106 kernel: pci 0000:00:16.5: bridge window [io 0x1000-0x0fff] to [bus 10] add_size 1000 Jan 30 13:53:59.767159 kernel: pci 0000:00:16.6: bridge window [io 0x1000-0x0fff] to [bus 11] add_size 1000 Jan 30 13:53:59.767210 kernel: pci 0000:00:16.7: bridge window [io 0x1000-0x0fff] to [bus 12] add_size 1000 Jan 30 13:53:59.767262 kernel: pci 0000:00:17.3: bridge window [io 0x1000-0x0fff] to [bus 16] add_size 1000 Jan 30 13:53:59.767315 kernel: pci 0000:00:17.4: bridge window [io 0x1000-0x0fff] to [bus 17] add_size 1000 Jan 30 13:53:59.767370 kernel: pci 0000:00:17.5: bridge window [io 0x1000-0x0fff] to [bus 18] add_size 1000 Jan 30 13:53:59.767422 kernel: pci 0000:00:17.6: bridge window [io 0x1000-0x0fff] to [bus 19] add_size 1000 Jan 30 13:53:59.767473 kernel: pci 0000:00:17.7: bridge window [io 0x1000-0x0fff] to [bus 1a] add_size 1000 Jan 30 13:53:59.767540 kernel: pci 0000:00:18.2: bridge window [io 0x1000-0x0fff] to [bus 1d] add_size 1000 Jan 30 13:53:59.767593 kernel: pci 0000:00:18.3: bridge window [io 0x1000-0x0fff] to [bus 1e] add_size 1000 Jan 30 13:53:59.767645 kernel: pci 0000:00:18.4: bridge window [io 0x1000-0x0fff] to [bus 1f] add_size 1000 Jan 30 13:53:59.767700 kernel: pci 0000:00:18.5: bridge window [io 0x1000-0x0fff] to [bus 20] add_size 1000 Jan 30 13:53:59.767768 kernel: pci 0000:00:18.6: bridge window [io 0x1000-0x0fff] to [bus 21] add_size 1000 Jan 30 13:53:59.767820 kernel: pci 0000:00:18.7: bridge window [io 0x1000-0x0fff] to [bus 22] add_size 1000 Jan 30 13:53:59.767871 kernel: pci 0000:00:15.0: BAR 15: assigned [mem 0xc0000000-0xc01fffff 64bit pref] Jan 30 13:53:59.767922 kernel: pci 0000:00:16.0: BAR 15: assigned [mem 0xc0200000-0xc03fffff 64bit pref] Jan 30 13:53:59.767973 kernel: pci 0000:00:15.3: BAR 13: no space for [io size 0x1000] Jan 30 13:53:59.768026 kernel: pci 0000:00:15.3: BAR 13: failed to assign [io size 0x1000] Jan 30 13:53:59.768077 kernel: pci 0000:00:15.4: BAR 13: no space for [io size 0x1000] Jan 30 13:53:59.768129 kernel: pci 0000:00:15.4: BAR 13: failed to assign [io size 0x1000] Jan 30 13:53:59.768180 kernel: pci 0000:00:15.5: BAR 13: no space for [io size 0x1000] Jan 30 13:53:59.768241 kernel: pci 0000:00:15.5: BAR 13: failed to assign [io size 0x1000] Jan 30 13:53:59.768293 kernel: pci 0000:00:15.6: BAR 13: no space for [io size 0x1000] Jan 30 13:53:59.768343 kernel: pci 0000:00:15.6: BAR 13: failed to assign [io size 0x1000] Jan 30 13:53:59.768394 kernel: pci 0000:00:15.7: BAR 13: no space for [io size 0x1000] Jan 30 13:53:59.768448 kernel: pci 0000:00:15.7: BAR 13: failed to assign [io size 0x1000] Jan 30 13:53:59.768499 kernel: pci 0000:00:16.3: BAR 13: no space for [io 
size 0x1000] Jan 30 13:53:59.768549 kernel: pci 0000:00:16.3: BAR 13: failed to assign [io size 0x1000] Jan 30 13:53:59.768601 kernel: pci 0000:00:16.4: BAR 13: no space for [io size 0x1000] Jan 30 13:53:59.768651 kernel: pci 0000:00:16.4: BAR 13: failed to assign [io size 0x1000] Jan 30 13:53:59.768702 kernel: pci 0000:00:16.5: BAR 13: no space for [io size 0x1000] Jan 30 13:53:59.768792 kernel: pci 0000:00:16.5: BAR 13: failed to assign [io size 0x1000] Jan 30 13:53:59.768842 kernel: pci 0000:00:16.6: BAR 13: no space for [io size 0x1000] Jan 30 13:53:59.768896 kernel: pci 0000:00:16.6: BAR 13: failed to assign [io size 0x1000] Jan 30 13:53:59.768947 kernel: pci 0000:00:16.7: BAR 13: no space for [io size 0x1000] Jan 30 13:53:59.768996 kernel: pci 0000:00:16.7: BAR 13: failed to assign [io size 0x1000] Jan 30 13:53:59.769047 kernel: pci 0000:00:17.3: BAR 13: no space for [io size 0x1000] Jan 30 13:53:59.769097 kernel: pci 0000:00:17.3: BAR 13: failed to assign [io size 0x1000] Jan 30 13:53:59.769148 kernel: pci 0000:00:17.4: BAR 13: no space for [io size 0x1000] Jan 30 13:53:59.769198 kernel: pci 0000:00:17.4: BAR 13: failed to assign [io size 0x1000] Jan 30 13:53:59.769250 kernel: pci 0000:00:17.5: BAR 13: no space for [io size 0x1000] Jan 30 13:53:59.769300 kernel: pci 0000:00:17.5: BAR 13: failed to assign [io size 0x1000] Jan 30 13:53:59.769355 kernel: pci 0000:00:17.6: BAR 13: no space for [io size 0x1000] Jan 30 13:53:59.769407 kernel: pci 0000:00:17.6: BAR 13: failed to assign [io size 0x1000] Jan 30 13:53:59.769459 kernel: pci 0000:00:17.7: BAR 13: no space for [io size 0x1000] Jan 30 13:53:59.769509 kernel: pci 0000:00:17.7: BAR 13: failed to assign [io size 0x1000] Jan 30 13:53:59.769562 kernel: pci 0000:00:18.2: BAR 13: no space for [io size 0x1000] Jan 30 13:53:59.769613 kernel: pci 0000:00:18.2: BAR 13: failed to assign [io size 0x1000] Jan 30 13:53:59.769665 kernel: pci 0000:00:18.3: BAR 13: no space for [io size 0x1000] Jan 30 13:53:59.769758 kernel: pci 0000:00:18.3: BAR 13: failed to assign [io size 0x1000] Jan 30 13:53:59.769816 kernel: pci 0000:00:18.4: BAR 13: no space for [io size 0x1000] Jan 30 13:53:59.769868 kernel: pci 0000:00:18.4: BAR 13: failed to assign [io size 0x1000] Jan 30 13:53:59.769919 kernel: pci 0000:00:18.5: BAR 13: no space for [io size 0x1000] Jan 30 13:53:59.769970 kernel: pci 0000:00:18.5: BAR 13: failed to assign [io size 0x1000] Jan 30 13:53:59.770021 kernel: pci 0000:00:18.6: BAR 13: no space for [io size 0x1000] Jan 30 13:53:59.770071 kernel: pci 0000:00:18.6: BAR 13: failed to assign [io size 0x1000] Jan 30 13:53:59.770122 kernel: pci 0000:00:18.7: BAR 13: no space for [io size 0x1000] Jan 30 13:53:59.770173 kernel: pci 0000:00:18.7: BAR 13: failed to assign [io size 0x1000] Jan 30 13:53:59.770231 kernel: pci 0000:00:18.7: BAR 13: no space for [io size 0x1000] Jan 30 13:53:59.770282 kernel: pci 0000:00:18.7: BAR 13: failed to assign [io size 0x1000] Jan 30 13:53:59.770333 kernel: pci 0000:00:18.6: BAR 13: no space for [io size 0x1000] Jan 30 13:53:59.770383 kernel: pci 0000:00:18.6: BAR 13: failed to assign [io size 0x1000] Jan 30 13:53:59.770434 kernel: pci 0000:00:18.5: BAR 13: no space for [io size 0x1000] Jan 30 13:53:59.770485 kernel: pci 0000:00:18.5: BAR 13: failed to assign [io size 0x1000] Jan 30 13:53:59.770535 kernel: pci 0000:00:18.4: BAR 13: no space for [io size 0x1000] Jan 30 13:53:59.770585 kernel: pci 0000:00:18.4: BAR 13: failed to assign [io size 0x1000] Jan 30 13:53:59.770636 kernel: pci 0000:00:18.3: BAR 13: no space 
for [io size 0x1000] Jan 30 13:53:59.770696 kernel: pci 0000:00:18.3: BAR 13: failed to assign [io size 0x1000] Jan 30 13:53:59.770782 kernel: pci 0000:00:18.2: BAR 13: no space for [io size 0x1000] Jan 30 13:53:59.770833 kernel: pci 0000:00:18.2: BAR 13: failed to assign [io size 0x1000] Jan 30 13:53:59.770883 kernel: pci 0000:00:17.7: BAR 13: no space for [io size 0x1000] Jan 30 13:53:59.770934 kernel: pci 0000:00:17.7: BAR 13: failed to assign [io size 0x1000] Jan 30 13:53:59.770984 kernel: pci 0000:00:17.6: BAR 13: no space for [io size 0x1000] Jan 30 13:53:59.771033 kernel: pci 0000:00:17.6: BAR 13: failed to assign [io size 0x1000] Jan 30 13:53:59.771083 kernel: pci 0000:00:17.5: BAR 13: no space for [io size 0x1000] Jan 30 13:53:59.771133 kernel: pci 0000:00:17.5: BAR 13: failed to assign [io size 0x1000] Jan 30 13:53:59.771186 kernel: pci 0000:00:17.4: BAR 13: no space for [io size 0x1000] Jan 30 13:53:59.771241 kernel: pci 0000:00:17.4: BAR 13: failed to assign [io size 0x1000] Jan 30 13:53:59.771291 kernel: pci 0000:00:17.3: BAR 13: no space for [io size 0x1000] Jan 30 13:53:59.771341 kernel: pci 0000:00:17.3: BAR 13: failed to assign [io size 0x1000] Jan 30 13:53:59.771392 kernel: pci 0000:00:16.7: BAR 13: no space for [io size 0x1000] Jan 30 13:53:59.771442 kernel: pci 0000:00:16.7: BAR 13: failed to assign [io size 0x1000] Jan 30 13:53:59.771492 kernel: pci 0000:00:16.6: BAR 13: no space for [io size 0x1000] Jan 30 13:53:59.771542 kernel: pci 0000:00:16.6: BAR 13: failed to assign [io size 0x1000] Jan 30 13:53:59.771593 kernel: pci 0000:00:16.5: BAR 13: no space for [io size 0x1000] Jan 30 13:53:59.771644 kernel: pci 0000:00:16.5: BAR 13: failed to assign [io size 0x1000] Jan 30 13:53:59.771698 kernel: pci 0000:00:16.4: BAR 13: no space for [io size 0x1000] Jan 30 13:53:59.771770 kernel: pci 0000:00:16.4: BAR 13: failed to assign [io size 0x1000] Jan 30 13:53:59.771822 kernel: pci 0000:00:16.3: BAR 13: no space for [io size 0x1000] Jan 30 13:53:59.771873 kernel: pci 0000:00:16.3: BAR 13: failed to assign [io size 0x1000] Jan 30 13:53:59.771923 kernel: pci 0000:00:15.7: BAR 13: no space for [io size 0x1000] Jan 30 13:53:59.771973 kernel: pci 0000:00:15.7: BAR 13: failed to assign [io size 0x1000] Jan 30 13:53:59.772023 kernel: pci 0000:00:15.6: BAR 13: no space for [io size 0x1000] Jan 30 13:53:59.772073 kernel: pci 0000:00:15.6: BAR 13: failed to assign [io size 0x1000] Jan 30 13:53:59.772125 kernel: pci 0000:00:15.5: BAR 13: no space for [io size 0x1000] Jan 30 13:53:59.772178 kernel: pci 0000:00:15.5: BAR 13: failed to assign [io size 0x1000] Jan 30 13:53:59.772229 kernel: pci 0000:00:15.4: BAR 13: no space for [io size 0x1000] Jan 30 13:53:59.772280 kernel: pci 0000:00:15.4: BAR 13: failed to assign [io size 0x1000] Jan 30 13:53:59.772332 kernel: pci 0000:00:15.3: BAR 13: no space for [io size 0x1000] Jan 30 13:53:59.772383 kernel: pci 0000:00:15.3: BAR 13: failed to assign [io size 0x1000] Jan 30 13:53:59.772436 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] Jan 30 13:53:59.772488 kernel: pci 0000:00:11.0: PCI bridge to [bus 02] Jan 30 13:53:59.772539 kernel: pci 0000:00:11.0: bridge window [io 0x2000-0x3fff] Jan 30 13:53:59.772590 kernel: pci 0000:00:11.0: bridge window [mem 0xfd600000-0xfdffffff] Jan 30 13:53:59.772640 kernel: pci 0000:00:11.0: bridge window [mem 0xe7b00000-0xe7ffffff 64bit pref] Jan 30 13:53:59.772699 kernel: pci 0000:03:00.0: BAR 6: assigned [mem 0xfd500000-0xfd50ffff pref] Jan 30 13:53:59.772819 kernel: pci 0000:00:15.0: PCI bridge to [bus 03] Jan 
30 13:53:59.772871 kernel: pci 0000:00:15.0: bridge window [io 0x4000-0x4fff] Jan 30 13:53:59.772922 kernel: pci 0000:00:15.0: bridge window [mem 0xfd500000-0xfd5fffff] Jan 30 13:53:59.772972 kernel: pci 0000:00:15.0: bridge window [mem 0xc0000000-0xc01fffff 64bit pref] Jan 30 13:53:59.773025 kernel: pci 0000:00:15.1: PCI bridge to [bus 04] Jan 30 13:53:59.773075 kernel: pci 0000:00:15.1: bridge window [io 0x8000-0x8fff] Jan 30 13:53:59.773126 kernel: pci 0000:00:15.1: bridge window [mem 0xfd100000-0xfd1fffff] Jan 30 13:53:59.773179 kernel: pci 0000:00:15.1: bridge window [mem 0xe7800000-0xe78fffff 64bit pref] Jan 30 13:53:59.773232 kernel: pci 0000:00:15.2: PCI bridge to [bus 05] Jan 30 13:53:59.773282 kernel: pci 0000:00:15.2: bridge window [io 0xc000-0xcfff] Jan 30 13:53:59.773333 kernel: pci 0000:00:15.2: bridge window [mem 0xfcd00000-0xfcdfffff] Jan 30 13:53:59.773382 kernel: pci 0000:00:15.2: bridge window [mem 0xe7400000-0xe74fffff 64bit pref] Jan 30 13:53:59.773434 kernel: pci 0000:00:15.3: PCI bridge to [bus 06] Jan 30 13:53:59.773484 kernel: pci 0000:00:15.3: bridge window [mem 0xfc900000-0xfc9fffff] Jan 30 13:53:59.773534 kernel: pci 0000:00:15.3: bridge window [mem 0xe7000000-0xe70fffff 64bit pref] Jan 30 13:53:59.773584 kernel: pci 0000:00:15.4: PCI bridge to [bus 07] Jan 30 13:53:59.773637 kernel: pci 0000:00:15.4: bridge window [mem 0xfc500000-0xfc5fffff] Jan 30 13:53:59.773687 kernel: pci 0000:00:15.4: bridge window [mem 0xe6c00000-0xe6cfffff 64bit pref] Jan 30 13:53:59.773761 kernel: pci 0000:00:15.5: PCI bridge to [bus 08] Jan 30 13:53:59.773814 kernel: pci 0000:00:15.5: bridge window [mem 0xfc100000-0xfc1fffff] Jan 30 13:53:59.773864 kernel: pci 0000:00:15.5: bridge window [mem 0xe6800000-0xe68fffff 64bit pref] Jan 30 13:53:59.773915 kernel: pci 0000:00:15.6: PCI bridge to [bus 09] Jan 30 13:53:59.773969 kernel: pci 0000:00:15.6: bridge window [mem 0xfbd00000-0xfbdfffff] Jan 30 13:53:59.774020 kernel: pci 0000:00:15.6: bridge window [mem 0xe6400000-0xe64fffff 64bit pref] Jan 30 13:53:59.774072 kernel: pci 0000:00:15.7: PCI bridge to [bus 0a] Jan 30 13:53:59.774122 kernel: pci 0000:00:15.7: bridge window [mem 0xfb900000-0xfb9fffff] Jan 30 13:53:59.774173 kernel: pci 0000:00:15.7: bridge window [mem 0xe6000000-0xe60fffff 64bit pref] Jan 30 13:53:59.774232 kernel: pci 0000:0b:00.0: BAR 6: assigned [mem 0xfd400000-0xfd40ffff pref] Jan 30 13:53:59.774285 kernel: pci 0000:00:16.0: PCI bridge to [bus 0b] Jan 30 13:53:59.774335 kernel: pci 0000:00:16.0: bridge window [io 0x5000-0x5fff] Jan 30 13:53:59.774386 kernel: pci 0000:00:16.0: bridge window [mem 0xfd400000-0xfd4fffff] Jan 30 13:53:59.774439 kernel: pci 0000:00:16.0: bridge window [mem 0xc0200000-0xc03fffff 64bit pref] Jan 30 13:53:59.774491 kernel: pci 0000:00:16.1: PCI bridge to [bus 0c] Jan 30 13:53:59.774541 kernel: pci 0000:00:16.1: bridge window [io 0x9000-0x9fff] Jan 30 13:53:59.774591 kernel: pci 0000:00:16.1: bridge window [mem 0xfd000000-0xfd0fffff] Jan 30 13:53:59.774642 kernel: pci 0000:00:16.1: bridge window [mem 0xe7700000-0xe77fffff 64bit pref] Jan 30 13:53:59.774693 kernel: pci 0000:00:16.2: PCI bridge to [bus 0d] Jan 30 13:53:59.774865 kernel: pci 0000:00:16.2: bridge window [io 0xd000-0xdfff] Jan 30 13:53:59.774916 kernel: pci 0000:00:16.2: bridge window [mem 0xfcc00000-0xfccfffff] Jan 30 13:53:59.774966 kernel: pci 0000:00:16.2: bridge window [mem 0xe7300000-0xe73fffff 64bit pref] Jan 30 13:53:59.775017 kernel: pci 0000:00:16.3: PCI bridge to [bus 0e] Jan 30 13:53:59.775071 kernel: pci 0000:00:16.3: 
bridge window [mem 0xfc800000-0xfc8fffff] Jan 30 13:53:59.775120 kernel: pci 0000:00:16.3: bridge window [mem 0xe6f00000-0xe6ffffff 64bit pref] Jan 30 13:53:59.775171 kernel: pci 0000:00:16.4: PCI bridge to [bus 0f] Jan 30 13:53:59.775221 kernel: pci 0000:00:16.4: bridge window [mem 0xfc400000-0xfc4fffff] Jan 30 13:53:59.775270 kernel: pci 0000:00:16.4: bridge window [mem 0xe6b00000-0xe6bfffff 64bit pref] Jan 30 13:53:59.775320 kernel: pci 0000:00:16.5: PCI bridge to [bus 10] Jan 30 13:53:59.775370 kernel: pci 0000:00:16.5: bridge window [mem 0xfc000000-0xfc0fffff] Jan 30 13:53:59.775419 kernel: pci 0000:00:16.5: bridge window [mem 0xe6700000-0xe67fffff 64bit pref] Jan 30 13:53:59.775470 kernel: pci 0000:00:16.6: PCI bridge to [bus 11] Jan 30 13:53:59.775522 kernel: pci 0000:00:16.6: bridge window [mem 0xfbc00000-0xfbcfffff] Jan 30 13:53:59.775571 kernel: pci 0000:00:16.6: bridge window [mem 0xe6300000-0xe63fffff 64bit pref] Jan 30 13:53:59.775621 kernel: pci 0000:00:16.7: PCI bridge to [bus 12] Jan 30 13:53:59.775672 kernel: pci 0000:00:16.7: bridge window [mem 0xfb800000-0xfb8fffff] Jan 30 13:53:59.775730 kernel: pci 0000:00:16.7: bridge window [mem 0xe5f00000-0xe5ffffff 64bit pref] Jan 30 13:53:59.775783 kernel: pci 0000:00:17.0: PCI bridge to [bus 13] Jan 30 13:53:59.775833 kernel: pci 0000:00:17.0: bridge window [io 0x6000-0x6fff] Jan 30 13:53:59.775883 kernel: pci 0000:00:17.0: bridge window [mem 0xfd300000-0xfd3fffff] Jan 30 13:53:59.775933 kernel: pci 0000:00:17.0: bridge window [mem 0xe7a00000-0xe7afffff 64bit pref] Jan 30 13:53:59.775984 kernel: pci 0000:00:17.1: PCI bridge to [bus 14] Jan 30 13:53:59.776037 kernel: pci 0000:00:17.1: bridge window [io 0xa000-0xafff] Jan 30 13:53:59.776087 kernel: pci 0000:00:17.1: bridge window [mem 0xfcf00000-0xfcffffff] Jan 30 13:53:59.776137 kernel: pci 0000:00:17.1: bridge window [mem 0xe7600000-0xe76fffff 64bit pref] Jan 30 13:53:59.776189 kernel: pci 0000:00:17.2: PCI bridge to [bus 15] Jan 30 13:53:59.776239 kernel: pci 0000:00:17.2: bridge window [io 0xe000-0xefff] Jan 30 13:53:59.776288 kernel: pci 0000:00:17.2: bridge window [mem 0xfcb00000-0xfcbfffff] Jan 30 13:53:59.776337 kernel: pci 0000:00:17.2: bridge window [mem 0xe7200000-0xe72fffff 64bit pref] Jan 30 13:53:59.776388 kernel: pci 0000:00:17.3: PCI bridge to [bus 16] Jan 30 13:53:59.776438 kernel: pci 0000:00:17.3: bridge window [mem 0xfc700000-0xfc7fffff] Jan 30 13:53:59.776490 kernel: pci 0000:00:17.3: bridge window [mem 0xe6e00000-0xe6efffff 64bit pref] Jan 30 13:53:59.776541 kernel: pci 0000:00:17.4: PCI bridge to [bus 17] Jan 30 13:53:59.776591 kernel: pci 0000:00:17.4: bridge window [mem 0xfc300000-0xfc3fffff] Jan 30 13:53:59.776641 kernel: pci 0000:00:17.4: bridge window [mem 0xe6a00000-0xe6afffff 64bit pref] Jan 30 13:53:59.776692 kernel: pci 0000:00:17.5: PCI bridge to [bus 18] Jan 30 13:53:59.776778 kernel: pci 0000:00:17.5: bridge window [mem 0xfbf00000-0xfbffffff] Jan 30 13:53:59.776830 kernel: pci 0000:00:17.5: bridge window [mem 0xe6600000-0xe66fffff 64bit pref] Jan 30 13:53:59.776882 kernel: pci 0000:00:17.6: PCI bridge to [bus 19] Jan 30 13:53:59.776933 kernel: pci 0000:00:17.6: bridge window [mem 0xfbb00000-0xfbbfffff] Jan 30 13:53:59.776982 kernel: pci 0000:00:17.6: bridge window [mem 0xe6200000-0xe62fffff 64bit pref] Jan 30 13:53:59.777037 kernel: pci 0000:00:17.7: PCI bridge to [bus 1a] Jan 30 13:53:59.777088 kernel: pci 0000:00:17.7: bridge window [mem 0xfb700000-0xfb7fffff] Jan 30 13:53:59.777137 kernel: pci 0000:00:17.7: bridge window [mem 
0xe5e00000-0xe5efffff 64bit pref] Jan 30 13:53:59.777190 kernel: pci 0000:00:18.0: PCI bridge to [bus 1b] Jan 30 13:53:59.777245 kernel: pci 0000:00:18.0: bridge window [io 0x7000-0x7fff] Jan 30 13:53:59.777296 kernel: pci 0000:00:18.0: bridge window [mem 0xfd200000-0xfd2fffff] Jan 30 13:53:59.777347 kernel: pci 0000:00:18.0: bridge window [mem 0xe7900000-0xe79fffff 64bit pref] Jan 30 13:53:59.777400 kernel: pci 0000:00:18.1: PCI bridge to [bus 1c] Jan 30 13:53:59.777451 kernel: pci 0000:00:18.1: bridge window [io 0xb000-0xbfff] Jan 30 13:53:59.777505 kernel: pci 0000:00:18.1: bridge window [mem 0xfce00000-0xfcefffff] Jan 30 13:53:59.777555 kernel: pci 0000:00:18.1: bridge window [mem 0xe7500000-0xe75fffff 64bit pref] Jan 30 13:53:59.777607 kernel: pci 0000:00:18.2: PCI bridge to [bus 1d] Jan 30 13:53:59.777657 kernel: pci 0000:00:18.2: bridge window [mem 0xfca00000-0xfcafffff] Jan 30 13:53:59.777714 kernel: pci 0000:00:18.2: bridge window [mem 0xe7100000-0xe71fffff 64bit pref] Jan 30 13:53:59.777769 kernel: pci 0000:00:18.3: PCI bridge to [bus 1e] Jan 30 13:53:59.777820 kernel: pci 0000:00:18.3: bridge window [mem 0xfc600000-0xfc6fffff] Jan 30 13:53:59.777870 kernel: pci 0000:00:18.3: bridge window [mem 0xe6d00000-0xe6dfffff 64bit pref] Jan 30 13:53:59.777921 kernel: pci 0000:00:18.4: PCI bridge to [bus 1f] Jan 30 13:53:59.777971 kernel: pci 0000:00:18.4: bridge window [mem 0xfc200000-0xfc2fffff] Jan 30 13:53:59.778025 kernel: pci 0000:00:18.4: bridge window [mem 0xe6900000-0xe69fffff 64bit pref] Jan 30 13:53:59.778076 kernel: pci 0000:00:18.5: PCI bridge to [bus 20] Jan 30 13:53:59.778126 kernel: pci 0000:00:18.5: bridge window [mem 0xfbe00000-0xfbefffff] Jan 30 13:53:59.778176 kernel: pci 0000:00:18.5: bridge window [mem 0xe6500000-0xe65fffff 64bit pref] Jan 30 13:53:59.778238 kernel: pci 0000:00:18.6: PCI bridge to [bus 21] Jan 30 13:53:59.778291 kernel: pci 0000:00:18.6: bridge window [mem 0xfba00000-0xfbafffff] Jan 30 13:53:59.778342 kernel: pci 0000:00:18.6: bridge window [mem 0xe6100000-0xe61fffff 64bit pref] Jan 30 13:53:59.778394 kernel: pci 0000:00:18.7: PCI bridge to [bus 22] Jan 30 13:53:59.778445 kernel: pci 0000:00:18.7: bridge window [mem 0xfb600000-0xfb6fffff] Jan 30 13:53:59.778499 kernel: pci 0000:00:18.7: bridge window [mem 0xe5d00000-0xe5dfffff 64bit pref] Jan 30 13:53:59.778549 kernel: pci_bus 0000:00: resource 4 [mem 0x000a0000-0x000bffff window] Jan 30 13:53:59.778596 kernel: pci_bus 0000:00: resource 5 [mem 0x000cc000-0x000dbfff window] Jan 30 13:53:59.778641 kernel: pci_bus 0000:00: resource 6 [mem 0xc0000000-0xfebfffff window] Jan 30 13:53:59.778686 kernel: pci_bus 0000:00: resource 7 [io 0x0000-0x0cf7 window] Jan 30 13:53:59.778834 kernel: pci_bus 0000:00: resource 8 [io 0x0d00-0xfeff window] Jan 30 13:53:59.778886 kernel: pci_bus 0000:02: resource 0 [io 0x2000-0x3fff] Jan 30 13:53:59.778933 kernel: pci_bus 0000:02: resource 1 [mem 0xfd600000-0xfdffffff] Jan 30 13:53:59.778983 kernel: pci_bus 0000:02: resource 2 [mem 0xe7b00000-0xe7ffffff 64bit pref] Jan 30 13:53:59.779029 kernel: pci_bus 0000:02: resource 4 [mem 0x000a0000-0x000bffff window] Jan 30 13:53:59.779074 kernel: pci_bus 0000:02: resource 5 [mem 0x000cc000-0x000dbfff window] Jan 30 13:53:59.779120 kernel: pci_bus 0000:02: resource 6 [mem 0xc0000000-0xfebfffff window] Jan 30 13:53:59.779166 kernel: pci_bus 0000:02: resource 7 [io 0x0000-0x0cf7 window] Jan 30 13:53:59.779211 kernel: pci_bus 0000:02: resource 8 [io 0x0d00-0xfeff window] Jan 30 13:53:59.779285 kernel: pci_bus 0000:03: resource 0 [io 
0x4000-0x4fff] Jan 30 13:53:59.779340 kernel: pci_bus 0000:03: resource 1 [mem 0xfd500000-0xfd5fffff] Jan 30 13:53:59.779387 kernel: pci_bus 0000:03: resource 2 [mem 0xc0000000-0xc01fffff 64bit pref] Jan 30 13:53:59.779438 kernel: pci_bus 0000:04: resource 0 [io 0x8000-0x8fff] Jan 30 13:53:59.779485 kernel: pci_bus 0000:04: resource 1 [mem 0xfd100000-0xfd1fffff] Jan 30 13:53:59.779531 kernel: pci_bus 0000:04: resource 2 [mem 0xe7800000-0xe78fffff 64bit pref] Jan 30 13:53:59.779580 kernel: pci_bus 0000:05: resource 0 [io 0xc000-0xcfff] Jan 30 13:53:59.779627 kernel: pci_bus 0000:05: resource 1 [mem 0xfcd00000-0xfcdfffff] Jan 30 13:53:59.779676 kernel: pci_bus 0000:05: resource 2 [mem 0xe7400000-0xe74fffff 64bit pref] Jan 30 13:53:59.779735 kernel: pci_bus 0000:06: resource 1 [mem 0xfc900000-0xfc9fffff] Jan 30 13:53:59.779783 kernel: pci_bus 0000:06: resource 2 [mem 0xe7000000-0xe70fffff 64bit pref] Jan 30 13:53:59.779834 kernel: pci_bus 0000:07: resource 1 [mem 0xfc500000-0xfc5fffff] Jan 30 13:53:59.779880 kernel: pci_bus 0000:07: resource 2 [mem 0xe6c00000-0xe6cfffff 64bit pref] Jan 30 13:53:59.779931 kernel: pci_bus 0000:08: resource 1 [mem 0xfc100000-0xfc1fffff] Jan 30 13:53:59.779981 kernel: pci_bus 0000:08: resource 2 [mem 0xe6800000-0xe68fffff 64bit pref] Jan 30 13:53:59.780032 kernel: pci_bus 0000:09: resource 1 [mem 0xfbd00000-0xfbdfffff] Jan 30 13:53:59.780079 kernel: pci_bus 0000:09: resource 2 [mem 0xe6400000-0xe64fffff 64bit pref] Jan 30 13:53:59.780133 kernel: pci_bus 0000:0a: resource 1 [mem 0xfb900000-0xfb9fffff] Jan 30 13:53:59.780188 kernel: pci_bus 0000:0a: resource 2 [mem 0xe6000000-0xe60fffff 64bit pref] Jan 30 13:53:59.780241 kernel: pci_bus 0000:0b: resource 0 [io 0x5000-0x5fff] Jan 30 13:53:59.780290 kernel: pci_bus 0000:0b: resource 1 [mem 0xfd400000-0xfd4fffff] Jan 30 13:53:59.780336 kernel: pci_bus 0000:0b: resource 2 [mem 0xc0200000-0xc03fffff 64bit pref] Jan 30 13:53:59.780403 kernel: pci_bus 0000:0c: resource 0 [io 0x9000-0x9fff] Jan 30 13:53:59.780450 kernel: pci_bus 0000:0c: resource 1 [mem 0xfd000000-0xfd0fffff] Jan 30 13:53:59.780497 kernel: pci_bus 0000:0c: resource 2 [mem 0xe7700000-0xe77fffff 64bit pref] Jan 30 13:53:59.780547 kernel: pci_bus 0000:0d: resource 0 [io 0xd000-0xdfff] Jan 30 13:53:59.780595 kernel: pci_bus 0000:0d: resource 1 [mem 0xfcc00000-0xfccfffff] Jan 30 13:53:59.780648 kernel: pci_bus 0000:0d: resource 2 [mem 0xe7300000-0xe73fffff 64bit pref] Jan 30 13:53:59.780699 kernel: pci_bus 0000:0e: resource 1 [mem 0xfc800000-0xfc8fffff] Jan 30 13:53:59.782225 kernel: pci_bus 0000:0e: resource 2 [mem 0xe6f00000-0xe6ffffff 64bit pref] Jan 30 13:53:59.782282 kernel: pci_bus 0000:0f: resource 1 [mem 0xfc400000-0xfc4fffff] Jan 30 13:53:59.782331 kernel: pci_bus 0000:0f: resource 2 [mem 0xe6b00000-0xe6bfffff 64bit pref] Jan 30 13:53:59.782382 kernel: pci_bus 0000:10: resource 1 [mem 0xfc000000-0xfc0fffff] Jan 30 13:53:59.782432 kernel: pci_bus 0000:10: resource 2 [mem 0xe6700000-0xe67fffff 64bit pref] Jan 30 13:53:59.782483 kernel: pci_bus 0000:11: resource 1 [mem 0xfbc00000-0xfbcfffff] Jan 30 13:53:59.782529 kernel: pci_bus 0000:11: resource 2 [mem 0xe6300000-0xe63fffff 64bit pref] Jan 30 13:53:59.782579 kernel: pci_bus 0000:12: resource 1 [mem 0xfb800000-0xfb8fffff] Jan 30 13:53:59.782627 kernel: pci_bus 0000:12: resource 2 [mem 0xe5f00000-0xe5ffffff 64bit pref] Jan 30 13:53:59.782677 kernel: pci_bus 0000:13: resource 0 [io 0x6000-0x6fff] Jan 30 13:53:59.782743 kernel: pci_bus 0000:13: resource 1 [mem 0xfd300000-0xfd3fffff] Jan 30 13:53:59.782793 
kernel: pci_bus 0000:13: resource 2 [mem 0xe7a00000-0xe7afffff 64bit pref] Jan 30 13:53:59.782862 kernel: pci_bus 0000:14: resource 0 [io 0xa000-0xafff] Jan 30 13:53:59.782911 kernel: pci_bus 0000:14: resource 1 [mem 0xfcf00000-0xfcffffff] Jan 30 13:53:59.782958 kernel: pci_bus 0000:14: resource 2 [mem 0xe7600000-0xe76fffff 64bit pref] Jan 30 13:53:59.783010 kernel: pci_bus 0000:15: resource 0 [io 0xe000-0xefff] Jan 30 13:53:59.783057 kernel: pci_bus 0000:15: resource 1 [mem 0xfcb00000-0xfcbfffff] Jan 30 13:53:59.783107 kernel: pci_bus 0000:15: resource 2 [mem 0xe7200000-0xe72fffff 64bit pref] Jan 30 13:53:59.783158 kernel: pci_bus 0000:16: resource 1 [mem 0xfc700000-0xfc7fffff] Jan 30 13:53:59.783205 kernel: pci_bus 0000:16: resource 2 [mem 0xe6e00000-0xe6efffff 64bit pref] Jan 30 13:53:59.783256 kernel: pci_bus 0000:17: resource 1 [mem 0xfc300000-0xfc3fffff] Jan 30 13:53:59.783303 kernel: pci_bus 0000:17: resource 2 [mem 0xe6a00000-0xe6afffff 64bit pref] Jan 30 13:53:59.783353 kernel: pci_bus 0000:18: resource 1 [mem 0xfbf00000-0xfbffffff] Jan 30 13:53:59.783404 kernel: pci_bus 0000:18: resource 2 [mem 0xe6600000-0xe66fffff 64bit pref] Jan 30 13:53:59.783455 kernel: pci_bus 0000:19: resource 1 [mem 0xfbb00000-0xfbbfffff] Jan 30 13:53:59.783502 kernel: pci_bus 0000:19: resource 2 [mem 0xe6200000-0xe62fffff 64bit pref] Jan 30 13:53:59.783552 kernel: pci_bus 0000:1a: resource 1 [mem 0xfb700000-0xfb7fffff] Jan 30 13:53:59.783599 kernel: pci_bus 0000:1a: resource 2 [mem 0xe5e00000-0xe5efffff 64bit pref] Jan 30 13:53:59.783655 kernel: pci_bus 0000:1b: resource 0 [io 0x7000-0x7fff] Jan 30 13:53:59.783705 kernel: pci_bus 0000:1b: resource 1 [mem 0xfd200000-0xfd2fffff] Jan 30 13:53:59.783761 kernel: pci_bus 0000:1b: resource 2 [mem 0xe7900000-0xe79fffff 64bit pref] Jan 30 13:53:59.783827 kernel: pci_bus 0000:1c: resource 0 [io 0xb000-0xbfff] Jan 30 13:53:59.783876 kernel: pci_bus 0000:1c: resource 1 [mem 0xfce00000-0xfcefffff] Jan 30 13:53:59.783922 kernel: pci_bus 0000:1c: resource 2 [mem 0xe7500000-0xe75fffff 64bit pref] Jan 30 13:53:59.783973 kernel: pci_bus 0000:1d: resource 1 [mem 0xfca00000-0xfcafffff] Jan 30 13:53:59.784023 kernel: pci_bus 0000:1d: resource 2 [mem 0xe7100000-0xe71fffff 64bit pref] Jan 30 13:53:59.784074 kernel: pci_bus 0000:1e: resource 1 [mem 0xfc600000-0xfc6fffff] Jan 30 13:53:59.784122 kernel: pci_bus 0000:1e: resource 2 [mem 0xe6d00000-0xe6dfffff 64bit pref] Jan 30 13:53:59.784173 kernel: pci_bus 0000:1f: resource 1 [mem 0xfc200000-0xfc2fffff] Jan 30 13:53:59.784221 kernel: pci_bus 0000:1f: resource 2 [mem 0xe6900000-0xe69fffff 64bit pref] Jan 30 13:53:59.784272 kernel: pci_bus 0000:20: resource 1 [mem 0xfbe00000-0xfbefffff] Jan 30 13:53:59.784319 kernel: pci_bus 0000:20: resource 2 [mem 0xe6500000-0xe65fffff 64bit pref] Jan 30 13:53:59.784374 kernel: pci_bus 0000:21: resource 1 [mem 0xfba00000-0xfbafffff] Jan 30 13:53:59.784421 kernel: pci_bus 0000:21: resource 2 [mem 0xe6100000-0xe61fffff 64bit pref] Jan 30 13:53:59.784472 kernel: pci_bus 0000:22: resource 1 [mem 0xfb600000-0xfb6fffff] Jan 30 13:53:59.784519 kernel: pci_bus 0000:22: resource 2 [mem 0xe5d00000-0xe5dfffff 64bit pref] Jan 30 13:53:59.784576 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers Jan 30 13:53:59.784586 kernel: PCI: CLS 32 bytes, default 64 Jan 30 13:53:59.784594 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Jan 30 13:53:59.784601 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x311fd3cd494, max_idle_ns: 440795223879 ns Jan 30 
13:53:59.784607 kernel: clocksource: Switched to clocksource tsc Jan 30 13:53:59.784613 kernel: Initialise system trusted keyrings Jan 30 13:53:59.784620 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Jan 30 13:53:59.784626 kernel: Key type asymmetric registered Jan 30 13:53:59.784633 kernel: Asymmetric key parser 'x509' registered Jan 30 13:53:59.784639 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Jan 30 13:53:59.784646 kernel: io scheduler mq-deadline registered Jan 30 13:53:59.784653 kernel: io scheduler kyber registered Jan 30 13:53:59.784659 kernel: io scheduler bfq registered Jan 30 13:53:59.785074 kernel: pcieport 0000:00:15.0: PME: Signaling with IRQ 24 Jan 30 13:53:59.785140 kernel: pcieport 0000:00:15.0: pciehp: Slot #160 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jan 30 13:53:59.785196 kernel: pcieport 0000:00:15.1: PME: Signaling with IRQ 25 Jan 30 13:53:59.785250 kernel: pcieport 0000:00:15.1: pciehp: Slot #161 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jan 30 13:53:59.785635 kernel: pcieport 0000:00:15.2: PME: Signaling with IRQ 26 Jan 30 13:53:59.785694 kernel: pcieport 0000:00:15.2: pciehp: Slot #162 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jan 30 13:53:59.786136 kernel: pcieport 0000:00:15.3: PME: Signaling with IRQ 27 Jan 30 13:53:59.786196 kernel: pcieport 0000:00:15.3: pciehp: Slot #163 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jan 30 13:53:59.786263 kernel: pcieport 0000:00:15.4: PME: Signaling with IRQ 28 Jan 30 13:53:59.786317 kernel: pcieport 0000:00:15.4: pciehp: Slot #164 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jan 30 13:53:59.786369 kernel: pcieport 0000:00:15.5: PME: Signaling with IRQ 29 Jan 30 13:53:59.786420 kernel: pcieport 0000:00:15.5: pciehp: Slot #165 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jan 30 13:53:59.786477 kernel: pcieport 0000:00:15.6: PME: Signaling with IRQ 30 Jan 30 13:53:59.786528 kernel: pcieport 0000:00:15.6: pciehp: Slot #166 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jan 30 13:53:59.786581 kernel: pcieport 0000:00:15.7: PME: Signaling with IRQ 31 Jan 30 13:53:59.786632 kernel: pcieport 0000:00:15.7: pciehp: Slot #167 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jan 30 13:53:59.786684 kernel: pcieport 0000:00:16.0: PME: Signaling with IRQ 32 Jan 30 13:53:59.786756 kernel: pcieport 0000:00:16.0: pciehp: Slot #192 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jan 30 13:53:59.786810 kernel: pcieport 0000:00:16.1: PME: Signaling with IRQ 33 Jan 30 13:53:59.786861 kernel: pcieport 0000:00:16.1: pciehp: Slot #193 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jan 30 13:53:59.786914 kernel: pcieport 0000:00:16.2: PME: Signaling with IRQ 34 Jan 30 13:53:59.786966 kernel: pcieport 0000:00:16.2: pciehp: Slot #194 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jan 30 13:53:59.787019 kernel: pcieport 0000:00:16.3: PME: Signaling with IRQ 35 Jan 30 13:53:59.787071 kernel: pcieport 
0000:00:16.3: pciehp: Slot #195 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jan 30 13:53:59.787129 kernel: pcieport 0000:00:16.4: PME: Signaling with IRQ 36 Jan 30 13:53:59.787180 kernel: pcieport 0000:00:16.4: pciehp: Slot #196 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jan 30 13:53:59.787232 kernel: pcieport 0000:00:16.5: PME: Signaling with IRQ 37 Jan 30 13:53:59.787284 kernel: pcieport 0000:00:16.5: pciehp: Slot #197 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jan 30 13:53:59.787337 kernel: pcieport 0000:00:16.6: PME: Signaling with IRQ 38 Jan 30 13:53:59.787391 kernel: pcieport 0000:00:16.6: pciehp: Slot #198 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jan 30 13:53:59.787445 kernel: pcieport 0000:00:16.7: PME: Signaling with IRQ 39 Jan 30 13:53:59.787498 kernel: pcieport 0000:00:16.7: pciehp: Slot #199 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jan 30 13:53:59.787551 kernel: pcieport 0000:00:17.0: PME: Signaling with IRQ 40 Jan 30 13:53:59.787603 kernel: pcieport 0000:00:17.0: pciehp: Slot #224 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jan 30 13:53:59.787655 kernel: pcieport 0000:00:17.1: PME: Signaling with IRQ 41 Jan 30 13:53:59.789117 kernel: pcieport 0000:00:17.1: pciehp: Slot #225 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jan 30 13:53:59.789187 kernel: pcieport 0000:00:17.2: PME: Signaling with IRQ 42 Jan 30 13:53:59.789245 kernel: pcieport 0000:00:17.2: pciehp: Slot #226 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jan 30 13:53:59.789300 kernel: pcieport 0000:00:17.3: PME: Signaling with IRQ 43 Jan 30 13:53:59.789354 kernel: pcieport 0000:00:17.3: pciehp: Slot #227 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jan 30 13:53:59.789408 kernel: pcieport 0000:00:17.4: PME: Signaling with IRQ 44 Jan 30 13:53:59.789464 kernel: pcieport 0000:00:17.4: pciehp: Slot #228 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jan 30 13:53:59.789517 kernel: pcieport 0000:00:17.5: PME: Signaling with IRQ 45 Jan 30 13:53:59.789568 kernel: pcieport 0000:00:17.5: pciehp: Slot #229 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jan 30 13:53:59.789622 kernel: pcieport 0000:00:17.6: PME: Signaling with IRQ 46 Jan 30 13:53:59.789674 kernel: pcieport 0000:00:17.6: pciehp: Slot #230 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jan 30 13:53:59.789986 kernel: pcieport 0000:00:17.7: PME: Signaling with IRQ 47 Jan 30 13:53:59.790048 kernel: pcieport 0000:00:17.7: pciehp: Slot #231 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jan 30 13:53:59.790102 kernel: pcieport 0000:00:18.0: PME: Signaling with IRQ 48 Jan 30 13:53:59.790154 kernel: pcieport 0000:00:18.0: pciehp: Slot #256 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jan 30 13:53:59.790207 kernel: pcieport 0000:00:18.1: PME: Signaling with IRQ 49 Jan 30 13:53:59.790257 kernel: pcieport 
0000:00:18.1: pciehp: Slot #257 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jan 30 13:53:59.790310 kernel: pcieport 0000:00:18.2: PME: Signaling with IRQ 50 Jan 30 13:53:59.790365 kernel: pcieport 0000:00:18.2: pciehp: Slot #258 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jan 30 13:53:59.790416 kernel: pcieport 0000:00:18.3: PME: Signaling with IRQ 51 Jan 30 13:53:59.790467 kernel: pcieport 0000:00:18.3: pciehp: Slot #259 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jan 30 13:53:59.790518 kernel: pcieport 0000:00:18.4: PME: Signaling with IRQ 52 Jan 30 13:53:59.790570 kernel: pcieport 0000:00:18.4: pciehp: Slot #260 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jan 30 13:53:59.790625 kernel: pcieport 0000:00:18.5: PME: Signaling with IRQ 53 Jan 30 13:53:59.790677 kernel: pcieport 0000:00:18.5: pciehp: Slot #261 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jan 30 13:53:59.790781 kernel: pcieport 0000:00:18.6: PME: Signaling with IRQ 54 Jan 30 13:53:59.790834 kernel: pcieport 0000:00:18.6: pciehp: Slot #262 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jan 30 13:53:59.790887 kernel: pcieport 0000:00:18.7: PME: Signaling with IRQ 55 Jan 30 13:53:59.790938 kernel: pcieport 0000:00:18.7: pciehp: Slot #263 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jan 30 13:53:59.790950 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Jan 30 13:53:59.790957 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jan 30 13:53:59.790963 kernel: 00:05: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Jan 30 13:53:59.790970 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBC,PNP0f13:MOUS] at 0x60,0x64 irq 1,12 Jan 30 13:53:59.790977 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Jan 30 13:53:59.790984 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Jan 30 13:53:59.791036 kernel: rtc_cmos 00:01: registered as rtc0 Jan 30 13:53:59.791089 kernel: rtc_cmos 00:01: setting system clock to 2025-01-30T13:53:59 UTC (1738245239) Jan 30 13:53:59.791136 kernel: rtc_cmos 00:01: alarms up to one month, y3k, 114 bytes nvram Jan 30 13:53:59.791145 kernel: intel_pstate: CPU model not supported Jan 30 13:53:59.791151 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Jan 30 13:53:59.791158 kernel: NET: Registered PF_INET6 protocol family Jan 30 13:53:59.791164 kernel: Segment Routing with IPv6 Jan 30 13:53:59.791171 kernel: In-situ OAM (IOAM) with IPv6 Jan 30 13:53:59.791177 kernel: NET: Registered PF_PACKET protocol family Jan 30 13:53:59.791183 kernel: Key type dns_resolver registered Jan 30 13:53:59.791192 kernel: IPI shorthand broadcast: enabled Jan 30 13:53:59.791198 kernel: sched_clock: Marking stable (926108151, 227436044)->(1216052068, -62507873) Jan 30 13:53:59.791209 kernel: registered taskstats version 1 Jan 30 13:53:59.791215 kernel: Loading compiled-in X.509 certificates Jan 30 13:53:59.791222 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.74-flatcar: 1efdcbe72fc44d29e4e6411cf9a3e64046be4375' Jan 30 13:53:59.791228 kernel: Key type .fscrypt registered Jan 30 13:53:59.791234 kernel: Key type fscrypt-provisioning registered Jan 30 13:53:59.791240 
kernel: ima: No TPM chip found, activating TPM-bypass! Jan 30 13:53:59.791247 kernel: ima: Allocated hash algorithm: sha1 Jan 30 13:53:59.791254 kernel: ima: No architecture policies found Jan 30 13:53:59.791261 kernel: clk: Disabling unused clocks Jan 30 13:53:59.791267 kernel: Freeing unused kernel image (initmem) memory: 42844K Jan 30 13:53:59.791274 kernel: Write protecting the kernel read-only data: 36864k Jan 30 13:53:59.791280 kernel: Freeing unused kernel image (rodata/data gap) memory: 1848K Jan 30 13:53:59.791287 kernel: Run /init as init process Jan 30 13:53:59.791293 kernel: with arguments: Jan 30 13:53:59.791299 kernel: /init Jan 30 13:53:59.791306 kernel: with environment: Jan 30 13:53:59.791313 kernel: HOME=/ Jan 30 13:53:59.791319 kernel: TERM=linux Jan 30 13:53:59.791325 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jan 30 13:53:59.791333 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 30 13:53:59.791341 systemd[1]: Detected virtualization vmware. Jan 30 13:53:59.791348 systemd[1]: Detected architecture x86-64. Jan 30 13:53:59.791354 systemd[1]: Running in initrd. Jan 30 13:53:59.791360 systemd[1]: No hostname configured, using default hostname. Jan 30 13:53:59.791368 systemd[1]: Hostname set to . Jan 30 13:53:59.791375 systemd[1]: Initializing machine ID from random generator. Jan 30 13:53:59.791381 systemd[1]: Queued start job for default target initrd.target. Jan 30 13:53:59.791387 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 30 13:53:59.791394 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 30 13:53:59.791401 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jan 30 13:53:59.791408 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 30 13:53:59.791415 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jan 30 13:53:59.791423 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jan 30 13:53:59.791430 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jan 30 13:53:59.791437 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jan 30 13:53:59.791444 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 30 13:53:59.791450 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 30 13:53:59.791457 systemd[1]: Reached target paths.target - Path Units. Jan 30 13:53:59.791464 systemd[1]: Reached target slices.target - Slice Units. Jan 30 13:53:59.791471 systemd[1]: Reached target swap.target - Swaps. Jan 30 13:53:59.791477 systemd[1]: Reached target timers.target - Timer Units. Jan 30 13:53:59.791484 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jan 30 13:53:59.791490 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 30 13:53:59.791497 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). 
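The bridge-window dump earlier in this section pairs each PCI root port with the I/O and memory apertures it forwards to its secondary bus. The ranges are inclusive, so a window's size is end - start + 1: [io 0x4000-0x4fff] is 4 KiB of port space and [mem 0xfd500000-0xfd5fffff] is exactly 1 MiB. A minimal Python sketch (added for illustration, not part of the boot) that parses such lines back into structured records:

    import re

    # Matches e.g. "pci 0000:00:15.0: bridge window [mem 0xfd500000-0xfd5fffff 64bit pref]"
    WINDOW = re.compile(
        r"pci (?P<bdf>[0-9a-f:.]+): bridge window "
        r"\[(?P<kind>io|mem) 0x(?P<start>[0-9a-f]+)-0x(?P<end>[0-9a-f]+)(?P<flags>[^\]]*)\]"
    )

    def parse_window(line):
        m = WINDOW.search(line)
        if not m:
            return None
        start, end = int(m["start"], 16), int(m["end"], 16)
        return {
            "bridge": m["bdf"],
            "kind": m["kind"],
            "size": end - start + 1,               # ranges are inclusive
            "prefetchable": "pref" in m["flags"],
        }

    print(parse_window("pci 0000:00:15.0: bridge window [mem 0xc0000000-0xc01fffff 64bit pref]"))
    # -> {'bridge': '0000:00:15.0', 'kind': 'mem', 'size': 2097152, 'prefetchable': True}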
Jan 30 13:53:59.791504 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Jan 30 13:53:59.791510 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 30 13:53:59.791517 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 30 13:53:59.791524 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 30 13:53:59.791531 systemd[1]: Reached target sockets.target - Socket Units. Jan 30 13:53:59.791538 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jan 30 13:53:59.791545 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 30 13:53:59.791551 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jan 30 13:53:59.791558 systemd[1]: Starting systemd-fsck-usr.service... Jan 30 13:53:59.791564 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 30 13:53:59.791571 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 30 13:53:59.791577 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 30 13:53:59.791585 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jan 30 13:53:59.791592 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 30 13:53:59.791611 systemd-journald[214]: Collecting audit messages is disabled. Jan 30 13:53:59.791627 systemd[1]: Finished systemd-fsck-usr.service. Jan 30 13:53:59.791636 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 30 13:53:59.791643 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jan 30 13:53:59.791650 kernel: Bridge firewalling registered Jan 30 13:53:59.791656 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 30 13:53:59.791664 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 30 13:53:59.791671 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 30 13:53:59.791677 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 30 13:53:59.791684 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 30 13:53:59.791690 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 30 13:53:59.791698 systemd-journald[214]: Journal started Jan 30 13:53:59.791720 systemd-journald[214]: Runtime Journal (/run/log/journal/b6863ceb212a40e38a912c9136e96819) is 4.8M, max 38.6M, 33.8M free. Jan 30 13:53:59.741314 systemd-modules-load[215]: Inserted module 'overlay' Jan 30 13:53:59.761726 systemd-modules-load[215]: Inserted module 'br_netfilter' Jan 30 13:53:59.794270 systemd[1]: Started systemd-journald.service - Journal Service. Jan 30 13:53:59.793908 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 30 13:53:59.794880 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 30 13:53:59.795372 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 30 13:53:59.798790 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jan 30 13:53:59.799822 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... 
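The pciehp lines further up compress each hot-plug slot's capability register into +/- tokens: a trailing + means the capability is present, a trailing - that it is absent (AttnBtn is an attention button, PwrCtrl a power controller, MRL a manual retention latch sensor, Surprise indicates surprise-removal support, LLActRep link-active reporting; these expansions come from the PCIe slot capabilities register, not from this log). A small sketch that decodes one such line under that convention:

    def decode_slot_caps(line):
        """Turn 'Slot #160 AttnBtn+ PwrCtrl+ MRL- ...' into {'AttnBtn': True, ...}."""
        caps = {}
        for tok in line.split():
            if len(tok) > 1 and tok[-1] in "+-":
                caps[tok[:-1]] = tok[-1] == "+"
        return caps

    caps = decode_slot_caps("Slot #160 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- "
                            "HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+")
    print(caps["HotPlug"], caps["Surprise"])   # True False: hot-pluggable, no surprise removal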
Jan 30 13:53:59.806391 dracut-cmdline[245]: dracut-dracut-053 Jan 30 13:53:59.805757 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 30 13:53:59.808754 dracut-cmdline[245]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=vmware flatcar.autologin verity.usrhash=befc9792b021bef43c896e00e1d5172b6224dbafc9b6c92b267e5e544378e681 Jan 30 13:53:59.811846 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 30 13:53:59.828865 systemd-resolved[257]: Positive Trust Anchors: Jan 30 13:53:59.828874 systemd-resolved[257]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 30 13:53:59.828895 systemd-resolved[257]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 30 13:53:59.830519 systemd-resolved[257]: Defaulting to hostname 'linux'. Jan 30 13:53:59.832268 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 30 13:53:59.832442 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 30 13:53:59.855729 kernel: SCSI subsystem initialized Jan 30 13:53:59.861719 kernel: Loading iSCSI transport class v2.0-870. Jan 30 13:53:59.868721 kernel: iscsi: registered transport (tcp) Jan 30 13:53:59.880722 kernel: iscsi: registered transport (qla4xxx) Jan 30 13:53:59.880755 kernel: QLogic iSCSI HBA Driver Jan 30 13:53:59.901472 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jan 30 13:53:59.905819 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jan 30 13:53:59.920328 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jan 30 13:53:59.920356 kernel: device-mapper: uevent: version 1.0.3 Jan 30 13:53:59.921383 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Jan 30 13:53:59.951755 kernel: raid6: avx2x4 gen() 52167 MB/s Jan 30 13:53:59.968733 kernel: raid6: avx2x2 gen() 52299 MB/s Jan 30 13:53:59.986025 kernel: raid6: avx2x1 gen() 44093 MB/s Jan 30 13:53:59.986086 kernel: raid6: using algorithm avx2x2 gen() 52299 MB/s Jan 30 13:54:00.003933 kernel: raid6: .... xor() 30358 MB/s, rmw enabled Jan 30 13:54:00.003983 kernel: raid6: using avx2x2 recovery algorithm Jan 30 13:54:00.017721 kernel: xor: automatically using best checksumming function avx Jan 30 13:54:00.119739 kernel: Btrfs loaded, zoned=no, fsverity=no Jan 30 13:54:00.125252 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jan 30 13:54:00.128827 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 30 13:54:00.136865 systemd-udevd[433]: Using default interface naming scheme 'v255'. 
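The raid6 lines just above are a boot-time benchmark, not a diagnostic: the kernel times every available gen() implementation and keeps the fastest, here avx2x2 at 52299 MB/s, narrowly ahead of avx2x4. The selection is an argmax over measured throughput; restated in Python with this boot's numbers:

    # Throughputs (MB/s) measured by this boot's raid6 self-test.
    results = {"avx2x4": 52167, "avx2x2": 52299, "avx2x1": 44093}
    best = max(results, key=results.get)
    print(f"raid6: using algorithm {best} gen() {results[best]} MB/s")
    # -> raid6: using algorithm avx2x2 gen() 52299 MB/s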
Jan 30 13:54:00.139391 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 30 13:54:00.152876 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jan 30 13:54:00.160295 dracut-pre-trigger[438]: rd.md=0: removing MD RAID activation Jan 30 13:54:00.177602 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jan 30 13:54:00.181814 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 30 13:54:00.251490 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 30 13:54:00.257083 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jan 30 13:54:00.268768 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jan 30 13:54:00.269952 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jan 30 13:54:00.270513 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 30 13:54:00.270886 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 30 13:54:00.275829 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jan 30 13:54:00.282982 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jan 30 13:54:00.316718 kernel: VMware PVSCSI driver - version 1.0.7.0-k Jan 30 13:54:00.327763 kernel: vmw_pvscsi: using 64bit dma Jan 30 13:54:00.330419 kernel: VMware vmxnet3 virtual NIC driver - version 1.7.0.0-k-NAPI Jan 30 13:54:00.330439 kernel: vmw_pvscsi: max_id: 16 Jan 30 13:54:00.330447 kernel: vmw_pvscsi: setting ring_pages to 8 Jan 30 13:54:00.331670 kernel: vmxnet3 0000:0b:00.0: # of Tx queues : 2, # of Rx queues : 2 Jan 30 13:54:00.340632 kernel: vmw_pvscsi: enabling reqCallThreshold Jan 30 13:54:00.340642 kernel: vmw_pvscsi: driver-based request coalescing enabled Jan 30 13:54:00.340650 kernel: vmw_pvscsi: using MSI-X Jan 30 13:54:00.340660 kernel: vmxnet3 0000:0b:00.0 eth0: NIC Link is Up 10000 Mbps Jan 30 13:54:00.340778 kernel: scsi host0: VMware PVSCSI storage adapter rev 2, req/cmp/msg rings: 8/8/1 pages, cmd_per_lun=254 Jan 30 13:54:00.340852 kernel: vmw_pvscsi 0000:03:00.0: VMware PVSCSI rev 2 host #0 Jan 30 13:54:00.352580 kernel: libata version 3.00 loaded. Jan 30 13:54:00.352591 kernel: vmxnet3 0000:0b:00.0 ens192: renamed from eth0 Jan 30 13:54:00.352665 kernel: ata_piix 0000:00:07.1: version 2.13 Jan 30 13:54:00.356518 kernel: cryptd: max_cpu_qlen set to 1000 Jan 30 13:54:00.357666 kernel: scsi 0:0:0:0: Direct-Access VMware Virtual disk 2.0 PQ: 0 ANSI: 6 Jan 30 13:54:00.357775 kernel: scsi host1: ata_piix Jan 30 13:54:00.357849 kernel: scsi host2: ata_piix Jan 30 13:54:00.357909 kernel: ata1: PATA max UDMA/33 cmd 0x1f0 ctl 0x3f6 bmdma 0x1060 irq 14 Jan 30 13:54:00.357918 kernel: ata2: PATA max UDMA/33 cmd 0x170 ctl 0x376 bmdma 0x1068 irq 15 Jan 30 13:54:00.359454 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 30 13:54:00.359524 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 30 13:54:00.359806 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 30 13:54:00.359901 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 30 13:54:00.359989 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 30 13:54:00.360093 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... 
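One detail worth connecting across these lines: udev renames the vmxnet3 NIC from eth0 to ens192. Under systemd's predictable naming (scheme 'v255' per the udevd message above), ens<N> is slot-based naming, and N here plausibly comes from hot-plug slot #192, which the pciehp output earlier ties to root port 0000:00:16.0, the bridge whose secondary bus 0b hosts the NIC at 0000:0b:00.0. A sketch of that inference, with the two lookup tables hand-copied from the log rather than queried from sysfs:

    # Distilled from "pciehp: Slot #192" and "PCI bridge to [bus 0b]" above.
    slot_of_bridge = {"0000:00:16.0": 192}
    secondary_bus = {"0000:00:16.0": "0b"}

    def predicted_name(nic_bdf):
        bus = nic_bdf.split(":")[1]            # "0000:0b:00.0" -> "0b"
        for bridge, sec in secondary_bus.items():
            if sec == bus:
                return f"ens{slot_of_bridge[bridge]}"
        return "eth0"                          # no slot info: keep the kernel name

    print(predicted_name("0000:0b:00.0"))      # -> ens192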
Jan 30 13:54:00.365033 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 30 13:54:00.375487 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 30 13:54:00.379796 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 30 13:54:00.390102 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 30 13:54:00.524724 kernel: ata2.00: ATAPI: VMware Virtual IDE CDROM Drive, 00000001, max UDMA/33 Jan 30 13:54:00.529730 kernel: scsi 2:0:0:0: CD-ROM NECVMWar VMware IDE CDR10 1.00 PQ: 0 ANSI: 5 Jan 30 13:54:00.537732 kernel: AVX2 version of gcm_enc/dec engaged. Jan 30 13:54:00.537758 kernel: AES CTR mode by8 optimization enabled Jan 30 13:54:00.549355 kernel: sd 0:0:0:0: [sda] 17805312 512-byte logical blocks: (9.12 GB/8.49 GiB) Jan 30 13:54:00.554115 kernel: sd 0:0:0:0: [sda] Write Protect is off Jan 30 13:54:00.554190 kernel: sd 0:0:0:0: [sda] Mode Sense: 31 00 00 00 Jan 30 13:54:00.554253 kernel: sd 0:0:0:0: [sda] Cache data unavailable Jan 30 13:54:00.554312 kernel: sd 0:0:0:0: [sda] Assuming drive cache: write through Jan 30 13:54:00.554371 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 30 13:54:00.554380 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Jan 30 13:54:00.557194 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 1x/1x writer dvd-ram cd/rw xa/form2 cdda tray Jan 30 13:54:00.572838 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Jan 30 13:54:00.572858 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Jan 30 13:54:00.602769 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by (udev-worker) (487) Jan 30 13:54:00.603762 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_disk EFI-SYSTEM. Jan 30 13:54:00.610730 kernel: BTRFS: device fsid 64bb5b5a-85cc-41cc-a02b-2cfaa3e93b0a devid 1 transid 38 /dev/sda3 scanned by (udev-worker) (479) Jan 30 13:54:00.608916 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_disk ROOT. Jan 30 13:54:00.611656 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_disk OEM. Jan 30 13:54:00.614153 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_disk USR-A. Jan 30 13:54:00.614423 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_disk USR-A. Jan 30 13:54:00.619880 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jan 30 13:54:00.649065 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 30 13:54:00.654340 kernel: GPT:disk_guids don't match. Jan 30 13:54:00.654372 kernel: GPT: Use GNU Parted to correct GPT errors. Jan 30 13:54:00.654384 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 30 13:54:01.658687 disk-uuid[588]: The operation has completed successfully. Jan 30 13:54:01.658918 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 30 13:54:01.717218 systemd[1]: disk-uuid.service: Deactivated successfully. Jan 30 13:54:01.717277 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jan 30 13:54:01.721898 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jan 30 13:54:01.724032 sh[608]: Success Jan 30 13:54:01.732728 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Jan 30 13:54:01.770183 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jan 30 13:54:01.774784 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... 
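The capacity line the sd driver prints is plain arithmetic over the logical block count: 17805312 blocks of 512 bytes is 9,116,319,744 bytes, which is about 9.12 GB in decimal units and 8.49 GiB in binary units, exactly as logged. For the record:

    blocks, block_size = 17_805_312, 512
    size = blocks * block_size                               # 9_116_319_744 bytes
    print(f"{size / 1e9:.2f} GB / {size / 2**30:.2f} GiB")   # -> 9.12 GB / 8.49 GiB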
Jan 30 13:54:01.775856 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jan 30 13:54:01.816112 kernel: BTRFS info (device dm-0): first mount of filesystem 64bb5b5a-85cc-41cc-a02b-2cfaa3e93b0a Jan 30 13:54:01.816162 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jan 30 13:54:01.816176 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jan 30 13:54:01.817363 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 30 13:54:01.818281 kernel: BTRFS info (device dm-0): using free space tree Jan 30 13:54:01.826735 kernel: BTRFS info (device dm-0): enabling ssd optimizations Jan 30 13:54:01.828787 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jan 30 13:54:01.838901 systemd[1]: Starting afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments... Jan 30 13:54:01.840475 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jan 30 13:54:01.864987 kernel: BTRFS info (device sda6): first mount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03 Jan 30 13:54:01.865032 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jan 30 13:54:01.865043 kernel: BTRFS info (device sda6): using free space tree Jan 30 13:54:01.894729 kernel: BTRFS info (device sda6): enabling ssd optimizations Jan 30 13:54:01.903773 systemd[1]: mnt-oem.mount: Deactivated successfully. Jan 30 13:54:01.904742 kernel: BTRFS info (device sda6): last unmount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03 Jan 30 13:54:01.907999 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jan 30 13:54:01.913928 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jan 30 13:54:01.951672 systemd[1]: Finished afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments. Jan 30 13:54:01.955824 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jan 30 13:54:02.013280 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 30 13:54:02.018830 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 30 13:54:02.024726 ignition[669]: Ignition 2.19.0 Jan 30 13:54:02.024735 ignition[669]: Stage: fetch-offline Jan 30 13:54:02.024768 ignition[669]: no configs at "/usr/lib/ignition/base.d" Jan 30 13:54:02.024777 ignition[669]: no config dir at "/usr/lib/ignition/base.platform.d/vmware" Jan 30 13:54:02.024835 ignition[669]: parsed url from cmdline: "" Jan 30 13:54:02.024837 ignition[669]: no config URL provided Jan 30 13:54:02.024840 ignition[669]: reading system config file "/usr/lib/ignition/user.ign" Jan 30 13:54:02.024845 ignition[669]: no config at "/usr/lib/ignition/user.ign" Jan 30 13:54:02.025214 ignition[669]: config successfully fetched Jan 30 13:54:02.025235 ignition[669]: parsing config with SHA512: 4af82fa843ada1bd79c80ead3cd5ad6f1b880800c9e1f6be5d89f2bce69dc9e350aba162a2bb03cfe5607c48e84446756e836545ad771f530b3c915107bbc6ed Jan 30 13:54:02.029041 unknown[669]: fetched base config from "system" Jan 30 13:54:02.029285 ignition[669]: fetch-offline: fetch-offline passed Jan 30 13:54:02.029048 unknown[669]: fetched user config from "vmware" Jan 30 13:54:02.029321 ignition[669]: Ignition finished successfully Jan 30 13:54:02.030908 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). 
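Note how Ignition logs a SHA512 digest of the exact config bytes it fetched (the long hex string above) before acting on them; that digest is the easiest way to confirm later which user.ign actually ran. The fingerprint can be reproduced with the standard library alone; the path below is the one named in the fetch-offline log, and the digest will only match when fed the original config bytes:

    import hashlib

    def ignition_fingerprint(path="/usr/lib/ignition/user.ign"):
        with open(path, "rb") as f:
            return hashlib.sha512(f.read()).hexdigest()

    # Compare against the "parsing config with SHA512: ..." value in the journal.
    print(ignition_fingerprint())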
Jan 30 13:54:02.035317 systemd-networkd[802]: lo: Link UP Jan 30 13:54:02.035324 systemd-networkd[802]: lo: Gained carrier Jan 30 13:54:02.036059 systemd-networkd[802]: Enumeration completed Jan 30 13:54:02.036334 systemd-networkd[802]: ens192: Configuring with /etc/systemd/network/10-dracut-cmdline-99.network. Jan 30 13:54:02.036517 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 30 13:54:02.036675 systemd[1]: Reached target network.target - Network. Jan 30 13:54:02.036944 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Jan 30 13:54:02.040353 kernel: vmxnet3 0000:0b:00.0 ens192: intr type 3, mode 0, 3 vectors allocated Jan 30 13:54:02.040496 kernel: vmxnet3 0000:0b:00.0 ens192: NIC Link is Up 10000 Mbps Jan 30 13:54:02.041060 systemd-networkd[802]: ens192: Link UP Jan 30 13:54:02.041065 systemd-networkd[802]: ens192: Gained carrier Jan 30 13:54:02.046791 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jan 30 13:54:02.055035 ignition[805]: Ignition 2.19.0 Jan 30 13:54:02.055041 ignition[805]: Stage: kargs Jan 30 13:54:02.055411 ignition[805]: no configs at "/usr/lib/ignition/base.d" Jan 30 13:54:02.055420 ignition[805]: no config dir at "/usr/lib/ignition/base.platform.d/vmware" Jan 30 13:54:02.056027 ignition[805]: kargs: kargs passed Jan 30 13:54:02.056053 ignition[805]: Ignition finished successfully Jan 30 13:54:02.057256 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jan 30 13:54:02.061863 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jan 30 13:54:02.069885 ignition[812]: Ignition 2.19.0 Jan 30 13:54:02.069892 ignition[812]: Stage: disks Jan 30 13:54:02.069994 ignition[812]: no configs at "/usr/lib/ignition/base.d" Jan 30 13:54:02.070000 ignition[812]: no config dir at "/usr/lib/ignition/base.platform.d/vmware" Jan 30 13:54:02.070554 ignition[812]: disks: disks passed Jan 30 13:54:02.070579 ignition[812]: Ignition finished successfully Jan 30 13:54:02.071316 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jan 30 13:54:02.071839 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jan 30 13:54:02.072099 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 30 13:54:02.072336 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 30 13:54:02.072558 systemd[1]: Reached target sysinit.target - System Initialization. Jan 30 13:54:02.072775 systemd[1]: Reached target basic.target - Basic System. Jan 30 13:54:02.076790 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jan 30 13:54:02.090166 systemd-fsck[820]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks Jan 30 13:54:02.091686 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jan 30 13:54:02.096803 systemd[1]: Mounting sysroot.mount - /sysroot... Jan 30 13:54:02.160328 systemd[1]: Mounted sysroot.mount - /sysroot. Jan 30 13:54:02.160776 kernel: EXT4-fs (sda9): mounted filesystem 9f41abed-fd12-4e57-bcd4-5c0ef7f8a1bf r/w with ordered data mode. Quota mode: none. Jan 30 13:54:02.160722 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jan 30 13:54:02.165777 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 30 13:54:02.167310 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... 
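In the initrd, systemd-networkd matched ens192 against a unit dracut generated from the kernel command line (the 10-dracut-cmdline-99.network named above); the files stage later installs a persistent 00-vmware.network on the real root. For orientation, a minimal example of the general shape such a unit takes; the DHCP setting is an assumption for illustration, not recovered from this system:

    [Match]
    Name=ens192

    [Network]
    DHCP=yes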
Jan 30 13:54:02.167748 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Jan 30 13:54:02.167789 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jan 30 13:54:02.167807 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jan 30 13:54:02.171685 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jan 30 13:54:02.172908 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jan 30 13:54:02.175847 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/sda6 scanned by mount (828) Jan 30 13:54:02.178992 kernel: BTRFS info (device sda6): first mount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03 Jan 30 13:54:02.179023 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jan 30 13:54:02.179032 kernel: BTRFS info (device sda6): using free space tree Jan 30 13:54:02.183734 kernel: BTRFS info (device sda6): enabling ssd optimizations Jan 30 13:54:02.185611 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 30 13:54:02.262076 initrd-setup-root[852]: cut: /sysroot/etc/passwd: No such file or directory Jan 30 13:54:02.265046 initrd-setup-root[859]: cut: /sysroot/etc/group: No such file or directory Jan 30 13:54:02.267519 initrd-setup-root[866]: cut: /sysroot/etc/shadow: No such file or directory Jan 30 13:54:02.270285 initrd-setup-root[873]: cut: /sysroot/etc/gshadow: No such file or directory Jan 30 13:54:02.366012 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 30 13:54:02.370809 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 30 13:54:02.373504 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jan 30 13:54:02.378724 kernel: BTRFS info (device sda6): last unmount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03 Jan 30 13:54:02.396653 ignition[940]: INFO : Ignition 2.19.0 Jan 30 13:54:02.396653 ignition[940]: INFO : Stage: mount Jan 30 13:54:02.397036 ignition[940]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 30 13:54:02.397036 ignition[940]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/vmware" Jan 30 13:54:02.397299 ignition[940]: INFO : mount: mount passed Jan 30 13:54:02.397434 ignition[940]: INFO : Ignition finished successfully Jan 30 13:54:02.398004 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 30 13:54:02.402829 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 30 13:54:02.424800 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jan 30 13:54:02.814269 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jan 30 13:54:02.819865 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 30 13:54:02.830727 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by mount (952) Jan 30 13:54:02.842467 kernel: BTRFS info (device sda6): first mount of filesystem aa75aabd-8755-4402-b4b6-23093345fe03 Jan 30 13:54:02.842507 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jan 30 13:54:02.842516 kernel: BTRFS info (device sda6): using free space tree Jan 30 13:54:02.859722 kernel: BTRFS info (device sda6): enabling ssd optimizations Jan 30 13:54:02.860235 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
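The fsck summary a few entries back is inode and block usage: 14 of 1,628,000 inodes and 120,691 of 1,617,920 blocks in use, so the freshly checked ROOT filesystem is only about 7.5% full. Assuming ext4's default 4 KiB block size (the log does not print it), the block count also gives the filesystem size:

    blocks_used, blocks_total = 120_691, 1_617_920
    block_size = 4096                                                  # assumed ext4 default
    print(f"{100 * blocks_used / blocks_total:.1f}% of blocks used")   # -> 7.5%
    print(f"{blocks_total * block_size / 2**30:.2f} GiB filesystem")   # -> 6.17 GiB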
Jan 30 13:54:02.873640 ignition[969]: INFO : Ignition 2.19.0 Jan 30 13:54:02.873640 ignition[969]: INFO : Stage: files Jan 30 13:54:02.874021 ignition[969]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 30 13:54:02.874021 ignition[969]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/vmware" Jan 30 13:54:02.874431 ignition[969]: DEBUG : files: compiled without relabeling support, skipping Jan 30 13:54:02.875156 ignition[969]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 30 13:54:02.875156 ignition[969]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 30 13:54:02.877504 ignition[969]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 30 13:54:02.877676 ignition[969]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 30 13:54:02.877819 ignition[969]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 30 13:54:02.877790 unknown[969]: wrote ssh authorized keys file for user: core Jan 30 13:54:02.879522 ignition[969]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jan 30 13:54:02.879821 ignition[969]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Jan 30 13:54:03.445979 systemd-networkd[802]: ens192: Gained IPv6LL Jan 30 13:54:07.915217 ignition[969]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jan 30 13:54:08.008362 ignition[969]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jan 30 13:54:08.008362 ignition[969]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Jan 30 13:54:08.008778 ignition[969]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Jan 30 13:54:08.008778 ignition[969]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Jan 30 13:54:08.008778 ignition[969]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Jan 30 13:54:08.008778 ignition[969]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 30 13:54:08.008778 ignition[969]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 30 13:54:08.008778 ignition[969]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 30 13:54:08.008778 ignition[969]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 30 13:54:08.008778 ignition[969]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 30 13:54:08.009973 ignition[969]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 30 13:54:08.009973 ignition[969]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Jan 30 13:54:08.009973 ignition[969]: INFO : files: 
createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Jan 30 13:54:08.009973 ignition[969]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Jan 30 13:54:08.009973 ignition[969]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw: attempt #1 Jan 30 13:54:08.503412 ignition[969]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Jan 30 13:54:08.699394 ignition[969]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Jan 30 13:54:08.699756 ignition[969]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/etc/systemd/network/00-vmware.network" Jan 30 13:54:08.699756 ignition[969]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/etc/systemd/network/00-vmware.network" Jan 30 13:54:08.699756 ignition[969]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Jan 30 13:54:08.708929 ignition[969]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 30 13:54:08.709158 ignition[969]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 30 13:54:08.709158 ignition[969]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Jan 30 13:54:08.709158 ignition[969]: INFO : files: op(e): [started] processing unit "coreos-metadata.service" Jan 30 13:54:08.709158 ignition[969]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jan 30 13:54:08.709158 ignition[969]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jan 30 13:54:08.709158 ignition[969]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service" Jan 30 13:54:08.709158 ignition[969]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service" Jan 30 13:54:09.007268 ignition[969]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service" Jan 30 13:54:09.010361 ignition[969]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service" Jan 30 13:54:09.010361 ignition[969]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service" Jan 30 13:54:09.010361 ignition[969]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service" Jan 30 13:54:09.010361 ignition[969]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service" Jan 30 13:54:09.010361 ignition[969]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 30 13:54:09.010361 ignition[969]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 30 13:54:09.010361 ignition[969]: INFO : files: files passed Jan 30 13:54:09.010361 ignition[969]: INFO : Ignition finished successfully Jan 30 
13:54:09.011660 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 30 13:54:09.015850 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 30 13:54:09.017152 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 30 13:54:09.022172 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 30 13:54:09.022382 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jan 30 13:54:09.025174 initrd-setup-root-after-ignition[999]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 30 13:54:09.025548 initrd-setup-root-after-ignition[999]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 30 13:54:09.026327 initrd-setup-root-after-ignition[1003]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 30 13:54:09.027177 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 30 13:54:09.027683 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 30 13:54:09.039929 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 30 13:54:09.052870 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 30 13:54:09.052934 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 30 13:54:09.053394 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 30 13:54:09.053500 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 30 13:54:09.053760 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 30 13:54:09.054227 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 30 13:54:09.065561 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 30 13:54:09.070858 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 30 13:54:09.076803 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 30 13:54:09.077099 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 30 13:54:09.077268 systemd[1]: Stopped target timers.target - Timer Units. Jan 30 13:54:09.077397 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 30 13:54:09.077473 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 30 13:54:09.077703 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 30 13:54:09.077925 systemd[1]: Stopped target basic.target - Basic System. Jan 30 13:54:09.078100 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 30 13:54:09.078313 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 30 13:54:09.078526 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 30 13:54:09.078738 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 30 13:54:09.078958 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 30 13:54:09.079331 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 30 13:54:09.079509 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 30 13:54:09.079701 systemd[1]: Stopped target swap.target - Swaps. Jan 30 13:54:09.079936 systemd[1]: dracut-pre-mount.service: Deactivated successfully. 
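Stepping back to the files stage that finished above: each createFiles and processing-unit op maps onto one stanza of the Ignition config that fetch-offline parsed. A rough sketch of a config that would yield ops like these, written as a Python dict for illustration; the spec version, file contents, and unit body are placeholders, not recovered from this machine:

    import json

    config = {
        "ignition": {"version": "3.3.0"},                    # assumed spec version
        "storage": {"files": [{
            "path": "/etc/flatcar/update.conf",
            "contents": {"source": "data:,GROUP%3Dstable"},  # placeholder payload
        }]},
        "systemd": {"units": [
            {"name": "prepare-helm.service", "enabled": True,
             "contents": "[Unit]\nDescription=...\n"},       # body elided
            {"name": "coreos-metadata.service", "enabled": False},
        ]},
    }
    print(json.dumps(config, indent=2))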
Jan 30 13:54:09.080008 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 30 13:54:09.080291 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 30 13:54:09.080540 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 30 13:54:09.080748 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 30 13:54:09.080796 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 30 13:54:09.080941 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 30 13:54:09.081005 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 30 13:54:09.081251 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 30 13:54:09.081314 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 30 13:54:09.081569 systemd[1]: Stopped target paths.target - Path Units. Jan 30 13:54:09.081688 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 30 13:54:09.086775 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 30 13:54:09.086955 systemd[1]: Stopped target slices.target - Slice Units. Jan 30 13:54:09.087152 systemd[1]: Stopped target sockets.target - Socket Units. Jan 30 13:54:09.087336 systemd[1]: iscsid.socket: Deactivated successfully. Jan 30 13:54:09.087405 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 30 13:54:09.087614 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 30 13:54:09.087658 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 30 13:54:09.087930 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 30 13:54:09.088013 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 30 13:54:09.088283 systemd[1]: ignition-files.service: Deactivated successfully. Jan 30 13:54:09.088359 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 30 13:54:09.092881 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 30 13:54:09.094892 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 30 13:54:09.095154 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 30 13:54:09.095279 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 30 13:54:09.095538 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 30 13:54:09.095596 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 30 13:54:09.098538 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 30 13:54:09.098602 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 30 13:54:09.102820 ignition[1024]: INFO : Ignition 2.19.0 Jan 30 13:54:09.105383 ignition[1024]: INFO : Stage: umount Jan 30 13:54:09.105383 ignition[1024]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 30 13:54:09.105383 ignition[1024]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/vmware" Jan 30 13:54:09.105383 ignition[1024]: INFO : umount: umount passed Jan 30 13:54:09.105383 ignition[1024]: INFO : Ignition finished successfully Jan 30 13:54:09.104255 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 30 13:54:09.104323 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 30 13:54:09.104729 systemd[1]: Stopped target network.target - Network. 
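The umount stage notes that /usr/lib/ignition/base.d and base.platform.d/vmware are empty; these directories hold distro- and platform-supplied config fragments that Ignition merges beneath the user config at every stage, so their absence is normal. A base.d drop-in is just an Ignition JSON file; the smallest valid one (spec version assumed) would be:

    {"ignition": {"version": "3.4.0"}}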
Jan 30 13:54:09.104912 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 30 13:54:09.104943 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 30 13:54:09.105115 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 30 13:54:09.105145 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 30 13:54:09.105373 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 30 13:54:09.105445 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 30 13:54:09.105615 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 30 13:54:09.105673 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 30 13:54:09.106050 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 30 13:54:09.106377 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 30 13:54:09.110016 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 30 13:54:09.110091 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 30 13:54:09.110745 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 30 13:54:09.110768 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 30 13:54:09.113819 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 30 13:54:09.114048 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 30 13:54:09.114084 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 30 13:54:09.114338 systemd[1]: afterburn-network-kargs.service: Deactivated successfully. Jan 30 13:54:09.114360 systemd[1]: Stopped afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments. Jan 30 13:54:09.115356 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 30 13:54:09.117109 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 30 13:54:09.117329 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 30 13:54:09.119145 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 30 13:54:09.119468 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 30 13:54:09.120077 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 30 13:54:09.120326 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 30 13:54:09.120591 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 30 13:54:09.120761 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 30 13:54:09.120979 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 30 13:54:09.121002 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 30 13:54:09.121414 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 30 13:54:09.121437 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 30 13:54:09.121698 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 30 13:54:09.121728 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 30 13:54:09.128816 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 30 13:54:09.129072 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 30 13:54:09.129104 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. 
Jan 30 13:54:09.129364 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 30 13:54:09.129386 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 30 13:54:09.129635 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 30 13:54:09.129657 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 30 13:54:09.130042 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Jan 30 13:54:09.130064 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 30 13:54:09.130605 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 30 13:54:09.130627 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 30 13:54:09.130891 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 30 13:54:09.130912 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 30 13:54:09.131188 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 30 13:54:09.131216 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 30 13:54:09.136057 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 30 13:54:09.137322 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 30 13:54:09.137532 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 30 13:54:09.138254 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 30 13:54:09.138449 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 30 13:54:09.369864 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 30 13:54:09.369928 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 30 13:54:09.370400 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 30 13:54:09.370532 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 30 13:54:09.370562 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 30 13:54:09.373818 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 30 13:54:09.390409 systemd[1]: Switching root. Jan 30 13:54:09.429557 systemd-journald[214]: Journal stopped Jan 30 13:54:10.623684 systemd-journald[214]: Received SIGTERM from PID 1 (systemd). Jan 30 13:54:10.623705 kernel: SELinux: policy capability network_peer_controls=1 Jan 30 13:54:10.623720 kernel: SELinux: policy capability open_perms=1 Jan 30 13:54:10.623726 kernel: SELinux: policy capability extended_socket_class=1 Jan 30 13:54:10.623742 kernel: SELinux: policy capability always_check_network=0 Jan 30 13:54:10.623749 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 30 13:54:10.623757 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 30 13:54:10.623763 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 30 13:54:10.623768 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 30 13:54:10.623774 kernel: audit: type=1403 audit(1738245249.864:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jan 30 13:54:10.623780 systemd[1]: Successfully loaded SELinux policy in 31.500ms. Jan 30 13:54:10.623787 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 6.990ms. 
Jan 30 13:54:10.623794 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 30 13:54:10.623802 systemd[1]: Detected virtualization vmware. Jan 30 13:54:10.623809 systemd[1]: Detected architecture x86-64. Jan 30 13:54:10.623815 systemd[1]: Detected first boot. Jan 30 13:54:10.623822 systemd[1]: Initializing machine ID from random generator. Jan 30 13:54:10.623829 zram_generator::config[1066]: No configuration found. Jan 30 13:54:10.623838 systemd[1]: Populated /etc with preset unit settings. Jan 30 13:54:10.623846 systemd[1]: /etc/systemd/system/coreos-metadata.service:11: Ignoring unknown escape sequences: "echo "COREOS_CUSTOM_PRIVATE_IPV4=$(ip addr show ens192 | grep "inet 10." | grep -Po "inet \K[\d.]+") Jan 30 13:54:10.623853 systemd[1]: COREOS_CUSTOM_PUBLIC_IPV4=$(ip addr show ens192 | grep -v "inet 10." | grep -Po "inet \K[\d.]+")" > ${OUTPUT}" Jan 30 13:54:10.623860 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jan 30 13:54:10.623866 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jan 30 13:54:10.623873 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jan 30 13:54:10.623880 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 30 13:54:10.623888 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 30 13:54:10.623895 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 30 13:54:10.623902 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 30 13:54:10.623908 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 30 13:54:10.623915 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 30 13:54:10.623921 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 30 13:54:10.623929 systemd[1]: Created slice user.slice - User and Session Slice. Jan 30 13:54:10.623938 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 30 13:54:10.623945 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 30 13:54:10.623951 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 30 13:54:10.623958 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 30 13:54:10.623964 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jan 30 13:54:10.623971 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 30 13:54:10.623977 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jan 30 13:54:10.623986 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 30 13:54:10.623993 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jan 30 13:54:10.624001 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jan 30 13:54:10.624008 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. 
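The complaint about /etc/systemd/system/coreos-metadata.service:11 ("Ignoring unknown escape sequences") comes from systemd applying C-style escape handling to Exec lines before the shell ever sees them; \K and \d are not escapes systemd recognizes, hence the warning. Doubling each backslash makes the intent explicit and silences it; a sketch of the fixed line, assuming (as the log suggests) the unit writes its result to ${OUTPUT}:

    ; in coreos-metadata.service - \\K and \\d reach grep as \K and \d
    ExecStart=/usr/bin/sh -c 'echo "COREOS_CUSTOM_PRIVATE_IPV4=$(ip addr show ens192 | grep "inet 10." | grep -Po "inet \\K[\\d.]+")" > ${OUTPUT}'

The COREOS_CUSTOM_PUBLIC_IPV4 line needs the same treatment; moving the whole pipeline into a standalone script sidesteps the quoting problem entirely.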
Jan 30 13:54:10.624015 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 30 13:54:10.624021 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 30 13:54:10.624028 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 30 13:54:10.624036 systemd[1]: Reached target slices.target - Slice Units. Jan 30 13:54:10.624044 systemd[1]: Reached target swap.target - Swaps. Jan 30 13:54:10.624051 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 30 13:54:10.624057 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 30 13:54:10.624064 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 30 13:54:10.624071 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 30 13:54:10.624079 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 30 13:54:10.624086 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 30 13:54:10.624093 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 30 13:54:10.624100 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 30 13:54:10.624107 systemd[1]: Mounting media.mount - External Media Directory... Jan 30 13:54:10.624114 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 30 13:54:10.624123 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 30 13:54:10.624130 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 30 13:54:10.624138 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 30 13:54:10.624146 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 30 13:54:10.624153 systemd[1]: Reached target machines.target - Containers. Jan 30 13:54:10.624160 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 30 13:54:10.624169 systemd[1]: Starting ignition-delete-config.service - Ignition (delete config)... Jan 30 13:54:10.624176 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 30 13:54:10.624183 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 30 13:54:10.624191 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 30 13:54:10.624199 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 30 13:54:10.624206 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 30 13:54:10.624213 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 30 13:54:10.624220 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 30 13:54:10.624227 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 30 13:54:10.624234 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jan 30 13:54:10.624241 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jan 30 13:54:10.624247 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jan 30 13:54:10.624254 systemd[1]: Stopped systemd-fsck-usr.service. 
Jan 30 13:54:10.624262 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 30 13:54:10.624269 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 30 13:54:10.624276 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 30 13:54:10.624283 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 30 13:54:10.624290 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 30 13:54:10.624297 systemd[1]: verity-setup.service: Deactivated successfully. Jan 30 13:54:10.624304 systemd[1]: Stopped verity-setup.service. Jan 30 13:54:10.624311 kernel: ACPI: bus type drm_connector registered Jan 30 13:54:10.624319 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 30 13:54:10.624325 kernel: loop: module loaded Jan 30 13:54:10.624332 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 30 13:54:10.624339 kernel: fuse: init (API version 7.39) Jan 30 13:54:10.624357 systemd-journald[1153]: Collecting audit messages is disabled. Jan 30 13:54:10.624375 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 30 13:54:10.624383 systemd-journald[1153]: Journal started Jan 30 13:54:10.624398 systemd-journald[1153]: Runtime Journal (/run/log/journal/56517c1f857641dbad1c2a82ede3b459) is 4.8M, max 38.6M, 33.8M free. Jan 30 13:54:10.627617 systemd[1]: Mounted media.mount - External Media Directory. Jan 30 13:54:10.457643 systemd[1]: Queued start job for default target multi-user.target. Jan 30 13:54:10.476034 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. Jan 30 13:54:10.476246 systemd[1]: systemd-journald.service: Deactivated successfully. Jan 30 13:54:10.628809 systemd[1]: Started systemd-journald.service - Journal Service. Jan 30 13:54:10.628892 jq[1133]: true Jan 30 13:54:10.629803 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 30 13:54:10.629963 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 30 13:54:10.630118 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 30 13:54:10.630942 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 30 13:54:10.631894 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 30 13:54:10.631976 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 30 13:54:10.632216 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 30 13:54:10.632287 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 30 13:54:10.632513 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 30 13:54:10.632586 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 30 13:54:10.633327 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 30 13:54:10.633404 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 30 13:54:10.633844 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 30 13:54:10.633917 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 30 13:54:10.634162 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 30 13:54:10.634237 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. 
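The burst of paired "Starting/Finished modprobe@<name>.service" messages all comes from a single template unit instantiated once per module; the text after "@" becomes the %i/%I specifier. A sketch close to systemd's upstream template:

    [Unit]
    Description=Load Kernel Module %i
    DefaultDependencies=no
    Before=sysinit.target

    [Service]
    Type=oneshot
    ExecStart=-/sbin/modprobe -abq %I

The leading "-" tells systemd to treat a non-zero exit from modprobe as success, so an absent module does not fail early boot.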
Jan 30 13:54:10.634989 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 30 13:54:10.635229 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 30 13:54:10.635872 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 30 13:54:10.640729 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 30 13:54:10.648143 jq[1172]: true Jan 30 13:54:10.653110 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 30 13:54:10.654243 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 30 13:54:10.657431 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 30 13:54:10.657556 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 30 13:54:10.657575 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 30 13:54:10.658295 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Jan 30 13:54:10.661538 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 30 13:54:10.664756 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 30 13:54:10.664934 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 30 13:54:10.670786 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 30 13:54:10.677073 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 30 13:54:10.677215 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 30 13:54:10.680812 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 30 13:54:10.680947 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 30 13:54:10.681794 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 30 13:54:10.684803 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 30 13:54:10.689224 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 30 13:54:10.690689 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 30 13:54:10.691908 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 30 13:54:10.692742 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 30 13:54:10.739748 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 30 13:54:10.740167 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 30 13:54:10.746776 kernel: loop0: detected capacity change from 0 to 140768 Jan 30 13:54:10.746830 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Jan 30 13:54:10.754103 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 30 13:54:10.756696 systemd-journald[1153]: Time spent on flushing to /var/log/journal/56517c1f857641dbad1c2a82ede3b459 is 26.018ms for 1842 entries. 
Jan 30 13:54:10.756696 systemd-journald[1153]: System Journal (/var/log/journal/56517c1f857641dbad1c2a82ede3b459) is 8.0M, max 584.8M, 576.8M free. Jan 30 13:54:10.797928 systemd-journald[1153]: Received client request to flush runtime journal. Jan 30 13:54:10.763984 ignition[1194]: Ignition 2.19.0 Jan 30 13:54:10.779869 systemd[1]: Finished ignition-delete-config.service - Ignition (delete config). Jan 30 13:54:10.764180 ignition[1194]: deleting config from guestinfo properties Jan 30 13:54:10.778844 ignition[1194]: Successfully deleted config Jan 30 13:54:10.800088 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 30 13:54:10.826187 systemd-tmpfiles[1205]: ACLs are not supported, ignoring. Jan 30 13:54:10.826199 systemd-tmpfiles[1205]: ACLs are not supported, ignoring. Jan 30 13:54:10.832064 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 30 13:54:10.837813 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jan 30 13:54:10.838138 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 30 13:54:10.839794 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 30 13:54:10.845410 udevadm[1224]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Jan 30 13:54:10.887380 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 30 13:54:10.888931 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Jan 30 13:54:10.895814 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 30 13:54:10.904438 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 30 13:54:10.912269 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 30 13:54:10.913723 kernel: loop1: detected capacity change from 0 to 210664 Jan 30 13:54:10.924162 systemd-tmpfiles[1231]: ACLs are not supported, ignoring. Jan 30 13:54:10.924174 systemd-tmpfiles[1231]: ACLs are not supported, ignoring. Jan 30 13:54:10.926893 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 30 13:54:10.961733 kernel: loop2: detected capacity change from 0 to 142488 Jan 30 13:54:11.024965 kernel: loop3: detected capacity change from 0 to 2976 Jan 30 13:54:11.069774 kernel: loop4: detected capacity change from 0 to 140768 Jan 30 13:54:11.097723 kernel: loop5: detected capacity change from 0 to 210664 Jan 30 13:54:11.116134 kernel: loop6: detected capacity change from 0 to 142488 Jan 30 13:54:11.151725 kernel: loop7: detected capacity change from 0 to 2976 Jan 30 13:54:11.171205 (sd-merge)[1238]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-vmware'. Jan 30 13:54:11.172176 (sd-merge)[1238]: Merged extensions into '/usr'. Jan 30 13:54:11.174513 systemd[1]: Reloading requested from client PID 1204 ('systemd-sysext') (unit systemd-sysext.service)... Jan 30 13:54:11.174570 systemd[1]: Reloading... Jan 30 13:54:11.249020 zram_generator::config[1263]: No configuration found. Jan 30 13:54:11.324862 systemd[1]: /etc/systemd/system/coreos-metadata.service:11: Ignoring unknown escape sequences: "echo "COREOS_CUSTOM_PRIVATE_IPV4=$(ip addr show ens192 | grep "inet 10." 
| grep -Po "inet \K[\d.]+") Jan 30 13:54:11.340444 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 30 13:54:11.377542 systemd[1]: Reloading finished in 202 ms. Jan 30 13:54:11.392533 ldconfig[1199]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 30 13:54:11.395210 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 30 13:54:11.395516 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 30 13:54:11.395845 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 30 13:54:11.401957 systemd[1]: Starting ensure-sysext.service... Jan 30 13:54:11.402861 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 30 13:54:11.404838 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 30 13:54:11.411198 systemd[1]: Reloading requested from client PID 1322 ('systemctl') (unit ensure-sysext.service)... Jan 30 13:54:11.411216 systemd[1]: Reloading... Jan 30 13:54:11.424749 systemd-tmpfiles[1323]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 30 13:54:11.424957 systemd-tmpfiles[1323]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jan 30 13:54:11.425448 systemd-tmpfiles[1323]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jan 30 13:54:11.425611 systemd-tmpfiles[1323]: ACLs are not supported, ignoring. Jan 30 13:54:11.425647 systemd-tmpfiles[1323]: ACLs are not supported, ignoring. Jan 30 13:54:11.430351 systemd-tmpfiles[1323]: Detected autofs mount point /boot during canonicalization of boot. Jan 30 13:54:11.430553 systemd-udevd[1324]: Using default interface naming scheme 'v255'. Jan 30 13:54:11.431081 systemd-tmpfiles[1323]: Skipping /boot Jan 30 13:54:11.437562 systemd-tmpfiles[1323]: Detected autofs mount point /boot during canonicalization of boot. Jan 30 13:54:11.437568 systemd-tmpfiles[1323]: Skipping /boot Jan 30 13:54:11.456726 zram_generator::config[1349]: No configuration found. Jan 30 13:54:11.546739 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Jan 30 13:54:11.553527 systemd[1]: /etc/systemd/system/coreos-metadata.service:11: Ignoring unknown escape sequences: "echo "COREOS_CUSTOM_PRIVATE_IPV4=$(ip addr show ens192 | grep "inet 10." | grep -Po "inet \K[\d.]+") Jan 30 13:54:11.567825 kernel: ACPI: button: Power Button [PWRF] Jan 30 13:54:11.573104 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 30 13:54:11.614368 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jan 30 13:54:11.614920 systemd[1]: Reloading finished in 203 ms. Jan 30 13:54:11.621059 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 30 13:54:11.622313 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 30 13:54:11.625717 kernel: piix4_smbus 0000:00:07.3: SMBus Host Controller not enabled! Jan 30 13:54:11.632797 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... 
Jan 30 13:54:11.634136 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 30 13:54:11.635951 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 30 13:54:11.640820 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 30 13:54:11.645824 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 30 13:54:11.647795 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 30 13:54:11.652861 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 30 13:54:11.653763 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 30 13:54:11.656863 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 30 13:54:11.658837 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 30 13:54:11.659848 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 30 13:54:11.659988 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 30 13:54:11.660052 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 30 13:54:11.662615 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 30 13:54:11.662698 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 30 13:54:11.662821 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 30 13:54:11.664703 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 30 13:54:11.665715 kernel: vmw_vmci 0000:00:07.7: Using capabilities 0xc Jan 30 13:54:11.713213 kernel: input: ImPS/2 Generic Wheel Mouse as /devices/platform/i8042/serio1/input/input3 Jan 30 13:54:11.713230 kernel: Guest personality initialized and is active Jan 30 13:54:11.667861 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 30 13:54:11.668516 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 30 13:54:11.668608 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 30 13:54:11.671295 systemd[1]: Finished ensure-sysext.service. Jan 30 13:54:11.680846 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jan 30 13:54:11.709881 (udev-worker)[1401]: id: Truncating stdout of 'dmi_memory_id' up to 16384 byte. Jan 30 13:54:11.720853 kernel: VMCI host device registered (name=vmci, major=10, minor=125) Jan 30 13:54:11.720871 kernel: Initialized host personality Jan 30 13:54:11.717725 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 30 13:54:11.718350 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 30 13:54:11.718619 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. 
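The two docker.socket warnings logged above are self-describing: line 6 of the unit still points at the legacy /var/run tree, and systemd rewrites it to /run at load time. The permanent fix is one line in the socket unit (or a drop-in override):

    [Socket]
    ListenStream=/run/docker.sock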
Jan 30 13:54:11.718933 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 30 13:54:11.719011 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 30 13:54:11.732900 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 30 13:54:11.733020 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 30 13:54:11.733221 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 30 13:54:11.733330 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 30 13:54:11.733582 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 30 13:54:11.733664 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 30 13:54:11.733882 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 30 13:54:11.733956 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 30 13:54:11.745035 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 30 13:54:11.745754 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 38 scanned by (udev-worker) (1406) Jan 30 13:54:11.745078 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 30 13:54:11.746856 kernel: mousedev: PS/2 mouse device common for all mice Jan 30 13:54:11.752911 augenrules[1475]: No rules Jan 30 13:54:11.755093 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 30 13:54:11.759050 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 30 13:54:11.763771 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 30 13:54:11.776919 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 30 13:54:11.799872 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_disk OEM. Jan 30 13:54:11.806859 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 30 13:54:11.811308 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jan 30 13:54:11.817615 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jan 30 13:54:11.826865 systemd-networkd[1437]: lo: Link UP Jan 30 13:54:11.826869 systemd-networkd[1437]: lo: Gained carrier Jan 30 13:54:11.827614 systemd-networkd[1437]: Enumeration completed Jan 30 13:54:11.827680 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 30 13:54:11.833821 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 30 13:54:11.834770 systemd-networkd[1437]: ens192: Configuring with /etc/systemd/network/00-vmware.network. Jan 30 13:54:11.836723 kernel: vmxnet3 0000:0b:00.0 ens192: intr type 3, mode 0, 3 vectors allocated Jan 30 13:54:11.836848 kernel: vmxnet3 0000:0b:00.0 ens192: NIC Link is Up 10000 Mbps Jan 30 13:54:11.837159 systemd-networkd[1437]: ens192: Link UP Jan 30 13:54:11.837250 systemd-networkd[1437]: ens192: Gained carrier Jan 30 13:54:11.849770 lvm[1495]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. 
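With the reload done, networkd enumerates links and configures ens192 from the 00-vmware.network file Ignition wrote earlier. The file's contents never appear in the log; a typical shape for a DHCP-managed VMware NIC would be (entirely illustrative):

    [Match]
    Name=ens192

    [Network]
    DHCP=yes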
Jan 30 13:54:11.853193 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jan 30 13:54:11.853379 systemd[1]: Reached target time-set.target - System Time Set. Jan 30 13:54:11.855972 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 30 13:54:11.857033 systemd-resolved[1438]: Positive Trust Anchors: Jan 30 13:54:11.857199 systemd-resolved[1438]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 30 13:54:11.857264 systemd-resolved[1438]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 30 13:54:11.860116 systemd-resolved[1438]: Defaulting to hostname 'linux'. Jan 30 13:54:11.861333 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 30 13:54:11.861511 systemd[1]: Reached target network.target - Network. Jan 30 13:54:11.861603 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 30 13:54:11.875640 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jan 30 13:54:11.875865 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 30 13:54:11.882825 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jan 30 13:54:11.885885 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 30 13:54:11.886071 systemd[1]: Reached target sysinit.target - System Initialization. Jan 30 13:54:11.886241 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 30 13:54:11.886371 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 30 13:54:11.886571 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 30 13:54:11.886729 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 30 13:54:11.886839 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 30 13:54:11.886940 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 30 13:54:11.886954 systemd[1]: Reached target paths.target - Path Units. Jan 30 13:54:11.887036 systemd[1]: Reached target timers.target - Timer Units. Jan 30 13:54:11.887666 lvm[1504]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 30 13:54:11.887934 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 30 13:54:11.888905 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 30 13:54:11.891001 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 30 13:54:11.891482 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 30 13:54:11.891617 systemd[1]: Reached target sockets.target - Socket Units. Jan 30 13:54:11.891702 systemd[1]: Reached target basic.target - Basic System. 
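The "Positive Trust Anchors" dump is systemd-resolved loading its built-in DNSSEC root trust anchor (the 2017 root KSK, key tag 20326); the long negative list is domains it will never try to validate. Local additions use plain DS-record syntax in a drop-in, following resolved's documented convention:

    # /etc/dnssec-trust-anchors.d/root.positive
    . IN DS 20326 8 2 E06D44B80B8F1D39A95C0B0D7C65D08458E880409BBC683457104237C7F8EC8D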
Jan 30 13:54:11.892045 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 30 13:54:11.892103 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 30 13:54:11.894122 systemd[1]: Starting containerd.service - containerd container runtime... Jan 30 13:54:11.895511 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 30 13:54:11.898803 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 30 13:54:11.901895 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 30 13:54:11.902774 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 30 13:54:11.903822 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 30 13:54:11.904306 jq[1510]: false Jan 30 13:54:11.911164 dbus-daemon[1509]: [system] SELinux support is enabled Jan 30 13:54:11.911790 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 30 13:54:11.916815 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 30 13:54:11.917781 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 30 13:54:11.921283 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 30 13:54:11.921968 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 30 13:54:11.922390 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 30 13:54:11.924819 systemd[1]: Starting update-engine.service - Update Engine... Jan 30 13:54:11.926047 extend-filesystems[1511]: Found loop4 Jan 30 13:54:11.926291 extend-filesystems[1511]: Found loop5 Jan 30 13:54:11.926424 extend-filesystems[1511]: Found loop6 Jan 30 13:54:11.926548 extend-filesystems[1511]: Found loop7 Jan 30 13:54:11.926667 extend-filesystems[1511]: Found sda Jan 30 13:54:11.926809 extend-filesystems[1511]: Found sda1 Jan 30 13:54:11.926930 extend-filesystems[1511]: Found sda2 Jan 30 13:54:11.927049 extend-filesystems[1511]: Found sda3 Jan 30 13:54:11.927172 extend-filesystems[1511]: Found usr Jan 30 13:54:11.927295 extend-filesystems[1511]: Found sda4 Jan 30 13:54:11.927418 extend-filesystems[1511]: Found sda6 Jan 30 13:54:11.927537 extend-filesystems[1511]: Found sda7 Jan 30 13:54:11.927654 extend-filesystems[1511]: Found sda9 Jan 30 13:54:11.927832 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 30 13:54:11.927918 extend-filesystems[1511]: Checking size of /dev/sda9 Jan 30 13:54:11.930778 systemd[1]: Starting vgauthd.service - VGAuth Service for open-vm-tools... Jan 30 13:54:11.931144 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 30 13:54:11.934090 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jan 30 13:54:11.936651 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 30 13:54:11.936782 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 30 13:54:11.939000 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. 
Jan 30 13:54:11.939093 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 30 13:54:11.943322 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 30 13:54:11.943345 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 30 13:54:11.943494 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 30 13:54:11.943508 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 30 13:54:11.954642 systemd[1]: motdgen.service: Deactivated successfully. Jan 30 13:54:11.954766 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 30 13:54:11.963325 extend-filesystems[1511]: Old size kept for /dev/sda9 Jan 30 13:54:11.963325 extend-filesystems[1511]: Found sr0 Jan 30 13:54:11.963488 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 30 13:54:11.963593 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 30 13:54:11.966080 jq[1521]: true Jan 30 13:54:11.973830 (ntainerd)[1541]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 30 13:54:11.980857 tar[1530]: linux-amd64/helm Jan 30 13:54:11.978812 systemd[1]: Started vgauthd.service - VGAuth Service for open-vm-tools. Jan 30 13:54:11.984915 jq[1547]: true Jan 30 13:54:11.984605 systemd[1]: Starting vmtoolsd.service - Service for virtual machines hosted on VMware... Jan 30 13:54:11.986448 systemd-logind[1518]: Watching system buttons on /dev/input/event1 (Power Button) Jan 30 13:54:11.986462 systemd-logind[1518]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jan 30 13:54:11.986784 systemd-logind[1518]: New seat seat0. Jan 30 13:54:11.988643 update_engine[1520]: I20250130 13:54:11.987478 1520 main.cc:92] Flatcar Update Engine starting Jan 30 13:54:11.988427 systemd[1]: Started systemd-logind.service - User Login Management. Jan 30 13:54:11.992775 systemd[1]: Started update-engine.service - Update Engine. Jan 30 13:54:11.993566 update_engine[1520]: I20250130 13:54:11.993397 1520 update_check_scheduler.cc:74] Next update check in 5m23s Jan 30 13:54:11.999552 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 30 13:54:12.011033 systemd[1]: Started vmtoolsd.service - Service for virtual machines hosted on VMware. Jan 30 13:54:12.016649 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 38 scanned by (udev-worker) (1388) Jan 30 13:54:12.039857 unknown[1550]: Pref_Init: Using '/etc/vmware-tools/vgauth.conf' as preferences filepath Jan 30 13:54:12.042416 unknown[1550]: Core dump limit set to -1 Jan 30 13:54:12.047718 kernel: NET: Registered PF_VSOCK protocol family Jan 30 13:54:12.077488 bash[1573]: Updated "/home/core/.ssh/authorized_keys" Jan 30 13:54:12.079927 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 30 13:54:12.080836 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. 
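update_engine has scheduled its first check ("Next update check in 5m23s") and locksmithd, the cluster reboot manager, is starting. Both are steered from /etc/flatcar/update.conf; a sketch with illustrative values (locksmith also accepts etcd-lock and off as strategies):

    GROUP=stable
    REBOOT_STRATEGY=reboot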
Jan 30 13:54:12.138336 sshd_keygen[1543]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 30 13:54:12.166319 locksmithd[1553]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 30 13:54:12.177397 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 30 13:54:12.193498 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 30 13:54:12.201132 systemd[1]: issuegen.service: Deactivated successfully. Jan 30 13:54:12.201250 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 30 13:54:12.206835 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 30 13:54:12.219471 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 30 13:54:12.222976 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 30 13:54:12.225461 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 30 13:54:12.227063 systemd[1]: Reached target getty.target - Login Prompts. Jan 30 13:54:12.262680 containerd[1541]: time="2025-01-30T13:54:12.262478532Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Jan 30 13:54:12.282338 containerd[1541]: time="2025-01-30T13:54:12.282306938Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jan 30 13:54:12.283234 containerd[1541]: time="2025-01-30T13:54:12.283214036Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.74-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jan 30 13:54:12.283234 containerd[1541]: time="2025-01-30T13:54:12.283231541Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jan 30 13:54:12.283281 containerd[1541]: time="2025-01-30T13:54:12.283242422Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jan 30 13:54:12.283343 containerd[1541]: time="2025-01-30T13:54:12.283331987Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jan 30 13:54:12.283364 containerd[1541]: time="2025-01-30T13:54:12.283344717Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jan 30 13:54:12.283390 containerd[1541]: time="2025-01-30T13:54:12.283378777Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jan 30 13:54:12.283406 containerd[1541]: time="2025-01-30T13:54:12.283388739Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jan 30 13:54:12.283493 containerd[1541]: time="2025-01-30T13:54:12.283481156Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 30 13:54:12.283514 containerd[1541]: time="2025-01-30T13:54:12.283492364Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." 
type=io.containerd.snapshotter.v1 Jan 30 13:54:12.283514 containerd[1541]: time="2025-01-30T13:54:12.283501019Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jan 30 13:54:12.283514 containerd[1541]: time="2025-01-30T13:54:12.283506606Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jan 30 13:54:12.283555 containerd[1541]: time="2025-01-30T13:54:12.283545535Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jan 30 13:54:12.283667 containerd[1541]: time="2025-01-30T13:54:12.283656781Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jan 30 13:54:12.283734 containerd[1541]: time="2025-01-30T13:54:12.283720855Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 30 13:54:12.283734 containerd[1541]: time="2025-01-30T13:54:12.283732573Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jan 30 13:54:12.283985 containerd[1541]: time="2025-01-30T13:54:12.283773792Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jan 30 13:54:12.283985 containerd[1541]: time="2025-01-30T13:54:12.283801761Z" level=info msg="metadata content store policy set" policy=shared Jan 30 13:54:12.303793 containerd[1541]: time="2025-01-30T13:54:12.303758077Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jan 30 13:54:12.303876 containerd[1541]: time="2025-01-30T13:54:12.303814314Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jan 30 13:54:12.303876 containerd[1541]: time="2025-01-30T13:54:12.303826617Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jan 30 13:54:12.303876 containerd[1541]: time="2025-01-30T13:54:12.303837000Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jan 30 13:54:12.303876 containerd[1541]: time="2025-01-30T13:54:12.303848030Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jan 30 13:54:12.304309 containerd[1541]: time="2025-01-30T13:54:12.303949100Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jan 30 13:54:12.304309 containerd[1541]: time="2025-01-30T13:54:12.304088610Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jan 30 13:54:12.304309 containerd[1541]: time="2025-01-30T13:54:12.304145675Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jan 30 13:54:12.304309 containerd[1541]: time="2025-01-30T13:54:12.304156012Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jan 30 13:54:12.304309 containerd[1541]: time="2025-01-30T13:54:12.304163201Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." 
type=io.containerd.sandbox.controller.v1 Jan 30 13:54:12.304309 containerd[1541]: time="2025-01-30T13:54:12.304170309Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jan 30 13:54:12.304309 containerd[1541]: time="2025-01-30T13:54:12.304177853Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jan 30 13:54:12.304309 containerd[1541]: time="2025-01-30T13:54:12.304184312Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jan 30 13:54:12.304309 containerd[1541]: time="2025-01-30T13:54:12.304191847Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jan 30 13:54:12.304309 containerd[1541]: time="2025-01-30T13:54:12.304200242Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jan 30 13:54:12.304309 containerd[1541]: time="2025-01-30T13:54:12.304208038Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jan 30 13:54:12.304309 containerd[1541]: time="2025-01-30T13:54:12.304215026Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jan 30 13:54:12.304309 containerd[1541]: time="2025-01-30T13:54:12.304221170Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jan 30 13:54:12.304309 containerd[1541]: time="2025-01-30T13:54:12.304232380Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jan 30 13:54:12.304502 containerd[1541]: time="2025-01-30T13:54:12.304239880Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jan 30 13:54:12.304502 containerd[1541]: time="2025-01-30T13:54:12.304247041Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jan 30 13:54:12.304502 containerd[1541]: time="2025-01-30T13:54:12.304254511Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jan 30 13:54:12.304502 containerd[1541]: time="2025-01-30T13:54:12.304261310Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jan 30 13:54:12.304502 containerd[1541]: time="2025-01-30T13:54:12.304273821Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jan 30 13:54:12.304502 containerd[1541]: time="2025-01-30T13:54:12.304281612Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jan 30 13:54:12.304502 containerd[1541]: time="2025-01-30T13:54:12.304288755Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jan 30 13:54:12.304502 containerd[1541]: time="2025-01-30T13:54:12.304302107Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jan 30 13:54:12.304502 containerd[1541]: time="2025-01-30T13:54:12.304315137Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jan 30 13:54:12.304502 containerd[1541]: time="2025-01-30T13:54:12.304324060Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." 
type=io.containerd.grpc.v1 Jan 30 13:54:12.304502 containerd[1541]: time="2025-01-30T13:54:12.304331285Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jan 30 13:54:12.304502 containerd[1541]: time="2025-01-30T13:54:12.304337964Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jan 30 13:54:12.304502 containerd[1541]: time="2025-01-30T13:54:12.304348787Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jan 30 13:54:12.304502 containerd[1541]: time="2025-01-30T13:54:12.304362003Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jan 30 13:54:12.304502 containerd[1541]: time="2025-01-30T13:54:12.304368627Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jan 30 13:54:12.304697 containerd[1541]: time="2025-01-30T13:54:12.304375417Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jan 30 13:54:12.304697 containerd[1541]: time="2025-01-30T13:54:12.304405141Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jan 30 13:54:12.304697 containerd[1541]: time="2025-01-30T13:54:12.304416346Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jan 30 13:54:12.304697 containerd[1541]: time="2025-01-30T13:54:12.304422736Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jan 30 13:54:12.304697 containerd[1541]: time="2025-01-30T13:54:12.304429288Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jan 30 13:54:12.304697 containerd[1541]: time="2025-01-30T13:54:12.304434751Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jan 30 13:54:12.304697 containerd[1541]: time="2025-01-30T13:54:12.304441539Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jan 30 13:54:12.304697 containerd[1541]: time="2025-01-30T13:54:12.304447221Z" level=info msg="NRI interface is disabled by configuration." Jan 30 13:54:12.304697 containerd[1541]: time="2025-01-30T13:54:12.304453219Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Jan 30 13:54:12.304841 containerd[1541]: time="2025-01-30T13:54:12.304617126Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jan 30 13:54:12.304841 containerd[1541]: time="2025-01-30T13:54:12.304651424Z" level=info msg="Connect containerd service" Jan 30 13:54:12.304841 containerd[1541]: time="2025-01-30T13:54:12.304675727Z" level=info msg="using legacy CRI server" Jan 30 13:54:12.304841 containerd[1541]: time="2025-01-30T13:54:12.304680244Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 30 13:54:12.304841 containerd[1541]: time="2025-01-30T13:54:12.304745713Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jan 30 13:54:12.306188 containerd[1541]: time="2025-01-30T13:54:12.305224165Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 30 13:54:12.306188 
containerd[1541]: time="2025-01-30T13:54:12.305430695Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 30 13:54:12.306188 containerd[1541]: time="2025-01-30T13:54:12.305491322Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 30 13:54:12.306188 containerd[1541]: time="2025-01-30T13:54:12.305513898Z" level=info msg="Start subscribing containerd event" Jan 30 13:54:12.306188 containerd[1541]: time="2025-01-30T13:54:12.305545885Z" level=info msg="Start recovering state" Jan 30 13:54:12.306188 containerd[1541]: time="2025-01-30T13:54:12.305579641Z" level=info msg="Start event monitor" Jan 30 13:54:12.306188 containerd[1541]: time="2025-01-30T13:54:12.305624101Z" level=info msg="Start snapshots syncer" Jan 30 13:54:12.306188 containerd[1541]: time="2025-01-30T13:54:12.305631897Z" level=info msg="Start cni network conf syncer for default" Jan 30 13:54:12.306188 containerd[1541]: time="2025-01-30T13:54:12.305635947Z" level=info msg="Start streaming server" Jan 30 13:54:12.306188 containerd[1541]: time="2025-01-30T13:54:12.306059952Z" level=info msg="containerd successfully booted in 0.044505s" Jan 30 13:54:12.305735 systemd[1]: Started containerd.service - containerd container runtime. Jan 30 13:55:25.280854 systemd-resolved[1438]: Clock change detected. Flushing caches. Jan 30 13:55:25.281007 systemd-timesyncd[1448]: Contacted time server 24.111.79.186:123 (0.flatcar.pool.ntp.org). Jan 30 13:55:25.281043 systemd-timesyncd[1448]: Initial clock synchronization to Thu 2025-01-30 13:55:25.280580 UTC. Jan 30 13:55:25.364372 tar[1530]: linux-amd64/LICENSE Jan 30 13:55:25.364372 tar[1530]: linux-amd64/README.md Jan 30 13:55:25.378609 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 30 13:55:26.041405 systemd-networkd[1437]: ens192: Gained IPv6LL Jan 30 13:55:26.043679 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 30 13:55:26.044069 systemd[1]: Reached target network-online.target - Network is Online. Jan 30 13:55:26.049410 systemd[1]: Starting coreos-metadata.service - VMware metadata agent... Jan 30 13:55:26.050649 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 13:55:26.053374 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 30 13:55:26.070762 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 30 13:55:26.083170 systemd[1]: coreos-metadata.service: Deactivated successfully. Jan 30 13:55:26.083306 systemd[1]: Finished coreos-metadata.service - VMware metadata agent. Jan 30 13:55:26.083674 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 30 13:55:27.731618 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 13:55:27.732013 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 30 13:55:27.732166 systemd[1]: Startup finished in 1.008s (kernel) + 10.237s (initrd) + 4.966s (userspace) = 16.212s. Jan 30 13:55:27.738649 (kubelet)[1688]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 30 13:55:27.993788 login[1617]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Jan 30 13:55:27.995136 login[1618]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Jan 30 13:55:28.000588 systemd[1]: Created slice user-500.slice - User Slice of UID 500. 
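[Note] The containerd boot above completes after roughly 45 ms of plugin loading, but one entry is an error: the CRI plugin found no network config in /etc/cni/net.d, so pod networking stays uninitialized until something installs a CNI config there (on a kubeadm cluster this usually arrives later with the network add-on). Also note the apparent jump from 13:54 to 13:55 in the timestamps: systemd-timesyncd stepped the clock after contacting 0.flatcar.pool.ntp.org and systemd-resolved flushed its caches in response, so nothing actually took a minute. A minimal sketch of a config that would satisfy the loader follows; the bridge/host-local plugin choice and the subnet are placeholders, not taken from this log:

    # Hypothetical minimal CNI config for /etc/cni/net.d (the directory
    # named in the CRI config dump above); plugin types and the subnet
    # are illustrative placeholders.
    cat <<'EOF' >/etc/cni/net.d/10-bridge.conflist
    {
      "cniVersion": "1.0.0",
      "name": "bridge-net",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "cni0",
          "isGateway": true,
          "ipMasq": true,
          "ipam": { "type": "host-local", "subnet": "10.88.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
    EOF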
Jan 30 13:55:28.005379 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 30 13:55:28.006873 systemd-logind[1518]: New session 2 of user core. Jan 30 13:55:28.009060 systemd-logind[1518]: New session 1 of user core. Jan 30 13:55:28.031293 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 30 13:55:28.036532 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 30 13:55:28.044388 (systemd)[1695]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 30 13:55:28.182825 systemd[1695]: Queued start job for default target default.target. Jan 30 13:55:28.191199 systemd[1695]: Created slice app.slice - User Application Slice. Jan 30 13:55:28.191327 systemd[1695]: Reached target paths.target - Paths. Jan 30 13:55:28.191339 systemd[1695]: Reached target timers.target - Timers. Jan 30 13:55:28.193010 systemd[1695]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 30 13:55:28.200750 systemd[1695]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 30 13:55:28.200798 systemd[1695]: Reached target sockets.target - Sockets. Jan 30 13:55:28.200809 systemd[1695]: Reached target basic.target - Basic System. Jan 30 13:55:28.200843 systemd[1695]: Reached target default.target - Main User Target. Jan 30 13:55:28.200862 systemd[1695]: Startup finished in 152ms. Jan 30 13:55:28.200979 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 30 13:55:28.209387 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 30 13:55:28.210268 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 30 13:55:30.421759 kubelet[1688]: E0130 13:55:30.421723 1688 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 30 13:55:30.423460 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 30 13:55:30.423547 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 30 13:55:40.610660 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 30 13:55:40.621400 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 13:55:40.679690 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 13:55:40.682485 (kubelet)[1738]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 30 13:55:40.714106 kubelet[1738]: E0130 13:55:40.714067 1738 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 30 13:55:40.716701 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 30 13:55:40.716789 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 30 13:55:50.860684 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jan 30 13:55:50.870312 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
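[Note] The kubelet unit above enters a crash loop that repeats every ten seconds through the entries below: /var/lib/kubelet/config.yaml does not exist yet. On a node provisioned like this one that file is normally written by kubeadm, so the failures are expected noise until kubeadm runs. Judging by the static pod manifests that appear later in this log, this host becomes a control-plane node, in which case the step that writes the file would look roughly like the following sketch; the advertise address is taken from the apiserver endpoint seen later, everything else is a placeholder:

    # Hypothetical bootstrap step; kubeadm init writes
    # /var/lib/kubelet/config.yaml plus the static pod manifests
    # under /etc/kubernetes/manifests.
    kubeadm init \
      --apiserver-advertise-address 139.178.70.103 \
      --pod-network-cidr 10.244.0.0/16   # placeholder CIDR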
Jan 30 13:55:51.156309 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 13:55:51.159644 (kubelet)[1756]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 30 13:55:51.186995 kubelet[1756]: E0130 13:55:51.186963 1756 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 30 13:55:51.188389 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 30 13:55:51.188479 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 30 13:56:01.360822 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Jan 30 13:56:01.368348 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 13:56:01.695383 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 13:56:01.698298 (kubelet)[1773]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 30 13:56:01.732322 kubelet[1773]: E0130 13:56:01.732288 1773 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 30 13:56:01.733461 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 30 13:56:01.733540 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 30 13:56:05.117041 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 30 13:56:05.118238 systemd[1]: Started sshd@0-139.178.70.103:22-139.178.68.195:41000.service - OpenSSH per-connection server daemon (139.178.68.195:41000). Jan 30 13:56:05.154565 sshd[1782]: Accepted publickey for core from 139.178.68.195 port 41000 ssh2: RSA SHA256:6nbEnXEl+18uydVNSXgyuQlkvzGTWxQuELikT+hTs2E Jan 30 13:56:05.155257 sshd[1782]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:56:05.157489 systemd-logind[1518]: New session 3 of user core. Jan 30 13:56:05.164300 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 30 13:56:05.217300 systemd[1]: Started sshd@1-139.178.70.103:22-139.178.68.195:41004.service - OpenSSH per-connection server daemon (139.178.68.195:41004). Jan 30 13:56:05.244064 sshd[1787]: Accepted publickey for core from 139.178.68.195 port 41004 ssh2: RSA SHA256:6nbEnXEl+18uydVNSXgyuQlkvzGTWxQuELikT+hTs2E Jan 30 13:56:05.244814 sshd[1787]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:56:05.247989 systemd-logind[1518]: New session 4 of user core. Jan 30 13:56:05.253301 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 30 13:56:05.303490 sshd[1787]: pam_unix(sshd:session): session closed for user core Jan 30 13:56:05.310782 systemd[1]: sshd@1-139.178.70.103:22-139.178.68.195:41004.service: Deactivated successfully. Jan 30 13:56:05.311757 systemd[1]: session-4.scope: Deactivated successfully. Jan 30 13:56:05.312631 systemd-logind[1518]: Session 4 logged out. Waiting for processes to exit. 
Jan 30 13:56:05.313486 systemd[1]: Started sshd@2-139.178.70.103:22-139.178.68.195:41016.service - OpenSSH per-connection server daemon (139.178.68.195:41016). Jan 30 13:56:05.315439 systemd-logind[1518]: Removed session 4. Jan 30 13:56:05.341271 sshd[1794]: Accepted publickey for core from 139.178.68.195 port 41016 ssh2: RSA SHA256:6nbEnXEl+18uydVNSXgyuQlkvzGTWxQuELikT+hTs2E Jan 30 13:56:05.341981 sshd[1794]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:56:05.345088 systemd-logind[1518]: New session 5 of user core. Jan 30 13:56:05.350308 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 30 13:56:05.396542 sshd[1794]: pam_unix(sshd:session): session closed for user core Jan 30 13:56:05.405288 systemd[1]: sshd@2-139.178.70.103:22-139.178.68.195:41016.service: Deactivated successfully. Jan 30 13:56:05.406478 systemd[1]: session-5.scope: Deactivated successfully. Jan 30 13:56:05.407655 systemd-logind[1518]: Session 5 logged out. Waiting for processes to exit. Jan 30 13:56:05.417432 systemd[1]: Started sshd@3-139.178.70.103:22-139.178.68.195:41022.service - OpenSSH per-connection server daemon (139.178.68.195:41022). Jan 30 13:56:05.419432 systemd-logind[1518]: Removed session 5. Jan 30 13:56:05.442949 sshd[1801]: Accepted publickey for core from 139.178.68.195 port 41022 ssh2: RSA SHA256:6nbEnXEl+18uydVNSXgyuQlkvzGTWxQuELikT+hTs2E Jan 30 13:56:05.443728 sshd[1801]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:56:05.446191 systemd-logind[1518]: New session 6 of user core. Jan 30 13:56:05.456331 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 30 13:56:05.506675 sshd[1801]: pam_unix(sshd:session): session closed for user core Jan 30 13:56:05.512015 systemd[1]: sshd@3-139.178.70.103:22-139.178.68.195:41022.service: Deactivated successfully. Jan 30 13:56:05.513158 systemd[1]: session-6.scope: Deactivated successfully. Jan 30 13:56:05.514309 systemd-logind[1518]: Session 6 logged out. Waiting for processes to exit. Jan 30 13:56:05.515404 systemd[1]: Started sshd@4-139.178.70.103:22-139.178.68.195:41032.service - OpenSSH per-connection server daemon (139.178.68.195:41032). Jan 30 13:56:05.517536 systemd-logind[1518]: Removed session 6. Jan 30 13:56:05.546194 sshd[1808]: Accepted publickey for core from 139.178.68.195 port 41032 ssh2: RSA SHA256:6nbEnXEl+18uydVNSXgyuQlkvzGTWxQuELikT+hTs2E Jan 30 13:56:05.546996 sshd[1808]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:56:05.549587 systemd-logind[1518]: New session 7 of user core. Jan 30 13:56:05.556347 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 30 13:56:05.621402 sudo[1811]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 30 13:56:05.621639 sudo[1811]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 30 13:56:05.631971 sudo[1811]: pam_unix(sudo:session): session closed for user root Jan 30 13:56:05.633937 sshd[1808]: pam_unix(sshd:session): session closed for user core Jan 30 13:56:05.638145 systemd[1]: sshd@4-139.178.70.103:22-139.178.68.195:41032.service: Deactivated successfully. Jan 30 13:56:05.639751 systemd[1]: session-7.scope: Deactivated successfully. Jan 30 13:56:05.641351 systemd-logind[1518]: Session 7 logged out. Waiting for processes to exit. Jan 30 13:56:05.645413 systemd[1]: Started sshd@5-139.178.70.103:22-139.178.68.195:41036.service - OpenSSH per-connection server daemon (139.178.68.195:41036). 
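[Note] Each SSH connection above is handled by its own transient unit (sshd@0-..., sshd@1-..., and so on). That is systemd's per-connection socket activation: a listening socket unit with Accept=yes spawns one instance of a template service per accepted connection, and the instance name encodes the local and remote endpoints. A reduced sketch of the two units involved; the exact units shipped on this host may differ:

    # Hypothetical sketch of socket-activated sshd.
    # sshd.socket accepts each connection and spawns a template instance:
    #   [Socket]
    #   ListenStream=22
    #   Accept=yes
    # sshd@.service runs sshd in inetd mode on the accepted socket:
    #   [Service]
    #   ExecStart=-/usr/sbin/sshd -i
    #   StandardInput=socket
    systemctl cat sshd.socket sshd@.service   # view the real units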
Jan 30 13:56:05.646291 systemd-logind[1518]: Removed session 7. Jan 30 13:56:05.671670 sshd[1816]: Accepted publickey for core from 139.178.68.195 port 41036 ssh2: RSA SHA256:6nbEnXEl+18uydVNSXgyuQlkvzGTWxQuELikT+hTs2E Jan 30 13:56:05.672644 sshd[1816]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:56:05.675062 systemd-logind[1518]: New session 8 of user core. Jan 30 13:56:05.683329 systemd[1]: Started session-8.scope - Session 8 of User core. Jan 30 13:56:05.731014 sudo[1820]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 30 13:56:05.731395 sudo[1820]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 30 13:56:05.733459 sudo[1820]: pam_unix(sudo:session): session closed for user root Jan 30 13:56:05.736752 sudo[1819]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Jan 30 13:56:05.736935 sudo[1819]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 30 13:56:05.746424 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Jan 30 13:56:05.747336 auditctl[1823]: No rules Jan 30 13:56:05.747643 systemd[1]: audit-rules.service: Deactivated successfully. Jan 30 13:56:05.747777 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Jan 30 13:56:05.749557 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 30 13:56:05.766617 augenrules[1841]: No rules Jan 30 13:56:05.766992 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 30 13:56:05.767875 sudo[1819]: pam_unix(sudo:session): session closed for user root Jan 30 13:56:05.768870 sshd[1816]: pam_unix(sshd:session): session closed for user core Jan 30 13:56:05.773763 systemd[1]: sshd@5-139.178.70.103:22-139.178.68.195:41036.service: Deactivated successfully. Jan 30 13:56:05.774596 systemd[1]: session-8.scope: Deactivated successfully. Jan 30 13:56:05.775376 systemd-logind[1518]: Session 8 logged out. Waiting for processes to exit. Jan 30 13:56:05.777412 systemd[1]: Started sshd@6-139.178.70.103:22-139.178.68.195:41042.service - OpenSSH per-connection server daemon (139.178.68.195:41042). Jan 30 13:56:05.778298 systemd-logind[1518]: Removed session 8. Jan 30 13:56:05.803882 sshd[1849]: Accepted publickey for core from 139.178.68.195 port 41042 ssh2: RSA SHA256:6nbEnXEl+18uydVNSXgyuQlkvzGTWxQuELikT+hTs2E Jan 30 13:56:05.804613 sshd[1849]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:56:05.807794 systemd-logind[1518]: New session 9 of user core. Jan 30 13:56:05.814350 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 30 13:56:05.862826 sudo[1852]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 30 13:56:05.862996 sudo[1852]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 30 13:56:06.415397 systemd[1]: Starting docker.service - Docker Application Container Engine... Jan 30 13:56:06.415477 (dockerd)[1868]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 30 13:56:06.777989 dockerd[1868]: time="2025-01-30T13:56:06.777803890Z" level=info msg="Starting up" Jan 30 13:56:06.871198 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport1617568720-merged.mount: Deactivated successfully. 
Jan 30 13:56:06.889845 systemd[1]: var-lib-docker-metacopy\x2dcheck1515593958-merged.mount: Deactivated successfully. Jan 30 13:56:06.900995 dockerd[1868]: time="2025-01-30T13:56:06.900956919Z" level=info msg="Loading containers: start." Jan 30 13:56:06.992238 kernel: Initializing XFRM netlink socket Jan 30 13:56:07.054627 systemd-networkd[1437]: docker0: Link UP Jan 30 13:56:07.080520 dockerd[1868]: time="2025-01-30T13:56:07.080494954Z" level=info msg="Loading containers: done." Jan 30 13:56:07.091935 dockerd[1868]: time="2025-01-30T13:56:07.091842730Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 30 13:56:07.091935 dockerd[1868]: time="2025-01-30T13:56:07.091923521Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Jan 30 13:56:07.092063 dockerd[1868]: time="2025-01-30T13:56:07.091992703Z" level=info msg="Daemon has completed initialization" Jan 30 13:56:07.113656 dockerd[1868]: time="2025-01-30T13:56:07.113605497Z" level=info msg="API listen on /run/docker.sock" Jan 30 13:56:07.113920 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 30 13:56:08.423601 containerd[1541]: time="2025-01-30T13:56:08.423554095Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.9\"" Jan 30 13:56:09.099149 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1824060169.mount: Deactivated successfully. Jan 30 13:56:10.134267 containerd[1541]: time="2025-01-30T13:56:10.134189397Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:56:10.134818 containerd[1541]: time="2025-01-30T13:56:10.134792350Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.9: active requests=0, bytes read=32677012" Jan 30 13:56:10.135146 containerd[1541]: time="2025-01-30T13:56:10.135009918Z" level=info msg="ImageCreate event name:\"sha256:4f53be91109c4dd4658bb0141e8af556b94293ec9fad72b2b62a617edb48e5c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:56:10.136469 containerd[1541]: time="2025-01-30T13:56:10.136445192Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:540de8f810ac963b8ed93f7393a8746d68e7e8a2c79ea58ff409ac5b9ca6a9fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:56:10.137336 containerd[1541]: time="2025-01-30T13:56:10.137042133Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.9\" with image id \"sha256:4f53be91109c4dd4658bb0141e8af556b94293ec9fad72b2b62a617edb48e5c4\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.9\", repo digest \"registry.k8s.io/kube-apiserver@sha256:540de8f810ac963b8ed93f7393a8746d68e7e8a2c79ea58ff409ac5b9ca6a9fc\", size \"32673812\" in 1.713461191s" Jan 30 13:56:10.137336 containerd[1541]: time="2025-01-30T13:56:10.137060287Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.9\" returns image reference \"sha256:4f53be91109c4dd4658bb0141e8af556b94293ec9fad72b2b62a617edb48e5c4\"" Jan 30 13:56:10.151218 containerd[1541]: time="2025-01-30T13:56:10.151192381Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.9\"" Jan 30 13:56:10.523803 update_engine[1520]: I20250130 13:56:10.523679 1520 update_attempter.cc:509] Updating boot flags... 
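[Note] Two details in the docker startup above are worth calling out. The overlay2 warning is informational: with CONFIG_OVERLAY_FS_REDIRECT_DIR enabled in the kernel, dockerd falls back to its slower naive diff path when committing layers rather than relying on redirect_dir behaviour. The check/metacopy mounts that systemd reports as deactivated are dockerd probing the backing filesystem's overlayfs capabilities at startup. A quick way to confirm the storage driver and the kernel option, assuming the kernel exposes its config (not all builds do):

    # Confirm the active storage driver and the overlayfs options the
    # warning refers to; /proc/config.gz requires CONFIG_IKCONFIG_PROC.
    docker info --format '{{.Driver}}'
    zcat /proc/config.gz | grep -E 'CONFIG_OVERLAY_FS_(REDIRECT_DIR|INDEX)='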
Jan 30 13:56:10.551256 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 38 scanned by (udev-worker) (2081) Jan 30 13:56:10.583276 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 38 scanned by (udev-worker) (2082) Jan 30 13:56:11.440737 containerd[1541]: time="2025-01-30T13:56:11.440703949Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:56:11.443232 containerd[1541]: time="2025-01-30T13:56:11.442857538Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.9: active requests=0, bytes read=29605745" Jan 30 13:56:11.443513 containerd[1541]: time="2025-01-30T13:56:11.443497901Z" level=info msg="ImageCreate event name:\"sha256:d4203c1bb2593a7429c3df3c040da333190e5d7e01f377d0255b7b813ca09568\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:56:11.446672 containerd[1541]: time="2025-01-30T13:56:11.446653675Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:6350693c04956b13db2519e01ca12a0bbe58466e9f12ef8617f1429da6081f43\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:56:11.447105 containerd[1541]: time="2025-01-30T13:56:11.447086906Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.9\" with image id \"sha256:d4203c1bb2593a7429c3df3c040da333190e5d7e01f377d0255b7b813ca09568\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.9\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:6350693c04956b13db2519e01ca12a0bbe58466e9f12ef8617f1429da6081f43\", size \"31052327\" in 1.295873558s" Jan 30 13:56:11.447143 containerd[1541]: time="2025-01-30T13:56:11.447109792Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.9\" returns image reference \"sha256:d4203c1bb2593a7429c3df3c040da333190e5d7e01f377d0255b7b813ca09568\"" Jan 30 13:56:11.461649 containerd[1541]: time="2025-01-30T13:56:11.461577194Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.9\"" Jan 30 13:56:11.860838 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Jan 30 13:56:11.867325 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 13:56:11.938303 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 13:56:11.940239 (kubelet)[2106]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 30 13:56:11.974571 kubelet[2106]: E0130 13:56:11.974537 2106 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 30 13:56:11.976933 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 30 13:56:11.977013 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
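[Note] The restart counter in the kubelet entries increments once per roughly ten-second cycle; that cadence comes from the unit's restart policy rather than from kubelet itself. A drop-in reproducing the observed behaviour would look like the sketch below; this is an inference from the timestamps, and the actual kubelet.service on this host may configure it differently:

    # Hypothetical drop-in matching the ~10 s restart cadence in the log.
    mkdir -p /etc/systemd/system/kubelet.service.d
    cat <<'EOF' >/etc/systemd/system/kubelet.service.d/10-restart.conf
    [Service]
    Restart=always
    RestartSec=10
    EOF
    systemctl daemon-reload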
Jan 30 13:56:12.739668 containerd[1541]: time="2025-01-30T13:56:12.739639099Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:56:12.740452 containerd[1541]: time="2025-01-30T13:56:12.740420657Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.9: active requests=0, bytes read=17783064" Jan 30 13:56:12.740857 containerd[1541]: time="2025-01-30T13:56:12.740842102Z" level=info msg="ImageCreate event name:\"sha256:41cce68b0c8c3c4862ff55ac17be57616cce36a04e719aee733e5c7c1a24b725\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:56:12.742229 containerd[1541]: time="2025-01-30T13:56:12.742190094Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:153efd6dc89e61a38ef273cf4c4cebd2bfee68082c2ee3d4fab5da94e4ae13d3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:56:12.743024 containerd[1541]: time="2025-01-30T13:56:12.742810310Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.9\" with image id \"sha256:41cce68b0c8c3c4862ff55ac17be57616cce36a04e719aee733e5c7c1a24b725\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.9\", repo digest \"registry.k8s.io/kube-scheduler@sha256:153efd6dc89e61a38ef273cf4c4cebd2bfee68082c2ee3d4fab5da94e4ae13d3\", size \"19229664\" in 1.281211187s" Jan 30 13:56:12.743024 containerd[1541]: time="2025-01-30T13:56:12.742830747Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.9\" returns image reference \"sha256:41cce68b0c8c3c4862ff55ac17be57616cce36a04e719aee733e5c7c1a24b725\"" Jan 30 13:56:12.755766 containerd[1541]: time="2025-01-30T13:56:12.755746695Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.9\"" Jan 30 13:56:13.605499 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2695678599.mount: Deactivated successfully. 
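[Note] The PullImage sequence running through this part of the log is driven by containerd's CRI plugin: each pull records an image id, repo tag, and repo digest, and systemd cleans up a tmpmount unit after each layer unpack. The same images can be listed and inspected through the CRI socket named in the config dump earlier:

    # Inspect the CRI-pulled images over containerd's socket.
    crictl --runtime-endpoint unix:///run/containerd/containerd.sock images
    crictl --runtime-endpoint unix:///run/containerd/containerd.sock \
      inspecti registry.k8s.io/kube-apiserver:v1.30.9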
Jan 30 13:56:13.931036 containerd[1541]: time="2025-01-30T13:56:13.930995409Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:56:13.936277 containerd[1541]: time="2025-01-30T13:56:13.936157974Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.9: active requests=0, bytes read=29058337" Jan 30 13:56:13.938799 containerd[1541]: time="2025-01-30T13:56:13.938665456Z" level=info msg="ImageCreate event name:\"sha256:4c369683c359609256b8907f424fc2355f1e7e3eeb7295b1fd8ffc5304f4cede\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:56:13.952164 containerd[1541]: time="2025-01-30T13:56:13.952136203Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:d78dc40d97ff862fd8ddb47f80a5ba3feec17bc73e58a60e963885e33faa0083\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:56:13.952602 containerd[1541]: time="2025-01-30T13:56:13.952398162Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.9\" with image id \"sha256:4c369683c359609256b8907f424fc2355f1e7e3eeb7295b1fd8ffc5304f4cede\", repo tag \"registry.k8s.io/kube-proxy:v1.30.9\", repo digest \"registry.k8s.io/kube-proxy@sha256:d78dc40d97ff862fd8ddb47f80a5ba3feec17bc73e58a60e963885e33faa0083\", size \"29057356\" in 1.196542309s" Jan 30 13:56:13.952602 containerd[1541]: time="2025-01-30T13:56:13.952424251Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.9\" returns image reference \"sha256:4c369683c359609256b8907f424fc2355f1e7e3eeb7295b1fd8ffc5304f4cede\"" Jan 30 13:56:13.966759 containerd[1541]: time="2025-01-30T13:56:13.966731519Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Jan 30 13:56:14.512085 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount913506676.mount: Deactivated successfully. 
Jan 30 13:56:15.278583 containerd[1541]: time="2025-01-30T13:56:15.277939862Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:56:15.278583 containerd[1541]: time="2025-01-30T13:56:15.278369026Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185761" Jan 30 13:56:15.278583 containerd[1541]: time="2025-01-30T13:56:15.278553979Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:56:15.280167 containerd[1541]: time="2025-01-30T13:56:15.280154372Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:56:15.280814 containerd[1541]: time="2025-01-30T13:56:15.280796696Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 1.314041377s" Jan 30 13:56:15.280844 containerd[1541]: time="2025-01-30T13:56:15.280817240Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" Jan 30 13:56:15.293775 containerd[1541]: time="2025-01-30T13:56:15.293747504Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Jan 30 13:56:15.872936 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1763531857.mount: Deactivated successfully. 
Jan 30 13:56:15.874989 containerd[1541]: time="2025-01-30T13:56:15.874497214Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:56:15.874989 containerd[1541]: time="2025-01-30T13:56:15.874890753Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322290" Jan 30 13:56:15.874989 containerd[1541]: time="2025-01-30T13:56:15.874969799Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:56:15.876246 containerd[1541]: time="2025-01-30T13:56:15.876193181Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:56:15.876963 containerd[1541]: time="2025-01-30T13:56:15.876673038Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 582.899806ms" Jan 30 13:56:15.876963 containerd[1541]: time="2025-01-30T13:56:15.876691190Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" Jan 30 13:56:15.889322 containerd[1541]: time="2025-01-30T13:56:15.889304000Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\"" Jan 30 13:56:16.423946 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1669885441.mount: Deactivated successfully. Jan 30 13:56:18.804288 containerd[1541]: time="2025-01-30T13:56:18.803667920Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:56:18.804288 containerd[1541]: time="2025-01-30T13:56:18.804094000Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=57238571" Jan 30 13:56:18.805159 containerd[1541]: time="2025-01-30T13:56:18.804355149Z" level=info msg="ImageCreate event name:\"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:56:18.806072 containerd[1541]: time="2025-01-30T13:56:18.806042870Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:56:18.806709 containerd[1541]: time="2025-01-30T13:56:18.806692807Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"57236178\" in 2.917268269s" Jan 30 13:56:18.806742 containerd[1541]: time="2025-01-30T13:56:18.806711068Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\"" Jan 30 13:56:20.912046 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 30 13:56:20.916344 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 13:56:20.927953 systemd[1]: Reloading requested from client PID 2299 ('systemctl') (unit session-9.scope)... Jan 30 13:56:20.928027 systemd[1]: Reloading... Jan 30 13:56:20.996229 zram_generator::config[2336]: No configuration found. Jan 30 13:56:21.052901 systemd[1]: /etc/systemd/system/coreos-metadata.service:11: Ignoring unknown escape sequences: "echo "COREOS_CUSTOM_PRIVATE_IPV4=$(ip addr show ens192 | grep "inet 10." | grep -Po "inet \K[\d.]+") Jan 30 13:56:21.068226 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 30 13:56:21.111460 systemd[1]: Reloading finished in 183 ms. Jan 30 13:56:21.137567 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 30 13:56:21.137614 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 30 13:56:21.137755 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 13:56:21.142410 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 13:56:21.462936 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 13:56:21.466591 (kubelet)[2403]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 30 13:56:21.507673 kubelet[2403]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 30 13:56:21.507673 kubelet[2403]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 30 13:56:21.507673 kubelet[2403]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
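[Note] The three deprecation warnings just below this point concern flags that have moved into the KubeletConfiguration file passed via --config; kubeadm-managed nodes carry them in /var/lib/kubelet/config.yaml. A sketch of the config-file equivalents for the two flags that have them (--pod-infra-container-image has no config field; sandbox image information now comes from the CRI), with the volume plugin directory taken from the probe message a few entries further on; values are illustrative:

    # Hypothetical fragment to merge into /var/lib/kubelet/config.yaml,
    # replacing the deprecated flags warned about below.
    cat <<'EOF' > kubelet-config-fragment.yaml
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
    volumePluginDir: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/
    EOF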
Jan 30 13:56:21.521555 kubelet[2403]: I0130 13:56:21.521521 2403 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 30 13:56:21.818321 kubelet[2403]: I0130 13:56:21.818263 2403 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Jan 30 13:56:21.818321 kubelet[2403]: I0130 13:56:21.818280 2403 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 30 13:56:21.818558 kubelet[2403]: I0130 13:56:21.818405 2403 server.go:927] "Client rotation is on, will bootstrap in background" Jan 30 13:56:21.950141 kubelet[2403]: I0130 13:56:21.949929 2403 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 30 13:56:21.952731 kubelet[2403]: E0130 13:56:21.952588 2403 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://139.178.70.103:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 139.178.70.103:6443: connect: connection refused Jan 30 13:56:21.961621 kubelet[2403]: I0130 13:56:21.961608 2403 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jan 30 13:56:21.961773 kubelet[2403]: I0130 13:56:21.961752 2403 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 30 13:56:21.962893 kubelet[2403]: I0130 13:56:21.961774 2403 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jan 30 13:56:21.962967 kubelet[2403]: I0130 13:56:21.962905 2403 topology_manager.go:138] "Creating topology manager with none policy" Jan 30 13:56:21.962967 kubelet[2403]: I0130 13:56:21.962914 2403 container_manager_linux.go:301] "Creating device plugin manager" Jan 30 13:56:21.963006 kubelet[2403]: I0130 13:56:21.962988 2403 state_mem.go:36] "Initialized new in-memory state store" 
Jan 30 13:56:21.963756 kubelet[2403]: I0130 13:56:21.963744 2403 kubelet.go:400] "Attempting to sync node with API server" Jan 30 13:56:21.963781 kubelet[2403]: I0130 13:56:21.963757 2403 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 30 13:56:21.964096 kubelet[2403]: W0130 13:56:21.964071 2403 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://139.178.70.103:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 139.178.70.103:6443: connect: connection refused Jan 30 13:56:21.964119 kubelet[2403]: E0130 13:56:21.964106 2403 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://139.178.70.103:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 139.178.70.103:6443: connect: connection refused Jan 30 13:56:21.964349 kubelet[2403]: I0130 13:56:21.964337 2403 kubelet.go:312] "Adding apiserver pod source" Jan 30 13:56:21.964376 kubelet[2403]: I0130 13:56:21.964358 2403 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 30 13:56:21.967252 kubelet[2403]: W0130 13:56:21.967173 2403 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://139.178.70.103:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 139.178.70.103:6443: connect: connection refused Jan 30 13:56:21.967252 kubelet[2403]: E0130 13:56:21.967195 2403 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://139.178.70.103:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 139.178.70.103:6443: connect: connection refused Jan 30 13:56:21.967636 kubelet[2403]: I0130 13:56:21.967476 2403 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 30 13:56:21.969126 kubelet[2403]: I0130 13:56:21.968713 2403 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 30 13:56:21.969126 kubelet[2403]: W0130 13:56:21.968757 2403 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
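[Note] Every reflector and certificate-signing request in this stretch fails with connection refused against https://139.178.70.103:6443. That is the expected chicken-and-egg of control-plane bootstrap: this kubelet is itself about to launch kube-apiserver as a static pod from the manifest path registered above, and the client-go reflectors simply retry until the endpoint answers. The same endpoint can be probed directly:

    # Probe the endpoint the kubelet is retrying; expect connection
    # refused until the kube-apiserver static pod is running.
    curl -sk https://139.178.70.103:6443/healthz || echo 'apiserver not up yet'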
Jan 30 13:56:21.969126 kubelet[2403]: I0130 13:56:21.969092 2403 server.go:1264] "Started kubelet" Jan 30 13:56:21.970968 kubelet[2403]: I0130 13:56:21.970431 2403 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 30 13:56:21.970968 kubelet[2403]: I0130 13:56:21.970610 2403 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 30 13:56:21.975882 kubelet[2403]: I0130 13:56:21.975869 2403 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 30 13:56:21.976995 kubelet[2403]: I0130 13:56:21.976908 2403 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 30 13:56:21.981305 kubelet[2403]: I0130 13:56:21.981285 2403 server.go:455] "Adding debug handlers to kubelet server" Jan 30 13:56:21.981452 kubelet[2403]: W0130 13:56:21.981423 2403 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://139.178.70.103:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 139.178.70.103:6443: connect: connection refused Jan 30 13:56:21.981452 kubelet[2403]: E0130 13:56:21.981457 2403 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://139.178.70.103:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 139.178.70.103:6443: connect: connection refused Jan 30 13:56:21.981922 kubelet[2403]: I0130 13:56:21.978837 2403 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Jan 30 13:56:21.982589 kubelet[2403]: E0130 13:56:21.982475 2403 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.70.103:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 139.178.70.103:6443: connect: connection refused" interval="200ms" Jan 30 13:56:21.983492 kubelet[2403]: I0130 13:56:21.977316 2403 volume_manager.go:291] "Starting Kubelet Volume Manager" Jan 30 13:56:21.984118 kubelet[2403]: E0130 13:56:21.983854 2403 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://139.178.70.103:6443/api/v1/namespaces/default/events\": dial tcp 139.178.70.103:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.181f7cf71cbae993 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-01-30 13:56:21.969078675 +0000 UTC m=+0.500459359,LastTimestamp:2025-01-30 13:56:21.969078675 +0000 UTC m=+0.500459359,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jan 30 13:56:21.984118 kubelet[2403]: I0130 13:56:21.984001 2403 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 30 13:56:21.984675 kubelet[2403]: I0130 13:56:21.984452 2403 reconciler.go:26] "Reconciler: start to sync state" Jan 30 13:56:21.984737 kubelet[2403]: I0130 13:56:21.984725 2403 factory.go:221] Registration of the containerd container factory successfully Jan 30 13:56:21.984764 kubelet[2403]: I0130 13:56:21.984739 2403 factory.go:221] 
Registration of the systemd container factory successfully Jan 30 13:56:21.988397 kubelet[2403]: I0130 13:56:21.988378 2403 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 30 13:56:21.989135 kubelet[2403]: I0130 13:56:21.989126 2403 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 30 13:56:21.989187 kubelet[2403]: I0130 13:56:21.989182 2403 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 30 13:56:21.989274 kubelet[2403]: I0130 13:56:21.989268 2403 kubelet.go:2337] "Starting kubelet main sync loop" Jan 30 13:56:21.989332 kubelet[2403]: E0130 13:56:21.989323 2403 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 30 13:56:21.993919 kubelet[2403]: W0130 13:56:21.993898 2403 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://139.178.70.103:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 139.178.70.103:6443: connect: connection refused Jan 30 13:56:21.993995 kubelet[2403]: E0130 13:56:21.993988 2403 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://139.178.70.103:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 139.178.70.103:6443: connect: connection refused Jan 30 13:56:21.994402 kubelet[2403]: E0130 13:56:21.994392 2403 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 30 13:56:22.007851 kubelet[2403]: I0130 13:56:22.007837 2403 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 30 13:56:22.007851 kubelet[2403]: I0130 13:56:22.007848 2403 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 30 13:56:22.007984 kubelet[2403]: I0130 13:56:22.007857 2403 state_mem.go:36] "Initialized new in-memory state store" Jan 30 13:56:22.008761 kubelet[2403]: I0130 13:56:22.008751 2403 policy_none.go:49] "None policy: Start" Jan 30 13:56:22.009061 kubelet[2403]: I0130 13:56:22.009052 2403 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 30 13:56:22.009091 kubelet[2403]: I0130 13:56:22.009063 2403 state_mem.go:35] "Initializing new in-memory state store" Jan 30 13:56:22.013965 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 30 13:56:22.030828 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jan 30 13:56:22.045113 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. 
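[Note] With no checkpoint to restore, the kubelet initializes its managers with the none CPU policy and the None memory policy, applies the hard eviction thresholds from the config dump above, and creates the QoS cgroup hierarchy as systemd slices (kubepods.slice, kubepods-besteffort.slice, kubepods-burstable.slice), consistent with the SystemdCgroup:true runc option in the containerd CRI config earlier. The hierarchy can be inspected directly:

    # Inspect the QoS cgroup slices the kubelet just created.
    systemctl status kubepods.slice --no-pager
    systemd-cgls --no-pager /sys/fs/cgroup/kubepods.slice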
Jan 30 13:56:22.045779 kubelet[2403]: I0130 13:56:22.045768 2403 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 30 13:56:22.045955 kubelet[2403]: I0130 13:56:22.045935 2403 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 30 13:56:22.046066 kubelet[2403]: I0130 13:56:22.046036 2403 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 30 13:56:22.047494 kubelet[2403]: E0130 13:56:22.047435 2403 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jan 30 13:56:22.080449 kubelet[2403]: I0130 13:56:22.080240 2403 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jan 30 13:56:22.080449 kubelet[2403]: E0130 13:56:22.080423 2403 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://139.178.70.103:6443/api/v1/nodes\": dial tcp 139.178.70.103:6443: connect: connection refused" node="localhost" Jan 30 13:56:22.089863 kubelet[2403]: I0130 13:56:22.089835 2403 topology_manager.go:215] "Topology Admit Handler" podUID="9b8b5886141f9311660bb6b224a0f76c" podNamespace="kube-system" podName="kube-controller-manager-localhost" Jan 30 13:56:22.090590 kubelet[2403]: I0130 13:56:22.090517 2403 topology_manager.go:215] "Topology Admit Handler" podUID="4b186e12ac9f083392bb0d1970b49be4" podNamespace="kube-system" podName="kube-scheduler-localhost" Jan 30 13:56:22.091443 kubelet[2403]: I0130 13:56:22.091435 2403 topology_manager.go:215] "Topology Admit Handler" podUID="d2b7d53a13200fba6c90004888f6790a" podNamespace="kube-system" podName="kube-apiserver-localhost" Jan 30 13:56:22.094823 systemd[1]: Created slice kubepods-burstable-pod9b8b5886141f9311660bb6b224a0f76c.slice - libcontainer container kubepods-burstable-pod9b8b5886141f9311660bb6b224a0f76c.slice. Jan 30 13:56:22.100732 systemd[1]: Created slice kubepods-burstable-pod4b186e12ac9f083392bb0d1970b49be4.slice - libcontainer container kubepods-burstable-pod4b186e12ac9f083392bb0d1970b49be4.slice. Jan 30 13:56:22.104395 systemd[1]: Created slice kubepods-burstable-podd2b7d53a13200fba6c90004888f6790a.slice - libcontainer container kubepods-burstable-podd2b7d53a13200fba6c90004888f6790a.slice. 
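[Note] The three "Topology Admit Handler" entries mark the static pods being admitted from the manifest directory: kube-controller-manager, kube-scheduler, and kube-apiserver for this host (suffixed -localhost after the node name), each getting its own burstable pod slice keyed by pod UID. The manifests live at the static pod path registered earlier:

    # List the static pod manifests behind the three admissions above;
    # on a kubeadm control plane these are typically kube-apiserver.yaml,
    # kube-controller-manager.yaml, kube-scheduler.yaml (and etcd.yaml
    # when etcd runs locally).
    ls -l /etc/kubernetes/manifests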
Jan 30 13:56:22.182980 kubelet[2403]: E0130 13:56:22.182954 2403 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.70.103:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 139.178.70.103:6443: connect: connection refused" interval="400ms" Jan 30 13:56:22.185433 kubelet[2403]: I0130 13:56:22.185420 2403 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d2b7d53a13200fba6c90004888f6790a-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"d2b7d53a13200fba6c90004888f6790a\") " pod="kube-system/kube-apiserver-localhost" Jan 30 13:56:22.185522 kubelet[2403]: I0130 13:56:22.185513 2403 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d2b7d53a13200fba6c90004888f6790a-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"d2b7d53a13200fba6c90004888f6790a\") " pod="kube-system/kube-apiserver-localhost" Jan 30 13:56:22.185584 kubelet[2403]: I0130 13:56:22.185569 2403 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9b8b5886141f9311660bb6b224a0f76c-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"9b8b5886141f9311660bb6b224a0f76c\") " pod="kube-system/kube-controller-manager-localhost" Jan 30 13:56:22.185705 kubelet[2403]: I0130 13:56:22.185622 2403 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4b186e12ac9f083392bb0d1970b49be4-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"4b186e12ac9f083392bb0d1970b49be4\") " pod="kube-system/kube-scheduler-localhost" Jan 30 13:56:22.185705 kubelet[2403]: I0130 13:56:22.185641 2403 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/9b8b5886141f9311660bb6b224a0f76c-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"9b8b5886141f9311660bb6b224a0f76c\") " pod="kube-system/kube-controller-manager-localhost" Jan 30 13:56:22.185705 kubelet[2403]: I0130 13:56:22.185652 2403 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9b8b5886141f9311660bb6b224a0f76c-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"9b8b5886141f9311660bb6b224a0f76c\") " pod="kube-system/kube-controller-manager-localhost" Jan 30 13:56:22.185705 kubelet[2403]: I0130 13:56:22.185661 2403 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d2b7d53a13200fba6c90004888f6790a-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"d2b7d53a13200fba6c90004888f6790a\") " pod="kube-system/kube-apiserver-localhost" Jan 30 13:56:22.185705 kubelet[2403]: I0130 13:56:22.185671 2403 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9b8b5886141f9311660bb6b224a0f76c-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"9b8b5886141f9311660bb6b224a0f76c\") " pod="kube-system/kube-controller-manager-localhost" Jan 30 13:56:22.185791 kubelet[2403]: 
I0130 13:56:22.185679 2403 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/9b8b5886141f9311660bb6b224a0f76c-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"9b8b5886141f9311660bb6b224a0f76c\") " pod="kube-system/kube-controller-manager-localhost" Jan 30 13:56:22.281617 kubelet[2403]: I0130 13:56:22.281415 2403 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jan 30 13:56:22.281617 kubelet[2403]: E0130 13:56:22.281599 2403 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://139.178.70.103:6443/api/v1/nodes\": dial tcp 139.178.70.103:6443: connect: connection refused" node="localhost" Jan 30 13:56:22.400237 containerd[1541]: time="2025-01-30T13:56:22.400166760Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:9b8b5886141f9311660bb6b224a0f76c,Namespace:kube-system,Attempt:0,}" Jan 30 13:56:22.408578 containerd[1541]: time="2025-01-30T13:56:22.408547542Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:d2b7d53a13200fba6c90004888f6790a,Namespace:kube-system,Attempt:0,}" Jan 30 13:56:22.408703 containerd[1541]: time="2025-01-30T13:56:22.408547735Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:4b186e12ac9f083392bb0d1970b49be4,Namespace:kube-system,Attempt:0,}" Jan 30 13:56:22.583863 kubelet[2403]: E0130 13:56:22.583835 2403 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.70.103:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 139.178.70.103:6443: connect: connection refused" interval="800ms" Jan 30 13:56:22.683927 kubelet[2403]: I0130 13:56:22.683354 2403 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jan 30 13:56:22.683927 kubelet[2403]: E0130 13:56:22.683545 2403 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://139.178.70.103:6443/api/v1/nodes\": dial tcp 139.178.70.103:6443: connect: connection refused" node="localhost" Jan 30 13:56:22.905902 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2778334118.mount: Deactivated successfully. 
Jan 30 13:56:22.908301 containerd[1541]: time="2025-01-30T13:56:22.908274184Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 30 13:56:22.908804 containerd[1541]: time="2025-01-30T13:56:22.908750509Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Jan 30 13:56:22.909403 containerd[1541]: time="2025-01-30T13:56:22.909335774Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 30 13:56:22.909918 containerd[1541]: time="2025-01-30T13:56:22.909828765Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 30 13:56:22.910118 containerd[1541]: time="2025-01-30T13:56:22.910106314Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 30 13:56:22.910260 containerd[1541]: time="2025-01-30T13:56:22.910237683Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 30 13:56:22.912668 containerd[1541]: time="2025-01-30T13:56:22.912649305Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 30 13:56:22.913864 containerd[1541]: time="2025-01-30T13:56:22.913186748Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 504.580184ms" Jan 30 13:56:22.914055 containerd[1541]: time="2025-01-30T13:56:22.914037329Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 505.324006ms" Jan 30 13:56:22.916410 containerd[1541]: time="2025-01-30T13:56:22.916390287Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 30 13:56:22.917243 containerd[1541]: time="2025-01-30T13:56:22.917200213Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 516.96064ms" Jan 30 13:56:23.021504 containerd[1541]: time="2025-01-30T13:56:23.021179988Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:56:23.021504 containerd[1541]: time="2025-01-30T13:56:23.021253832Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:56:23.021504 containerd[1541]: time="2025-01-30T13:56:23.021264627Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:56:23.022116 containerd[1541]: time="2025-01-30T13:56:23.022072316Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:56:23.022569 containerd[1541]: time="2025-01-30T13:56:23.022300768Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:56:23.022569 containerd[1541]: time="2025-01-30T13:56:23.022349655Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:56:23.022569 containerd[1541]: time="2025-01-30T13:56:23.022361251Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:56:23.022569 containerd[1541]: time="2025-01-30T13:56:23.022417283Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:56:23.023144 containerd[1541]: time="2025-01-30T13:56:23.023105565Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:56:23.023235 containerd[1541]: time="2025-01-30T13:56:23.023134095Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:56:23.023235 containerd[1541]: time="2025-01-30T13:56:23.023144549Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:56:23.023235 containerd[1541]: time="2025-01-30T13:56:23.023187474Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:56:23.042395 systemd[1]: Started cri-containerd-3cdef1a68d9f93f0815a0e141c2d380c7e691b00f37129363914770447502803.scope - libcontainer container 3cdef1a68d9f93f0815a0e141c2d380c7e691b00f37129363914770447502803. Jan 30 13:56:23.047034 systemd[1]: Started cri-containerd-6bda536a6c562c36bce1d18c6331878607db2e403e37ed2873b498a2e0d65976.scope - libcontainer container 6bda536a6c562c36bce1d18c6331878607db2e403e37ed2873b498a2e0d65976. Jan 30 13:56:23.049004 systemd[1]: Started cri-containerd-b5199a699f03da6aad479612a7439ef9dace6289f47ce91ebe6f15fc5d4743b8.scope - libcontainer container b5199a699f03da6aad479612a7439ef9dace6289f47ce91ebe6f15fc5d4743b8. 
Jan 30 13:56:23.089507 containerd[1541]: time="2025-01-30T13:56:23.089423678Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:d2b7d53a13200fba6c90004888f6790a,Namespace:kube-system,Attempt:0,} returns sandbox id \"b5199a699f03da6aad479612a7439ef9dace6289f47ce91ebe6f15fc5d4743b8\"" Jan 30 13:56:23.093619 containerd[1541]: time="2025-01-30T13:56:23.093478683Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:9b8b5886141f9311660bb6b224a0f76c,Namespace:kube-system,Attempt:0,} returns sandbox id \"3cdef1a68d9f93f0815a0e141c2d380c7e691b00f37129363914770447502803\"" Jan 30 13:56:23.095478 kubelet[2403]: E0130 13:56:23.095105 2403 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://139.178.70.103:6443/api/v1/namespaces/default/events\": dial tcp 139.178.70.103:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.181f7cf71cbae993 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-01-30 13:56:21.969078675 +0000 UTC m=+0.500459359,LastTimestamp:2025-01-30 13:56:21.969078675 +0000 UTC m=+0.500459359,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jan 30 13:56:23.100066 containerd[1541]: time="2025-01-30T13:56:23.099250853Z" level=info msg="CreateContainer within sandbox \"3cdef1a68d9f93f0815a0e141c2d380c7e691b00f37129363914770447502803\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 30 13:56:23.101459 containerd[1541]: time="2025-01-30T13:56:23.101205471Z" level=info msg="CreateContainer within sandbox \"b5199a699f03da6aad479612a7439ef9dace6289f47ce91ebe6f15fc5d4743b8\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 30 13:56:23.111294 containerd[1541]: time="2025-01-30T13:56:23.111269375Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:4b186e12ac9f083392bb0d1970b49be4,Namespace:kube-system,Attempt:0,} returns sandbox id \"6bda536a6c562c36bce1d18c6331878607db2e403e37ed2873b498a2e0d65976\"" Jan 30 13:56:23.115084 containerd[1541]: time="2025-01-30T13:56:23.115052875Z" level=info msg="CreateContainer within sandbox \"6bda536a6c562c36bce1d18c6331878607db2e403e37ed2873b498a2e0d65976\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 30 13:56:23.116074 containerd[1541]: time="2025-01-30T13:56:23.116025848Z" level=info msg="CreateContainer within sandbox \"b5199a699f03da6aad479612a7439ef9dace6289f47ce91ebe6f15fc5d4743b8\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"9c079c373eb47af13cc22b24f65e6420e4ebf9adf2277b865c1386b233624461\"" Jan 30 13:56:23.116427 containerd[1541]: time="2025-01-30T13:56:23.116400477Z" level=info msg="StartContainer for \"9c079c373eb47af13cc22b24f65e6420e4ebf9adf2277b865c1386b233624461\"" Jan 30 13:56:23.118961 containerd[1541]: time="2025-01-30T13:56:23.118776179Z" level=info msg="CreateContainer within sandbox \"3cdef1a68d9f93f0815a0e141c2d380c7e691b00f37129363914770447502803\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"4ad0fb82794df1c3590b62d768b8acc1a97028f70d0ef967e51f9a9523311bbf\"" Jan 30 
13:56:23.119130 containerd[1541]: time="2025-01-30T13:56:23.119116289Z" level=info msg="StartContainer for \"4ad0fb82794df1c3590b62d768b8acc1a97028f70d0ef967e51f9a9523311bbf\"" Jan 30 13:56:23.126342 containerd[1541]: time="2025-01-30T13:56:23.126071120Z" level=info msg="CreateContainer within sandbox \"6bda536a6c562c36bce1d18c6331878607db2e403e37ed2873b498a2e0d65976\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"44037036c26c802a411d539cd3277feb890f527b1e3e4e19c09ca6610c59ebf1\"" Jan 30 13:56:23.128148 containerd[1541]: time="2025-01-30T13:56:23.127502111Z" level=info msg="StartContainer for \"44037036c26c802a411d539cd3277feb890f527b1e3e4e19c09ca6610c59ebf1\"" Jan 30 13:56:23.144366 systemd[1]: Started cri-containerd-9c079c373eb47af13cc22b24f65e6420e4ebf9adf2277b865c1386b233624461.scope - libcontainer container 9c079c373eb47af13cc22b24f65e6420e4ebf9adf2277b865c1386b233624461. Jan 30 13:56:23.147712 systemd[1]: Started cri-containerd-4ad0fb82794df1c3590b62d768b8acc1a97028f70d0ef967e51f9a9523311bbf.scope - libcontainer container 4ad0fb82794df1c3590b62d768b8acc1a97028f70d0ef967e51f9a9523311bbf. Jan 30 13:56:23.159424 systemd[1]: Started cri-containerd-44037036c26c802a411d539cd3277feb890f527b1e3e4e19c09ca6610c59ebf1.scope - libcontainer container 44037036c26c802a411d539cd3277feb890f527b1e3e4e19c09ca6610c59ebf1. Jan 30 13:56:23.167491 kubelet[2403]: W0130 13:56:23.167452 2403 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://139.178.70.103:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 139.178.70.103:6443: connect: connection refused Jan 30 13:56:23.167491 kubelet[2403]: E0130 13:56:23.167494 2403 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://139.178.70.103:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 139.178.70.103:6443: connect: connection refused Jan 30 13:56:23.196230 containerd[1541]: time="2025-01-30T13:56:23.196071049Z" level=info msg="StartContainer for \"9c079c373eb47af13cc22b24f65e6420e4ebf9adf2277b865c1386b233624461\" returns successfully" Jan 30 13:56:23.198593 containerd[1541]: time="2025-01-30T13:56:23.198553068Z" level=info msg="StartContainer for \"4ad0fb82794df1c3590b62d768b8acc1a97028f70d0ef967e51f9a9523311bbf\" returns successfully" Jan 30 13:56:23.211292 containerd[1541]: time="2025-01-30T13:56:23.211270725Z" level=info msg="StartContainer for \"44037036c26c802a411d539cd3277feb890f527b1e3e4e19c09ca6610c59ebf1\" returns successfully" Jan 30 13:56:23.313306 kubelet[2403]: W0130 13:56:23.313189 2403 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://139.178.70.103:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 139.178.70.103:6443: connect: connection refused Jan 30 13:56:23.313306 kubelet[2403]: E0130 13:56:23.313248 2403 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://139.178.70.103:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 139.178.70.103:6443: connect: connection refused Jan 30 13:56:23.384798 kubelet[2403]: E0130 13:56:23.384764 2403 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.70.103:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 139.178.70.103:6443: connect: connection refused" 
interval="1.6s" Jan 30 13:56:23.485528 kubelet[2403]: I0130 13:56:23.485297 2403 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jan 30 13:56:23.485528 kubelet[2403]: E0130 13:56:23.485468 2403 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://139.178.70.103:6443/api/v1/nodes\": dial tcp 139.178.70.103:6443: connect: connection refused" node="localhost" Jan 30 13:56:23.531033 kubelet[2403]: W0130 13:56:23.530976 2403 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://139.178.70.103:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 139.178.70.103:6443: connect: connection refused Jan 30 13:56:23.531033 kubelet[2403]: E0130 13:56:23.531016 2403 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://139.178.70.103:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 139.178.70.103:6443: connect: connection refused Jan 30 13:56:23.544423 kubelet[2403]: W0130 13:56:23.544373 2403 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://139.178.70.103:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 139.178.70.103:6443: connect: connection refused Jan 30 13:56:23.544423 kubelet[2403]: E0130 13:56:23.544410 2403 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://139.178.70.103:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 139.178.70.103:6443: connect: connection refused Jan 30 13:56:24.073117 kubelet[2403]: E0130 13:56:24.073076 2403 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://139.178.70.103:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 139.178.70.103:6443: connect: connection refused Jan 30 13:56:25.086684 kubelet[2403]: I0130 13:56:25.086377 2403 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jan 30 13:56:25.335464 kubelet[2403]: E0130 13:56:25.335428 2403 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Jan 30 13:56:25.552551 kubelet[2403]: I0130 13:56:25.552523 2403 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Jan 30 13:56:25.557384 kubelet[2403]: E0130 13:56:25.557350 2403 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 30 13:56:25.658256 kubelet[2403]: E0130 13:56:25.658225 2403 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 30 13:56:25.759344 kubelet[2403]: E0130 13:56:25.759318 2403 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 30 13:56:25.969271 kubelet[2403]: I0130 13:56:25.969252 2403 apiserver.go:52] "Watching apiserver" Jan 30 13:56:25.982904 kubelet[2403]: I0130 13:56:25.982883 2403 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Jan 30 13:56:26.991554 systemd[1]: Reloading requested from client PID 2676 ('systemctl') (unit session-9.scope)... 
Jan 30 13:56:26.991563 systemd[1]: Reloading... Jan 30 13:56:27.056282 zram_generator::config[2726]: No configuration found. Jan 30 13:56:27.104037 systemd[1]: /etc/systemd/system/coreos-metadata.service:11: Ignoring unknown escape sequences: "echo "COREOS_CUSTOM_PRIVATE_IPV4=$(ip addr show ens192 | grep "inet 10." | grep -Po "inet \K[\d.]+") Jan 30 13:56:27.119168 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 30 13:56:27.178402 systemd[1]: Reloading finished in 186 ms. Jan 30 13:56:27.204109 kubelet[2403]: I0130 13:56:27.204049 2403 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 30 13:56:27.204278 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 13:56:27.210366 systemd[1]: kubelet.service: Deactivated successfully. Jan 30 13:56:27.210490 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 13:56:27.215535 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 30 13:56:27.450659 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 30 13:56:27.453530 (kubelet)[2781]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 30 13:56:27.499116 kubelet[2781]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 30 13:56:27.499116 kubelet[2781]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 30 13:56:27.499116 kubelet[2781]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 30 13:56:27.501659 kubelet[2781]: I0130 13:56:27.501634 2781 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 30 13:56:27.504388 kubelet[2781]: I0130 13:56:27.504279 2781 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Jan 30 13:56:27.504388 kubelet[2781]: I0130 13:56:27.504290 2781 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 30 13:56:27.504388 kubelet[2781]: I0130 13:56:27.504385 2781 server.go:927] "Client rotation is on, will bootstrap in background" Jan 30 13:56:27.505126 kubelet[2781]: I0130 13:56:27.505114 2781 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jan 30 13:56:27.505971 kubelet[2781]: I0130 13:56:27.505747 2781 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 30 13:56:27.511011 kubelet[2781]: I0130 13:56:27.511000 2781 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 30 13:56:27.511436 kubelet[2781]: I0130 13:56:27.511207 2781 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 30 13:56:27.511436 kubelet[2781]: I0130 13:56:27.511238 2781 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jan 30 13:56:27.511436 kubelet[2781]: I0130 13:56:27.511335 2781 topology_manager.go:138] "Creating topology manager with none policy" Jan 30 13:56:27.511436 kubelet[2781]: I0130 13:56:27.511342 2781 container_manager_linux.go:301] "Creating device plugin manager" Jan 30 13:56:27.512114 kubelet[2781]: I0130 13:56:27.511932 2781 state_mem.go:36] "Initialized new in-memory state store" Jan 30 13:56:27.512114 kubelet[2781]: I0130 13:56:27.511996 2781 kubelet.go:400] "Attempting to sync node with API server" Jan 30 13:56:27.512114 kubelet[2781]: I0130 13:56:27.512004 2781 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 30 13:56:27.512114 kubelet[2781]: I0130 13:56:27.512016 2781 kubelet.go:312] "Adding apiserver pod source" Jan 30 13:56:27.512114 kubelet[2781]: I0130 13:56:27.512024 2781 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 30 13:56:27.513898 kubelet[2781]: I0130 13:56:27.513876 2781 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 30 13:56:27.514010 kubelet[2781]: I0130 13:56:27.513990 2781 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 30 13:56:27.517427 kubelet[2781]: I0130 13:56:27.516729 2781 server.go:1264] "Started kubelet" Jan 30 13:56:27.517574 kubelet[2781]: I0130 13:56:27.517566 2781 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 30 13:56:27.526229 kubelet[2781]: I0130 13:56:27.524907 2781 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 30 13:56:27.527608 kubelet[2781]: I0130 13:56:27.527523 2781 server.go:455] "Adding debug handlers to 
kubelet server" Jan 30 13:56:27.528258 kubelet[2781]: I0130 13:56:27.525764 2781 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Jan 30 13:56:27.528258 kubelet[2781]: I0130 13:56:27.528204 2781 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 30 13:56:27.528334 kubelet[2781]: I0130 13:56:27.528323 2781 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 30 13:56:27.529179 kubelet[2781]: I0130 13:56:27.529124 2781 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 30 13:56:27.529988 kubelet[2781]: I0130 13:56:27.525757 2781 volume_manager.go:291] "Starting Kubelet Volume Manager" Jan 30 13:56:27.530032 kubelet[2781]: I0130 13:56:27.530023 2781 reconciler.go:26] "Reconciler: start to sync state" Jan 30 13:56:27.531225 kubelet[2781]: I0130 13:56:27.531063 2781 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 30 13:56:27.531225 kubelet[2781]: I0130 13:56:27.531078 2781 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 30 13:56:27.531225 kubelet[2781]: I0130 13:56:27.531088 2781 kubelet.go:2337] "Starting kubelet main sync loop" Jan 30 13:56:27.531225 kubelet[2781]: E0130 13:56:27.531108 2781 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 30 13:56:27.533575 kubelet[2781]: I0130 13:56:27.533455 2781 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 30 13:56:27.535168 kubelet[2781]: I0130 13:56:27.535096 2781 factory.go:221] Registration of the containerd container factory successfully Jan 30 13:56:27.535168 kubelet[2781]: I0130 13:56:27.535106 2781 factory.go:221] Registration of the systemd container factory successfully Jan 30 13:56:27.547898 kubelet[2781]: E0130 13:56:27.547312 2781 kubelet.go:1467] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 30 13:56:27.562432 kubelet[2781]: I0130 13:56:27.562413 2781 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 30 13:56:27.562432 kubelet[2781]: I0130 13:56:27.562425 2781 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 30 13:56:27.562531 kubelet[2781]: I0130 13:56:27.562455 2781 state_mem.go:36] "Initialized new in-memory state store" Jan 30 13:56:27.562551 kubelet[2781]: I0130 13:56:27.562539 2781 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 30 13:56:27.562568 kubelet[2781]: I0130 13:56:27.562545 2781 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 30 13:56:27.562568 kubelet[2781]: I0130 13:56:27.562556 2781 policy_none.go:49] "None policy: Start" Jan 30 13:56:27.563069 kubelet[2781]: I0130 13:56:27.563057 2781 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 30 13:56:27.563095 kubelet[2781]: I0130 13:56:27.563071 2781 state_mem.go:35] "Initializing new in-memory state store" Jan 30 13:56:27.563147 kubelet[2781]: I0130 13:56:27.563137 2781 state_mem.go:75] "Updated machine memory state" Jan 30 13:56:27.565433 kubelet[2781]: I0130 13:56:27.565418 2781 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 30 13:56:27.565528 kubelet[2781]: I0130 13:56:27.565503 2781 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 30 13:56:27.565574 kubelet[2781]: I0130 13:56:27.565558 2781 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 30 13:56:27.628573 kubelet[2781]: I0130 13:56:27.628554 2781 kubelet_node_status.go:73] "Attempting to register node" node="localhost" Jan 30 13:56:27.632176 kubelet[2781]: I0130 13:56:27.631778 2781 topology_manager.go:215] "Topology Admit Handler" podUID="9b8b5886141f9311660bb6b224a0f76c" podNamespace="kube-system" podName="kube-controller-manager-localhost" Jan 30 13:56:27.632176 kubelet[2781]: I0130 13:56:27.631849 2781 topology_manager.go:215] "Topology Admit Handler" podUID="4b186e12ac9f083392bb0d1970b49be4" podNamespace="kube-system" podName="kube-scheduler-localhost" Jan 30 13:56:27.632176 kubelet[2781]: I0130 13:56:27.631886 2781 topology_manager.go:215] "Topology Admit Handler" podUID="d2b7d53a13200fba6c90004888f6790a" podNamespace="kube-system" podName="kube-apiserver-localhost" Jan 30 13:56:27.632470 kubelet[2781]: I0130 13:56:27.632414 2781 kubelet_node_status.go:112] "Node was previously registered" node="localhost" Jan 30 13:56:27.632470 kubelet[2781]: I0130 13:56:27.632451 2781 kubelet_node_status.go:76] "Successfully registered node" node="localhost" Jan 30 13:56:27.731911 kubelet[2781]: I0130 13:56:27.731043 2781 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/9b8b5886141f9311660bb6b224a0f76c-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"9b8b5886141f9311660bb6b224a0f76c\") " pod="kube-system/kube-controller-manager-localhost" Jan 30 13:56:27.731911 kubelet[2781]: I0130 13:56:27.731068 2781 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9b8b5886141f9311660bb6b224a0f76c-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"9b8b5886141f9311660bb6b224a0f76c\") " pod="kube-system/kube-controller-manager-localhost" Jan 30 
13:56:27.731911 kubelet[2781]: I0130 13:56:27.731081 2781 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d2b7d53a13200fba6c90004888f6790a-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"d2b7d53a13200fba6c90004888f6790a\") " pod="kube-system/kube-apiserver-localhost" Jan 30 13:56:27.731911 kubelet[2781]: I0130 13:56:27.731090 2781 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9b8b5886141f9311660bb6b224a0f76c-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"9b8b5886141f9311660bb6b224a0f76c\") " pod="kube-system/kube-controller-manager-localhost" Jan 30 13:56:27.731911 kubelet[2781]: I0130 13:56:27.731099 2781 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/9b8b5886141f9311660bb6b224a0f76c-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"9b8b5886141f9311660bb6b224a0f76c\") " pod="kube-system/kube-controller-manager-localhost" Jan 30 13:56:27.732107 kubelet[2781]: I0130 13:56:27.731107 2781 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9b8b5886141f9311660bb6b224a0f76c-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"9b8b5886141f9311660bb6b224a0f76c\") " pod="kube-system/kube-controller-manager-localhost" Jan 30 13:56:27.732107 kubelet[2781]: I0130 13:56:27.731116 2781 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/4b186e12ac9f083392bb0d1970b49be4-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"4b186e12ac9f083392bb0d1970b49be4\") " pod="kube-system/kube-scheduler-localhost" Jan 30 13:56:27.732107 kubelet[2781]: I0130 13:56:27.731125 2781 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d2b7d53a13200fba6c90004888f6790a-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"d2b7d53a13200fba6c90004888f6790a\") " pod="kube-system/kube-apiserver-localhost" Jan 30 13:56:27.732107 kubelet[2781]: I0130 13:56:27.731135 2781 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d2b7d53a13200fba6c90004888f6790a-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"d2b7d53a13200fba6c90004888f6790a\") " pod="kube-system/kube-apiserver-localhost" Jan 30 13:56:28.515417 kubelet[2781]: I0130 13:56:28.515395 2781 apiserver.go:52] "Watching apiserver" Jan 30 13:56:28.528918 kubelet[2781]: I0130 13:56:28.528897 2781 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Jan 30 13:56:28.568745 kubelet[2781]: I0130 13:56:28.568388 2781 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=1.568361752 podStartE2EDuration="1.568361752s" podCreationTimestamp="2025-01-30 13:56:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 13:56:28.562653109 +0000 UTC m=+1.098263276" watchObservedRunningTime="2025-01-30 
13:56:28.568361752 +0000 UTC m=+1.103971911" Jan 30 13:56:28.568745 kubelet[2781]: E0130 13:56:28.568498 2781 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Jan 30 13:56:28.585650 kubelet[2781]: I0130 13:56:28.585619 2781 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.585607363 podStartE2EDuration="1.585607363s" podCreationTimestamp="2025-01-30 13:56:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 13:56:28.576289067 +0000 UTC m=+1.111899232" watchObservedRunningTime="2025-01-30 13:56:28.585607363 +0000 UTC m=+1.121217530" Jan 30 13:56:28.598071 kubelet[2781]: I0130 13:56:28.597998 2781 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=1.597978888 podStartE2EDuration="1.597978888s" podCreationTimestamp="2025-01-30 13:56:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 13:56:28.586785141 +0000 UTC m=+1.122395309" watchObservedRunningTime="2025-01-30 13:56:28.597978888 +0000 UTC m=+1.133589049" Jan 30 13:56:32.078471 sudo[1852]: pam_unix(sudo:session): session closed for user root Jan 30 13:56:32.086707 sshd[1849]: pam_unix(sshd:session): session closed for user core Jan 30 13:56:32.089245 systemd[1]: sshd@6-139.178.70.103:22-139.178.68.195:41042.service: Deactivated successfully. Jan 30 13:56:32.089308 systemd-logind[1518]: Session 9 logged out. Waiting for processes to exit. Jan 30 13:56:32.091010 systemd[1]: session-9.scope: Deactivated successfully. Jan 30 13:56:32.091115 systemd[1]: session-9.scope: Consumed 3.125s CPU time, 189.4M memory peak, 0B memory swap peak. Jan 30 13:56:32.091913 systemd-logind[1518]: Removed session 9. Jan 30 13:56:42.839711 kubelet[2781]: I0130 13:56:42.839665 2781 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 30 13:56:42.848291 containerd[1541]: time="2025-01-30T13:56:42.847545772Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 30 13:56:42.848480 kubelet[2781]: I0130 13:56:42.847712 2781 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 30 13:56:43.459712 kubelet[2781]: I0130 13:56:43.459664 2781 topology_manager.go:215] "Topology Admit Handler" podUID="515b3b3e-6d57-4851-b824-b07e3dcdfe40" podNamespace="kube-system" podName="kube-proxy-2bnxl" Jan 30 13:56:43.472860 systemd[1]: Created slice kubepods-besteffort-pod515b3b3e_6d57_4851_b824_b07e3dcdfe40.slice - libcontainer container kubepods-besteffort-pod515b3b3e_6d57_4851_b824_b07e3dcdfe40.slice. 
Jan 30 13:56:43.525559 kubelet[2781]: I0130 13:56:43.525534 2781 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/515b3b3e-6d57-4851-b824-b07e3dcdfe40-xtables-lock\") pod \"kube-proxy-2bnxl\" (UID: \"515b3b3e-6d57-4851-b824-b07e3dcdfe40\") " pod="kube-system/kube-proxy-2bnxl" Jan 30 13:56:43.525755 kubelet[2781]: I0130 13:56:43.525731 2781 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f7gxq\" (UniqueName: \"kubernetes.io/projected/515b3b3e-6d57-4851-b824-b07e3dcdfe40-kube-api-access-f7gxq\") pod \"kube-proxy-2bnxl\" (UID: \"515b3b3e-6d57-4851-b824-b07e3dcdfe40\") " pod="kube-system/kube-proxy-2bnxl" Jan 30 13:56:43.525863 kubelet[2781]: I0130 13:56:43.525852 2781 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/515b3b3e-6d57-4851-b824-b07e3dcdfe40-kube-proxy\") pod \"kube-proxy-2bnxl\" (UID: \"515b3b3e-6d57-4851-b824-b07e3dcdfe40\") " pod="kube-system/kube-proxy-2bnxl" Jan 30 13:56:43.525954 kubelet[2781]: I0130 13:56:43.525945 2781 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/515b3b3e-6d57-4851-b824-b07e3dcdfe40-lib-modules\") pod \"kube-proxy-2bnxl\" (UID: \"515b3b3e-6d57-4851-b824-b07e3dcdfe40\") " pod="kube-system/kube-proxy-2bnxl" Jan 30 13:56:43.621599 kubelet[2781]: I0130 13:56:43.621568 2781 topology_manager.go:215] "Topology Admit Handler" podUID="01cc9b7f-d53f-49d7-86ee-b160894669d3" podNamespace="tigera-operator" podName="tigera-operator-7bc55997bb-z9pbf" Jan 30 13:56:43.628458 systemd[1]: Created slice kubepods-besteffort-pod01cc9b7f_d53f_49d7_86ee_b160894669d3.slice - libcontainer container kubepods-besteffort-pod01cc9b7f_d53f_49d7_86ee_b160894669d3.slice. Jan 30 13:56:43.727601 kubelet[2781]: I0130 13:56:43.727495 2781 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fwb5r\" (UniqueName: \"kubernetes.io/projected/01cc9b7f-d53f-49d7-86ee-b160894669d3-kube-api-access-fwb5r\") pod \"tigera-operator-7bc55997bb-z9pbf\" (UID: \"01cc9b7f-d53f-49d7-86ee-b160894669d3\") " pod="tigera-operator/tigera-operator-7bc55997bb-z9pbf" Jan 30 13:56:43.727601 kubelet[2781]: I0130 13:56:43.727525 2781 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/01cc9b7f-d53f-49d7-86ee-b160894669d3-var-lib-calico\") pod \"tigera-operator-7bc55997bb-z9pbf\" (UID: \"01cc9b7f-d53f-49d7-86ee-b160894669d3\") " pod="tigera-operator/tigera-operator-7bc55997bb-z9pbf" Jan 30 13:56:43.785329 containerd[1541]: time="2025-01-30T13:56:43.785202449Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-2bnxl,Uid:515b3b3e-6d57-4851-b824-b07e3dcdfe40,Namespace:kube-system,Attempt:0,}" Jan 30 13:56:43.922776 containerd[1541]: time="2025-01-30T13:56:43.922681652Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:56:43.922776 containerd[1541]: time="2025-01-30T13:56:43.922737706Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:56:43.923403 containerd[1541]: time="2025-01-30T13:56:43.922769847Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:56:43.923403 containerd[1541]: time="2025-01-30T13:56:43.922846633Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:56:43.939284 containerd[1541]: time="2025-01-30T13:56:43.939261986Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7bc55997bb-z9pbf,Uid:01cc9b7f-d53f-49d7-86ee-b160894669d3,Namespace:tigera-operator,Attempt:0,}" Jan 30 13:56:43.940320 systemd[1]: Started cri-containerd-3d3ed9b879179cee4d2e942623782eafe8f208282a6ac9a03b3bd79b7a2897ac.scope - libcontainer container 3d3ed9b879179cee4d2e942623782eafe8f208282a6ac9a03b3bd79b7a2897ac. Jan 30 13:56:43.955103 containerd[1541]: time="2025-01-30T13:56:43.955072101Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-2bnxl,Uid:515b3b3e-6d57-4851-b824-b07e3dcdfe40,Namespace:kube-system,Attempt:0,} returns sandbox id \"3d3ed9b879179cee4d2e942623782eafe8f208282a6ac9a03b3bd79b7a2897ac\"" Jan 30 13:56:43.957841 containerd[1541]: time="2025-01-30T13:56:43.957815224Z" level=info msg="CreateContainer within sandbox \"3d3ed9b879179cee4d2e942623782eafe8f208282a6ac9a03b3bd79b7a2897ac\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 30 13:56:43.994918 containerd[1541]: time="2025-01-30T13:56:43.991997900Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:56:43.994918 containerd[1541]: time="2025-01-30T13:56:43.992056085Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:56:43.994918 containerd[1541]: time="2025-01-30T13:56:43.992068267Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:56:43.994918 containerd[1541]: time="2025-01-30T13:56:43.992398227Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:56:43.997924 containerd[1541]: time="2025-01-30T13:56:43.997900287Z" level=info msg="CreateContainer within sandbox \"3d3ed9b879179cee4d2e942623782eafe8f208282a6ac9a03b3bd79b7a2897ac\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"9eeb404e155b8c75aa96d01b206faec0ba2c85f535cd42a1764cd004831c9de6\"" Jan 30 13:56:43.998612 containerd[1541]: time="2025-01-30T13:56:43.998598573Z" level=info msg="StartContainer for \"9eeb404e155b8c75aa96d01b206faec0ba2c85f535cd42a1764cd004831c9de6\"" Jan 30 13:56:44.010365 systemd[1]: Started cri-containerd-484ce7b461a60ef08a0b9689ee98a37e2876d378e066cbe813441fe3593d9d7b.scope - libcontainer container 484ce7b461a60ef08a0b9689ee98a37e2876d378e066cbe813441fe3593d9d7b. Jan 30 13:56:44.021468 systemd[1]: Started cri-containerd-9eeb404e155b8c75aa96d01b206faec0ba2c85f535cd42a1764cd004831c9de6.scope - libcontainer container 9eeb404e155b8c75aa96d01b206faec0ba2c85f535cd42a1764cd004831c9de6. 
Jan 30 13:56:44.039761 containerd[1541]: time="2025-01-30T13:56:44.039692975Z" level=info msg="StartContainer for \"9eeb404e155b8c75aa96d01b206faec0ba2c85f535cd42a1764cd004831c9de6\" returns successfully" Jan 30 13:56:44.053455 containerd[1541]: time="2025-01-30T13:56:44.053434226Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7bc55997bb-z9pbf,Uid:01cc9b7f-d53f-49d7-86ee-b160894669d3,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"484ce7b461a60ef08a0b9689ee98a37e2876d378e066cbe813441fe3593d9d7b\"" Jan 30 13:56:44.055309 containerd[1541]: time="2025-01-30T13:56:44.055197807Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\"" Jan 30 13:56:44.588659 kubelet[2781]: I0130 13:56:44.588585 2781 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-2bnxl" podStartSLOduration=1.588563834 podStartE2EDuration="1.588563834s" podCreationTimestamp="2025-01-30 13:56:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 13:56:44.588033294 +0000 UTC m=+17.123643452" watchObservedRunningTime="2025-01-30 13:56:44.588563834 +0000 UTC m=+17.124173994" Jan 30 13:56:45.605086 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount553341393.mount: Deactivated successfully. Jan 30 13:56:46.024834 containerd[1541]: time="2025-01-30T13:56:46.024807242Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.36.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:56:46.025313 containerd[1541]: time="2025-01-30T13:56:46.025250851Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.36.2: active requests=0, bytes read=21762497" Jan 30 13:56:46.025464 containerd[1541]: time="2025-01-30T13:56:46.025450989Z" level=info msg="ImageCreate event name:\"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:56:46.026760 containerd[1541]: time="2025-01-30T13:56:46.026729212Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:56:46.027487 containerd[1541]: time="2025-01-30T13:56:46.027180985Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.36.2\" with image id \"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\", repo tag \"quay.io/tigera/operator:v1.36.2\", repo digest \"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\", size \"21758492\" in 1.971913867s" Jan 30 13:56:46.027487 containerd[1541]: time="2025-01-30T13:56:46.027203940Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\" returns image reference \"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\"" Jan 30 13:56:46.031155 containerd[1541]: time="2025-01-30T13:56:46.031140174Z" level=info msg="CreateContainer within sandbox \"484ce7b461a60ef08a0b9689ee98a37e2876d378e066cbe813441fe3593d9d7b\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jan 30 13:56:46.036978 containerd[1541]: time="2025-01-30T13:56:46.036954830Z" level=info msg="CreateContainer within sandbox \"484ce7b461a60ef08a0b9689ee98a37e2876d378e066cbe813441fe3593d9d7b\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id 
\"1bc57d5273bbe7dad74e7852b26c8fdfcd6412279e4dbdeee19a282c8b5be06c\"" Jan 30 13:56:46.037590 containerd[1541]: time="2025-01-30T13:56:46.037573298Z" level=info msg="StartContainer for \"1bc57d5273bbe7dad74e7852b26c8fdfcd6412279e4dbdeee19a282c8b5be06c\"" Jan 30 13:56:46.052810 systemd[1]: run-containerd-runc-k8s.io-1bc57d5273bbe7dad74e7852b26c8fdfcd6412279e4dbdeee19a282c8b5be06c-runc.4glURj.mount: Deactivated successfully. Jan 30 13:56:46.059291 systemd[1]: Started cri-containerd-1bc57d5273bbe7dad74e7852b26c8fdfcd6412279e4dbdeee19a282c8b5be06c.scope - libcontainer container 1bc57d5273bbe7dad74e7852b26c8fdfcd6412279e4dbdeee19a282c8b5be06c. Jan 30 13:56:46.071945 containerd[1541]: time="2025-01-30T13:56:46.071914159Z" level=info msg="StartContainer for \"1bc57d5273bbe7dad74e7852b26c8fdfcd6412279e4dbdeee19a282c8b5be06c\" returns successfully" Jan 30 13:56:47.565336 kubelet[2781]: I0130 13:56:47.565298 2781 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7bc55997bb-z9pbf" podStartSLOduration=2.5891822380000002 podStartE2EDuration="4.565285086s" podCreationTimestamp="2025-01-30 13:56:43 +0000 UTC" firstStartedPulling="2025-01-30 13:56:44.05416659 +0000 UTC m=+16.589776747" lastFinishedPulling="2025-01-30 13:56:46.030269438 +0000 UTC m=+18.565879595" observedRunningTime="2025-01-30 13:56:46.585529671 +0000 UTC m=+19.121139847" watchObservedRunningTime="2025-01-30 13:56:47.565285086 +0000 UTC m=+20.100895249" Jan 30 13:56:49.020135 kubelet[2781]: I0130 13:56:49.019779 2781 topology_manager.go:215] "Topology Admit Handler" podUID="ecd4f075-4727-480c-95e0-1f433844e122" podNamespace="calico-system" podName="calico-typha-cf55bc847-gf4v7" Jan 30 13:56:49.086646 systemd[1]: Created slice kubepods-besteffort-podecd4f075_4727_480c_95e0_1f433844e122.slice - libcontainer container kubepods-besteffort-podecd4f075_4727_480c_95e0_1f433844e122.slice. Jan 30 13:56:49.109798 kubelet[2781]: I0130 13:56:49.109772 2781 topology_manager.go:215] "Topology Admit Handler" podUID="ad6e11ef-76a6-4bde-81c8-7ea07ca1653b" podNamespace="calico-system" podName="calico-node-fvx8x" Jan 30 13:56:49.118871 systemd[1]: Created slice kubepods-besteffort-podad6e11ef_76a6_4bde_81c8_7ea07ca1653b.slice - libcontainer container kubepods-besteffort-podad6e11ef_76a6_4bde_81c8_7ea07ca1653b.slice. 
Jan 30 13:56:49.165552 kubelet[2781]: I0130 13:56:49.165524 2781 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/ecd4f075-4727-480c-95e0-1f433844e122-typha-certs\") pod \"calico-typha-cf55bc847-gf4v7\" (UID: \"ecd4f075-4727-480c-95e0-1f433844e122\") " pod="calico-system/calico-typha-cf55bc847-gf4v7" Jan 30 13:56:49.165552 kubelet[2781]: I0130 13:56:49.165550 2781 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ecd4f075-4727-480c-95e0-1f433844e122-tigera-ca-bundle\") pod \"calico-typha-cf55bc847-gf4v7\" (UID: \"ecd4f075-4727-480c-95e0-1f433844e122\") " pod="calico-system/calico-typha-cf55bc847-gf4v7" Jan 30 13:56:49.165666 kubelet[2781]: I0130 13:56:49.165563 2781 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pp4d4\" (UniqueName: \"kubernetes.io/projected/ecd4f075-4727-480c-95e0-1f433844e122-kube-api-access-pp4d4\") pod \"calico-typha-cf55bc847-gf4v7\" (UID: \"ecd4f075-4727-480c-95e0-1f433844e122\") " pod="calico-system/calico-typha-cf55bc847-gf4v7" Jan 30 13:56:49.186230 kubelet[2781]: I0130 13:56:49.185614 2781 topology_manager.go:215] "Topology Admit Handler" podUID="7bdeb187-27dc-4c7e-aa2a-c05d3d3268f5" podNamespace="calico-system" podName="csi-node-driver-k9nfh" Jan 30 13:56:49.186230 kubelet[2781]: E0130 13:56:49.185789 2781 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-k9nfh" podUID="7bdeb187-27dc-4c7e-aa2a-c05d3d3268f5" Jan 30 13:56:49.265741 kubelet[2781]: I0130 13:56:49.265715 2781 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/ad6e11ef-76a6-4bde-81c8-7ea07ca1653b-cni-bin-dir\") pod \"calico-node-fvx8x\" (UID: \"ad6e11ef-76a6-4bde-81c8-7ea07ca1653b\") " pod="calico-system/calico-node-fvx8x" Jan 30 13:56:49.265865 kubelet[2781]: I0130 13:56:49.265857 2781 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-chjfp\" (UniqueName: \"kubernetes.io/projected/ad6e11ef-76a6-4bde-81c8-7ea07ca1653b-kube-api-access-chjfp\") pod \"calico-node-fvx8x\" (UID: \"ad6e11ef-76a6-4bde-81c8-7ea07ca1653b\") " pod="calico-system/calico-node-fvx8x" Jan 30 13:56:49.266314 kubelet[2781]: I0130 13:56:49.265904 2781 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/ad6e11ef-76a6-4bde-81c8-7ea07ca1653b-node-certs\") pod \"calico-node-fvx8x\" (UID: \"ad6e11ef-76a6-4bde-81c8-7ea07ca1653b\") " pod="calico-system/calico-node-fvx8x" Jan 30 13:56:49.266314 kubelet[2781]: I0130 13:56:49.265927 2781 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ad6e11ef-76a6-4bde-81c8-7ea07ca1653b-lib-modules\") pod \"calico-node-fvx8x\" (UID: \"ad6e11ef-76a6-4bde-81c8-7ea07ca1653b\") " pod="calico-system/calico-node-fvx8x" Jan 30 13:56:49.266314 kubelet[2781]: I0130 13:56:49.265937 2781 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" 
(UniqueName: \"kubernetes.io/host-path/ad6e11ef-76a6-4bde-81c8-7ea07ca1653b-policysync\") pod \"calico-node-fvx8x\" (UID: \"ad6e11ef-76a6-4bde-81c8-7ea07ca1653b\") " pod="calico-system/calico-node-fvx8x" Jan 30 13:56:49.266314 kubelet[2781]: I0130 13:56:49.265948 2781 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/ad6e11ef-76a6-4bde-81c8-7ea07ca1653b-var-run-calico\") pod \"calico-node-fvx8x\" (UID: \"ad6e11ef-76a6-4bde-81c8-7ea07ca1653b\") " pod="calico-system/calico-node-fvx8x" Jan 30 13:56:49.266314 kubelet[2781]: I0130 13:56:49.265957 2781 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/ad6e11ef-76a6-4bde-81c8-7ea07ca1653b-var-lib-calico\") pod \"calico-node-fvx8x\" (UID: \"ad6e11ef-76a6-4bde-81c8-7ea07ca1653b\") " pod="calico-system/calico-node-fvx8x" Jan 30 13:56:49.271290 kubelet[2781]: I0130 13:56:49.265966 2781 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ad6e11ef-76a6-4bde-81c8-7ea07ca1653b-xtables-lock\") pod \"calico-node-fvx8x\" (UID: \"ad6e11ef-76a6-4bde-81c8-7ea07ca1653b\") " pod="calico-system/calico-node-fvx8x" Jan 30 13:56:49.271290 kubelet[2781]: I0130 13:56:49.265975 2781 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/ad6e11ef-76a6-4bde-81c8-7ea07ca1653b-flexvol-driver-host\") pod \"calico-node-fvx8x\" (UID: \"ad6e11ef-76a6-4bde-81c8-7ea07ca1653b\") " pod="calico-system/calico-node-fvx8x" Jan 30 13:56:49.271290 kubelet[2781]: I0130 13:56:49.265992 2781 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/ad6e11ef-76a6-4bde-81c8-7ea07ca1653b-cni-net-dir\") pod \"calico-node-fvx8x\" (UID: \"ad6e11ef-76a6-4bde-81c8-7ea07ca1653b\") " pod="calico-system/calico-node-fvx8x" Jan 30 13:56:49.271290 kubelet[2781]: I0130 13:56:49.266025 2781 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ad6e11ef-76a6-4bde-81c8-7ea07ca1653b-tigera-ca-bundle\") pod \"calico-node-fvx8x\" (UID: \"ad6e11ef-76a6-4bde-81c8-7ea07ca1653b\") " pod="calico-system/calico-node-fvx8x" Jan 30 13:56:49.271290 kubelet[2781]: I0130 13:56:49.266035 2781 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/ad6e11ef-76a6-4bde-81c8-7ea07ca1653b-cni-log-dir\") pod \"calico-node-fvx8x\" (UID: \"ad6e11ef-76a6-4bde-81c8-7ea07ca1653b\") " pod="calico-system/calico-node-fvx8x" Jan 30 13:56:49.366734 kubelet[2781]: I0130 13:56:49.366431 2781 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m72zn\" (UniqueName: \"kubernetes.io/projected/7bdeb187-27dc-4c7e-aa2a-c05d3d3268f5-kube-api-access-m72zn\") pod \"csi-node-driver-k9nfh\" (UID: \"7bdeb187-27dc-4c7e-aa2a-c05d3d3268f5\") " pod="calico-system/csi-node-driver-k9nfh" Jan 30 13:56:49.366734 kubelet[2781]: I0130 13:56:49.366486 2781 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: 
\"kubernetes.io/host-path/7bdeb187-27dc-4c7e-aa2a-c05d3d3268f5-kubelet-dir\") pod \"csi-node-driver-k9nfh\" (UID: \"7bdeb187-27dc-4c7e-aa2a-c05d3d3268f5\") " pod="calico-system/csi-node-driver-k9nfh" Jan 30 13:56:49.366734 kubelet[2781]: I0130 13:56:49.366497 2781 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/7bdeb187-27dc-4c7e-aa2a-c05d3d3268f5-socket-dir\") pod \"csi-node-driver-k9nfh\" (UID: \"7bdeb187-27dc-4c7e-aa2a-c05d3d3268f5\") " pod="calico-system/csi-node-driver-k9nfh" Jan 30 13:56:49.366734 kubelet[2781]: I0130 13:56:49.366507 2781 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/7bdeb187-27dc-4c7e-aa2a-c05d3d3268f5-registration-dir\") pod \"csi-node-driver-k9nfh\" (UID: \"7bdeb187-27dc-4c7e-aa2a-c05d3d3268f5\") " pod="calico-system/csi-node-driver-k9nfh" Jan 30 13:56:49.366734 kubelet[2781]: I0130 13:56:49.366536 2781 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/7bdeb187-27dc-4c7e-aa2a-c05d3d3268f5-varrun\") pod \"csi-node-driver-k9nfh\" (UID: \"7bdeb187-27dc-4c7e-aa2a-c05d3d3268f5\") " pod="calico-system/csi-node-driver-k9nfh" Jan 30 13:56:49.377027 kubelet[2781]: E0130 13:56:49.376869 2781 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:56:49.377027 kubelet[2781]: W0130 13:56:49.376882 2781 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:56:49.378273 kubelet[2781]: E0130 13:56:49.377484 2781 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:56:49.413769 containerd[1541]: time="2025-01-30T13:56:49.413452681Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-cf55bc847-gf4v7,Uid:ecd4f075-4727-480c-95e0-1f433844e122,Namespace:calico-system,Attempt:0,}" Jan 30 13:56:49.424806 containerd[1541]: time="2025-01-30T13:56:49.423514414Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-fvx8x,Uid:ad6e11ef-76a6-4bde-81c8-7ea07ca1653b,Namespace:calico-system,Attempt:0,}" Jan 30 13:56:49.438191 containerd[1541]: time="2025-01-30T13:56:49.437878445Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:56:49.438191 containerd[1541]: time="2025-01-30T13:56:49.437929663Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:56:49.438191 containerd[1541]: time="2025-01-30T13:56:49.437940270Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:56:49.438191 containerd[1541]: time="2025-01-30T13:56:49.438007015Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:56:49.452327 containerd[1541]: time="2025-01-30T13:56:49.451125438Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:56:49.452327 containerd[1541]: time="2025-01-30T13:56:49.451155282Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:56:49.452327 containerd[1541]: time="2025-01-30T13:56:49.451162423Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:56:49.452327 containerd[1541]: time="2025-01-30T13:56:49.451278854Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:56:49.466963 kubelet[2781]: E0130 13:56:49.466950 2781 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:56:49.467049 kubelet[2781]: W0130 13:56:49.467041 2781 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:56:49.470371 kubelet[2781]: E0130 13:56:49.467224 2781 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:56:49.470371 kubelet[2781]: E0130 13:56:49.467430 2781 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:56:49.470371 kubelet[2781]: W0130 13:56:49.467435 2781 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:56:49.470371 kubelet[2781]: E0130 13:56:49.467443 2781 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:56:49.470371 kubelet[2781]: E0130 13:56:49.467561 2781 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:56:49.470371 kubelet[2781]: W0130 13:56:49.467570 2781 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:56:49.470371 kubelet[2781]: E0130 13:56:49.467601 2781 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:56:49.470371 kubelet[2781]: E0130 13:56:49.467724 2781 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:56:49.470371 kubelet[2781]: W0130 13:56:49.467729 2781 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:56:49.470371 kubelet[2781]: E0130 13:56:49.467758 2781 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 13:56:49.473697 kubelet[2781]: E0130 13:56:49.467861 2781 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:56:49.473697 kubelet[2781]: W0130 13:56:49.467866 2781 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:56:49.473697 kubelet[2781]: E0130 13:56:49.467880 2781 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:56:49.473697 kubelet[2781]: E0130 13:56:49.468008 2781 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:56:49.473697 kubelet[2781]: W0130 13:56:49.468013 2781 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:56:49.473697 kubelet[2781]: E0130 13:56:49.468028 2781 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:56:49.473697 kubelet[2781]: E0130 13:56:49.468142 2781 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:56:49.473697 kubelet[2781]: W0130 13:56:49.468147 2781 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:56:49.473697 kubelet[2781]: E0130 13:56:49.468161 2781 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:56:49.473697 kubelet[2781]: E0130 13:56:49.468274 2781 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:56:49.470525 systemd[1]: Started cri-containerd-0019d3be616ba369fd14c8fae47f711034bdcec1f1e1451c4f282509b6ddbc63.scope - libcontainer container 0019d3be616ba369fd14c8fae47f711034bdcec1f1e1451c4f282509b6ddbc63. Jan 30 13:56:49.473920 kubelet[2781]: W0130 13:56:49.468280 2781 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:56:49.473920 kubelet[2781]: E0130 13:56:49.468290 2781 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 13:56:49.473920 kubelet[2781]: E0130 13:56:49.468387 2781 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:56:49.473920 kubelet[2781]: W0130 13:56:49.468392 2781 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:56:49.473920 kubelet[2781]: E0130 13:56:49.468399 2781 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:56:49.473920 kubelet[2781]: E0130 13:56:49.468487 2781 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:56:49.473920 kubelet[2781]: W0130 13:56:49.468497 2781 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:56:49.473920 kubelet[2781]: E0130 13:56:49.468504 2781 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:56:49.473920 kubelet[2781]: E0130 13:56:49.468607 2781 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:56:49.473920 kubelet[2781]: W0130 13:56:49.468613 2781 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:56:49.474399 kubelet[2781]: E0130 13:56:49.468622 2781 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:56:49.474399 kubelet[2781]: E0130 13:56:49.468747 2781 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:56:49.474399 kubelet[2781]: W0130 13:56:49.468754 2781 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:56:49.474399 kubelet[2781]: E0130 13:56:49.468761 2781 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:56:49.474399 kubelet[2781]: E0130 13:56:49.468909 2781 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:56:49.474399 kubelet[2781]: W0130 13:56:49.468914 2781 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:56:49.474399 kubelet[2781]: E0130 13:56:49.468919 2781 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 13:56:49.474399 kubelet[2781]: E0130 13:56:49.469047 2781 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:56:49.474399 kubelet[2781]: W0130 13:56:49.469075 2781 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:56:49.474399 kubelet[2781]: E0130 13:56:49.469083 2781 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:56:49.474926 kubelet[2781]: E0130 13:56:49.469239 2781 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:56:49.474926 kubelet[2781]: W0130 13:56:49.469244 2781 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:56:49.474926 kubelet[2781]: E0130 13:56:49.469255 2781 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:56:49.474926 kubelet[2781]: E0130 13:56:49.469780 2781 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:56:49.474926 kubelet[2781]: W0130 13:56:49.469786 2781 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:56:49.474926 kubelet[2781]: E0130 13:56:49.469797 2781 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:56:49.474926 kubelet[2781]: E0130 13:56:49.469910 2781 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:56:49.474926 kubelet[2781]: W0130 13:56:49.469916 2781 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:56:49.474926 kubelet[2781]: E0130 13:56:49.469930 2781 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:56:49.474926 kubelet[2781]: E0130 13:56:49.470061 2781 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:56:49.475202 kubelet[2781]: W0130 13:56:49.470066 2781 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:56:49.475202 kubelet[2781]: E0130 13:56:49.470075 2781 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 13:56:49.475202 kubelet[2781]: E0130 13:56:49.470185 2781 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:56:49.475202 kubelet[2781]: W0130 13:56:49.470191 2781 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:56:49.475202 kubelet[2781]: E0130 13:56:49.470206 2781 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:56:49.475202 kubelet[2781]: E0130 13:56:49.470991 2781 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:56:49.475202 kubelet[2781]: W0130 13:56:49.470997 2781 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:56:49.475202 kubelet[2781]: E0130 13:56:49.471050 2781 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:56:49.475202 kubelet[2781]: E0130 13:56:49.471332 2781 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:56:49.475202 kubelet[2781]: W0130 13:56:49.471337 2781 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:56:49.476058 kubelet[2781]: E0130 13:56:49.471645 2781 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:56:49.476058 kubelet[2781]: W0130 13:56:49.471650 2781 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:56:49.476058 kubelet[2781]: E0130 13:56:49.471697 2781 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:56:49.476058 kubelet[2781]: E0130 13:56:49.472482 2781 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:56:49.476058 kubelet[2781]: W0130 13:56:49.472493 2781 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:56:49.476058 kubelet[2781]: E0130 13:56:49.472501 2781 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:56:49.476058 kubelet[2781]: E0130 13:56:49.473599 2781 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 13:56:49.476058 kubelet[2781]: E0130 13:56:49.474418 2781 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:56:49.476058 kubelet[2781]: W0130 13:56:49.474426 2781 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:56:49.476058 kubelet[2781]: E0130 13:56:49.474439 2781 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:56:49.476306 systemd[1]: Started cri-containerd-aa2e9dc8992692dae98e07501a5bb9efe240075a12f77cdec37ece1413c886ed.scope - libcontainer container aa2e9dc8992692dae98e07501a5bb9efe240075a12f77cdec37ece1413c886ed. Jan 30 13:56:49.478965 kubelet[2781]: E0130 13:56:49.478945 2781 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:56:49.479894 kubelet[2781]: W0130 13:56:49.479304 2781 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:56:49.479894 kubelet[2781]: E0130 13:56:49.479324 2781 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 30 13:56:49.481666 kubelet[2781]: E0130 13:56:49.481648 2781 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 30 13:56:49.481666 kubelet[2781]: W0130 13:56:49.481661 2781 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 30 13:56:49.481729 kubelet[2781]: E0130 13:56:49.481673 2781 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 30 13:56:49.497099 containerd[1541]: time="2025-01-30T13:56:49.497038710Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-fvx8x,Uid:ad6e11ef-76a6-4bde-81c8-7ea07ca1653b,Namespace:calico-system,Attempt:0,} returns sandbox id \"0019d3be616ba369fd14c8fae47f711034bdcec1f1e1451c4f282509b6ddbc63\"" Jan 30 13:56:49.507819 containerd[1541]: time="2025-01-30T13:56:49.507752273Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\"" Jan 30 13:56:49.516893 containerd[1541]: time="2025-01-30T13:56:49.516871877Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-cf55bc847-gf4v7,Uid:ecd4f075-4727-480c-95e0-1f433844e122,Namespace:calico-system,Attempt:0,} returns sandbox id \"aa2e9dc8992692dae98e07501a5bb9efe240075a12f77cdec37ece1413c886ed\"" Jan 30 13:56:50.533092 kubelet[2781]: E0130 13:56:50.532883 2781 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-k9nfh" podUID="7bdeb187-27dc-4c7e-aa2a-c05d3d3268f5" Jan 30 13:56:50.896331 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount573770756.mount: Deactivated successfully. Jan 30 13:56:51.049087 containerd[1541]: time="2025-01-30T13:56:51.048905337Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:56:51.053389 containerd[1541]: time="2025-01-30T13:56:51.053322042Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1: active requests=0, bytes read=6855343" Jan 30 13:56:51.055994 containerd[1541]: time="2025-01-30T13:56:51.055955056Z" level=info msg="ImageCreate event name:\"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:56:51.057067 containerd[1541]: time="2025-01-30T13:56:51.057043949Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:56:51.057731 containerd[1541]: time="2025-01-30T13:56:51.057514904Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" with image id \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\", size \"6855165\" in 1.549736678s" Jan 30 13:56:51.057731 containerd[1541]: time="2025-01-30T13:56:51.057541803Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" returns image reference \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\"" Jan 30 13:56:51.058253 containerd[1541]: time="2025-01-30T13:56:51.058236116Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\"" Jan 30 13:56:51.069869 containerd[1541]: time="2025-01-30T13:56:51.069835470Z" level=info msg="CreateContainer within sandbox \"0019d3be616ba369fd14c8fae47f711034bdcec1f1e1451c4f282509b6ddbc63\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jan 30 13:56:51.077041 containerd[1541]: 
time="2025-01-30T13:56:51.076883026Z" level=info msg="CreateContainer within sandbox \"0019d3be616ba369fd14c8fae47f711034bdcec1f1e1451c4f282509b6ddbc63\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"4b1a5e072dfddc46d77109a0cfe2efdc2868056b3500b26e2604f53ae9f2a451\"" Jan 30 13:56:51.077773 containerd[1541]: time="2025-01-30T13:56:51.077551819Z" level=info msg="StartContainer for \"4b1a5e072dfddc46d77109a0cfe2efdc2868056b3500b26e2604f53ae9f2a451\"" Jan 30 13:56:51.100334 systemd[1]: Started cri-containerd-4b1a5e072dfddc46d77109a0cfe2efdc2868056b3500b26e2604f53ae9f2a451.scope - libcontainer container 4b1a5e072dfddc46d77109a0cfe2efdc2868056b3500b26e2604f53ae9f2a451. Jan 30 13:56:51.128108 systemd[1]: cri-containerd-4b1a5e072dfddc46d77109a0cfe2efdc2868056b3500b26e2604f53ae9f2a451.scope: Deactivated successfully. Jan 30 13:56:51.128824 containerd[1541]: time="2025-01-30T13:56:51.128793309Z" level=info msg="StartContainer for \"4b1a5e072dfddc46d77109a0cfe2efdc2868056b3500b26e2604f53ae9f2a451\" returns successfully" Jan 30 13:56:52.234428 containerd[1541]: time="2025-01-30T13:56:52.224327845Z" level=info msg="shim disconnected" id=4b1a5e072dfddc46d77109a0cfe2efdc2868056b3500b26e2604f53ae9f2a451 namespace=k8s.io Jan 30 13:56:52.234428 containerd[1541]: time="2025-01-30T13:56:52.234427422Z" level=warning msg="cleaning up after shim disconnected" id=4b1a5e072dfddc46d77109a0cfe2efdc2868056b3500b26e2604f53ae9f2a451 namespace=k8s.io Jan 30 13:56:52.234829 containerd[1541]: time="2025-01-30T13:56:52.234439595Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 13:56:52.532582 kubelet[2781]: E0130 13:56:52.531461 2781 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-k9nfh" podUID="7bdeb187-27dc-4c7e-aa2a-c05d3d3268f5" Jan 30 13:56:53.362566 containerd[1541]: time="2025-01-30T13:56:53.362073821Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:56:53.363534 containerd[1541]: time="2025-01-30T13:56:53.363494959Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.1: active requests=0, bytes read=29850141" Jan 30 13:56:53.365534 containerd[1541]: time="2025-01-30T13:56:53.363987981Z" level=info msg="ImageCreate event name:\"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:56:53.366321 containerd[1541]: time="2025-01-30T13:56:53.366278830Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:56:53.367599 containerd[1541]: time="2025-01-30T13:56:53.366918795Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.1\" with image id \"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\", size \"31343217\" in 2.308662551s" Jan 30 13:56:53.367907 containerd[1541]: time="2025-01-30T13:56:53.367894008Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\" returns image reference 
\"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\"" Jan 30 13:56:53.373264 containerd[1541]: time="2025-01-30T13:56:53.373235803Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\"" Jan 30 13:56:53.385401 containerd[1541]: time="2025-01-30T13:56:53.385374949Z" level=info msg="CreateContainer within sandbox \"aa2e9dc8992692dae98e07501a5bb9efe240075a12f77cdec37ece1413c886ed\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jan 30 13:56:53.408566 containerd[1541]: time="2025-01-30T13:56:53.408539743Z" level=info msg="CreateContainer within sandbox \"aa2e9dc8992692dae98e07501a5bb9efe240075a12f77cdec37ece1413c886ed\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"34e3a4904204209c935c5e6e80f648100805b2e376fd7d1750995fcf907c8739\"" Jan 30 13:56:53.410733 containerd[1541]: time="2025-01-30T13:56:53.410047638Z" level=info msg="StartContainer for \"34e3a4904204209c935c5e6e80f648100805b2e376fd7d1750995fcf907c8739\"" Jan 30 13:56:53.437326 systemd[1]: Started cri-containerd-34e3a4904204209c935c5e6e80f648100805b2e376fd7d1750995fcf907c8739.scope - libcontainer container 34e3a4904204209c935c5e6e80f648100805b2e376fd7d1750995fcf907c8739. Jan 30 13:56:53.481457 containerd[1541]: time="2025-01-30T13:56:53.481407152Z" level=info msg="StartContainer for \"34e3a4904204209c935c5e6e80f648100805b2e376fd7d1750995fcf907c8739\" returns successfully" Jan 30 13:56:53.622348 kubelet[2781]: I0130 13:56:53.616081 2781 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-cf55bc847-gf4v7" podStartSLOduration=1.765668858 podStartE2EDuration="5.616069128s" podCreationTimestamp="2025-01-30 13:56:48 +0000 UTC" firstStartedPulling="2025-01-30 13:56:49.518063611 +0000 UTC m=+22.053673768" lastFinishedPulling="2025-01-30 13:56:53.36846388 +0000 UTC m=+25.904074038" observedRunningTime="2025-01-30 13:56:53.615574886 +0000 UTC m=+26.151185052" watchObservedRunningTime="2025-01-30 13:56:53.616069128 +0000 UTC m=+26.151679294" Jan 30 13:56:54.373875 systemd[1]: run-containerd-runc-k8s.io-34e3a4904204209c935c5e6e80f648100805b2e376fd7d1750995fcf907c8739-runc.OlD8OT.mount: Deactivated successfully. 
Jan 30 13:56:54.531915 kubelet[2781]: E0130 13:56:54.531879 2781 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-k9nfh" podUID="7bdeb187-27dc-4c7e-aa2a-c05d3d3268f5" Jan 30 13:56:54.605500 kubelet[2781]: I0130 13:56:54.605479 2781 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 30 13:56:56.531375 kubelet[2781]: E0130 13:56:56.531350 2781 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-k9nfh" podUID="7bdeb187-27dc-4c7e-aa2a-c05d3d3268f5" Jan 30 13:56:57.621631 containerd[1541]: time="2025-01-30T13:56:57.621595161Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:56:57.622175 containerd[1541]: time="2025-01-30T13:56:57.622073620Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.1: active requests=0, bytes read=96154154" Jan 30 13:56:57.623255 containerd[1541]: time="2025-01-30T13:56:57.622395074Z" level=info msg="ImageCreate event name:\"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:56:57.623594 containerd[1541]: time="2025-01-30T13:56:57.623578652Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:56:57.624127 containerd[1541]: time="2025-01-30T13:56:57.624110848Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.1\" with image id \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\", size \"97647238\" in 4.250662111s" Jan 30 13:56:57.624158 containerd[1541]: time="2025-01-30T13:56:57.624128130Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\" returns image reference \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\"" Jan 30 13:56:57.626419 containerd[1541]: time="2025-01-30T13:56:57.626386670Z" level=info msg="CreateContainer within sandbox \"0019d3be616ba369fd14c8fae47f711034bdcec1f1e1451c4f282509b6ddbc63\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jan 30 13:56:57.640370 containerd[1541]: time="2025-01-30T13:56:57.640340622Z" level=info msg="CreateContainer within sandbox \"0019d3be616ba369fd14c8fae47f711034bdcec1f1e1451c4f282509b6ddbc63\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"30b06e2aab9deb5e5982880d95f147b7ed558a866b6fb120ea67aed2ce679761\"" Jan 30 13:56:57.645439 containerd[1541]: time="2025-01-30T13:56:57.645318998Z" level=info msg="StartContainer for \"30b06e2aab9deb5e5982880d95f147b7ed558a866b6fb120ea67aed2ce679761\"" Jan 30 13:56:57.687948 systemd[1]: run-containerd-runc-k8s.io-30b06e2aab9deb5e5982880d95f147b7ed558a866b6fb120ea67aed2ce679761-runc.2BDV48.mount: Deactivated successfully. 
Jan 30 13:56:57.695362 systemd[1]: Started cri-containerd-30b06e2aab9deb5e5982880d95f147b7ed558a866b6fb120ea67aed2ce679761.scope - libcontainer container 30b06e2aab9deb5e5982880d95f147b7ed558a866b6fb120ea67aed2ce679761. Jan 30 13:56:57.721232 containerd[1541]: time="2025-01-30T13:56:57.721118774Z" level=info msg="StartContainer for \"30b06e2aab9deb5e5982880d95f147b7ed558a866b6fb120ea67aed2ce679761\" returns successfully" Jan 30 13:56:58.531408 kubelet[2781]: E0130 13:56:58.531324 2781 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-k9nfh" podUID="7bdeb187-27dc-4c7e-aa2a-c05d3d3268f5" Jan 30 13:56:59.886790 systemd[1]: cri-containerd-30b06e2aab9deb5e5982880d95f147b7ed558a866b6fb120ea67aed2ce679761.scope: Deactivated successfully. Jan 30 13:56:59.908632 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-30b06e2aab9deb5e5982880d95f147b7ed558a866b6fb120ea67aed2ce679761-rootfs.mount: Deactivated successfully. Jan 30 13:56:59.954480 containerd[1541]: time="2025-01-30T13:56:59.926418028Z" level=info msg="shim disconnected" id=30b06e2aab9deb5e5982880d95f147b7ed558a866b6fb120ea67aed2ce679761 namespace=k8s.io Jan 30 13:56:59.956010 containerd[1541]: time="2025-01-30T13:56:59.954715207Z" level=warning msg="cleaning up after shim disconnected" id=30b06e2aab9deb5e5982880d95f147b7ed558a866b6fb120ea67aed2ce679761 namespace=k8s.io Jan 30 13:56:59.956010 containerd[1541]: time="2025-01-30T13:56:59.954730370Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 13:56:59.985487 kubelet[2781]: I0130 13:56:59.984984 2781 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Jan 30 13:57:00.007424 kubelet[2781]: I0130 13:57:00.007392 2781 topology_manager.go:215] "Topology Admit Handler" podUID="7dc3c7ed-e794-482c-b0b8-b5cd489cc602" podNamespace="kube-system" podName="coredns-7db6d8ff4d-5pf6d" Jan 30 13:57:00.011802 kubelet[2781]: I0130 13:57:00.010775 2781 topology_manager.go:215] "Topology Admit Handler" podUID="f7a5bac7-0b52-4463-87d2-7adae530692a" podNamespace="calico-system" podName="calico-kube-controllers-768b4d69bb-4xhph" Jan 30 13:57:00.012223 kubelet[2781]: I0130 13:57:00.012180 2781 topology_manager.go:215] "Topology Admit Handler" podUID="dad8a909-d142-4d1f-a2c5-4c37cc87955b" podNamespace="kube-system" podName="coredns-7db6d8ff4d-c9bkn" Jan 30 13:57:00.015045 kubelet[2781]: I0130 13:57:00.014693 2781 topology_manager.go:215] "Topology Admit Handler" podUID="87735560-600e-4fca-8313-7ffee3249515" podNamespace="calico-apiserver" podName="calico-apiserver-57d5fbb54b-5kv8d" Jan 30 13:57:00.016221 kubelet[2781]: I0130 13:57:00.015826 2781 topology_manager.go:215] "Topology Admit Handler" podUID="20fb559c-2d4a-483a-b704-96d08f23fd99" podNamespace="calico-apiserver" podName="calico-apiserver-57d5fbb54b-s4whm" Jan 30 13:57:00.039520 systemd[1]: Created slice kubepods-burstable-pod7dc3c7ed_e794_482c_b0b8_b5cd489cc602.slice - libcontainer container kubepods-burstable-pod7dc3c7ed_e794_482c_b0b8_b5cd489cc602.slice. Jan 30 13:57:00.046425 systemd[1]: Created slice kubepods-besteffort-podf7a5bac7_0b52_4463_87d2_7adae530692a.slice - libcontainer container kubepods-besteffort-podf7a5bac7_0b52_4463_87d2_7adae530692a.slice. 
Jan 30 13:57:00.053891 systemd[1]: Created slice kubepods-burstable-poddad8a909_d142_4d1f_a2c5_4c37cc87955b.slice - libcontainer container kubepods-burstable-poddad8a909_d142_4d1f_a2c5_4c37cc87955b.slice. Jan 30 13:57:00.060162 systemd[1]: Created slice kubepods-besteffort-pod87735560_600e_4fca_8313_7ffee3249515.slice - libcontainer container kubepods-besteffort-pod87735560_600e_4fca_8313_7ffee3249515.slice. Jan 30 13:57:00.062103 kubelet[2781]: I0130 13:57:00.062084 2781 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dw5rw\" (UniqueName: \"kubernetes.io/projected/87735560-600e-4fca-8313-7ffee3249515-kube-api-access-dw5rw\") pod \"calico-apiserver-57d5fbb54b-5kv8d\" (UID: \"87735560-600e-4fca-8313-7ffee3249515\") " pod="calico-apiserver/calico-apiserver-57d5fbb54b-5kv8d" Jan 30 13:57:00.062166 kubelet[2781]: I0130 13:57:00.062123 2781 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-spwmt\" (UniqueName: \"kubernetes.io/projected/dad8a909-d142-4d1f-a2c5-4c37cc87955b-kube-api-access-spwmt\") pod \"coredns-7db6d8ff4d-c9bkn\" (UID: \"dad8a909-d142-4d1f-a2c5-4c37cc87955b\") " pod="kube-system/coredns-7db6d8ff4d-c9bkn" Jan 30 13:57:00.062166 kubelet[2781]: I0130 13:57:00.062139 2781 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/dad8a909-d142-4d1f-a2c5-4c37cc87955b-config-volume\") pod \"coredns-7db6d8ff4d-c9bkn\" (UID: \"dad8a909-d142-4d1f-a2c5-4c37cc87955b\") " pod="kube-system/coredns-7db6d8ff4d-c9bkn" Jan 30 13:57:00.062166 kubelet[2781]: I0130 13:57:00.062152 2781 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/87735560-600e-4fca-8313-7ffee3249515-calico-apiserver-certs\") pod \"calico-apiserver-57d5fbb54b-5kv8d\" (UID: \"87735560-600e-4fca-8313-7ffee3249515\") " pod="calico-apiserver/calico-apiserver-57d5fbb54b-5kv8d" Jan 30 13:57:00.064085 systemd[1]: Created slice kubepods-besteffort-pod20fb559c_2d4a_483a_b704_96d08f23fd99.slice - libcontainer container kubepods-besteffort-pod20fb559c_2d4a_483a_b704_96d08f23fd99.slice. 
Jan 30 13:57:00.163138 kubelet[2781]: I0130 13:57:00.163014 2781 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hlpxz\" (UniqueName: \"kubernetes.io/projected/f7a5bac7-0b52-4463-87d2-7adae530692a-kube-api-access-hlpxz\") pod \"calico-kube-controllers-768b4d69bb-4xhph\" (UID: \"f7a5bac7-0b52-4463-87d2-7adae530692a\") " pod="calico-system/calico-kube-controllers-768b4d69bb-4xhph" Jan 30 13:57:00.163138 kubelet[2781]: I0130 13:57:00.163047 2781 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7dc3c7ed-e794-482c-b0b8-b5cd489cc602-config-volume\") pod \"coredns-7db6d8ff4d-5pf6d\" (UID: \"7dc3c7ed-e794-482c-b0b8-b5cd489cc602\") " pod="kube-system/coredns-7db6d8ff4d-5pf6d" Jan 30 13:57:00.163138 kubelet[2781]: I0130 13:57:00.163062 2781 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-llggk\" (UniqueName: \"kubernetes.io/projected/20fb559c-2d4a-483a-b704-96d08f23fd99-kube-api-access-llggk\") pod \"calico-apiserver-57d5fbb54b-s4whm\" (UID: \"20fb559c-2d4a-483a-b704-96d08f23fd99\") " pod="calico-apiserver/calico-apiserver-57d5fbb54b-s4whm" Jan 30 13:57:00.163138 kubelet[2781]: I0130 13:57:00.163085 2781 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9k2xt\" (UniqueName: \"kubernetes.io/projected/7dc3c7ed-e794-482c-b0b8-b5cd489cc602-kube-api-access-9k2xt\") pod \"coredns-7db6d8ff4d-5pf6d\" (UID: \"7dc3c7ed-e794-482c-b0b8-b5cd489cc602\") " pod="kube-system/coredns-7db6d8ff4d-5pf6d" Jan 30 13:57:00.163138 kubelet[2781]: I0130 13:57:00.163098 2781 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/20fb559c-2d4a-483a-b704-96d08f23fd99-calico-apiserver-certs\") pod \"calico-apiserver-57d5fbb54b-s4whm\" (UID: \"20fb559c-2d4a-483a-b704-96d08f23fd99\") " pod="calico-apiserver/calico-apiserver-57d5fbb54b-s4whm" Jan 30 13:57:00.163400 kubelet[2781]: I0130 13:57:00.163120 2781 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f7a5bac7-0b52-4463-87d2-7adae530692a-tigera-ca-bundle\") pod \"calico-kube-controllers-768b4d69bb-4xhph\" (UID: \"f7a5bac7-0b52-4463-87d2-7adae530692a\") " pod="calico-system/calico-kube-controllers-768b4d69bb-4xhph" Jan 30 13:57:00.351962 containerd[1541]: time="2025-01-30T13:57:00.351654082Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-768b4d69bb-4xhph,Uid:f7a5bac7-0b52-4463-87d2-7adae530692a,Namespace:calico-system,Attempt:0,}" Jan 30 13:57:00.352048 containerd[1541]: time="2025-01-30T13:57:00.351987749Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-5pf6d,Uid:7dc3c7ed-e794-482c-b0b8-b5cd489cc602,Namespace:kube-system,Attempt:0,}" Jan 30 13:57:00.357584 containerd[1541]: time="2025-01-30T13:57:00.357459680Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-c9bkn,Uid:dad8a909-d142-4d1f-a2c5-4c37cc87955b,Namespace:kube-system,Attempt:0,}" Jan 30 13:57:00.363102 containerd[1541]: time="2025-01-30T13:57:00.362996862Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-57d5fbb54b-5kv8d,Uid:87735560-600e-4fca-8313-7ffee3249515,Namespace:calico-apiserver,Attempt:0,}" 
Jan 30 13:57:00.372277 containerd[1541]: time="2025-01-30T13:57:00.372248813Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-57d5fbb54b-s4whm,Uid:20fb559c-2d4a-483a-b704-96d08f23fd99,Namespace:calico-apiserver,Attempt:0,}" Jan 30 13:57:00.536137 systemd[1]: Created slice kubepods-besteffort-pod7bdeb187_27dc_4c7e_aa2a_c05d3d3268f5.slice - libcontainer container kubepods-besteffort-pod7bdeb187_27dc_4c7e_aa2a_c05d3d3268f5.slice. Jan 30 13:57:00.538399 containerd[1541]: time="2025-01-30T13:57:00.538369239Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-k9nfh,Uid:7bdeb187-27dc-4c7e-aa2a-c05d3d3268f5,Namespace:calico-system,Attempt:0,}" Jan 30 13:57:00.616165 containerd[1541]: time="2025-01-30T13:57:00.616141500Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\"" Jan 30 13:57:00.884232 containerd[1541]: time="2025-01-30T13:57:00.884138101Z" level=error msg="Failed to destroy network for sandbox \"ac6b56d56d94abac2dba7237f48aedc79ec5bbcf241d73a82ed29aca3f6f8846\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:57:00.884786 containerd[1541]: time="2025-01-30T13:57:00.884629607Z" level=error msg="encountered an error cleaning up failed sandbox \"ac6b56d56d94abac2dba7237f48aedc79ec5bbcf241d73a82ed29aca3f6f8846\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:57:00.884786 containerd[1541]: time="2025-01-30T13:57:00.884662633Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-57d5fbb54b-5kv8d,Uid:87735560-600e-4fca-8313-7ffee3249515,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"ac6b56d56d94abac2dba7237f48aedc79ec5bbcf241d73a82ed29aca3f6f8846\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:57:00.885555 containerd[1541]: time="2025-01-30T13:57:00.885497901Z" level=error msg="Failed to destroy network for sandbox \"12f0c3aa82930fa4175318f845b303e8879242b1c7ee301dcb58fcffde715c35\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:57:00.885721 containerd[1541]: time="2025-01-30T13:57:00.885708312Z" level=error msg="encountered an error cleaning up failed sandbox \"12f0c3aa82930fa4175318f845b303e8879242b1c7ee301dcb58fcffde715c35\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:57:00.885807 containerd[1541]: time="2025-01-30T13:57:00.885766357Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-57d5fbb54b-s4whm,Uid:20fb559c-2d4a-483a-b704-96d08f23fd99,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"12f0c3aa82930fa4175318f845b303e8879242b1c7ee301dcb58fcffde715c35\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check 
that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:57:00.891929 containerd[1541]: time="2025-01-30T13:57:00.891263129Z" level=error msg="Failed to destroy network for sandbox \"e0b0a230491599d1ea528cfd166e38d33740092bcb1720c37657ae8f6a606737\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:57:00.891929 containerd[1541]: time="2025-01-30T13:57:00.891446294Z" level=error msg="encountered an error cleaning up failed sandbox \"e0b0a230491599d1ea528cfd166e38d33740092bcb1720c37657ae8f6a606737\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:57:00.891929 containerd[1541]: time="2025-01-30T13:57:00.891469292Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-768b4d69bb-4xhph,Uid:f7a5bac7-0b52-4463-87d2-7adae530692a,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"e0b0a230491599d1ea528cfd166e38d33740092bcb1720c37657ae8f6a606737\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:57:00.891929 containerd[1541]: time="2025-01-30T13:57:00.891522807Z" level=error msg="Failed to destroy network for sandbox \"120d6885a6cc4ef3304b7e1f7e73f3c1516ac2726d6746a9f21b36d6a34ef864\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:57:00.891929 containerd[1541]: time="2025-01-30T13:57:00.891678761Z" level=error msg="encountered an error cleaning up failed sandbox \"120d6885a6cc4ef3304b7e1f7e73f3c1516ac2726d6746a9f21b36d6a34ef864\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:57:00.891929 containerd[1541]: time="2025-01-30T13:57:00.891697748Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-k9nfh,Uid:7bdeb187-27dc-4c7e-aa2a-c05d3d3268f5,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"120d6885a6cc4ef3304b7e1f7e73f3c1516ac2726d6746a9f21b36d6a34ef864\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:57:00.891929 containerd[1541]: time="2025-01-30T13:57:00.891738073Z" level=error msg="Failed to destroy network for sandbox \"c7605ff688b97b49659fbcca018dc7a79c5671a064117372177a6d4472ab57d7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:57:00.891929 containerd[1541]: time="2025-01-30T13:57:00.891879379Z" level=error msg="encountered an error cleaning up failed sandbox \"c7605ff688b97b49659fbcca018dc7a79c5671a064117372177a6d4472ab57d7\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:57:00.891929 containerd[1541]: time="2025-01-30T13:57:00.891887287Z" level=error msg="Failed to destroy network for sandbox \"50a9b3c28d70d1a436a7d96035e9cf3094699b4cbe8bf395b0b0e2c2f7fe11e9\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:57:00.891929 containerd[1541]: time="2025-01-30T13:57:00.891898555Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-c9bkn,Uid:dad8a909-d142-4d1f-a2c5-4c37cc87955b,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"c7605ff688b97b49659fbcca018dc7a79c5671a064117372177a6d4472ab57d7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:57:00.892427 containerd[1541]: time="2025-01-30T13:57:00.892367503Z" level=error msg="encountered an error cleaning up failed sandbox \"50a9b3c28d70d1a436a7d96035e9cf3094699b4cbe8bf395b0b0e2c2f7fe11e9\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:57:00.892427 containerd[1541]: time="2025-01-30T13:57:00.892392824Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-5pf6d,Uid:7dc3c7ed-e794-482c-b0b8-b5cd489cc602,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"50a9b3c28d70d1a436a7d96035e9cf3094699b4cbe8bf395b0b0e2c2f7fe11e9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:57:00.907187 kubelet[2781]: E0130 13:57:00.885122 2781 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ac6b56d56d94abac2dba7237f48aedc79ec5bbcf241d73a82ed29aca3f6f8846\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:57:00.907407 kubelet[2781]: E0130 13:57:00.892260 2781 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c7605ff688b97b49659fbcca018dc7a79c5671a064117372177a6d4472ab57d7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:57:00.907407 kubelet[2781]: E0130 13:57:00.907326 2781 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"12f0c3aa82930fa4175318f845b303e8879242b1c7ee301dcb58fcffde715c35\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:57:00.907407 kubelet[2781]: E0130 13:57:00.907348 2781 remote_runtime.go:193] "RunPodSandbox from runtime service failed" 
err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e0b0a230491599d1ea528cfd166e38d33740092bcb1720c37657ae8f6a606737\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:57:00.907407 kubelet[2781]: E0130 13:57:00.907359 2781 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"120d6885a6cc4ef3304b7e1f7e73f3c1516ac2726d6746a9f21b36d6a34ef864\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:57:00.907489 kubelet[2781]: E0130 13:57:00.907395 2781 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"50a9b3c28d70d1a436a7d96035e9cf3094699b4cbe8bf395b0b0e2c2f7fe11e9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:57:00.921596 kubelet[2781]: E0130 13:57:00.921435 2781 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ac6b56d56d94abac2dba7237f48aedc79ec5bbcf241d73a82ed29aca3f6f8846\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-57d5fbb54b-5kv8d" Jan 30 13:57:00.921596 kubelet[2781]: E0130 13:57:00.921451 2781 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c7605ff688b97b49659fbcca018dc7a79c5671a064117372177a6d4472ab57d7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-c9bkn" Jan 30 13:57:00.921596 kubelet[2781]: E0130 13:57:00.921457 2781 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ac6b56d56d94abac2dba7237f48aedc79ec5bbcf241d73a82ed29aca3f6f8846\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-57d5fbb54b-5kv8d" Jan 30 13:57:00.921596 kubelet[2781]: E0130 13:57:00.921465 2781 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c7605ff688b97b49659fbcca018dc7a79c5671a064117372177a6d4472ab57d7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-c9bkn" Jan 30 13:57:00.921706 kubelet[2781]: E0130 13:57:00.921490 2781 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-c9bkn_kube-system(dad8a909-d142-4d1f-a2c5-4c37cc87955b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"coredns-7db6d8ff4d-c9bkn_kube-system(dad8a909-d142-4d1f-a2c5-4c37cc87955b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c7605ff688b97b49659fbcca018dc7a79c5671a064117372177a6d4472ab57d7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-c9bkn" podUID="dad8a909-d142-4d1f-a2c5-4c37cc87955b" Jan 30 13:57:00.921706 kubelet[2781]: E0130 13:57:00.921491 2781 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-57d5fbb54b-5kv8d_calico-apiserver(87735560-600e-4fca-8313-7ffee3249515)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-57d5fbb54b-5kv8d_calico-apiserver(87735560-600e-4fca-8313-7ffee3249515)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ac6b56d56d94abac2dba7237f48aedc79ec5bbcf241d73a82ed29aca3f6f8846\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-57d5fbb54b-5kv8d" podUID="87735560-600e-4fca-8313-7ffee3249515" Jan 30 13:57:00.921706 kubelet[2781]: E0130 13:57:00.921512 2781 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e0b0a230491599d1ea528cfd166e38d33740092bcb1720c37657ae8f6a606737\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-768b4d69bb-4xhph" Jan 30 13:57:00.921812 kubelet[2781]: E0130 13:57:00.921517 2781 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"50a9b3c28d70d1a436a7d96035e9cf3094699b4cbe8bf395b0b0e2c2f7fe11e9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-5pf6d" Jan 30 13:57:00.921812 kubelet[2781]: E0130 13:57:00.921522 2781 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e0b0a230491599d1ea528cfd166e38d33740092bcb1720c37657ae8f6a606737\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-768b4d69bb-4xhph" Jan 30 13:57:00.921812 kubelet[2781]: E0130 13:57:00.921527 2781 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"50a9b3c28d70d1a436a7d96035e9cf3094699b4cbe8bf395b0b0e2c2f7fe11e9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7db6d8ff4d-5pf6d" Jan 30 13:57:00.921867 kubelet[2781]: E0130 13:57:00.921538 2781 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for 
\"calico-kube-controllers-768b4d69bb-4xhph_calico-system(f7a5bac7-0b52-4463-87d2-7adae530692a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-768b4d69bb-4xhph_calico-system(f7a5bac7-0b52-4463-87d2-7adae530692a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e0b0a230491599d1ea528cfd166e38d33740092bcb1720c37657ae8f6a606737\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-768b4d69bb-4xhph" podUID="f7a5bac7-0b52-4463-87d2-7adae530692a" Jan 30 13:57:00.921867 kubelet[2781]: E0130 13:57:00.921543 2781 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-5pf6d_kube-system(7dc3c7ed-e794-482c-b0b8-b5cd489cc602)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-5pf6d_kube-system(7dc3c7ed-e794-482c-b0b8-b5cd489cc602)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"50a9b3c28d70d1a436a7d96035e9cf3094699b4cbe8bf395b0b0e2c2f7fe11e9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-5pf6d" podUID="7dc3c7ed-e794-482c-b0b8-b5cd489cc602" Jan 30 13:57:00.921867 kubelet[2781]: E0130 13:57:00.921553 2781 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"120d6885a6cc4ef3304b7e1f7e73f3c1516ac2726d6746a9f21b36d6a34ef864\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-k9nfh" Jan 30 13:57:00.921944 kubelet[2781]: E0130 13:57:00.921435 2781 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"12f0c3aa82930fa4175318f845b303e8879242b1c7ee301dcb58fcffde715c35\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-57d5fbb54b-s4whm" Jan 30 13:57:00.921944 kubelet[2781]: E0130 13:57:00.921562 2781 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"120d6885a6cc4ef3304b7e1f7e73f3c1516ac2726d6746a9f21b36d6a34ef864\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-k9nfh" Jan 30 13:57:00.921944 kubelet[2781]: E0130 13:57:00.921564 2781 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"12f0c3aa82930fa4175318f845b303e8879242b1c7ee301dcb58fcffde715c35\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-57d5fbb54b-s4whm" Jan 30 13:57:00.922002 kubelet[2781]: E0130 13:57:00.921574 2781 
pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-k9nfh_calico-system(7bdeb187-27dc-4c7e-aa2a-c05d3d3268f5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-k9nfh_calico-system(7bdeb187-27dc-4c7e-aa2a-c05d3d3268f5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"120d6885a6cc4ef3304b7e1f7e73f3c1516ac2726d6746a9f21b36d6a34ef864\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-k9nfh" podUID="7bdeb187-27dc-4c7e-aa2a-c05d3d3268f5" Jan 30 13:57:00.922002 kubelet[2781]: E0130 13:57:00.921579 2781 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-57d5fbb54b-s4whm_calico-apiserver(20fb559c-2d4a-483a-b704-96d08f23fd99)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-57d5fbb54b-s4whm_calico-apiserver(20fb559c-2d4a-483a-b704-96d08f23fd99)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"12f0c3aa82930fa4175318f845b303e8879242b1c7ee301dcb58fcffde715c35\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-57d5fbb54b-s4whm" podUID="20fb559c-2d4a-483a-b704-96d08f23fd99" Jan 30 13:57:01.617518 kubelet[2781]: I0130 13:57:01.617499 2781 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e0b0a230491599d1ea528cfd166e38d33740092bcb1720c37657ae8f6a606737" Jan 30 13:57:01.619023 kubelet[2781]: I0130 13:57:01.618888 2781 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="12f0c3aa82930fa4175318f845b303e8879242b1c7ee301dcb58fcffde715c35" Jan 30 13:57:01.630136 kubelet[2781]: I0130 13:57:01.630112 2781 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="50a9b3c28d70d1a436a7d96035e9cf3094699b4cbe8bf395b0b0e2c2f7fe11e9" Jan 30 13:57:01.631050 kubelet[2781]: I0130 13:57:01.630945 2781 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="120d6885a6cc4ef3304b7e1f7e73f3c1516ac2726d6746a9f21b36d6a34ef864" Jan 30 13:57:01.632239 kubelet[2781]: I0130 13:57:01.631604 2781 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ac6b56d56d94abac2dba7237f48aedc79ec5bbcf241d73a82ed29aca3f6f8846" Jan 30 13:57:01.632239 kubelet[2781]: I0130 13:57:01.632081 2781 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c7605ff688b97b49659fbcca018dc7a79c5671a064117372177a6d4472ab57d7" Jan 30 13:57:01.661480 containerd[1541]: time="2025-01-30T13:57:01.660870278Z" level=info msg="StopPodSandbox for \"c7605ff688b97b49659fbcca018dc7a79c5671a064117372177a6d4472ab57d7\"" Jan 30 13:57:01.661480 containerd[1541]: time="2025-01-30T13:57:01.661387229Z" level=info msg="StopPodSandbox for \"120d6885a6cc4ef3304b7e1f7e73f3c1516ac2726d6746a9f21b36d6a34ef864\"" Jan 30 13:57:01.662861 containerd[1541]: time="2025-01-30T13:57:01.661925235Z" level=info msg="Ensure that sandbox c7605ff688b97b49659fbcca018dc7a79c5671a064117372177a6d4472ab57d7 in task-service has been cleanup successfully" Jan 30 13:57:01.663241 containerd[1541]: time="2025-01-30T13:57:01.663022370Z" level=info 
msg="StopPodSandbox for \"ac6b56d56d94abac2dba7237f48aedc79ec5bbcf241d73a82ed29aca3f6f8846\"" Jan 30 13:57:01.663241 containerd[1541]: time="2025-01-30T13:57:01.663111539Z" level=info msg="Ensure that sandbox ac6b56d56d94abac2dba7237f48aedc79ec5bbcf241d73a82ed29aca3f6f8846 in task-service has been cleanup successfully" Jan 30 13:57:01.663936 containerd[1541]: time="2025-01-30T13:57:01.663925553Z" level=info msg="StopPodSandbox for \"12f0c3aa82930fa4175318f845b303e8879242b1c7ee301dcb58fcffde715c35\"" Jan 30 13:57:01.664078 containerd[1541]: time="2025-01-30T13:57:01.664068722Z" level=info msg="Ensure that sandbox 12f0c3aa82930fa4175318f845b303e8879242b1c7ee301dcb58fcffde715c35 in task-service has been cleanup successfully" Jan 30 13:57:01.664451 containerd[1541]: time="2025-01-30T13:57:01.664435339Z" level=info msg="Ensure that sandbox 120d6885a6cc4ef3304b7e1f7e73f3c1516ac2726d6746a9f21b36d6a34ef864 in task-service has been cleanup successfully" Jan 30 13:57:01.664621 containerd[1541]: time="2025-01-30T13:57:01.664608303Z" level=info msg="StopPodSandbox for \"e0b0a230491599d1ea528cfd166e38d33740092bcb1720c37657ae8f6a606737\"" Jan 30 13:57:01.664686 containerd[1541]: time="2025-01-30T13:57:01.664674165Z" level=info msg="Ensure that sandbox e0b0a230491599d1ea528cfd166e38d33740092bcb1720c37657ae8f6a606737 in task-service has been cleanup successfully" Jan 30 13:57:01.665359 containerd[1541]: time="2025-01-30T13:57:01.665317399Z" level=info msg="StopPodSandbox for \"50a9b3c28d70d1a436a7d96035e9cf3094699b4cbe8bf395b0b0e2c2f7fe11e9\"" Jan 30 13:57:01.665473 containerd[1541]: time="2025-01-30T13:57:01.665396777Z" level=info msg="Ensure that sandbox 50a9b3c28d70d1a436a7d96035e9cf3094699b4cbe8bf395b0b0e2c2f7fe11e9 in task-service has been cleanup successfully" Jan 30 13:57:01.704365 containerd[1541]: time="2025-01-30T13:57:01.704329648Z" level=error msg="StopPodSandbox for \"c7605ff688b97b49659fbcca018dc7a79c5671a064117372177a6d4472ab57d7\" failed" error="failed to destroy network for sandbox \"c7605ff688b97b49659fbcca018dc7a79c5671a064117372177a6d4472ab57d7\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:57:01.704718 kubelet[2781]: E0130 13:57:01.704594 2781 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"c7605ff688b97b49659fbcca018dc7a79c5671a064117372177a6d4472ab57d7\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="c7605ff688b97b49659fbcca018dc7a79c5671a064117372177a6d4472ab57d7" Jan 30 13:57:01.704718 kubelet[2781]: E0130 13:57:01.704643 2781 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"c7605ff688b97b49659fbcca018dc7a79c5671a064117372177a6d4472ab57d7"} Jan 30 13:57:01.704718 kubelet[2781]: E0130 13:57:01.704684 2781 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"dad8a909-d142-4d1f-a2c5-4c37cc87955b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c7605ff688b97b49659fbcca018dc7a79c5671a064117372177a6d4472ab57d7\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is 
running and has mounted /var/lib/calico/\"" Jan 30 13:57:01.704718 kubelet[2781]: E0130 13:57:01.704698 2781 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"dad8a909-d142-4d1f-a2c5-4c37cc87955b\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"c7605ff688b97b49659fbcca018dc7a79c5671a064117372177a6d4472ab57d7\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-c9bkn" podUID="dad8a909-d142-4d1f-a2c5-4c37cc87955b" Jan 30 13:57:01.710314 containerd[1541]: time="2025-01-30T13:57:01.710264397Z" level=error msg="StopPodSandbox for \"120d6885a6cc4ef3304b7e1f7e73f3c1516ac2726d6746a9f21b36d6a34ef864\" failed" error="failed to destroy network for sandbox \"120d6885a6cc4ef3304b7e1f7e73f3c1516ac2726d6746a9f21b36d6a34ef864\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:57:01.710557 kubelet[2781]: E0130 13:57:01.710534 2781 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"120d6885a6cc4ef3304b7e1f7e73f3c1516ac2726d6746a9f21b36d6a34ef864\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="120d6885a6cc4ef3304b7e1f7e73f3c1516ac2726d6746a9f21b36d6a34ef864" Jan 30 13:57:01.710594 kubelet[2781]: E0130 13:57:01.710566 2781 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"120d6885a6cc4ef3304b7e1f7e73f3c1516ac2726d6746a9f21b36d6a34ef864"} Jan 30 13:57:01.710614 kubelet[2781]: E0130 13:57:01.710588 2781 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"7bdeb187-27dc-4c7e-aa2a-c05d3d3268f5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"120d6885a6cc4ef3304b7e1f7e73f3c1516ac2726d6746a9f21b36d6a34ef864\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 30 13:57:01.710614 kubelet[2781]: E0130 13:57:01.710607 2781 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"7bdeb187-27dc-4c7e-aa2a-c05d3d3268f5\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"120d6885a6cc4ef3304b7e1f7e73f3c1516ac2726d6746a9f21b36d6a34ef864\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-k9nfh" podUID="7bdeb187-27dc-4c7e-aa2a-c05d3d3268f5" Jan 30 13:57:01.718118 containerd[1541]: time="2025-01-30T13:57:01.717892559Z" level=error msg="StopPodSandbox for \"ac6b56d56d94abac2dba7237f48aedc79ec5bbcf241d73a82ed29aca3f6f8846\" failed" error="failed to destroy network for sandbox \"ac6b56d56d94abac2dba7237f48aedc79ec5bbcf241d73a82ed29aca3f6f8846\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check 
that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:57:01.718283 kubelet[2781]: E0130 13:57:01.718031 2781 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"ac6b56d56d94abac2dba7237f48aedc79ec5bbcf241d73a82ed29aca3f6f8846\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="ac6b56d56d94abac2dba7237f48aedc79ec5bbcf241d73a82ed29aca3f6f8846" Jan 30 13:57:01.718283 kubelet[2781]: E0130 13:57:01.718060 2781 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"ac6b56d56d94abac2dba7237f48aedc79ec5bbcf241d73a82ed29aca3f6f8846"} Jan 30 13:57:01.718283 kubelet[2781]: E0130 13:57:01.718082 2781 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"87735560-600e-4fca-8313-7ffee3249515\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ac6b56d56d94abac2dba7237f48aedc79ec5bbcf241d73a82ed29aca3f6f8846\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 30 13:57:01.718283 kubelet[2781]: E0130 13:57:01.718095 2781 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"87735560-600e-4fca-8313-7ffee3249515\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ac6b56d56d94abac2dba7237f48aedc79ec5bbcf241d73a82ed29aca3f6f8846\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-57d5fbb54b-5kv8d" podUID="87735560-600e-4fca-8313-7ffee3249515" Jan 30 13:57:01.718905 containerd[1541]: time="2025-01-30T13:57:01.718873557Z" level=error msg="StopPodSandbox for \"50a9b3c28d70d1a436a7d96035e9cf3094699b4cbe8bf395b0b0e2c2f7fe11e9\" failed" error="failed to destroy network for sandbox \"50a9b3c28d70d1a436a7d96035e9cf3094699b4cbe8bf395b0b0e2c2f7fe11e9\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:57:01.719167 containerd[1541]: time="2025-01-30T13:57:01.718913701Z" level=error msg="StopPodSandbox for \"e0b0a230491599d1ea528cfd166e38d33740092bcb1720c37657ae8f6a606737\" failed" error="failed to destroy network for sandbox \"e0b0a230491599d1ea528cfd166e38d33740092bcb1720c37657ae8f6a606737\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:57:01.719291 kubelet[2781]: E0130 13:57:01.719267 2781 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"50a9b3c28d70d1a436a7d96035e9cf3094699b4cbe8bf395b0b0e2c2f7fe11e9\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
podSandboxID="50a9b3c28d70d1a436a7d96035e9cf3094699b4cbe8bf395b0b0e2c2f7fe11e9" Jan 30 13:57:01.719291 kubelet[2781]: E0130 13:57:01.719288 2781 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"50a9b3c28d70d1a436a7d96035e9cf3094699b4cbe8bf395b0b0e2c2f7fe11e9"} Jan 30 13:57:01.719386 kubelet[2781]: E0130 13:57:01.719305 2781 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"7dc3c7ed-e794-482c-b0b8-b5cd489cc602\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"50a9b3c28d70d1a436a7d96035e9cf3094699b4cbe8bf395b0b0e2c2f7fe11e9\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 30 13:57:01.719386 kubelet[2781]: E0130 13:57:01.719316 2781 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"7dc3c7ed-e794-482c-b0b8-b5cd489cc602\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"50a9b3c28d70d1a436a7d96035e9cf3094699b4cbe8bf395b0b0e2c2f7fe11e9\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7db6d8ff4d-5pf6d" podUID="7dc3c7ed-e794-482c-b0b8-b5cd489cc602" Jan 30 13:57:01.719386 kubelet[2781]: E0130 13:57:01.719379 2781 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"e0b0a230491599d1ea528cfd166e38d33740092bcb1720c37657ae8f6a606737\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="e0b0a230491599d1ea528cfd166e38d33740092bcb1720c37657ae8f6a606737" Jan 30 13:57:01.719481 kubelet[2781]: E0130 13:57:01.719391 2781 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"e0b0a230491599d1ea528cfd166e38d33740092bcb1720c37657ae8f6a606737"} Jan 30 13:57:01.719481 kubelet[2781]: E0130 13:57:01.719405 2781 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"f7a5bac7-0b52-4463-87d2-7adae530692a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e0b0a230491599d1ea528cfd166e38d33740092bcb1720c37657ae8f6a606737\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 30 13:57:01.719481 kubelet[2781]: E0130 13:57:01.719414 2781 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"f7a5bac7-0b52-4463-87d2-7adae530692a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"e0b0a230491599d1ea528cfd166e38d33740092bcb1720c37657ae8f6a606737\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-768b4d69bb-4xhph" podUID="f7a5bac7-0b52-4463-87d2-7adae530692a" Jan 30 13:57:01.719759 containerd[1541]: 
time="2025-01-30T13:57:01.719741931Z" level=error msg="StopPodSandbox for \"12f0c3aa82930fa4175318f845b303e8879242b1c7ee301dcb58fcffde715c35\" failed" error="failed to destroy network for sandbox \"12f0c3aa82930fa4175318f845b303e8879242b1c7ee301dcb58fcffde715c35\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 30 13:57:01.719892 kubelet[2781]: E0130 13:57:01.719823 2781 remote_runtime.go:222] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"12f0c3aa82930fa4175318f845b303e8879242b1c7ee301dcb58fcffde715c35\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="12f0c3aa82930fa4175318f845b303e8879242b1c7ee301dcb58fcffde715c35" Jan 30 13:57:01.719892 kubelet[2781]: E0130 13:57:01.719845 2781 kuberuntime_manager.go:1375] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"12f0c3aa82930fa4175318f845b303e8879242b1c7ee301dcb58fcffde715c35"} Jan 30 13:57:01.719892 kubelet[2781]: E0130 13:57:01.719863 2781 kuberuntime_manager.go:1075] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"20fb559c-2d4a-483a-b704-96d08f23fd99\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"12f0c3aa82930fa4175318f845b303e8879242b1c7ee301dcb58fcffde715c35\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 30 13:57:01.719892 kubelet[2781]: E0130 13:57:01.719875 2781 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"20fb559c-2d4a-483a-b704-96d08f23fd99\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"12f0c3aa82930fa4175318f845b303e8879242b1c7ee301dcb58fcffde715c35\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-57d5fbb54b-s4whm" podUID="20fb559c-2d4a-483a-b704-96d08f23fd99" Jan 30 13:57:04.807735 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4175492429.mount: Deactivated successfully. 
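Every failed add and delete above traces back to one missing file: /var/lib/calico/nodename, which the calico/node container writes once it is up and which the CNI plugin reads to learn its node identity. A minimal Go sketch of that gate, under the assumption of a helper named determineNodename (illustrative only, not Calico's actual source):

```go
// Hypothetical reduction of the check behind the repeated CNI errors above:
// until calico/node writes its nodename file, every sandbox operation fails
// with the same stat error seen in the log.
package main

import (
	"fmt"
	"os"
	"strings"
)

const nodenameFile = "/var/lib/calico/nodename" // path taken from the log

func determineNodename() (string, error) {
	data, err := os.ReadFile(nodenameFile)
	if err != nil {
		// Wraps the stat error with the same hint the log shows.
		return "", fmt.Errorf("%w: check that the calico/node container is running and has mounted /var/lib/calico/", err)
	}
	return strings.TrimSpace(string(data)), nil
}

func main() {
	name, err := determineNodename()
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1) // kubelet sees this as an rpc error and keeps retrying the pod
	}
	fmt.Println("node name:", name)
}
```

This is also why the errors stop shortly after the calico-node container starts at 13:57:05 below: once the file exists, the same CNI calls begin to succeed.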
Jan 30 13:57:05.019027 containerd[1541]: time="2025-01-30T13:57:05.009782975Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.1: active requests=0, bytes read=142742010" Jan 30 13:57:05.045239 containerd[1541]: time="2025-01-30T13:57:05.045180700Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:57:05.068172 containerd[1541]: time="2025-01-30T13:57:05.067883605Z" level=info msg="ImageCreate event name:\"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:57:05.068250 containerd[1541]: time="2025-01-30T13:57:05.068198377Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:57:05.068891 containerd[1541]: time="2025-01-30T13:57:05.068875379Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.1\" with image id \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\", size \"142741872\" in 4.452706946s" Jan 30 13:57:05.068930 containerd[1541]: time="2025-01-30T13:57:05.068894490Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\" returns image reference \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\"" Jan 30 13:57:05.164128 containerd[1541]: time="2025-01-30T13:57:05.164035635Z" level=info msg="CreateContainer within sandbox \"0019d3be616ba369fd14c8fae47f711034bdcec1f1e1451c4f282509b6ddbc63\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jan 30 13:57:05.235191 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount805257001.mount: Deactivated successfully. Jan 30 13:57:05.243440 containerd[1541]: time="2025-01-30T13:57:05.243417294Z" level=info msg="CreateContainer within sandbox \"0019d3be616ba369fd14c8fae47f711034bdcec1f1e1451c4f282509b6ddbc63\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"e4226148cbfa24fd5c91b68396474aa34cfcc673be5d05640f407b2725fc75d4\"" Jan 30 13:57:05.254414 containerd[1541]: time="2025-01-30T13:57:05.253836342Z" level=info msg="StartContainer for \"e4226148cbfa24fd5c91b68396474aa34cfcc673be5d05640f407b2725fc75d4\"" Jan 30 13:57:05.325405 systemd[1]: Started cri-containerd-e4226148cbfa24fd5c91b68396474aa34cfcc673be5d05640f407b2725fc75d4.scope - libcontainer container e4226148cbfa24fd5c91b68396474aa34cfcc673be5d05640f407b2725fc75d4. Jan 30 13:57:05.344062 containerd[1541]: time="2025-01-30T13:57:05.344000490Z" level=info msg="StartContainer for \"e4226148cbfa24fd5c91b68396474aa34cfcc673be5d05640f407b2725fc75d4\" returns successfully" Jan 30 13:57:05.699290 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jan 30 13:57:05.703340 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved.
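For scale: the "Pulled image" line above reports a repo digest of 142741872 bytes fetched in 4.452706946s, which works out to roughly 30 MiB/s. A quick check of that arithmetic, with both numbers copied straight from the log:

```go
// Back-of-the-envelope throughput check for the pull duration reported by
// containerd above; the size and duration are the logged values.
package main

import (
	"fmt"
	"time"
)

func main() {
	const imageBytes = 142741872             // size reported by containerd
	pullTime := 4452706946 * time.Nanosecond // "in 4.452706946s"

	mibps := float64(imageBytes) / pullTime.Seconds() / (1 << 20)
	fmt.Printf("effective pull throughput: %.1f MiB/s\n", mibps) // ~30.6 MiB/s
}
```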
Jan 30 13:57:05.740169 kubelet[2781]: I0130 13:57:05.740096 2781 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-fvx8x" podStartSLOduration=1.10185497 podStartE2EDuration="16.719599371s" podCreationTimestamp="2025-01-30 13:56:49 +0000 UTC" firstStartedPulling="2025-01-30 13:56:49.503167828 +0000 UTC m=+22.038777985" lastFinishedPulling="2025-01-30 13:57:05.120912229 +0000 UTC m=+37.656522386" observedRunningTime="2025-01-30 13:57:05.719532123 +0000 UTC m=+38.255142289" watchObservedRunningTime="2025-01-30 13:57:05.719599371 +0000 UTC m=+38.255209528" Jan 30 13:57:11.242386 kubelet[2781]: I0130 13:57:11.242147 2781 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 30 13:57:14.532360 containerd[1541]: time="2025-01-30T13:57:14.532319708Z" level=info msg="StopPodSandbox for \"ac6b56d56d94abac2dba7237f48aedc79ec5bbcf241d73a82ed29aca3f6f8846\"" Jan 30 13:57:14.858154 containerd[1541]: 2025-01-30 13:57:14.596 [INFO][4119] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="ac6b56d56d94abac2dba7237f48aedc79ec5bbcf241d73a82ed29aca3f6f8846" Jan 30 13:57:14.858154 containerd[1541]: 2025-01-30 13:57:14.597 [INFO][4119] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="ac6b56d56d94abac2dba7237f48aedc79ec5bbcf241d73a82ed29aca3f6f8846" iface="eth0" netns="/var/run/netns/cni-e8031145-a7ff-a4b6-37a5-bbe76e8d5613" Jan 30 13:57:14.858154 containerd[1541]: 2025-01-30 13:57:14.597 [INFO][4119] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="ac6b56d56d94abac2dba7237f48aedc79ec5bbcf241d73a82ed29aca3f6f8846" iface="eth0" netns="/var/run/netns/cni-e8031145-a7ff-a4b6-37a5-bbe76e8d5613" Jan 30 13:57:14.858154 containerd[1541]: 2025-01-30 13:57:14.600 [INFO][4119] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="ac6b56d56d94abac2dba7237f48aedc79ec5bbcf241d73a82ed29aca3f6f8846" iface="eth0" netns="/var/run/netns/cni-e8031145-a7ff-a4b6-37a5-bbe76e8d5613" Jan 30 13:57:14.858154 containerd[1541]: 2025-01-30 13:57:14.600 [INFO][4119] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="ac6b56d56d94abac2dba7237f48aedc79ec5bbcf241d73a82ed29aca3f6f8846" Jan 30 13:57:14.858154 containerd[1541]: 2025-01-30 13:57:14.600 [INFO][4119] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ac6b56d56d94abac2dba7237f48aedc79ec5bbcf241d73a82ed29aca3f6f8846" Jan 30 13:57:14.858154 containerd[1541]: 2025-01-30 13:57:14.839 [INFO][4138] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ac6b56d56d94abac2dba7237f48aedc79ec5bbcf241d73a82ed29aca3f6f8846" HandleID="k8s-pod-network.ac6b56d56d94abac2dba7237f48aedc79ec5bbcf241d73a82ed29aca3f6f8846" Workload="localhost-k8s-calico--apiserver--57d5fbb54b--5kv8d-eth0" Jan 30 13:57:14.858154 containerd[1541]: 2025-01-30 13:57:14.841 [INFO][4138] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:57:14.858154 containerd[1541]: 2025-01-30 13:57:14.842 [INFO][4138] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 13:57:14.858154 containerd[1541]: 2025-01-30 13:57:14.852 [WARNING][4138] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ac6b56d56d94abac2dba7237f48aedc79ec5bbcf241d73a82ed29aca3f6f8846" HandleID="k8s-pod-network.ac6b56d56d94abac2dba7237f48aedc79ec5bbcf241d73a82ed29aca3f6f8846" Workload="localhost-k8s-calico--apiserver--57d5fbb54b--5kv8d-eth0" Jan 30 13:57:14.858154 containerd[1541]: 2025-01-30 13:57:14.852 [INFO][4138] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ac6b56d56d94abac2dba7237f48aedc79ec5bbcf241d73a82ed29aca3f6f8846" HandleID="k8s-pod-network.ac6b56d56d94abac2dba7237f48aedc79ec5bbcf241d73a82ed29aca3f6f8846" Workload="localhost-k8s-calico--apiserver--57d5fbb54b--5kv8d-eth0" Jan 30 13:57:14.858154 containerd[1541]: 2025-01-30 13:57:14.853 [INFO][4138] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 13:57:14.858154 containerd[1541]: 2025-01-30 13:57:14.854 [INFO][4119] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="ac6b56d56d94abac2dba7237f48aedc79ec5bbcf241d73a82ed29aca3f6f8846" Jan 30 13:57:14.857971 systemd[1]: run-netns-cni\x2de8031145\x2da7ff\x2da4b6\x2d37a5\x2dbbe76e8d5613.mount: Deactivated successfully. Jan 30 13:57:14.860338 containerd[1541]: time="2025-01-30T13:57:14.860175993Z" level=info msg="TearDown network for sandbox \"ac6b56d56d94abac2dba7237f48aedc79ec5bbcf241d73a82ed29aca3f6f8846\" successfully" Jan 30 13:57:14.860338 containerd[1541]: time="2025-01-30T13:57:14.860243404Z" level=info msg="StopPodSandbox for \"ac6b56d56d94abac2dba7237f48aedc79ec5bbcf241d73a82ed29aca3f6f8846\" returns successfully" Jan 30 13:57:14.861507 containerd[1541]: time="2025-01-30T13:57:14.860979180Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-57d5fbb54b-5kv8d,Uid:87735560-600e-4fca-8313-7ffee3249515,Namespace:calico-apiserver,Attempt:1,}" Jan 30 13:57:14.990298 systemd-networkd[1437]: cali194e6286628: Link UP Jan 30 13:57:14.990431 systemd-networkd[1437]: cali194e6286628: Gained carrier Jan 30 13:57:14.998786 containerd[1541]: 2025-01-30 13:57:14.898 [INFO][4146] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 30 13:57:14.998786 containerd[1541]: 2025-01-30 13:57:14.910 [INFO][4146] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--57d5fbb54b--5kv8d-eth0 calico-apiserver-57d5fbb54b- calico-apiserver 87735560-600e-4fca-8313-7ffee3249515 783 0 2025-01-30 13:56:49 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:57d5fbb54b projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-57d5fbb54b-5kv8d eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali194e6286628 [] []}} ContainerID="7eed98dae984e5dfae11320987e4cc5f520621af470b662cdfc504f8eb9fdc8f" Namespace="calico-apiserver" Pod="calico-apiserver-57d5fbb54b-5kv8d" WorkloadEndpoint="localhost-k8s-calico--apiserver--57d5fbb54b--5kv8d-" Jan 30 13:57:14.998786 containerd[1541]: 2025-01-30 13:57:14.910 [INFO][4146] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="7eed98dae984e5dfae11320987e4cc5f520621af470b662cdfc504f8eb9fdc8f" Namespace="calico-apiserver" Pod="calico-apiserver-57d5fbb54b-5kv8d" WorkloadEndpoint="localhost-k8s-calico--apiserver--57d5fbb54b--5kv8d-eth0" Jan 30 13:57:14.998786 containerd[1541]: 2025-01-30 13:57:14.938 [INFO][4157] ipam/ipam_plugin.go 225: Calico CNI IPAM request count 
IPv4=1 IPv6=0 ContainerID="7eed98dae984e5dfae11320987e4cc5f520621af470b662cdfc504f8eb9fdc8f" HandleID="k8s-pod-network.7eed98dae984e5dfae11320987e4cc5f520621af470b662cdfc504f8eb9fdc8f" Workload="localhost-k8s-calico--apiserver--57d5fbb54b--5kv8d-eth0" Jan 30 13:57:14.998786 containerd[1541]: 2025-01-30 13:57:14.946 [INFO][4157] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="7eed98dae984e5dfae11320987e4cc5f520621af470b662cdfc504f8eb9fdc8f" HandleID="k8s-pod-network.7eed98dae984e5dfae11320987e4cc5f520621af470b662cdfc504f8eb9fdc8f" Workload="localhost-k8s-calico--apiserver--57d5fbb54b--5kv8d-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000291170), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-57d5fbb54b-5kv8d", "timestamp":"2025-01-30 13:57:14.938340386 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 30 13:57:14.998786 containerd[1541]: 2025-01-30 13:57:14.946 [INFO][4157] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:57:14.998786 containerd[1541]: 2025-01-30 13:57:14.946 [INFO][4157] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 13:57:14.998786 containerd[1541]: 2025-01-30 13:57:14.946 [INFO][4157] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 30 13:57:14.998786 containerd[1541]: 2025-01-30 13:57:14.947 [INFO][4157] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.7eed98dae984e5dfae11320987e4cc5f520621af470b662cdfc504f8eb9fdc8f" host="localhost" Jan 30 13:57:14.998786 containerd[1541]: 2025-01-30 13:57:14.953 [INFO][4157] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Jan 30 13:57:14.998786 containerd[1541]: 2025-01-30 13:57:14.958 [INFO][4157] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Jan 30 13:57:14.998786 containerd[1541]: 2025-01-30 13:57:14.960 [INFO][4157] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 30 13:57:14.998786 containerd[1541]: 2025-01-30 13:57:14.962 [INFO][4157] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 30 13:57:14.998786 containerd[1541]: 2025-01-30 13:57:14.962 [INFO][4157] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.7eed98dae984e5dfae11320987e4cc5f520621af470b662cdfc504f8eb9fdc8f" host="localhost" Jan 30 13:57:14.998786 containerd[1541]: 2025-01-30 13:57:14.963 [INFO][4157] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.7eed98dae984e5dfae11320987e4cc5f520621af470b662cdfc504f8eb9fdc8f Jan 30 13:57:14.998786 containerd[1541]: 2025-01-30 13:57:14.967 [INFO][4157] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.7eed98dae984e5dfae11320987e4cc5f520621af470b662cdfc504f8eb9fdc8f" host="localhost" Jan 30 13:57:14.998786 containerd[1541]: 2025-01-30 13:57:14.973 [INFO][4157] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.7eed98dae984e5dfae11320987e4cc5f520621af470b662cdfc504f8eb9fdc8f" host="localhost" Jan 30 13:57:14.998786 containerd[1541]: 2025-01-30 13:57:14.973 [INFO][4157] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: 
[192.168.88.129/26] handle="k8s-pod-network.7eed98dae984e5dfae11320987e4cc5f520621af470b662cdfc504f8eb9fdc8f" host="localhost" Jan 30 13:57:14.998786 containerd[1541]: 2025-01-30 13:57:14.973 [INFO][4157] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 13:57:14.998786 containerd[1541]: 2025-01-30 13:57:14.973 [INFO][4157] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="7eed98dae984e5dfae11320987e4cc5f520621af470b662cdfc504f8eb9fdc8f" HandleID="k8s-pod-network.7eed98dae984e5dfae11320987e4cc5f520621af470b662cdfc504f8eb9fdc8f" Workload="localhost-k8s-calico--apiserver--57d5fbb54b--5kv8d-eth0" Jan 30 13:57:15.001148 containerd[1541]: 2025-01-30 13:57:14.975 [INFO][4146] cni-plugin/k8s.go 386: Populated endpoint ContainerID="7eed98dae984e5dfae11320987e4cc5f520621af470b662cdfc504f8eb9fdc8f" Namespace="calico-apiserver" Pod="calico-apiserver-57d5fbb54b-5kv8d" WorkloadEndpoint="localhost-k8s-calico--apiserver--57d5fbb54b--5kv8d-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--57d5fbb54b--5kv8d-eth0", GenerateName:"calico-apiserver-57d5fbb54b-", Namespace:"calico-apiserver", SelfLink:"", UID:"87735560-600e-4fca-8313-7ffee3249515", ResourceVersion:"783", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 56, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"57d5fbb54b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-57d5fbb54b-5kv8d", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali194e6286628", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:57:15.001148 containerd[1541]: 2025-01-30 13:57:14.975 [INFO][4146] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.129/32] ContainerID="7eed98dae984e5dfae11320987e4cc5f520621af470b662cdfc504f8eb9fdc8f" Namespace="calico-apiserver" Pod="calico-apiserver-57d5fbb54b-5kv8d" WorkloadEndpoint="localhost-k8s-calico--apiserver--57d5fbb54b--5kv8d-eth0" Jan 30 13:57:15.001148 containerd[1541]: 2025-01-30 13:57:14.975 [INFO][4146] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali194e6286628 ContainerID="7eed98dae984e5dfae11320987e4cc5f520621af470b662cdfc504f8eb9fdc8f" Namespace="calico-apiserver" Pod="calico-apiserver-57d5fbb54b-5kv8d" WorkloadEndpoint="localhost-k8s-calico--apiserver--57d5fbb54b--5kv8d-eth0" Jan 30 13:57:15.001148 containerd[1541]: 2025-01-30 13:57:14.986 [INFO][4146] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="7eed98dae984e5dfae11320987e4cc5f520621af470b662cdfc504f8eb9fdc8f" Namespace="calico-apiserver" Pod="calico-apiserver-57d5fbb54b-5kv8d" 
WorkloadEndpoint="localhost-k8s-calico--apiserver--57d5fbb54b--5kv8d-eth0" Jan 30 13:57:15.001148 containerd[1541]: 2025-01-30 13:57:14.986 [INFO][4146] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="7eed98dae984e5dfae11320987e4cc5f520621af470b662cdfc504f8eb9fdc8f" Namespace="calico-apiserver" Pod="calico-apiserver-57d5fbb54b-5kv8d" WorkloadEndpoint="localhost-k8s-calico--apiserver--57d5fbb54b--5kv8d-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--57d5fbb54b--5kv8d-eth0", GenerateName:"calico-apiserver-57d5fbb54b-", Namespace:"calico-apiserver", SelfLink:"", UID:"87735560-600e-4fca-8313-7ffee3249515", ResourceVersion:"783", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 56, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"57d5fbb54b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"7eed98dae984e5dfae11320987e4cc5f520621af470b662cdfc504f8eb9fdc8f", Pod:"calico-apiserver-57d5fbb54b-5kv8d", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali194e6286628", MAC:"7a:e9:79:0f:74:ae", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:57:15.001148 containerd[1541]: 2025-01-30 13:57:14.996 [INFO][4146] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="7eed98dae984e5dfae11320987e4cc5f520621af470b662cdfc504f8eb9fdc8f" Namespace="calico-apiserver" Pod="calico-apiserver-57d5fbb54b-5kv8d" WorkloadEndpoint="localhost-k8s-calico--apiserver--57d5fbb54b--5kv8d-eth0" Jan 30 13:57:15.025906 containerd[1541]: time="2025-01-30T13:57:15.025184844Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:57:15.025906 containerd[1541]: time="2025-01-30T13:57:15.025853020Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:57:15.025906 containerd[1541]: time="2025-01-30T13:57:15.025862773Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:57:15.026464 containerd[1541]: time="2025-01-30T13:57:15.026176299Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:57:15.050376 systemd[1]: Started cri-containerd-7eed98dae984e5dfae11320987e4cc5f520621af470b662cdfc504f8eb9fdc8f.scope - libcontainer container 7eed98dae984e5dfae11320987e4cc5f520621af470b662cdfc504f8eb9fdc8f. 
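The [INFO][4157] ipam lines above show the whole assignment sequence: take the host-wide IPAM lock, confirm this host's affinity for the 192.168.88.128/26 block, then claim the first free address in it (192.168.88.129). A condensed, hypothetical model of that walk, using an in-memory allocation map in place of Calico's datastore:

```go
// Condensed sketch of the block-affinity IPAM walk visible in the log:
// lock, scan the host's affine /26 block, claim the first unused address.
package main

import (
	"fmt"
	"net/netip"
	"sync"
)

var ipamLock sync.Mutex // stands in for the "host-wide IPAM lock"

func assignFromBlock(block netip.Prefix, allocated map[netip.Addr]bool) (netip.Addr, error) {
	ipamLock.Lock()
	defer ipamLock.Unlock()

	// Skip the network address itself, then take the first unclaimed IP.
	for a := block.Addr().Next(); block.Contains(a); a = a.Next() {
		if !allocated[a] {
			allocated[a] = true
			return a, nil
		}
	}
	return netip.Addr{}, fmt.Errorf("block %s exhausted", block)
}

func main() {
	block := netip.MustParsePrefix("192.168.88.128/26") // host's affine block
	allocated := map[netip.Addr]bool{}

	ip, err := assignFromBlock(block, allocated)
	if err != nil {
		panic(err)
	}
	fmt.Println("assigned:", ip) // 192.168.88.129, matching the log
}
```

With the address claimed, the remaining lines above finish the add: systemd-networkd brings up the veth cali194e6286628, the endpoint is written back to the datastore, and the sandbox for Attempt:1 starts.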
Jan 30 13:57:15.058576 systemd-resolved[1438]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 30 13:57:15.085414 containerd[1541]: time="2025-01-30T13:57:15.085386769Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-57d5fbb54b-5kv8d,Uid:87735560-600e-4fca-8313-7ffee3249515,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"7eed98dae984e5dfae11320987e4cc5f520621af470b662cdfc504f8eb9fdc8f\"" Jan 30 13:57:15.087146 containerd[1541]: time="2025-01-30T13:57:15.087027632Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Jan 30 13:57:15.535285 containerd[1541]: time="2025-01-30T13:57:15.535037152Z" level=info msg="StopPodSandbox for \"120d6885a6cc4ef3304b7e1f7e73f3c1516ac2726d6746a9f21b36d6a34ef864\"" Jan 30 13:57:15.541753 containerd[1541]: time="2025-01-30T13:57:15.535629801Z" level=info msg="StopPodSandbox for \"12f0c3aa82930fa4175318f845b303e8879242b1c7ee301dcb58fcffde715c35\"" Jan 30 13:57:15.541753 containerd[1541]: time="2025-01-30T13:57:15.536256355Z" level=info msg="StopPodSandbox for \"50a9b3c28d70d1a436a7d96035e9cf3094699b4cbe8bf395b0b0e2c2f7fe11e9\"" Jan 30 13:57:15.654877 containerd[1541]: 2025-01-30 13:57:15.594 [INFO][4269] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="50a9b3c28d70d1a436a7d96035e9cf3094699b4cbe8bf395b0b0e2c2f7fe11e9" Jan 30 13:57:15.654877 containerd[1541]: 2025-01-30 13:57:15.594 [INFO][4269] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="50a9b3c28d70d1a436a7d96035e9cf3094699b4cbe8bf395b0b0e2c2f7fe11e9" iface="eth0" netns="/var/run/netns/cni-90bd8327-e205-a9cb-f633-2812db64a5ea" Jan 30 13:57:15.654877 containerd[1541]: 2025-01-30 13:57:15.595 [INFO][4269] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="50a9b3c28d70d1a436a7d96035e9cf3094699b4cbe8bf395b0b0e2c2f7fe11e9" iface="eth0" netns="/var/run/netns/cni-90bd8327-e205-a9cb-f633-2812db64a5ea" Jan 30 13:57:15.654877 containerd[1541]: 2025-01-30 13:57:15.595 [INFO][4269] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="50a9b3c28d70d1a436a7d96035e9cf3094699b4cbe8bf395b0b0e2c2f7fe11e9" iface="eth0" netns="/var/run/netns/cni-90bd8327-e205-a9cb-f633-2812db64a5ea" Jan 30 13:57:15.654877 containerd[1541]: 2025-01-30 13:57:15.595 [INFO][4269] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="50a9b3c28d70d1a436a7d96035e9cf3094699b4cbe8bf395b0b0e2c2f7fe11e9" Jan 30 13:57:15.654877 containerd[1541]: 2025-01-30 13:57:15.595 [INFO][4269] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="50a9b3c28d70d1a436a7d96035e9cf3094699b4cbe8bf395b0b0e2c2f7fe11e9" Jan 30 13:57:15.654877 containerd[1541]: 2025-01-30 13:57:15.630 [INFO][4289] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="50a9b3c28d70d1a436a7d96035e9cf3094699b4cbe8bf395b0b0e2c2f7fe11e9" HandleID="k8s-pod-network.50a9b3c28d70d1a436a7d96035e9cf3094699b4cbe8bf395b0b0e2c2f7fe11e9" Workload="localhost-k8s-coredns--7db6d8ff4d--5pf6d-eth0" Jan 30 13:57:15.654877 containerd[1541]: 2025-01-30 13:57:15.630 [INFO][4289] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:57:15.654877 containerd[1541]: 2025-01-30 13:57:15.630 [INFO][4289] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 13:57:15.654877 containerd[1541]: 2025-01-30 13:57:15.642 [WARNING][4289] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="50a9b3c28d70d1a436a7d96035e9cf3094699b4cbe8bf395b0b0e2c2f7fe11e9" HandleID="k8s-pod-network.50a9b3c28d70d1a436a7d96035e9cf3094699b4cbe8bf395b0b0e2c2f7fe11e9" Workload="localhost-k8s-coredns--7db6d8ff4d--5pf6d-eth0" Jan 30 13:57:15.654877 containerd[1541]: 2025-01-30 13:57:15.642 [INFO][4289] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="50a9b3c28d70d1a436a7d96035e9cf3094699b4cbe8bf395b0b0e2c2f7fe11e9" HandleID="k8s-pod-network.50a9b3c28d70d1a436a7d96035e9cf3094699b4cbe8bf395b0b0e2c2f7fe11e9" Workload="localhost-k8s-coredns--7db6d8ff4d--5pf6d-eth0" Jan 30 13:57:15.654877 containerd[1541]: 2025-01-30 13:57:15.644 [INFO][4289] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 13:57:15.654877 containerd[1541]: 2025-01-30 13:57:15.648 [INFO][4269] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="50a9b3c28d70d1a436a7d96035e9cf3094699b4cbe8bf395b0b0e2c2f7fe11e9" Jan 30 13:57:15.657407 containerd[1541]: time="2025-01-30T13:57:15.657298656Z" level=info msg="TearDown network for sandbox \"50a9b3c28d70d1a436a7d96035e9cf3094699b4cbe8bf395b0b0e2c2f7fe11e9\" successfully" Jan 30 13:57:15.658247 containerd[1541]: time="2025-01-30T13:57:15.657796488Z" level=info msg="StopPodSandbox for \"50a9b3c28d70d1a436a7d96035e9cf3094699b4cbe8bf395b0b0e2c2f7fe11e9\" returns successfully" Jan 30 13:57:15.659764 containerd[1541]: time="2025-01-30T13:57:15.659526137Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-5pf6d,Uid:7dc3c7ed-e794-482c-b0b8-b5cd489cc602,Namespace:kube-system,Attempt:1,}" Jan 30 13:57:15.708291 containerd[1541]: 2025-01-30 13:57:15.615 [INFO][4254] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="12f0c3aa82930fa4175318f845b303e8879242b1c7ee301dcb58fcffde715c35" Jan 30 13:57:15.708291 containerd[1541]: 2025-01-30 13:57:15.616 [INFO][4254] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="12f0c3aa82930fa4175318f845b303e8879242b1c7ee301dcb58fcffde715c35" iface="eth0" netns="/var/run/netns/cni-2994c280-206d-b3bf-fb5b-bc1a87d3b402" Jan 30 13:57:15.708291 containerd[1541]: 2025-01-30 13:57:15.617 [INFO][4254] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="12f0c3aa82930fa4175318f845b303e8879242b1c7ee301dcb58fcffde715c35" iface="eth0" netns="/var/run/netns/cni-2994c280-206d-b3bf-fb5b-bc1a87d3b402" Jan 30 13:57:15.708291 containerd[1541]: 2025-01-30 13:57:15.619 [INFO][4254] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="12f0c3aa82930fa4175318f845b303e8879242b1c7ee301dcb58fcffde715c35" iface="eth0" netns="/var/run/netns/cni-2994c280-206d-b3bf-fb5b-bc1a87d3b402" Jan 30 13:57:15.708291 containerd[1541]: 2025-01-30 13:57:15.619 [INFO][4254] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="12f0c3aa82930fa4175318f845b303e8879242b1c7ee301dcb58fcffde715c35" Jan 30 13:57:15.708291 containerd[1541]: 2025-01-30 13:57:15.619 [INFO][4254] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="12f0c3aa82930fa4175318f845b303e8879242b1c7ee301dcb58fcffde715c35" Jan 30 13:57:15.708291 containerd[1541]: 2025-01-30 13:57:15.680 [INFO][4298] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="12f0c3aa82930fa4175318f845b303e8879242b1c7ee301dcb58fcffde715c35" HandleID="k8s-pod-network.12f0c3aa82930fa4175318f845b303e8879242b1c7ee301dcb58fcffde715c35" Workload="localhost-k8s-calico--apiserver--57d5fbb54b--s4whm-eth0" Jan 30 13:57:15.708291 containerd[1541]: 2025-01-30 13:57:15.681 [INFO][4298] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:57:15.708291 containerd[1541]: 2025-01-30 13:57:15.681 [INFO][4298] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 13:57:15.708291 containerd[1541]: 2025-01-30 13:57:15.697 [WARNING][4298] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="12f0c3aa82930fa4175318f845b303e8879242b1c7ee301dcb58fcffde715c35" HandleID="k8s-pod-network.12f0c3aa82930fa4175318f845b303e8879242b1c7ee301dcb58fcffde715c35" Workload="localhost-k8s-calico--apiserver--57d5fbb54b--s4whm-eth0" Jan 30 13:57:15.708291 containerd[1541]: 2025-01-30 13:57:15.697 [INFO][4298] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="12f0c3aa82930fa4175318f845b303e8879242b1c7ee301dcb58fcffde715c35" HandleID="k8s-pod-network.12f0c3aa82930fa4175318f845b303e8879242b1c7ee301dcb58fcffde715c35" Workload="localhost-k8s-calico--apiserver--57d5fbb54b--s4whm-eth0" Jan 30 13:57:15.708291 containerd[1541]: 2025-01-30 13:57:15.699 [INFO][4298] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 13:57:15.708291 containerd[1541]: 2025-01-30 13:57:15.703 [INFO][4254] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="12f0c3aa82930fa4175318f845b303e8879242b1c7ee301dcb58fcffde715c35" Jan 30 13:57:15.710961 containerd[1541]: time="2025-01-30T13:57:15.709175259Z" level=info msg="TearDown network for sandbox \"12f0c3aa82930fa4175318f845b303e8879242b1c7ee301dcb58fcffde715c35\" successfully" Jan 30 13:57:15.710961 containerd[1541]: time="2025-01-30T13:57:15.709195011Z" level=info msg="StopPodSandbox for \"12f0c3aa82930fa4175318f845b303e8879242b1c7ee301dcb58fcffde715c35\" returns successfully" Jan 30 13:57:15.710961 containerd[1541]: 2025-01-30 13:57:15.610 [INFO][4268] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="120d6885a6cc4ef3304b7e1f7e73f3c1516ac2726d6746a9f21b36d6a34ef864" Jan 30 13:57:15.710961 containerd[1541]: 2025-01-30 13:57:15.610 [INFO][4268] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="120d6885a6cc4ef3304b7e1f7e73f3c1516ac2726d6746a9f21b36d6a34ef864" iface="eth0" netns="/var/run/netns/cni-536cc7eb-cf2a-c650-5ce2-c8b08a00ed58" Jan 30 13:57:15.710961 containerd[1541]: 2025-01-30 13:57:15.610 [INFO][4268] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="120d6885a6cc4ef3304b7e1f7e73f3c1516ac2726d6746a9f21b36d6a34ef864" iface="eth0" netns="/var/run/netns/cni-536cc7eb-cf2a-c650-5ce2-c8b08a00ed58" Jan 30 13:57:15.710961 containerd[1541]: 2025-01-30 13:57:15.611 [INFO][4268] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="120d6885a6cc4ef3304b7e1f7e73f3c1516ac2726d6746a9f21b36d6a34ef864" iface="eth0" netns="/var/run/netns/cni-536cc7eb-cf2a-c650-5ce2-c8b08a00ed58" Jan 30 13:57:15.710961 containerd[1541]: 2025-01-30 13:57:15.611 [INFO][4268] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="120d6885a6cc4ef3304b7e1f7e73f3c1516ac2726d6746a9f21b36d6a34ef864" Jan 30 13:57:15.710961 containerd[1541]: 2025-01-30 13:57:15.611 [INFO][4268] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="120d6885a6cc4ef3304b7e1f7e73f3c1516ac2726d6746a9f21b36d6a34ef864" Jan 30 13:57:15.710961 containerd[1541]: 2025-01-30 13:57:15.681 [INFO][4293] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="120d6885a6cc4ef3304b7e1f7e73f3c1516ac2726d6746a9f21b36d6a34ef864" HandleID="k8s-pod-network.120d6885a6cc4ef3304b7e1f7e73f3c1516ac2726d6746a9f21b36d6a34ef864" Workload="localhost-k8s-csi--node--driver--k9nfh-eth0" Jan 30 13:57:15.710961 containerd[1541]: 2025-01-30 13:57:15.681 [INFO][4293] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:57:15.710961 containerd[1541]: 2025-01-30 13:57:15.699 [INFO][4293] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 13:57:15.710961 containerd[1541]: 2025-01-30 13:57:15.704 [WARNING][4293] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="120d6885a6cc4ef3304b7e1f7e73f3c1516ac2726d6746a9f21b36d6a34ef864" HandleID="k8s-pod-network.120d6885a6cc4ef3304b7e1f7e73f3c1516ac2726d6746a9f21b36d6a34ef864" Workload="localhost-k8s-csi--node--driver--k9nfh-eth0" Jan 30 13:57:15.710961 containerd[1541]: 2025-01-30 13:57:15.704 [INFO][4293] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="120d6885a6cc4ef3304b7e1f7e73f3c1516ac2726d6746a9f21b36d6a34ef864" HandleID="k8s-pod-network.120d6885a6cc4ef3304b7e1f7e73f3c1516ac2726d6746a9f21b36d6a34ef864" Workload="localhost-k8s-csi--node--driver--k9nfh-eth0" Jan 30 13:57:15.710961 containerd[1541]: 2025-01-30 13:57:15.706 [INFO][4293] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 13:57:15.710961 containerd[1541]: 2025-01-30 13:57:15.708 [INFO][4268] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="120d6885a6cc4ef3304b7e1f7e73f3c1516ac2726d6746a9f21b36d6a34ef864" Jan 30 13:57:15.712527 containerd[1541]: time="2025-01-30T13:57:15.711770395Z" level=info msg="TearDown network for sandbox \"120d6885a6cc4ef3304b7e1f7e73f3c1516ac2726d6746a9f21b36d6a34ef864\" successfully" Jan 30 13:57:15.712527 containerd[1541]: time="2025-01-30T13:57:15.711785860Z" level=info msg="StopPodSandbox for \"120d6885a6cc4ef3304b7e1f7e73f3c1516ac2726d6746a9f21b36d6a34ef864\" returns successfully" Jan 30 13:57:15.713239 containerd[1541]: time="2025-01-30T13:57:15.713033279Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-k9nfh,Uid:7bdeb187-27dc-4c7e-aa2a-c05d3d3268f5,Namespace:calico-system,Attempt:1,}" Jan 30 13:57:15.713315 containerd[1541]: time="2025-01-30T13:57:15.713303585Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-57d5fbb54b-s4whm,Uid:20fb559c-2d4a-483a-b704-96d08f23fd99,Namespace:calico-apiserver,Attempt:1,}" Jan 30 13:57:15.817557 systemd-networkd[1437]: cali7bb2ae87d01: Link UP Jan 30 13:57:15.819561 systemd-networkd[1437]: cali7bb2ae87d01: Gained carrier Jan 30 13:57:15.828037 containerd[1541]: 2025-01-30 13:57:15.720 [INFO][4317] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 30 13:57:15.828037 containerd[1541]: 2025-01-30 13:57:15.731 [INFO][4317] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7db6d8ff4d--5pf6d-eth0 coredns-7db6d8ff4d- kube-system 7dc3c7ed-e794-482c-b0b8-b5cd489cc602 792 0 2025-01-30 13:56:43 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7db6d8ff4d-5pf6d eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali7bb2ae87d01 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="7dd19db55d741cc2eefa42ddcaa6e607a95be05134c1d15949944cf62ab2ed5a" Namespace="kube-system" Pod="coredns-7db6d8ff4d-5pf6d" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--5pf6d-" Jan 30 13:57:15.828037 containerd[1541]: 2025-01-30 13:57:15.731 [INFO][4317] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="7dd19db55d741cc2eefa42ddcaa6e607a95be05134c1d15949944cf62ab2ed5a" Namespace="kube-system" Pod="coredns-7db6d8ff4d-5pf6d" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--5pf6d-eth0" Jan 30 13:57:15.828037 containerd[1541]: 2025-01-30 13:57:15.771 [INFO][4347] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7dd19db55d741cc2eefa42ddcaa6e607a95be05134c1d15949944cf62ab2ed5a" HandleID="k8s-pod-network.7dd19db55d741cc2eefa42ddcaa6e607a95be05134c1d15949944cf62ab2ed5a" Workload="localhost-k8s-coredns--7db6d8ff4d--5pf6d-eth0" Jan 30 13:57:15.828037 containerd[1541]: 2025-01-30 13:57:15.784 [INFO][4347] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="7dd19db55d741cc2eefa42ddcaa6e607a95be05134c1d15949944cf62ab2ed5a" HandleID="k8s-pod-network.7dd19db55d741cc2eefa42ddcaa6e607a95be05134c1d15949944cf62ab2ed5a" Workload="localhost-k8s-coredns--7db6d8ff4d--5pf6d-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003195f0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7db6d8ff4d-5pf6d", "timestamp":"2025-01-30 13:57:15.771530139 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, 
MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 30 13:57:15.828037 containerd[1541]: 2025-01-30 13:57:15.784 [INFO][4347] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:57:15.828037 containerd[1541]: 2025-01-30 13:57:15.784 [INFO][4347] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 13:57:15.828037 containerd[1541]: 2025-01-30 13:57:15.784 [INFO][4347] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 30 13:57:15.828037 containerd[1541]: 2025-01-30 13:57:15.788 [INFO][4347] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.7dd19db55d741cc2eefa42ddcaa6e607a95be05134c1d15949944cf62ab2ed5a" host="localhost" Jan 30 13:57:15.828037 containerd[1541]: 2025-01-30 13:57:15.795 [INFO][4347] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Jan 30 13:57:15.828037 containerd[1541]: 2025-01-30 13:57:15.798 [INFO][4347] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Jan 30 13:57:15.828037 containerd[1541]: 2025-01-30 13:57:15.800 [INFO][4347] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 30 13:57:15.828037 containerd[1541]: 2025-01-30 13:57:15.802 [INFO][4347] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 30 13:57:15.828037 containerd[1541]: 2025-01-30 13:57:15.802 [INFO][4347] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.7dd19db55d741cc2eefa42ddcaa6e607a95be05134c1d15949944cf62ab2ed5a" host="localhost" Jan 30 13:57:15.828037 containerd[1541]: 2025-01-30 13:57:15.805 [INFO][4347] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.7dd19db55d741cc2eefa42ddcaa6e607a95be05134c1d15949944cf62ab2ed5a Jan 30 13:57:15.828037 containerd[1541]: 2025-01-30 13:57:15.808 [INFO][4347] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.7dd19db55d741cc2eefa42ddcaa6e607a95be05134c1d15949944cf62ab2ed5a" host="localhost" Jan 30 13:57:15.828037 containerd[1541]: 2025-01-30 13:57:15.812 [INFO][4347] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.7dd19db55d741cc2eefa42ddcaa6e607a95be05134c1d15949944cf62ab2ed5a" host="localhost" Jan 30 13:57:15.828037 containerd[1541]: 2025-01-30 13:57:15.813 [INFO][4347] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.7dd19db55d741cc2eefa42ddcaa6e607a95be05134c1d15949944cf62ab2ed5a" host="localhost" Jan 30 13:57:15.828037 containerd[1541]: 2025-01-30 13:57:15.813 [INFO][4347] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
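The IPAM trace above ends with 192.168.88.130/26 claimed for coredns-7db6d8ff4d-5pf6d out of the affine block 192.168.88.128/26. A /26 leaves six host bits, so the block spans the 64 addresses .128 through .191, and every claim in this capture (.130 through .133) lands inside it. A minimal check of that arithmetic with Go's standard net/netip package (illustrative only, not Calico code):

package main

import (
	"fmt"
	"net/netip"
)

func main() {
	// Block and claimed address copied verbatim from the IPAM log above.
	block := netip.MustParsePrefix("192.168.88.128/26")
	claimed := netip.MustParseAddr("192.168.88.130")

	// 32 - 26 = 6 host bits -> 64 addresses, .128 through .191.
	fmt.Println("block size:", 1<<(32-block.Bits()))            // 64
	fmt.Println("claim inside block:", block.Contains(claimed)) // true
}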
Jan 30 13:57:15.828037 containerd[1541]: 2025-01-30 13:57:15.813 [INFO][4347] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="7dd19db55d741cc2eefa42ddcaa6e607a95be05134c1d15949944cf62ab2ed5a" HandleID="k8s-pod-network.7dd19db55d741cc2eefa42ddcaa6e607a95be05134c1d15949944cf62ab2ed5a" Workload="localhost-k8s-coredns--7db6d8ff4d--5pf6d-eth0" Jan 30 13:57:15.829779 containerd[1541]: 2025-01-30 13:57:15.815 [INFO][4317] cni-plugin/k8s.go 386: Populated endpoint ContainerID="7dd19db55d741cc2eefa42ddcaa6e607a95be05134c1d15949944cf62ab2ed5a" Namespace="kube-system" Pod="coredns-7db6d8ff4d-5pf6d" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--5pf6d-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--5pf6d-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"7dc3c7ed-e794-482c-b0b8-b5cd489cc602", ResourceVersion:"792", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 56, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7db6d8ff4d-5pf6d", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali7bb2ae87d01", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:57:15.829779 containerd[1541]: 2025-01-30 13:57:15.815 [INFO][4317] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.130/32] ContainerID="7dd19db55d741cc2eefa42ddcaa6e607a95be05134c1d15949944cf62ab2ed5a" Namespace="kube-system" Pod="coredns-7db6d8ff4d-5pf6d" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--5pf6d-eth0" Jan 30 13:57:15.829779 containerd[1541]: 2025-01-30 13:57:15.815 [INFO][4317] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7bb2ae87d01 ContainerID="7dd19db55d741cc2eefa42ddcaa6e607a95be05134c1d15949944cf62ab2ed5a" Namespace="kube-system" Pod="coredns-7db6d8ff4d-5pf6d" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--5pf6d-eth0" Jan 30 13:57:15.829779 containerd[1541]: 2025-01-30 13:57:15.817 [INFO][4317] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="7dd19db55d741cc2eefa42ddcaa6e607a95be05134c1d15949944cf62ab2ed5a" Namespace="kube-system" Pod="coredns-7db6d8ff4d-5pf6d" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--5pf6d-eth0" Jan 30 13:57:15.829779 containerd[1541]: 2025-01-30 13:57:15.819 
[INFO][4317] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="7dd19db55d741cc2eefa42ddcaa6e607a95be05134c1d15949944cf62ab2ed5a" Namespace="kube-system" Pod="coredns-7db6d8ff4d-5pf6d" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--5pf6d-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--5pf6d-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"7dc3c7ed-e794-482c-b0b8-b5cd489cc602", ResourceVersion:"792", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 56, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"7dd19db55d741cc2eefa42ddcaa6e607a95be05134c1d15949944cf62ab2ed5a", Pod:"coredns-7db6d8ff4d-5pf6d", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali7bb2ae87d01", MAC:"ea:8d:fc:03:f6:c1", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:57:15.829779 containerd[1541]: 2025-01-30 13:57:15.824 [INFO][4317] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="7dd19db55d741cc2eefa42ddcaa6e607a95be05134c1d15949944cf62ab2ed5a" Namespace="kube-system" Pod="coredns-7db6d8ff4d-5pf6d" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--5pf6d-eth0" Jan 30 13:57:15.851274 systemd-networkd[1437]: calic711333a458: Link UP Jan 30 13:57:15.853651 systemd-networkd[1437]: calic711333a458: Gained carrier Jan 30 13:57:15.862254 containerd[1541]: time="2025-01-30T13:57:15.861856904Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:57:15.862858 containerd[1541]: time="2025-01-30T13:57:15.862835485Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:57:15.862933 containerd[1541]: time="2025-01-30T13:57:15.862919918Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:57:15.863148 containerd[1541]: time="2025-01-30T13:57:15.863064537Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:57:15.863517 systemd[1]: run-netns-cni\x2d536cc7eb\x2dcf2a\x2dc650\x2d5ce2\x2dc8b08a00ed58.mount: Deactivated successfully. Jan 30 13:57:15.863579 systemd[1]: run-netns-cni\x2d2994c280\x2d206d\x2db3bf\x2dfb5b\x2dbc1a87d3b402.mount: Deactivated successfully. Jan 30 13:57:15.863614 systemd[1]: run-netns-cni\x2d90bd8327\x2de205\x2da9cb\x2df633\x2d2812db64a5ea.mount: Deactivated successfully. Jan 30 13:57:15.876356 containerd[1541]: 2025-01-30 13:57:15.755 [INFO][4329] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 30 13:57:15.876356 containerd[1541]: 2025-01-30 13:57:15.763 [INFO][4329] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--57d5fbb54b--s4whm-eth0 calico-apiserver-57d5fbb54b- calico-apiserver 20fb559c-2d4a-483a-b704-96d08f23fd99 794 0 2025-01-30 13:56:49 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:57d5fbb54b projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-57d5fbb54b-s4whm eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calic711333a458 [] []}} ContainerID="feb92366601085c03fe65be26e1cdcd9013d2ed030d4513d69973c5b87edc214" Namespace="calico-apiserver" Pod="calico-apiserver-57d5fbb54b-s4whm" WorkloadEndpoint="localhost-k8s-calico--apiserver--57d5fbb54b--s4whm-" Jan 30 13:57:15.876356 containerd[1541]: 2025-01-30 13:57:15.763 [INFO][4329] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="feb92366601085c03fe65be26e1cdcd9013d2ed030d4513d69973c5b87edc214" Namespace="calico-apiserver" Pod="calico-apiserver-57d5fbb54b-s4whm" WorkloadEndpoint="localhost-k8s-calico--apiserver--57d5fbb54b--s4whm-eth0" Jan 30 13:57:15.876356 containerd[1541]: 2025-01-30 13:57:15.786 [INFO][4356] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="feb92366601085c03fe65be26e1cdcd9013d2ed030d4513d69973c5b87edc214" HandleID="k8s-pod-network.feb92366601085c03fe65be26e1cdcd9013d2ed030d4513d69973c5b87edc214" Workload="localhost-k8s-calico--apiserver--57d5fbb54b--s4whm-eth0" Jan 30 13:57:15.876356 containerd[1541]: 2025-01-30 13:57:15.794 [INFO][4356] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="feb92366601085c03fe65be26e1cdcd9013d2ed030d4513d69973c5b87edc214" HandleID="k8s-pod-network.feb92366601085c03fe65be26e1cdcd9013d2ed030d4513d69973c5b87edc214" Workload="localhost-k8s-calico--apiserver--57d5fbb54b--s4whm-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003199e0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-57d5fbb54b-s4whm", "timestamp":"2025-01-30 13:57:15.786771215 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 30 13:57:15.876356 containerd[1541]: 2025-01-30 13:57:15.794 [INFO][4356] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:57:15.876356 containerd[1541]: 2025-01-30 13:57:15.813 [INFO][4356] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
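The three run-netns mount units deactivated above carry systemd's unit-name escaping: a mount unit encodes the mounted path by turning "/" into "-" and protecting literal bytes such as "-" as \xNN, which is how /run/netns/cni-536cc7eb-... (the /var/run the CNI plugin logs is the usual symlink to /run) becomes run-netns-cni\x2d536cc7eb\x2d....mount. A small decoder, written as an illustration of the convention rather than a reimplementation of systemd-escape:

package main

import (
	"fmt"
	"strconv"
	"strings"
)

// unescapePath reverses systemd mount-unit escaping: strip the ".mount"
// suffix, turn the "-" separators back into "/", then decode the \xNN
// sequences that protected literal bytes (such as the dashes in the
// netns name itself).
func unescapePath(unit string) string {
	s := strings.TrimSuffix(unit, ".mount")
	s = "/" + strings.ReplaceAll(s, "-", "/")
	var b strings.Builder
	for i := 0; i < len(s); i++ {
		if s[i] == '\\' && i+3 < len(s) && s[i+1] == 'x' {
			if n, err := strconv.ParseUint(s[i+2:i+4], 16, 8); err == nil {
				b.WriteByte(byte(n))
				i += 3
				continue
			}
		}
		b.WriteByte(s[i])
	}
	return b.String()
}

func main() {
	unit := `run-netns-cni\x2d536cc7eb\x2dcf2a\x2dc650\x2d5ce2\x2dc8b08a00ed58.mount`
	fmt.Println(unescapePath(unit))
}

Run on the first unit name above, this prints /run/netns/cni-536cc7eb-cf2a-c650-5ce2-c8b08a00ed58, matching the netns path logged by the CNI plugin during teardown.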
Jan 30 13:57:15.876356 containerd[1541]: 2025-01-30 13:57:15.813 [INFO][4356] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 30 13:57:15.876356 containerd[1541]: 2025-01-30 13:57:15.814 [INFO][4356] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.feb92366601085c03fe65be26e1cdcd9013d2ed030d4513d69973c5b87edc214" host="localhost" Jan 30 13:57:15.876356 containerd[1541]: 2025-01-30 13:57:15.818 [INFO][4356] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Jan 30 13:57:15.876356 containerd[1541]: 2025-01-30 13:57:15.826 [INFO][4356] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Jan 30 13:57:15.876356 containerd[1541]: 2025-01-30 13:57:15.829 [INFO][4356] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 30 13:57:15.876356 containerd[1541]: 2025-01-30 13:57:15.832 [INFO][4356] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 30 13:57:15.876356 containerd[1541]: 2025-01-30 13:57:15.832 [INFO][4356] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.feb92366601085c03fe65be26e1cdcd9013d2ed030d4513d69973c5b87edc214" host="localhost" Jan 30 13:57:15.876356 containerd[1541]: 2025-01-30 13:57:15.833 [INFO][4356] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.feb92366601085c03fe65be26e1cdcd9013d2ed030d4513d69973c5b87edc214 Jan 30 13:57:15.876356 containerd[1541]: 2025-01-30 13:57:15.838 [INFO][4356] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.feb92366601085c03fe65be26e1cdcd9013d2ed030d4513d69973c5b87edc214" host="localhost" Jan 30 13:57:15.876356 containerd[1541]: 2025-01-30 13:57:15.842 [INFO][4356] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.feb92366601085c03fe65be26e1cdcd9013d2ed030d4513d69973c5b87edc214" host="localhost" Jan 30 13:57:15.876356 containerd[1541]: 2025-01-30 13:57:15.842 [INFO][4356] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.feb92366601085c03fe65be26e1cdcd9013d2ed030d4513d69973c5b87edc214" host="localhost" Jan 30 13:57:15.876356 containerd[1541]: 2025-01-30 13:57:15.842 [INFO][4356] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
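Requests [4347], [4356] and [4361] are three concurrent CNI ADDs, yet their "About to acquire / Acquired / Released host-wide IPAM lock" messages never overlap: the lock serializes allocation on the node, which is why the claimed addresses come out dense and ordered. A toy model of that serialization, assuming nothing about Calico's implementation beyond what the messages state:

package main

import (
	"fmt"
	"sync"
)

func main() {
	var hostLock sync.Mutex // stand-in for the "host-wide IPAM lock"
	var wg sync.WaitGroup
	for _, id := range []int{4347, 4356, 4361} { // the three racing requests above
		wg.Add(1)
		go func(id int) {
			defer wg.Done()
			fmt.Printf("[%d] About to acquire host-wide IPAM lock.\n", id)
			hostLock.Lock()
			fmt.Printf("[%d] Acquired host-wide IPAM lock.\n", id)
			// ...look up affinity, load block, claim one address...
			hostLock.Unlock()
			fmt.Printf("[%d] Released host-wide IPAM lock.\n", id)
		}(id)
	}
	wg.Wait()
}

Which goroutine wins first is scheduler-dependent; the 4347 -> 4356 -> 4361 order in the journal simply reflects arrival order, but the acquire/release pairs can never interleave.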
Jan 30 13:57:15.876356 containerd[1541]: 2025-01-30 13:57:15.842 [INFO][4356] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="feb92366601085c03fe65be26e1cdcd9013d2ed030d4513d69973c5b87edc214" HandleID="k8s-pod-network.feb92366601085c03fe65be26e1cdcd9013d2ed030d4513d69973c5b87edc214" Workload="localhost-k8s-calico--apiserver--57d5fbb54b--s4whm-eth0" Jan 30 13:57:15.877420 containerd[1541]: 2025-01-30 13:57:15.845 [INFO][4329] cni-plugin/k8s.go 386: Populated endpoint ContainerID="feb92366601085c03fe65be26e1cdcd9013d2ed030d4513d69973c5b87edc214" Namespace="calico-apiserver" Pod="calico-apiserver-57d5fbb54b-s4whm" WorkloadEndpoint="localhost-k8s-calico--apiserver--57d5fbb54b--s4whm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--57d5fbb54b--s4whm-eth0", GenerateName:"calico-apiserver-57d5fbb54b-", Namespace:"calico-apiserver", SelfLink:"", UID:"20fb559c-2d4a-483a-b704-96d08f23fd99", ResourceVersion:"794", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 56, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"57d5fbb54b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-57d5fbb54b-s4whm", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calic711333a458", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:57:15.877420 containerd[1541]: 2025-01-30 13:57:15.846 [INFO][4329] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.131/32] ContainerID="feb92366601085c03fe65be26e1cdcd9013d2ed030d4513d69973c5b87edc214" Namespace="calico-apiserver" Pod="calico-apiserver-57d5fbb54b-s4whm" WorkloadEndpoint="localhost-k8s-calico--apiserver--57d5fbb54b--s4whm-eth0" Jan 30 13:57:15.877420 containerd[1541]: 2025-01-30 13:57:15.846 [INFO][4329] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic711333a458 ContainerID="feb92366601085c03fe65be26e1cdcd9013d2ed030d4513d69973c5b87edc214" Namespace="calico-apiserver" Pod="calico-apiserver-57d5fbb54b-s4whm" WorkloadEndpoint="localhost-k8s-calico--apiserver--57d5fbb54b--s4whm-eth0" Jan 30 13:57:15.877420 containerd[1541]: 2025-01-30 13:57:15.853 [INFO][4329] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="feb92366601085c03fe65be26e1cdcd9013d2ed030d4513d69973c5b87edc214" Namespace="calico-apiserver" Pod="calico-apiserver-57d5fbb54b-s4whm" WorkloadEndpoint="localhost-k8s-calico--apiserver--57d5fbb54b--s4whm-eth0" Jan 30 13:57:15.877420 containerd[1541]: 2025-01-30 13:57:15.854 [INFO][4329] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="feb92366601085c03fe65be26e1cdcd9013d2ed030d4513d69973c5b87edc214" Namespace="calico-apiserver" Pod="calico-apiserver-57d5fbb54b-s4whm" WorkloadEndpoint="localhost-k8s-calico--apiserver--57d5fbb54b--s4whm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--57d5fbb54b--s4whm-eth0", GenerateName:"calico-apiserver-57d5fbb54b-", Namespace:"calico-apiserver", SelfLink:"", UID:"20fb559c-2d4a-483a-b704-96d08f23fd99", ResourceVersion:"794", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 56, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"57d5fbb54b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"feb92366601085c03fe65be26e1cdcd9013d2ed030d4513d69973c5b87edc214", Pod:"calico-apiserver-57d5fbb54b-s4whm", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calic711333a458", MAC:"52:e9:3b:20:f4:44", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:57:15.877420 containerd[1541]: 2025-01-30 13:57:15.872 [INFO][4329] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="feb92366601085c03fe65be26e1cdcd9013d2ed030d4513d69973c5b87edc214" Namespace="calico-apiserver" Pod="calico-apiserver-57d5fbb54b-s4whm" WorkloadEndpoint="localhost-k8s-calico--apiserver--57d5fbb54b--s4whm-eth0" Jan 30 13:57:15.890360 systemd[1]: Started cri-containerd-7dd19db55d741cc2eefa42ddcaa6e607a95be05134c1d15949944cf62ab2ed5a.scope - libcontainer container 7dd19db55d741cc2eefa42ddcaa6e607a95be05134c1d15949944cf62ab2ed5a. 
Jan 30 13:57:15.900303 systemd-networkd[1437]: cali87f69a9b50f: Link UP Jan 30 13:57:15.901359 systemd-networkd[1437]: cali87f69a9b50f: Gained carrier Jan 30 13:57:15.909786 systemd-resolved[1438]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 30 13:57:15.916170 containerd[1541]: 2025-01-30 13:57:15.748 [INFO][4328] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 30 13:57:15.916170 containerd[1541]: 2025-01-30 13:57:15.763 [INFO][4328] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--k9nfh-eth0 csi-node-driver- calico-system 7bdeb187-27dc-4c7e-aa2a-c05d3d3268f5 793 0 2025-01-30 13:56:49 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:65bf684474 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-k9nfh eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali87f69a9b50f [] []}} ContainerID="4e075473d37f144f9e156c40487cba6e4ebe34fd80c76b1abe0a1fe7956dbcdf" Namespace="calico-system" Pod="csi-node-driver-k9nfh" WorkloadEndpoint="localhost-k8s-csi--node--driver--k9nfh-" Jan 30 13:57:15.916170 containerd[1541]: 2025-01-30 13:57:15.764 [INFO][4328] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="4e075473d37f144f9e156c40487cba6e4ebe34fd80c76b1abe0a1fe7956dbcdf" Namespace="calico-system" Pod="csi-node-driver-k9nfh" WorkloadEndpoint="localhost-k8s-csi--node--driver--k9nfh-eth0" Jan 30 13:57:15.916170 containerd[1541]: 2025-01-30 13:57:15.805 [INFO][4361] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4e075473d37f144f9e156c40487cba6e4ebe34fd80c76b1abe0a1fe7956dbcdf" HandleID="k8s-pod-network.4e075473d37f144f9e156c40487cba6e4ebe34fd80c76b1abe0a1fe7956dbcdf" Workload="localhost-k8s-csi--node--driver--k9nfh-eth0" Jan 30 13:57:15.916170 containerd[1541]: 2025-01-30 13:57:15.811 [INFO][4361] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="4e075473d37f144f9e156c40487cba6e4ebe34fd80c76b1abe0a1fe7956dbcdf" HandleID="k8s-pod-network.4e075473d37f144f9e156c40487cba6e4ebe34fd80c76b1abe0a1fe7956dbcdf" Workload="localhost-k8s-csi--node--driver--k9nfh-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00031b8a0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-k9nfh", "timestamp":"2025-01-30 13:57:15.805017358 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 30 13:57:15.916170 containerd[1541]: 2025-01-30 13:57:15.811 [INFO][4361] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:57:15.916170 containerd[1541]: 2025-01-30 13:57:15.843 [INFO][4361] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
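Each workload gets a host-side veth whose name, like cali7bb2ae87d01 or cali87f69a9b50f above, is the "cali" prefix plus an 11-character suffix derived from the endpoint, keeping the whole name within the kernel's 15-character interface-name limit (IFNAMSIZ is 16 bytes including the trailing NUL). The sketch below guesses at the shape of such a scheme with a truncated hash; the hash function and its input are not shown in this log and are assumptions here, not Calico's actual derivation:

package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
)

// vethName builds a "cali" + 11-char name from an endpoint identifier:
// 4 prefix characters + 11 suffix characters = 15, the IFNAMSIZ budget.
// NOTE: SHA-256 over the workload endpoint name is an assumption made
// purely for illustration.
func vethName(endpointID string) string {
	sum := sha256.Sum256([]byte(endpointID))
	return "cali" + hex.EncodeToString(sum[:])[:11]
}

func main() {
	fmt.Println(vethName("localhost-k8s-csi--node--driver--k9nfh-eth0"))
	fmt.Println(len(vethName("anything")) <= 15) // true
}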
Jan 30 13:57:15.916170 containerd[1541]: 2025-01-30 13:57:15.843 [INFO][4361] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 30 13:57:15.916170 containerd[1541]: 2025-01-30 13:57:15.845 [INFO][4361] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.4e075473d37f144f9e156c40487cba6e4ebe34fd80c76b1abe0a1fe7956dbcdf" host="localhost" Jan 30 13:57:15.916170 containerd[1541]: 2025-01-30 13:57:15.854 [INFO][4361] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Jan 30 13:57:15.916170 containerd[1541]: 2025-01-30 13:57:15.867 [INFO][4361] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Jan 30 13:57:15.916170 containerd[1541]: 2025-01-30 13:57:15.872 [INFO][4361] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 30 13:57:15.916170 containerd[1541]: 2025-01-30 13:57:15.880 [INFO][4361] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 30 13:57:15.916170 containerd[1541]: 2025-01-30 13:57:15.880 [INFO][4361] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.4e075473d37f144f9e156c40487cba6e4ebe34fd80c76b1abe0a1fe7956dbcdf" host="localhost" Jan 30 13:57:15.916170 containerd[1541]: 2025-01-30 13:57:15.883 [INFO][4361] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.4e075473d37f144f9e156c40487cba6e4ebe34fd80c76b1abe0a1fe7956dbcdf Jan 30 13:57:15.916170 containerd[1541]: 2025-01-30 13:57:15.889 [INFO][4361] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.4e075473d37f144f9e156c40487cba6e4ebe34fd80c76b1abe0a1fe7956dbcdf" host="localhost" Jan 30 13:57:15.916170 containerd[1541]: 2025-01-30 13:57:15.894 [INFO][4361] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.4e075473d37f144f9e156c40487cba6e4ebe34fd80c76b1abe0a1fe7956dbcdf" host="localhost" Jan 30 13:57:15.916170 containerd[1541]: 2025-01-30 13:57:15.894 [INFO][4361] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.4e075473d37f144f9e156c40487cba6e4ebe34fd80c76b1abe0a1fe7956dbcdf" host="localhost" Jan 30 13:57:15.916170 containerd[1541]: 2025-01-30 13:57:15.894 [INFO][4361] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
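Every assignment in this capture follows the same script: confirm the host's affinity for 192.168.88.128/26, load the block, pick a free address, then "Writing block in order to claim IPs", so the claim only lands once the block write succeeds. A compact model of the pick-from-block step, keyed by handles like the k8s-pod-network.<containerID> ones above (it illustrates the logged flow, not Calico's datastore code; the two pre-used low slots are an assumption, since .130 is the first claim visible here):

package main

import (
	"fmt"
	"net/netip"
)

// block models an affine IPAM block: a /26 plus a used-address set.
type block struct {
	cidr netip.Prefix
	used map[netip.Addr]string // addr -> handle that claimed it
}

// assign walks the block from its base address and claims the first
// free slot for handle, mirroring "Attempting to assign 1 addresses
// from block" followed by "Successfully claimed IPs".
func (b *block) assign(handle string) (netip.Addr, bool) {
	for a := b.cidr.Addr(); b.cidr.Contains(a); a = a.Next() {
		if _, taken := b.used[a]; !taken {
			b.used[a] = handle
			return a, true
		}
	}
	return netip.Addr{}, false
}

func main() {
	b := &block{
		cidr: netip.MustParsePrefix("192.168.88.128/26"),
		used: map[netip.Addr]string{
			netip.MustParseAddr("192.168.88.128"): "assumed-reserved",
			netip.MustParseAddr("192.168.88.129"): "assumed-earlier-pod",
			netip.MustParseAddr("192.168.88.130"): "k8s-pod-network.7dd19db... (coredns)",
			netip.MustParseAddr("192.168.88.131"): "k8s-pod-network.feb9236... (apiserver)",
		},
	}
	a, _ := b.assign("k8s-pod-network.4e07547... (csi-node-driver-k9nfh)")
	fmt.Println(a) // 192.168.88.132, matching the log
}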
Jan 30 13:57:15.916170 containerd[1541]: 2025-01-30 13:57:15.894 [INFO][4361] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="4e075473d37f144f9e156c40487cba6e4ebe34fd80c76b1abe0a1fe7956dbcdf" HandleID="k8s-pod-network.4e075473d37f144f9e156c40487cba6e4ebe34fd80c76b1abe0a1fe7956dbcdf" Workload="localhost-k8s-csi--node--driver--k9nfh-eth0" Jan 30 13:57:15.917339 containerd[1541]: 2025-01-30 13:57:15.896 [INFO][4328] cni-plugin/k8s.go 386: Populated endpoint ContainerID="4e075473d37f144f9e156c40487cba6e4ebe34fd80c76b1abe0a1fe7956dbcdf" Namespace="calico-system" Pod="csi-node-driver-k9nfh" WorkloadEndpoint="localhost-k8s-csi--node--driver--k9nfh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--k9nfh-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"7bdeb187-27dc-4c7e-aa2a-c05d3d3268f5", ResourceVersion:"793", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 56, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65bf684474", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-k9nfh", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali87f69a9b50f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:57:15.917339 containerd[1541]: 2025-01-30 13:57:15.896 [INFO][4328] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.132/32] ContainerID="4e075473d37f144f9e156c40487cba6e4ebe34fd80c76b1abe0a1fe7956dbcdf" Namespace="calico-system" Pod="csi-node-driver-k9nfh" WorkloadEndpoint="localhost-k8s-csi--node--driver--k9nfh-eth0" Jan 30 13:57:15.917339 containerd[1541]: 2025-01-30 13:57:15.896 [INFO][4328] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali87f69a9b50f ContainerID="4e075473d37f144f9e156c40487cba6e4ebe34fd80c76b1abe0a1fe7956dbcdf" Namespace="calico-system" Pod="csi-node-driver-k9nfh" WorkloadEndpoint="localhost-k8s-csi--node--driver--k9nfh-eth0" Jan 30 13:57:15.917339 containerd[1541]: 2025-01-30 13:57:15.901 [INFO][4328] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4e075473d37f144f9e156c40487cba6e4ebe34fd80c76b1abe0a1fe7956dbcdf" Namespace="calico-system" Pod="csi-node-driver-k9nfh" WorkloadEndpoint="localhost-k8s-csi--node--driver--k9nfh-eth0" Jan 30 13:57:15.917339 containerd[1541]: 2025-01-30 13:57:15.903 [INFO][4328] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="4e075473d37f144f9e156c40487cba6e4ebe34fd80c76b1abe0a1fe7956dbcdf" Namespace="calico-system" Pod="csi-node-driver-k9nfh" WorkloadEndpoint="localhost-k8s-csi--node--driver--k9nfh-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--k9nfh-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"7bdeb187-27dc-4c7e-aa2a-c05d3d3268f5", ResourceVersion:"793", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 56, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65bf684474", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"4e075473d37f144f9e156c40487cba6e4ebe34fd80c76b1abe0a1fe7956dbcdf", Pod:"csi-node-driver-k9nfh", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali87f69a9b50f", MAC:"12:79:3b:d0:0b:e2", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:57:15.917339 containerd[1541]: 2025-01-30 13:57:15.913 [INFO][4328] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="4e075473d37f144f9e156c40487cba6e4ebe34fd80c76b1abe0a1fe7956dbcdf" Namespace="calico-system" Pod="csi-node-driver-k9nfh" WorkloadEndpoint="localhost-k8s-csi--node--driver--k9nfh-eth0" Jan 30 13:57:15.918006 containerd[1541]: time="2025-01-30T13:57:15.917423011Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:57:15.918006 containerd[1541]: time="2025-01-30T13:57:15.917810718Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:57:15.918006 containerd[1541]: time="2025-01-30T13:57:15.917823322Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:57:15.920535 containerd[1541]: time="2025-01-30T13:57:15.917996432Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:57:15.935259 systemd[1]: Started cri-containerd-feb92366601085c03fe65be26e1cdcd9013d2ed030d4513d69973c5b87edc214.scope - libcontainer container feb92366601085c03fe65be26e1cdcd9013d2ed030d4513d69973c5b87edc214. Jan 30 13:57:15.945619 containerd[1541]: time="2025-01-30T13:57:15.945033641Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:57:15.945619 containerd[1541]: time="2025-01-30T13:57:15.945095728Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:57:15.945619 containerd[1541]: time="2025-01-30T13:57:15.945122672Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:57:15.945619 containerd[1541]: time="2025-01-30T13:57:15.945236993Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:57:15.953499 containerd[1541]: time="2025-01-30T13:57:15.953467687Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-5pf6d,Uid:7dc3c7ed-e794-482c-b0b8-b5cd489cc602,Namespace:kube-system,Attempt:1,} returns sandbox id \"7dd19db55d741cc2eefa42ddcaa6e607a95be05134c1d15949944cf62ab2ed5a\"" Jan 30 13:57:15.956776 containerd[1541]: time="2025-01-30T13:57:15.956747707Z" level=info msg="CreateContainer within sandbox \"7dd19db55d741cc2eefa42ddcaa6e607a95be05134c1d15949944cf62ab2ed5a\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 30 13:57:15.958613 systemd-resolved[1438]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 30 13:57:15.965325 systemd[1]: Started cri-containerd-4e075473d37f144f9e156c40487cba6e4ebe34fd80c76b1abe0a1fe7956dbcdf.scope - libcontainer container 4e075473d37f144f9e156c40487cba6e4ebe34fd80c76b1abe0a1fe7956dbcdf. Jan 30 13:57:15.976266 systemd-resolved[1438]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 30 13:57:15.986354 containerd[1541]: time="2025-01-30T13:57:15.986302422Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-k9nfh,Uid:7bdeb187-27dc-4c7e-aa2a-c05d3d3268f5,Namespace:calico-system,Attempt:1,} returns sandbox id \"4e075473d37f144f9e156c40487cba6e4ebe34fd80c76b1abe0a1fe7956dbcdf\"" Jan 30 13:57:15.991867 containerd[1541]: time="2025-01-30T13:57:15.991794634Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-57d5fbb54b-s4whm,Uid:20fb559c-2d4a-483a-b704-96d08f23fd99,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"feb92366601085c03fe65be26e1cdcd9013d2ed030d4513d69973c5b87edc214\"" Jan 30 13:57:15.994400 containerd[1541]: time="2025-01-30T13:57:15.994108983Z" level=info msg="CreateContainer within sandbox \"7dd19db55d741cc2eefa42ddcaa6e607a95be05134c1d15949944cf62ab2ed5a\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"25c138ebe6ce5e29152c27349c08a8d09ce8e88b8845b0c054e72cb70fa56408\"" Jan 30 13:57:15.995021 containerd[1541]: time="2025-01-30T13:57:15.994825968Z" level=info msg="StartContainer for \"25c138ebe6ce5e29152c27349c08a8d09ce8e88b8845b0c054e72cb70fa56408\"" Jan 30 13:57:16.012390 systemd[1]: Started cri-containerd-25c138ebe6ce5e29152c27349c08a8d09ce8e88b8845b0c054e72cb70fa56408.scope - libcontainer container 25c138ebe6ce5e29152c27349c08a8d09ce8e88b8845b0c054e72cb70fa56408. 
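The RunPodSandbox / CreateContainer / StartContainer messages above are the server side of the Kubernetes CRI: kubelet first asks for a sandbox (the pause container whose netns Calico just wired up), then creates and starts the coredns container inside it. A bare-bones client-side sketch against the CRI v1 API; the socket path and image tag are assumptions, and a real kubelet populates far more of each request (log paths, security context, mounts):

package main

import (
	"context"
	"fmt"
	"log"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	cri "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// containerd's CRI endpoint; path assumed for this sketch.
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()
	rt := cri.NewRuntimeServiceClient(conn)
	ctx := context.Background()

	sbCfg := &cri.PodSandboxConfig{
		Metadata: &cri.PodSandboxMetadata{ // values taken from the log above
			Name:      "coredns-7db6d8ff4d-5pf6d",
			Uid:       "7dc3c7ed-e794-482c-b0b8-b5cd489cc602",
			Namespace: "kube-system",
			Attempt:   1,
		},
	}

	// "RunPodSandbox ... returns sandbox id ..."
	sb, err := rt.RunPodSandbox(ctx, &cri.RunPodSandboxRequest{Config: sbCfg})
	if err != nil {
		log.Fatal(err)
	}

	// "CreateContainer within sandbox ... for &ContainerMetadata{Name:coredns,Attempt:0,}"
	cc, err := rt.CreateContainer(ctx, &cri.CreateContainerRequest{
		PodSandboxId:  sb.PodSandboxId,
		SandboxConfig: sbCfg,
		Config: &cri.ContainerConfig{
			Metadata: &cri.ContainerMetadata{Name: "coredns", Attempt: 0},
			Image:    &cri.ImageSpec{Image: "registry.k8s.io/coredns/coredns:v1.11.1"}, // tag assumed
		},
	})
	if err != nil {
		log.Fatal(err)
	}

	// "StartContainer for ... returns successfully"
	if _, err := rt.StartContainer(ctx, &cri.StartContainerRequest{ContainerId: cc.ContainerId}); err != nil {
		log.Fatal(err)
	}
	fmt.Println("started container", cc.ContainerId)
}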
Jan 30 13:57:16.035589 containerd[1541]: time="2025-01-30T13:57:16.035522811Z" level=info msg="StartContainer for \"25c138ebe6ce5e29152c27349c08a8d09ce8e88b8845b0c054e72cb70fa56408\" returns successfully" Jan 30 13:57:16.441369 systemd-networkd[1437]: cali194e6286628: Gained IPv6LL Jan 30 13:57:16.532568 containerd[1541]: time="2025-01-30T13:57:16.531928089Z" level=info msg="StopPodSandbox for \"c7605ff688b97b49659fbcca018dc7a79c5671a064117372177a6d4472ab57d7\"" Jan 30 13:57:16.532568 containerd[1541]: time="2025-01-30T13:57:16.531928464Z" level=info msg="StopPodSandbox for \"e0b0a230491599d1ea528cfd166e38d33740092bcb1720c37657ae8f6a606737\"" Jan 30 13:57:16.682160 containerd[1541]: 2025-01-30 13:57:16.648 [INFO][4581] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="e0b0a230491599d1ea528cfd166e38d33740092bcb1720c37657ae8f6a606737" Jan 30 13:57:16.682160 containerd[1541]: 2025-01-30 13:57:16.648 [INFO][4581] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="e0b0a230491599d1ea528cfd166e38d33740092bcb1720c37657ae8f6a606737" iface="eth0" netns="/var/run/netns/cni-234b4f1b-f8e8-6f72-8b03-8da507f5ac97" Jan 30 13:57:16.682160 containerd[1541]: 2025-01-30 13:57:16.649 [INFO][4581] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="e0b0a230491599d1ea528cfd166e38d33740092bcb1720c37657ae8f6a606737" iface="eth0" netns="/var/run/netns/cni-234b4f1b-f8e8-6f72-8b03-8da507f5ac97" Jan 30 13:57:16.682160 containerd[1541]: 2025-01-30 13:57:16.649 [INFO][4581] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="e0b0a230491599d1ea528cfd166e38d33740092bcb1720c37657ae8f6a606737" iface="eth0" netns="/var/run/netns/cni-234b4f1b-f8e8-6f72-8b03-8da507f5ac97" Jan 30 13:57:16.682160 containerd[1541]: 2025-01-30 13:57:16.649 [INFO][4581] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="e0b0a230491599d1ea528cfd166e38d33740092bcb1720c37657ae8f6a606737" Jan 30 13:57:16.682160 containerd[1541]: 2025-01-30 13:57:16.649 [INFO][4581] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e0b0a230491599d1ea528cfd166e38d33740092bcb1720c37657ae8f6a606737" Jan 30 13:57:16.682160 containerd[1541]: 2025-01-30 13:57:16.671 [INFO][4605] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="e0b0a230491599d1ea528cfd166e38d33740092bcb1720c37657ae8f6a606737" HandleID="k8s-pod-network.e0b0a230491599d1ea528cfd166e38d33740092bcb1720c37657ae8f6a606737" Workload="localhost-k8s-calico--kube--controllers--768b4d69bb--4xhph-eth0" Jan 30 13:57:16.682160 containerd[1541]: 2025-01-30 13:57:16.671 [INFO][4605] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:57:16.682160 containerd[1541]: 2025-01-30 13:57:16.671 [INFO][4605] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 13:57:16.682160 containerd[1541]: 2025-01-30 13:57:16.678 [WARNING][4605] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e0b0a230491599d1ea528cfd166e38d33740092bcb1720c37657ae8f6a606737" HandleID="k8s-pod-network.e0b0a230491599d1ea528cfd166e38d33740092bcb1720c37657ae8f6a606737" Workload="localhost-k8s-calico--kube--controllers--768b4d69bb--4xhph-eth0" Jan 30 13:57:16.682160 containerd[1541]: 2025-01-30 13:57:16.678 [INFO][4605] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="e0b0a230491599d1ea528cfd166e38d33740092bcb1720c37657ae8f6a606737" HandleID="k8s-pod-network.e0b0a230491599d1ea528cfd166e38d33740092bcb1720c37657ae8f6a606737" Workload="localhost-k8s-calico--kube--controllers--768b4d69bb--4xhph-eth0" Jan 30 13:57:16.682160 containerd[1541]: 2025-01-30 13:57:16.679 [INFO][4605] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 13:57:16.682160 containerd[1541]: 2025-01-30 13:57:16.681 [INFO][4581] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="e0b0a230491599d1ea528cfd166e38d33740092bcb1720c37657ae8f6a606737" Jan 30 13:57:16.689744 containerd[1541]: time="2025-01-30T13:57:16.683387027Z" level=info msg="TearDown network for sandbox \"e0b0a230491599d1ea528cfd166e38d33740092bcb1720c37657ae8f6a606737\" successfully" Jan 30 13:57:16.689744 containerd[1541]: time="2025-01-30T13:57:16.683407865Z" level=info msg="StopPodSandbox for \"e0b0a230491599d1ea528cfd166e38d33740092bcb1720c37657ae8f6a606737\" returns successfully" Jan 30 13:57:16.689744 containerd[1541]: time="2025-01-30T13:57:16.684097399Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-768b4d69bb-4xhph,Uid:f7a5bac7-0b52-4463-87d2-7adae530692a,Namespace:calico-system,Attempt:1,}" Jan 30 13:57:16.693420 containerd[1541]: 2025-01-30 13:57:16.653 [INFO][4588] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="c7605ff688b97b49659fbcca018dc7a79c5671a064117372177a6d4472ab57d7" Jan 30 13:57:16.693420 containerd[1541]: 2025-01-30 13:57:16.653 [INFO][4588] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="c7605ff688b97b49659fbcca018dc7a79c5671a064117372177a6d4472ab57d7" iface="eth0" netns="/var/run/netns/cni-5c7c06ab-3cc3-4d98-6fb5-3bf65c09f751" Jan 30 13:57:16.693420 containerd[1541]: 2025-01-30 13:57:16.655 [INFO][4588] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="c7605ff688b97b49659fbcca018dc7a79c5671a064117372177a6d4472ab57d7" iface="eth0" netns="/var/run/netns/cni-5c7c06ab-3cc3-4d98-6fb5-3bf65c09f751" Jan 30 13:57:16.693420 containerd[1541]: 2025-01-30 13:57:16.655 [INFO][4588] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="c7605ff688b97b49659fbcca018dc7a79c5671a064117372177a6d4472ab57d7" iface="eth0" netns="/var/run/netns/cni-5c7c06ab-3cc3-4d98-6fb5-3bf65c09f751" Jan 30 13:57:16.693420 containerd[1541]: 2025-01-30 13:57:16.655 [INFO][4588] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="c7605ff688b97b49659fbcca018dc7a79c5671a064117372177a6d4472ab57d7" Jan 30 13:57:16.693420 containerd[1541]: 2025-01-30 13:57:16.655 [INFO][4588] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c7605ff688b97b49659fbcca018dc7a79c5671a064117372177a6d4472ab57d7" Jan 30 13:57:16.693420 containerd[1541]: 2025-01-30 13:57:16.674 [INFO][4608] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c7605ff688b97b49659fbcca018dc7a79c5671a064117372177a6d4472ab57d7" HandleID="k8s-pod-network.c7605ff688b97b49659fbcca018dc7a79c5671a064117372177a6d4472ab57d7" Workload="localhost-k8s-coredns--7db6d8ff4d--c9bkn-eth0" Jan 30 13:57:16.693420 containerd[1541]: 2025-01-30 13:57:16.674 [INFO][4608] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:57:16.693420 containerd[1541]: 2025-01-30 13:57:16.679 [INFO][4608] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 13:57:16.693420 containerd[1541]: 2025-01-30 13:57:16.684 [WARNING][4608] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="c7605ff688b97b49659fbcca018dc7a79c5671a064117372177a6d4472ab57d7" HandleID="k8s-pod-network.c7605ff688b97b49659fbcca018dc7a79c5671a064117372177a6d4472ab57d7" Workload="localhost-k8s-coredns--7db6d8ff4d--c9bkn-eth0" Jan 30 13:57:16.693420 containerd[1541]: 2025-01-30 13:57:16.684 [INFO][4608] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c7605ff688b97b49659fbcca018dc7a79c5671a064117372177a6d4472ab57d7" HandleID="k8s-pod-network.c7605ff688b97b49659fbcca018dc7a79c5671a064117372177a6d4472ab57d7" Workload="localhost-k8s-coredns--7db6d8ff4d--c9bkn-eth0" Jan 30 13:57:16.693420 containerd[1541]: 2025-01-30 13:57:16.685 [INFO][4608] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 13:57:16.693420 containerd[1541]: 2025-01-30 13:57:16.690 [INFO][4588] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="c7605ff688b97b49659fbcca018dc7a79c5671a064117372177a6d4472ab57d7" Jan 30 13:57:16.698549 containerd[1541]: time="2025-01-30T13:57:16.695827031Z" level=info msg="TearDown network for sandbox \"c7605ff688b97b49659fbcca018dc7a79c5671a064117372177a6d4472ab57d7\" successfully" Jan 30 13:57:16.698549 containerd[1541]: time="2025-01-30T13:57:16.696245720Z" level=info msg="StopPodSandbox for \"c7605ff688b97b49659fbcca018dc7a79c5671a064117372177a6d4472ab57d7\" returns successfully" Jan 30 13:57:16.702259 containerd[1541]: time="2025-01-30T13:57:16.701758884Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-c9bkn,Uid:dad8a909-d142-4d1f-a2c5-4c37cc87955b,Namespace:kube-system,Attempt:1,}" Jan 30 13:57:16.797261 kubelet[2781]: I0130 13:57:16.797041 2781 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-5pf6d" podStartSLOduration=33.797028274 podStartE2EDuration="33.797028274s" podCreationTimestamp="2025-01-30 13:56:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 13:57:16.7541257 +0000 UTC m=+49.289735868" watchObservedRunningTime="2025-01-30 13:57:16.797028274 +0000 UTC m=+49.332638435" Jan 30 13:57:16.865239 systemd[1]: run-netns-cni\x2d234b4f1b\x2df8e8\x2d6f72\x2d8b03\x2d8da507f5ac97.mount: Deactivated successfully. Jan 30 13:57:16.865413 systemd[1]: run-netns-cni\x2d5c7c06ab\x2d3cc3\x2d4d98\x2d6fb5\x2d3bf65c09f751.mount: Deactivated successfully. Jan 30 13:57:17.044325 systemd-networkd[1437]: cali1de2acc5448: Link UP Jan 30 13:57:17.045658 systemd-networkd[1437]: cali1de2acc5448: Gained carrier Jan 30 13:57:17.075476 containerd[1541]: 2025-01-30 13:57:16.822 [INFO][4641] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 30 13:57:17.075476 containerd[1541]: 2025-01-30 13:57:16.871 [INFO][4641] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--768b4d69bb--4xhph-eth0 calico-kube-controllers-768b4d69bb- calico-system f7a5bac7-0b52-4463-87d2-7adae530692a 814 0 2025-01-30 13:56:49 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:768b4d69bb projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-768b4d69bb-4xhph eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali1de2acc5448 [] []}} ContainerID="9df07dee17225ade1ee7fd94b63a025f95281c7857787ba1d6922b0db3ed3658" Namespace="calico-system" Pod="calico-kube-controllers-768b4d69bb-4xhph" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--768b4d69bb--4xhph-" Jan 30 13:57:17.075476 containerd[1541]: 2025-01-30 13:57:16.871 [INFO][4641] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="9df07dee17225ade1ee7fd94b63a025f95281c7857787ba1d6922b0db3ed3658" Namespace="calico-system" Pod="calico-kube-controllers-768b4d69bb-4xhph" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--768b4d69bb--4xhph-eth0" Jan 30 13:57:17.075476 containerd[1541]: 2025-01-30 13:57:16.945 [INFO][4662] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9df07dee17225ade1ee7fd94b63a025f95281c7857787ba1d6922b0db3ed3658" 
HandleID="k8s-pod-network.9df07dee17225ade1ee7fd94b63a025f95281c7857787ba1d6922b0db3ed3658" Workload="localhost-k8s-calico--kube--controllers--768b4d69bb--4xhph-eth0" Jan 30 13:57:17.075476 containerd[1541]: 2025-01-30 13:57:16.967 [INFO][4662] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="9df07dee17225ade1ee7fd94b63a025f95281c7857787ba1d6922b0db3ed3658" HandleID="k8s-pod-network.9df07dee17225ade1ee7fd94b63a025f95281c7857787ba1d6922b0db3ed3658" Workload="localhost-k8s-calico--kube--controllers--768b4d69bb--4xhph-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000051a40), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-768b4d69bb-4xhph", "timestamp":"2025-01-30 13:57:16.9452915 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 30 13:57:17.075476 containerd[1541]: 2025-01-30 13:57:16.968 [INFO][4662] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:57:17.075476 containerd[1541]: 2025-01-30 13:57:16.968 [INFO][4662] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 13:57:17.075476 containerd[1541]: 2025-01-30 13:57:16.968 [INFO][4662] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 30 13:57:17.075476 containerd[1541]: 2025-01-30 13:57:16.972 [INFO][4662] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.9df07dee17225ade1ee7fd94b63a025f95281c7857787ba1d6922b0db3ed3658" host="localhost" Jan 30 13:57:17.075476 containerd[1541]: 2025-01-30 13:57:16.977 [INFO][4662] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Jan 30 13:57:17.075476 containerd[1541]: 2025-01-30 13:57:16.982 [INFO][4662] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Jan 30 13:57:17.075476 containerd[1541]: 2025-01-30 13:57:16.984 [INFO][4662] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 30 13:57:17.075476 containerd[1541]: 2025-01-30 13:57:16.990 [INFO][4662] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 30 13:57:17.075476 containerd[1541]: 2025-01-30 13:57:16.991 [INFO][4662] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.9df07dee17225ade1ee7fd94b63a025f95281c7857787ba1d6922b0db3ed3658" host="localhost" Jan 30 13:57:17.075476 containerd[1541]: 2025-01-30 13:57:17.002 [INFO][4662] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.9df07dee17225ade1ee7fd94b63a025f95281c7857787ba1d6922b0db3ed3658 Jan 30 13:57:17.075476 containerd[1541]: 2025-01-30 13:57:17.019 [INFO][4662] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.9df07dee17225ade1ee7fd94b63a025f95281c7857787ba1d6922b0db3ed3658" host="localhost" Jan 30 13:57:17.075476 containerd[1541]: 2025-01-30 13:57:17.036 [INFO][4662] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.9df07dee17225ade1ee7fd94b63a025f95281c7857787ba1d6922b0db3ed3658" host="localhost" Jan 30 13:57:17.075476 containerd[1541]: 2025-01-30 13:57:17.036 [INFO][4662] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] 
handle="k8s-pod-network.9df07dee17225ade1ee7fd94b63a025f95281c7857787ba1d6922b0db3ed3658" host="localhost" Jan 30 13:57:17.075476 containerd[1541]: 2025-01-30 13:57:17.036 [INFO][4662] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 13:57:17.075476 containerd[1541]: 2025-01-30 13:57:17.036 [INFO][4662] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="9df07dee17225ade1ee7fd94b63a025f95281c7857787ba1d6922b0db3ed3658" HandleID="k8s-pod-network.9df07dee17225ade1ee7fd94b63a025f95281c7857787ba1d6922b0db3ed3658" Workload="localhost-k8s-calico--kube--controllers--768b4d69bb--4xhph-eth0" Jan 30 13:57:17.076004 containerd[1541]: 2025-01-30 13:57:17.038 [INFO][4641] cni-plugin/k8s.go 386: Populated endpoint ContainerID="9df07dee17225ade1ee7fd94b63a025f95281c7857787ba1d6922b0db3ed3658" Namespace="calico-system" Pod="calico-kube-controllers-768b4d69bb-4xhph" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--768b4d69bb--4xhph-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--768b4d69bb--4xhph-eth0", GenerateName:"calico-kube-controllers-768b4d69bb-", Namespace:"calico-system", SelfLink:"", UID:"f7a5bac7-0b52-4463-87d2-7adae530692a", ResourceVersion:"814", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 56, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"768b4d69bb", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-768b4d69bb-4xhph", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali1de2acc5448", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:57:17.076004 containerd[1541]: 2025-01-30 13:57:17.038 [INFO][4641] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.133/32] ContainerID="9df07dee17225ade1ee7fd94b63a025f95281c7857787ba1d6922b0db3ed3658" Namespace="calico-system" Pod="calico-kube-controllers-768b4d69bb-4xhph" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--768b4d69bb--4xhph-eth0" Jan 30 13:57:17.076004 containerd[1541]: 2025-01-30 13:57:17.038 [INFO][4641] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1de2acc5448 ContainerID="9df07dee17225ade1ee7fd94b63a025f95281c7857787ba1d6922b0db3ed3658" Namespace="calico-system" Pod="calico-kube-controllers-768b4d69bb-4xhph" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--768b4d69bb--4xhph-eth0" Jan 30 13:57:17.076004 containerd[1541]: 2025-01-30 13:57:17.045 [INFO][4641] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9df07dee17225ade1ee7fd94b63a025f95281c7857787ba1d6922b0db3ed3658" Namespace="calico-system" 
Pod="calico-kube-controllers-768b4d69bb-4xhph" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--768b4d69bb--4xhph-eth0" Jan 30 13:57:17.076004 containerd[1541]: 2025-01-30 13:57:17.046 [INFO][4641] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="9df07dee17225ade1ee7fd94b63a025f95281c7857787ba1d6922b0db3ed3658" Namespace="calico-system" Pod="calico-kube-controllers-768b4d69bb-4xhph" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--768b4d69bb--4xhph-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--768b4d69bb--4xhph-eth0", GenerateName:"calico-kube-controllers-768b4d69bb-", Namespace:"calico-system", SelfLink:"", UID:"f7a5bac7-0b52-4463-87d2-7adae530692a", ResourceVersion:"814", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 56, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"768b4d69bb", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"9df07dee17225ade1ee7fd94b63a025f95281c7857787ba1d6922b0db3ed3658", Pod:"calico-kube-controllers-768b4d69bb-4xhph", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali1de2acc5448", MAC:"62:ae:fb:3a:af:76", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:57:17.076004 containerd[1541]: 2025-01-30 13:57:17.070 [INFO][4641] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="9df07dee17225ade1ee7fd94b63a025f95281c7857787ba1d6922b0db3ed3658" Namespace="calico-system" Pod="calico-kube-controllers-768b4d69bb-4xhph" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--768b4d69bb--4xhph-eth0" Jan 30 13:57:17.116821 systemd-networkd[1437]: cali2c3f72aba2d: Link UP Jan 30 13:57:17.118409 systemd-networkd[1437]: cali2c3f72aba2d: Gained carrier Jan 30 13:57:17.141024 containerd[1541]: 2025-01-30 13:57:16.776 [INFO][4623] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 30 13:57:17.141024 containerd[1541]: 2025-01-30 13:57:16.825 [INFO][4623] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--7db6d8ff4d--c9bkn-eth0 coredns-7db6d8ff4d- kube-system dad8a909-d142-4d1f-a2c5-4c37cc87955b 815 0 2025-01-30 13:56:43 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7db6d8ff4d projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-7db6d8ff4d-c9bkn eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali2c3f72aba2d [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="ade1fd77eab8649e1d9ba2b0e5e57aa13d59910d5a34f3764059be5fad505b26" 
Namespace="kube-system" Pod="coredns-7db6d8ff4d-c9bkn" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--c9bkn-" Jan 30 13:57:17.141024 containerd[1541]: 2025-01-30 13:57:16.825 [INFO][4623] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="ade1fd77eab8649e1d9ba2b0e5e57aa13d59910d5a34f3764059be5fad505b26" Namespace="kube-system" Pod="coredns-7db6d8ff4d-c9bkn" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--c9bkn-eth0" Jan 30 13:57:17.141024 containerd[1541]: 2025-01-30 13:57:16.952 [INFO][4658] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ade1fd77eab8649e1d9ba2b0e5e57aa13d59910d5a34f3764059be5fad505b26" HandleID="k8s-pod-network.ade1fd77eab8649e1d9ba2b0e5e57aa13d59910d5a34f3764059be5fad505b26" Workload="localhost-k8s-coredns--7db6d8ff4d--c9bkn-eth0" Jan 30 13:57:17.141024 containerd[1541]: 2025-01-30 13:57:16.970 [INFO][4658] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="ade1fd77eab8649e1d9ba2b0e5e57aa13d59910d5a34f3764059be5fad505b26" HandleID="k8s-pod-network.ade1fd77eab8649e1d9ba2b0e5e57aa13d59910d5a34f3764059be5fad505b26" Workload="localhost-k8s-coredns--7db6d8ff4d--c9bkn-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000137730), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-7db6d8ff4d-c9bkn", "timestamp":"2025-01-30 13:57:16.952964134 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 30 13:57:17.141024 containerd[1541]: 2025-01-30 13:57:16.970 [INFO][4658] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:57:17.141024 containerd[1541]: 2025-01-30 13:57:17.036 [INFO][4658] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 30 13:57:17.141024 containerd[1541]: 2025-01-30 13:57:17.036 [INFO][4658] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 30 13:57:17.141024 containerd[1541]: 2025-01-30 13:57:17.039 [INFO][4658] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.ade1fd77eab8649e1d9ba2b0e5e57aa13d59910d5a34f3764059be5fad505b26" host="localhost" Jan 30 13:57:17.141024 containerd[1541]: 2025-01-30 13:57:17.046 [INFO][4658] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Jan 30 13:57:17.141024 containerd[1541]: 2025-01-30 13:57:17.054 [INFO][4658] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Jan 30 13:57:17.141024 containerd[1541]: 2025-01-30 13:57:17.071 [INFO][4658] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 30 13:57:17.141024 containerd[1541]: 2025-01-30 13:57:17.079 [INFO][4658] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 30 13:57:17.141024 containerd[1541]: 2025-01-30 13:57:17.079 [INFO][4658] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.ade1fd77eab8649e1d9ba2b0e5e57aa13d59910d5a34f3764059be5fad505b26" host="localhost" Jan 30 13:57:17.141024 containerd[1541]: 2025-01-30 13:57:17.082 [INFO][4658] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.ade1fd77eab8649e1d9ba2b0e5e57aa13d59910d5a34f3764059be5fad505b26 Jan 30 13:57:17.141024 containerd[1541]: 2025-01-30 13:57:17.092 [INFO][4658] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.ade1fd77eab8649e1d9ba2b0e5e57aa13d59910d5a34f3764059be5fad505b26" host="localhost" Jan 30 13:57:17.141024 containerd[1541]: 2025-01-30 13:57:17.108 [INFO][4658] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.ade1fd77eab8649e1d9ba2b0e5e57aa13d59910d5a34f3764059be5fad505b26" host="localhost" Jan 30 13:57:17.141024 containerd[1541]: 2025-01-30 13:57:17.109 [INFO][4658] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.ade1fd77eab8649e1d9ba2b0e5e57aa13d59910d5a34f3764059be5fad505b26" host="localhost" Jan 30 13:57:17.141024 containerd[1541]: 2025-01-30 13:57:17.109 [INFO][4658] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
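
Within the lock, the assignment itself is a read-modify-conditional-write: load the affine block (192.168.88.128/26), pick a free ordinal, then "write block in order to claim IPs" so that a concurrent writer on another node loses the race cleanly and retries. A hedged sketch of that optimistic-concurrency loop; the blk/datastore types and revision counter below are invented stand-ins for Calico's allocation block and datastore revision, not its actual code:

    package main

    import (
        "fmt"
        "sync"
    )

    // blk is a trimmed stand-in for an allocation block: which of the
    // 64 ordinals of the /26 are taken, plus the revision that guards
    // the conditional write.
    type blk struct {
        used [64]bool
        rev  int
    }

    type datastore struct {
        mu sync.Mutex
        b  blk
    }

    func (d *datastore) read() blk {
        d.mu.Lock()
        defer d.mu.Unlock()
        return d.b // arrays copy by value: a private snapshot
    }

    // writeIf commits b only if no other writer bumped the revision since
    // b was read; this is the "Writing block in order to claim IPs" step.
    func (d *datastore) writeIf(b blk) bool {
        d.mu.Lock()
        defer d.mu.Unlock()
        if b.rev != d.b.rev {
            return false // conflict: another node claimed first
        }
        b.rev++
        d.b = b
        return true
    }

    // claim retries the read-modify-write loop until a free ordinal sticks.
    func claim(d *datastore) int {
        for {
            b := d.read()
            ord := -1
            for i, u := range b.used {
                if !u {
                    ord = i
                    break
                }
            }
            if ord < 0 {
                return -1 // block exhausted
            }
            b.used[ord] = true
            if d.writeIf(b) {
                return ord
            }
            // conflict: reload the block and retry
        }
    }

    func main() {
        d := &datastore{}
        for i := 0; i < 6; i++ { // ordinals 0-5 (.128-.133) already claimed above
            d.b.used[i] = true
        }
        fmt.Println(claim(d)) // ordinal 6 -> 192.168.88.134, as assigned to coredns here
    }
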
Jan 30 13:57:17.141024 containerd[1541]: 2025-01-30 13:57:17.109 [INFO][4658] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="ade1fd77eab8649e1d9ba2b0e5e57aa13d59910d5a34f3764059be5fad505b26" HandleID="k8s-pod-network.ade1fd77eab8649e1d9ba2b0e5e57aa13d59910d5a34f3764059be5fad505b26" Workload="localhost-k8s-coredns--7db6d8ff4d--c9bkn-eth0" Jan 30 13:57:17.145499 containerd[1541]: 2025-01-30 13:57:17.111 [INFO][4623] cni-plugin/k8s.go 386: Populated endpoint ContainerID="ade1fd77eab8649e1d9ba2b0e5e57aa13d59910d5a34f3764059be5fad505b26" Namespace="kube-system" Pod="coredns-7db6d8ff4d-c9bkn" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--c9bkn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--c9bkn-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"dad8a909-d142-4d1f-a2c5-4c37cc87955b", ResourceVersion:"815", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 56, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-7db6d8ff4d-c9bkn", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali2c3f72aba2d", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:57:17.145499 containerd[1541]: 2025-01-30 13:57:17.111 [INFO][4623] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.134/32] ContainerID="ade1fd77eab8649e1d9ba2b0e5e57aa13d59910d5a34f3764059be5fad505b26" Namespace="kube-system" Pod="coredns-7db6d8ff4d-c9bkn" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--c9bkn-eth0" Jan 30 13:57:17.145499 containerd[1541]: 2025-01-30 13:57:17.111 [INFO][4623] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2c3f72aba2d ContainerID="ade1fd77eab8649e1d9ba2b0e5e57aa13d59910d5a34f3764059be5fad505b26" Namespace="kube-system" Pod="coredns-7db6d8ff4d-c9bkn" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--c9bkn-eth0" Jan 30 13:57:17.145499 containerd[1541]: 2025-01-30 13:57:17.119 [INFO][4623] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ade1fd77eab8649e1d9ba2b0e5e57aa13d59910d5a34f3764059be5fad505b26" Namespace="kube-system" Pod="coredns-7db6d8ff4d-c9bkn" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--c9bkn-eth0" Jan 30 13:57:17.145499 containerd[1541]: 2025-01-30 13:57:17.119 
[INFO][4623] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="ade1fd77eab8649e1d9ba2b0e5e57aa13d59910d5a34f3764059be5fad505b26" Namespace="kube-system" Pod="coredns-7db6d8ff4d-c9bkn" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--c9bkn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--c9bkn-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"dad8a909-d142-4d1f-a2c5-4c37cc87955b", ResourceVersion:"815", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 56, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"ade1fd77eab8649e1d9ba2b0e5e57aa13d59910d5a34f3764059be5fad505b26", Pod:"coredns-7db6d8ff4d-c9bkn", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali2c3f72aba2d", MAC:"c2:cd:f3:d2:4c:fa", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:57:17.145499 containerd[1541]: 2025-01-30 13:57:17.138 [INFO][4623] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="ade1fd77eab8649e1d9ba2b0e5e57aa13d59910d5a34f3764059be5fad505b26" Namespace="kube-system" Pod="coredns-7db6d8ff4d-c9bkn" WorkloadEndpoint="localhost-k8s-coredns--7db6d8ff4d--c9bkn-eth0" Jan 30 13:57:17.202770 containerd[1541]: time="2025-01-30T13:57:17.202435794Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:57:17.202770 containerd[1541]: time="2025-01-30T13:57:17.202579467Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:57:17.202770 containerd[1541]: time="2025-01-30T13:57:17.202595380Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:57:17.202770 containerd[1541]: time="2025-01-30T13:57:17.202690163Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:57:17.209994 systemd-networkd[1437]: cali87f69a9b50f: Gained IPv6LL Jan 30 13:57:17.220658 containerd[1541]: time="2025-01-30T13:57:17.220581636Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 30 13:57:17.220789 containerd[1541]: time="2025-01-30T13:57:17.220638350Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 30 13:57:17.220789 containerd[1541]: time="2025-01-30T13:57:17.220647238Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:57:17.221397 containerd[1541]: time="2025-01-30T13:57:17.221331879Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 30 13:57:17.242378 systemd[1]: Started cri-containerd-ade1fd77eab8649e1d9ba2b0e5e57aa13d59910d5a34f3764059be5fad505b26.scope - libcontainer container ade1fd77eab8649e1d9ba2b0e5e57aa13d59910d5a34f3764059be5fad505b26. Jan 30 13:57:17.249982 systemd[1]: Started cri-containerd-9df07dee17225ade1ee7fd94b63a025f95281c7857787ba1d6922b0db3ed3658.scope - libcontainer container 9df07dee17225ade1ee7fd94b63a025f95281c7857787ba1d6922b0db3ed3658. Jan 30 13:57:17.261275 systemd-resolved[1438]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 30 13:57:17.268465 systemd-resolved[1438]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 30 13:57:17.304636 containerd[1541]: time="2025-01-30T13:57:17.304496115Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-768b4d69bb-4xhph,Uid:f7a5bac7-0b52-4463-87d2-7adae530692a,Namespace:calico-system,Attempt:1,} returns sandbox id \"9df07dee17225ade1ee7fd94b63a025f95281c7857787ba1d6922b0db3ed3658\"" Jan 30 13:57:17.317863 containerd[1541]: time="2025-01-30T13:57:17.317762015Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-c9bkn,Uid:dad8a909-d142-4d1f-a2c5-4c37cc87955b,Namespace:kube-system,Attempt:1,} returns sandbox id \"ade1fd77eab8649e1d9ba2b0e5e57aa13d59910d5a34f3764059be5fad505b26\"" Jan 30 13:57:17.332835 containerd[1541]: time="2025-01-30T13:57:17.332697374Z" level=info msg="CreateContainer within sandbox \"ade1fd77eab8649e1d9ba2b0e5e57aa13d59910d5a34f3764059be5fad505b26\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 30 13:57:17.399618 containerd[1541]: time="2025-01-30T13:57:17.399529854Z" level=info msg="CreateContainer within sandbox \"ade1fd77eab8649e1d9ba2b0e5e57aa13d59910d5a34f3764059be5fad505b26\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"7cb6e6781c0f121966b8c7ac1e061a4c32e1484842571c892312effa3358d260\"" Jan 30 13:57:17.400274 containerd[1541]: time="2025-01-30T13:57:17.400193011Z" level=info msg="StartContainer for \"7cb6e6781c0f121966b8c7ac1e061a4c32e1484842571c892312effa3358d260\"" Jan 30 13:57:17.407946 kubelet[2781]: I0130 13:57:17.407780 2781 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 30 13:57:17.437384 systemd[1]: Started cri-containerd-7cb6e6781c0f121966b8c7ac1e061a4c32e1484842571c892312effa3358d260.scope - libcontainer container 7cb6e6781c0f121966b8c7ac1e061a4c32e1484842571c892312effa3358d260. 
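
The sandbox and container records above trace the CRI lifecycle: RunPodSandbox returns a sandbox ID, CreateContainer builds the coredns container inside it, StartContainer launches it, and systemd reports the transient cri-containerd-<id>.scope unit as the task starts. The kubelet drives this over the CRI API, but the same flow is visible through containerd's own Go client, which the CRI plugin wraps; a sketch under that assumption (the image reference and IDs are placeholders, not taken from this log):

    package main

    import (
        "context"
        "log"

        "github.com/containerd/containerd"
        "github.com/containerd/containerd/cio"
        "github.com/containerd/containerd/namespaces"
        "github.com/containerd/containerd/oci"
    )

    func main() {
        // The CRI plugin keeps Kubernetes containers under the "k8s.io" namespace.
        client, err := containerd.New("/run/containerd/containerd.sock")
        if err != nil {
            log.Fatal(err)
        }
        defer client.Close()
        ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

        image, err := client.Pull(ctx, "registry.k8s.io/coredns/coredns:v1.11.1",
            containerd.WithPullUnpack)
        if err != nil {
            log.Fatal(err)
        }

        // CreateContainer: metadata, snapshot, and OCI spec; nothing runs yet.
        container, err := client.NewContainer(ctx, "coredns-demo",
            containerd.WithImage(image),
            containerd.WithNewSnapshot("coredns-demo-snap", image),
            containerd.WithNewSpec(oci.WithImageConfig(image)),
        )
        if err != nil {
            log.Fatal(err)
        }
        defer container.Delete(ctx, containerd.WithSnapshotCleanup)

        // StartContainer: the task is the live process; this is the point at
        // which systemd logs the cri-containerd-<id>.scope unit as started.
        task, err := container.NewTask(ctx, cio.NewCreator(cio.WithStdio))
        if err != nil {
            log.Fatal(err)
        }
        defer task.Delete(ctx)
        if err := task.Start(ctx); err != nil {
            log.Fatal(err)
        }
    }
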
Jan 30 13:57:17.487401 containerd[1541]: time="2025-01-30T13:57:17.487338681Z" level=info msg="StartContainer for \"7cb6e6781c0f121966b8c7ac1e061a4c32e1484842571c892312effa3358d260\" returns successfully" Jan 30 13:57:17.593570 systemd-networkd[1437]: calic711333a458: Gained IPv6LL Jan 30 13:57:17.657776 systemd-networkd[1437]: cali7bb2ae87d01: Gained IPv6LL Jan 30 13:57:17.784241 kubelet[2781]: I0130 13:57:17.783542 2781 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-c9bkn" podStartSLOduration=34.783528517 podStartE2EDuration="34.783528517s" podCreationTimestamp="2025-01-30 13:56:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-30 13:57:17.7832732 +0000 UTC m=+50.318883358" watchObservedRunningTime="2025-01-30 13:57:17.783528517 +0000 UTC m=+50.319139178" Jan 30 13:57:18.247246 kernel: bpftool[4861]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Jan 30 13:57:18.275954 containerd[1541]: time="2025-01-30T13:57:18.275920959Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:57:18.276954 containerd[1541]: time="2025-01-30T13:57:18.276899705Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=42001404" Jan 30 13:57:18.277401 containerd[1541]: time="2025-01-30T13:57:18.277376053Z" level=info msg="ImageCreate event name:\"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:57:18.279278 containerd[1541]: time="2025-01-30T13:57:18.279258369Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:57:18.281283 containerd[1541]: time="2025-01-30T13:57:18.281257436Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"43494504\" in 3.19411214s" Jan 30 13:57:18.281283 containerd[1541]: time="2025-01-30T13:57:18.281283392Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\"" Jan 30 13:57:18.285964 containerd[1541]: time="2025-01-30T13:57:18.285926521Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\"" Jan 30 13:57:18.289646 containerd[1541]: time="2025-01-30T13:57:18.289620870Z" level=info msg="CreateContainer within sandbox \"7eed98dae984e5dfae11320987e4cc5f520621af470b662cdfc504f8eb9fdc8f\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jan 30 13:57:18.299895 systemd-networkd[1437]: cali1de2acc5448: Gained IPv6LL Jan 30 13:57:18.303717 containerd[1541]: time="2025-01-30T13:57:18.303689942Z" level=info msg="CreateContainer within sandbox \"7eed98dae984e5dfae11320987e4cc5f520621af470b662cdfc504f8eb9fdc8f\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"d7028db0a1b4719ff9af9c169324c37fa18422aa1311971c58b342346104b622\"" Jan 30 13:57:18.305370 
containerd[1541]: time="2025-01-30T13:57:18.305347243Z" level=info msg="StartContainer for \"d7028db0a1b4719ff9af9c169324c37fa18422aa1311971c58b342346104b622\"" Jan 30 13:57:18.307506 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2579496139.mount: Deactivated successfully. Jan 30 13:57:18.352370 systemd[1]: Started cri-containerd-d7028db0a1b4719ff9af9c169324c37fa18422aa1311971c58b342346104b622.scope - libcontainer container d7028db0a1b4719ff9af9c169324c37fa18422aa1311971c58b342346104b622. Jan 30 13:57:18.387693 containerd[1541]: time="2025-01-30T13:57:18.387664484Z" level=info msg="StartContainer for \"d7028db0a1b4719ff9af9c169324c37fa18422aa1311971c58b342346104b622\" returns successfully" Jan 30 13:57:18.552835 systemd-networkd[1437]: vxlan.calico: Link UP Jan 30 13:57:18.552844 systemd-networkd[1437]: vxlan.calico: Gained carrier Jan 30 13:57:18.558459 systemd-networkd[1437]: cali2c3f72aba2d: Gained IPv6LL Jan 30 13:57:18.794300 kubelet[2781]: I0130 13:57:18.794261 2781 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-57d5fbb54b-5kv8d" podStartSLOduration=26.595099205 podStartE2EDuration="29.794248983s" podCreationTimestamp="2025-01-30 13:56:49 +0000 UTC" firstStartedPulling="2025-01-30 13:57:15.086257766 +0000 UTC m=+47.621867922" lastFinishedPulling="2025-01-30 13:57:18.285407541 +0000 UTC m=+50.821017700" observedRunningTime="2025-01-30 13:57:18.761763718 +0000 UTC m=+51.297373885" watchObservedRunningTime="2025-01-30 13:57:18.794248983 +0000 UTC m=+51.329859143" Jan 30 13:57:18.859161 systemd[1]: run-containerd-runc-k8s.io-d7028db0a1b4719ff9af9c169324c37fa18422aa1311971c58b342346104b622-runc.Y3zrVj.mount: Deactivated successfully. Jan 30 13:57:19.739702 kubelet[2781]: I0130 13:57:19.739668 2781 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 30 13:57:19.876841 containerd[1541]: time="2025-01-30T13:57:19.876800549Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:57:19.884134 containerd[1541]: time="2025-01-30T13:57:19.884091879Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.1: active requests=0, bytes read=7902632" Jan 30 13:57:19.893090 containerd[1541]: time="2025-01-30T13:57:19.893048047Z" level=info msg="ImageCreate event name:\"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:57:19.898391 containerd[1541]: time="2025-01-30T13:57:19.898331424Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:57:19.898868 containerd[1541]: time="2025-01-30T13:57:19.898685103Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.1\" with image id \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\", size \"9395716\" in 1.612734399s" Jan 30 13:57:19.898868 containerd[1541]: time="2025-01-30T13:57:19.898707782Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\" returns image reference \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\"" Jan 30 13:57:19.899737 containerd[1541]: 
time="2025-01-30T13:57:19.899620461Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Jan 30 13:57:19.903293 containerd[1541]: time="2025-01-30T13:57:19.903145780Z" level=info msg="CreateContainer within sandbox \"4e075473d37f144f9e156c40487cba6e4ebe34fd80c76b1abe0a1fe7956dbcdf\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Jan 30 13:57:19.923310 containerd[1541]: time="2025-01-30T13:57:19.923070819Z" level=info msg="CreateContainer within sandbox \"4e075473d37f144f9e156c40487cba6e4ebe34fd80c76b1abe0a1fe7956dbcdf\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"ddb1ec1525465d9ffdefa48a60f599046490d60b3dab4ddc1c8889ee56201f64\"" Jan 30 13:57:19.923904 containerd[1541]: time="2025-01-30T13:57:19.923874666Z" level=info msg="StartContainer for \"ddb1ec1525465d9ffdefa48a60f599046490d60b3dab4ddc1c8889ee56201f64\"" Jan 30 13:57:19.971467 systemd[1]: Started cri-containerd-ddb1ec1525465d9ffdefa48a60f599046490d60b3dab4ddc1c8889ee56201f64.scope - libcontainer container ddb1ec1525465d9ffdefa48a60f599046490d60b3dab4ddc1c8889ee56201f64. Jan 30 13:57:20.000810 containerd[1541]: time="2025-01-30T13:57:20.000598899Z" level=info msg="StartContainer for \"ddb1ec1525465d9ffdefa48a60f599046490d60b3dab4ddc1c8889ee56201f64\" returns successfully" Jan 30 13:57:20.280036 containerd[1541]: time="2025-01-30T13:57:20.279691126Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:57:20.280690 containerd[1541]: time="2025-01-30T13:57:20.280438877Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=77" Jan 30 13:57:20.281522 containerd[1541]: time="2025-01-30T13:57:20.281502067Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"43494504\" in 381.759863ms" Jan 30 13:57:20.281607 containerd[1541]: time="2025-01-30T13:57:20.281522990Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\"" Jan 30 13:57:20.283232 containerd[1541]: time="2025-01-30T13:57:20.282363950Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\"" Jan 30 13:57:20.284196 containerd[1541]: time="2025-01-30T13:57:20.284180019Z" level=info msg="CreateContainer within sandbox \"feb92366601085c03fe65be26e1cdcd9013d2ed030d4513d69973c5b87edc214\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jan 30 13:57:20.298798 containerd[1541]: time="2025-01-30T13:57:20.298761893Z" level=info msg="CreateContainer within sandbox \"feb92366601085c03fe65be26e1cdcd9013d2ed030d4513d69973c5b87edc214\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"ad39b4b532608efb44f4fa2c4d20d096d7afe96e834fc2c1c95e7269f208dd02\"" Jan 30 13:57:20.299757 containerd[1541]: time="2025-01-30T13:57:20.299714670Z" level=info msg="StartContainer for \"ad39b4b532608efb44f4fa2c4d20d096d7afe96e834fc2c1c95e7269f208dd02\"" Jan 30 13:57:20.324341 systemd[1]: Started cri-containerd-ad39b4b532608efb44f4fa2c4d20d096d7afe96e834fc2c1c95e7269f208dd02.scope - libcontainer container 
ad39b4b532608efb44f4fa2c4d20d096d7afe96e834fc2c1c95e7269f208dd02. Jan 30 13:57:20.357752 containerd[1541]: time="2025-01-30T13:57:20.357719673Z" level=info msg="StartContainer for \"ad39b4b532608efb44f4fa2c4d20d096d7afe96e834fc2c1c95e7269f208dd02\" returns successfully" Jan 30 13:57:20.537322 systemd-networkd[1437]: vxlan.calico: Gained IPv6LL Jan 30 13:57:20.751364 kubelet[2781]: I0130 13:57:20.751326 2781 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-57d5fbb54b-s4whm" podStartSLOduration=27.461949364 podStartE2EDuration="31.751311473s" podCreationTimestamp="2025-01-30 13:56:49 +0000 UTC" firstStartedPulling="2025-01-30 13:57:15.992817693 +0000 UTC m=+48.528427850" lastFinishedPulling="2025-01-30 13:57:20.282179803 +0000 UTC m=+52.817789959" observedRunningTime="2025-01-30 13:57:20.750271866 +0000 UTC m=+53.285882032" watchObservedRunningTime="2025-01-30 13:57:20.751311473 +0000 UTC m=+53.286921639" Jan 30 13:57:21.771594 kubelet[2781]: I0130 13:57:21.771472 2781 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 30 13:57:22.675334 containerd[1541]: time="2025-01-30T13:57:22.675294719Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:57:22.694769 containerd[1541]: time="2025-01-30T13:57:22.691607718Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.1: active requests=0, bytes read=34141192" Jan 30 13:57:22.704773 containerd[1541]: time="2025-01-30T13:57:22.704706156Z" level=info msg="ImageCreate event name:\"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:57:22.715826 containerd[1541]: time="2025-01-30T13:57:22.715788339Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:57:22.726438 containerd[1541]: time="2025-01-30T13:57:22.716275601Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" with image id \"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\", size \"35634244\" in 2.433888154s" Jan 30 13:57:22.726438 containerd[1541]: time="2025-01-30T13:57:22.716296768Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" returns image reference \"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\"" Jan 30 13:57:22.726438 containerd[1541]: time="2025-01-30T13:57:22.717245381Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\"" Jan 30 13:57:22.763414 containerd[1541]: time="2025-01-30T13:57:22.763354523Z" level=info msg="CreateContainer within sandbox \"9df07dee17225ade1ee7fd94b63a025f95281c7857787ba1d6922b0db3ed3658\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Jan 30 13:57:23.074389 containerd[1541]: time="2025-01-30T13:57:23.073965508Z" level=info msg="CreateContainer within sandbox \"9df07dee17225ade1ee7fd94b63a025f95281c7857787ba1d6922b0db3ed3658\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id 
\"0cf49a45e7485c5369704a8586bf3cf7050df04ed370c86ab135528e697350f5\"" Jan 30 13:57:23.074767 containerd[1541]: time="2025-01-30T13:57:23.074747110Z" level=info msg="StartContainer for \"0cf49a45e7485c5369704a8586bf3cf7050df04ed370c86ab135528e697350f5\"" Jan 30 13:57:23.100379 systemd[1]: Started cri-containerd-0cf49a45e7485c5369704a8586bf3cf7050df04ed370c86ab135528e697350f5.scope - libcontainer container 0cf49a45e7485c5369704a8586bf3cf7050df04ed370c86ab135528e697350f5. Jan 30 13:57:23.135805 containerd[1541]: time="2025-01-30T13:57:23.135779109Z" level=info msg="StartContainer for \"0cf49a45e7485c5369704a8586bf3cf7050df04ed370c86ab135528e697350f5\" returns successfully" Jan 30 13:57:23.991321 kubelet[2781]: I0130 13:57:23.991226 2781 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-768b4d69bb-4xhph" podStartSLOduration=29.582783798 podStartE2EDuration="34.99087492s" podCreationTimestamp="2025-01-30 13:56:49 +0000 UTC" firstStartedPulling="2025-01-30 13:57:17.308988591 +0000 UTC m=+49.844598749" lastFinishedPulling="2025-01-30 13:57:22.717079709 +0000 UTC m=+55.252689871" observedRunningTime="2025-01-30 13:57:23.948474774 +0000 UTC m=+56.484084933" watchObservedRunningTime="2025-01-30 13:57:23.99087492 +0000 UTC m=+56.526485075" Jan 30 13:57:24.874136 containerd[1541]: time="2025-01-30T13:57:24.874077926Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:57:24.879317 containerd[1541]: time="2025-01-30T13:57:24.879290268Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1: active requests=0, bytes read=10501081" Jan 30 13:57:24.883748 containerd[1541]: time="2025-01-30T13:57:24.883722601Z" level=info msg="ImageCreate event name:\"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:57:24.887590 containerd[1541]: time="2025-01-30T13:57:24.887562327Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 30 13:57:24.888219 containerd[1541]: time="2025-01-30T13:57:24.887905871Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" with image id \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\", size \"11994117\" in 2.170642506s" Jan 30 13:57:24.888219 containerd[1541]: time="2025-01-30T13:57:24.887924703Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" returns image reference \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\"" Jan 30 13:57:24.975426 containerd[1541]: time="2025-01-30T13:57:24.975388093Z" level=info msg="CreateContainer within sandbox \"4e075473d37f144f9e156c40487cba6e4ebe34fd80c76b1abe0a1fe7956dbcdf\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Jan 30 13:57:25.004787 containerd[1541]: time="2025-01-30T13:57:25.004747382Z" level=info msg="CreateContainer within sandbox \"4e075473d37f144f9e156c40487cba6e4ebe34fd80c76b1abe0a1fe7956dbcdf\" for 
&ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"5ba6563c2898860d3affc5b8b83ed567563940b7cae887824923002b40c67325\"" Jan 30 13:57:25.005598 containerd[1541]: time="2025-01-30T13:57:25.005317899Z" level=info msg="StartContainer for \"5ba6563c2898860d3affc5b8b83ed567563940b7cae887824923002b40c67325\"" Jan 30 13:57:25.030350 systemd[1]: Started cri-containerd-5ba6563c2898860d3affc5b8b83ed567563940b7cae887824923002b40c67325.scope - libcontainer container 5ba6563c2898860d3affc5b8b83ed567563940b7cae887824923002b40c67325. Jan 30 13:57:25.046996 containerd[1541]: time="2025-01-30T13:57:25.046965020Z" level=info msg="StartContainer for \"5ba6563c2898860d3affc5b8b83ed567563940b7cae887824923002b40c67325\" returns successfully" Jan 30 13:57:25.820293 kubelet[2781]: I0130 13:57:25.820204 2781 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Jan 30 13:57:25.827543 kubelet[2781]: I0130 13:57:25.827488 2781 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Jan 30 13:57:25.957514 kubelet[2781]: I0130 13:57:25.957209 2781 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-k9nfh" podStartSLOduration=28.056235344 podStartE2EDuration="36.957192414s" podCreationTimestamp="2025-01-30 13:56:49 +0000 UTC" firstStartedPulling="2025-01-30 13:57:15.987406628 +0000 UTC m=+48.523016785" lastFinishedPulling="2025-01-30 13:57:24.888363696 +0000 UTC m=+57.423973855" observedRunningTime="2025-01-30 13:57:25.956641928 +0000 UTC m=+58.492252087" watchObservedRunningTime="2025-01-30 13:57:25.957192414 +0000 UTC m=+58.492802575" Jan 30 13:57:27.838402 containerd[1541]: time="2025-01-30T13:57:27.838370969Z" level=info msg="StopPodSandbox for \"12f0c3aa82930fa4175318f845b303e8879242b1c7ee301dcb58fcffde715c35\"" Jan 30 13:57:28.209066 containerd[1541]: 2025-01-30 13:57:28.119 [WARNING][5218] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="12f0c3aa82930fa4175318f845b303e8879242b1c7ee301dcb58fcffde715c35" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--57d5fbb54b--s4whm-eth0", GenerateName:"calico-apiserver-57d5fbb54b-", Namespace:"calico-apiserver", SelfLink:"", UID:"20fb559c-2d4a-483a-b704-96d08f23fd99", ResourceVersion:"869", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 56, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"57d5fbb54b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"feb92366601085c03fe65be26e1cdcd9013d2ed030d4513d69973c5b87edc214", Pod:"calico-apiserver-57d5fbb54b-s4whm", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calic711333a458", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:57:28.209066 containerd[1541]: 2025-01-30 13:57:28.121 [INFO][5218] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="12f0c3aa82930fa4175318f845b303e8879242b1c7ee301dcb58fcffde715c35" Jan 30 13:57:28.209066 containerd[1541]: 2025-01-30 13:57:28.121 [INFO][5218] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="12f0c3aa82930fa4175318f845b303e8879242b1c7ee301dcb58fcffde715c35" iface="eth0" netns="" Jan 30 13:57:28.209066 containerd[1541]: 2025-01-30 13:57:28.121 [INFO][5218] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="12f0c3aa82930fa4175318f845b303e8879242b1c7ee301dcb58fcffde715c35" Jan 30 13:57:28.209066 containerd[1541]: 2025-01-30 13:57:28.121 [INFO][5218] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="12f0c3aa82930fa4175318f845b303e8879242b1c7ee301dcb58fcffde715c35" Jan 30 13:57:28.209066 containerd[1541]: 2025-01-30 13:57:28.196 [INFO][5224] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="12f0c3aa82930fa4175318f845b303e8879242b1c7ee301dcb58fcffde715c35" HandleID="k8s-pod-network.12f0c3aa82930fa4175318f845b303e8879242b1c7ee301dcb58fcffde715c35" Workload="localhost-k8s-calico--apiserver--57d5fbb54b--s4whm-eth0" Jan 30 13:57:28.209066 containerd[1541]: 2025-01-30 13:57:28.198 [INFO][5224] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:57:28.209066 containerd[1541]: 2025-01-30 13:57:28.198 [INFO][5224] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 13:57:28.209066 containerd[1541]: 2025-01-30 13:57:28.204 [WARNING][5224] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="12f0c3aa82930fa4175318f845b303e8879242b1c7ee301dcb58fcffde715c35" HandleID="k8s-pod-network.12f0c3aa82930fa4175318f845b303e8879242b1c7ee301dcb58fcffde715c35" Workload="localhost-k8s-calico--apiserver--57d5fbb54b--s4whm-eth0" Jan 30 13:57:28.209066 containerd[1541]: 2025-01-30 13:57:28.204 [INFO][5224] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="12f0c3aa82930fa4175318f845b303e8879242b1c7ee301dcb58fcffde715c35" HandleID="k8s-pod-network.12f0c3aa82930fa4175318f845b303e8879242b1c7ee301dcb58fcffde715c35" Workload="localhost-k8s-calico--apiserver--57d5fbb54b--s4whm-eth0" Jan 30 13:57:28.209066 containerd[1541]: 2025-01-30 13:57:28.205 [INFO][5224] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 13:57:28.209066 containerd[1541]: 2025-01-30 13:57:28.207 [INFO][5218] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="12f0c3aa82930fa4175318f845b303e8879242b1c7ee301dcb58fcffde715c35" Jan 30 13:57:28.212670 containerd[1541]: time="2025-01-30T13:57:28.212621563Z" level=info msg="TearDown network for sandbox \"12f0c3aa82930fa4175318f845b303e8879242b1c7ee301dcb58fcffde715c35\" successfully" Jan 30 13:57:28.212670 containerd[1541]: time="2025-01-30T13:57:28.212655685Z" level=info msg="StopPodSandbox for \"12f0c3aa82930fa4175318f845b303e8879242b1c7ee301dcb58fcffde715c35\" returns successfully" Jan 30 13:57:28.258611 containerd[1541]: time="2025-01-30T13:57:28.258576839Z" level=info msg="RemovePodSandbox for \"12f0c3aa82930fa4175318f845b303e8879242b1c7ee301dcb58fcffde715c35\"" Jan 30 13:57:28.258707 containerd[1541]: time="2025-01-30T13:57:28.258623978Z" level=info msg="Forcibly stopping sandbox \"12f0c3aa82930fa4175318f845b303e8879242b1c7ee301dcb58fcffde715c35\"" Jan 30 13:57:28.339800 containerd[1541]: 2025-01-30 13:57:28.303 [WARNING][5242] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="12f0c3aa82930fa4175318f845b303e8879242b1c7ee301dcb58fcffde715c35" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--57d5fbb54b--s4whm-eth0", GenerateName:"calico-apiserver-57d5fbb54b-", Namespace:"calico-apiserver", SelfLink:"", UID:"20fb559c-2d4a-483a-b704-96d08f23fd99", ResourceVersion:"869", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 56, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"57d5fbb54b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"feb92366601085c03fe65be26e1cdcd9013d2ed030d4513d69973c5b87edc214", Pod:"calico-apiserver-57d5fbb54b-s4whm", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calic711333a458", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:57:28.339800 containerd[1541]: 2025-01-30 13:57:28.303 [INFO][5242] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="12f0c3aa82930fa4175318f845b303e8879242b1c7ee301dcb58fcffde715c35" Jan 30 13:57:28.339800 containerd[1541]: 2025-01-30 13:57:28.303 [INFO][5242] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="12f0c3aa82930fa4175318f845b303e8879242b1c7ee301dcb58fcffde715c35" iface="eth0" netns="" Jan 30 13:57:28.339800 containerd[1541]: 2025-01-30 13:57:28.303 [INFO][5242] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="12f0c3aa82930fa4175318f845b303e8879242b1c7ee301dcb58fcffde715c35" Jan 30 13:57:28.339800 containerd[1541]: 2025-01-30 13:57:28.303 [INFO][5242] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="12f0c3aa82930fa4175318f845b303e8879242b1c7ee301dcb58fcffde715c35" Jan 30 13:57:28.339800 containerd[1541]: 2025-01-30 13:57:28.332 [INFO][5249] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="12f0c3aa82930fa4175318f845b303e8879242b1c7ee301dcb58fcffde715c35" HandleID="k8s-pod-network.12f0c3aa82930fa4175318f845b303e8879242b1c7ee301dcb58fcffde715c35" Workload="localhost-k8s-calico--apiserver--57d5fbb54b--s4whm-eth0" Jan 30 13:57:28.339800 containerd[1541]: 2025-01-30 13:57:28.332 [INFO][5249] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:57:28.339800 containerd[1541]: 2025-01-30 13:57:28.332 [INFO][5249] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 13:57:28.339800 containerd[1541]: 2025-01-30 13:57:28.336 [WARNING][5249] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="12f0c3aa82930fa4175318f845b303e8879242b1c7ee301dcb58fcffde715c35" HandleID="k8s-pod-network.12f0c3aa82930fa4175318f845b303e8879242b1c7ee301dcb58fcffde715c35" Workload="localhost-k8s-calico--apiserver--57d5fbb54b--s4whm-eth0" Jan 30 13:57:28.339800 containerd[1541]: 2025-01-30 13:57:28.336 [INFO][5249] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="12f0c3aa82930fa4175318f845b303e8879242b1c7ee301dcb58fcffde715c35" HandleID="k8s-pod-network.12f0c3aa82930fa4175318f845b303e8879242b1c7ee301dcb58fcffde715c35" Workload="localhost-k8s-calico--apiserver--57d5fbb54b--s4whm-eth0" Jan 30 13:57:28.339800 containerd[1541]: 2025-01-30 13:57:28.337 [INFO][5249] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 13:57:28.339800 containerd[1541]: 2025-01-30 13:57:28.338 [INFO][5242] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="12f0c3aa82930fa4175318f845b303e8879242b1c7ee301dcb58fcffde715c35" Jan 30 13:57:28.344366 containerd[1541]: time="2025-01-30T13:57:28.339830271Z" level=info msg="TearDown network for sandbox \"12f0c3aa82930fa4175318f845b303e8879242b1c7ee301dcb58fcffde715c35\" successfully" Jan 30 13:57:28.354162 containerd[1541]: time="2025-01-30T13:57:28.354132215Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"12f0c3aa82930fa4175318f845b303e8879242b1c7ee301dcb58fcffde715c35\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 30 13:57:28.373156 containerd[1541]: time="2025-01-30T13:57:28.373120481Z" level=info msg="RemovePodSandbox \"12f0c3aa82930fa4175318f845b303e8879242b1c7ee301dcb58fcffde715c35\" returns successfully" Jan 30 13:57:28.377043 containerd[1541]: time="2025-01-30T13:57:28.377020668Z" level=info msg="StopPodSandbox for \"c7605ff688b97b49659fbcca018dc7a79c5671a064117372177a6d4472ab57d7\"" Jan 30 13:57:28.444900 containerd[1541]: 2025-01-30 13:57:28.405 [WARNING][5267] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="c7605ff688b97b49659fbcca018dc7a79c5671a064117372177a6d4472ab57d7" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--c9bkn-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"dad8a909-d142-4d1f-a2c5-4c37cc87955b", ResourceVersion:"854", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 56, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"ade1fd77eab8649e1d9ba2b0e5e57aa13d59910d5a34f3764059be5fad505b26", Pod:"coredns-7db6d8ff4d-c9bkn", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali2c3f72aba2d", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:57:28.444900 containerd[1541]: 2025-01-30 13:57:28.405 [INFO][5267] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="c7605ff688b97b49659fbcca018dc7a79c5671a064117372177a6d4472ab57d7" Jan 30 13:57:28.444900 containerd[1541]: 2025-01-30 13:57:28.405 [INFO][5267] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="c7605ff688b97b49659fbcca018dc7a79c5671a064117372177a6d4472ab57d7" iface="eth0" netns="" Jan 30 13:57:28.444900 containerd[1541]: 2025-01-30 13:57:28.405 [INFO][5267] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="c7605ff688b97b49659fbcca018dc7a79c5671a064117372177a6d4472ab57d7" Jan 30 13:57:28.444900 containerd[1541]: 2025-01-30 13:57:28.405 [INFO][5267] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c7605ff688b97b49659fbcca018dc7a79c5671a064117372177a6d4472ab57d7" Jan 30 13:57:28.444900 containerd[1541]: 2025-01-30 13:57:28.436 [INFO][5273] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c7605ff688b97b49659fbcca018dc7a79c5671a064117372177a6d4472ab57d7" HandleID="k8s-pod-network.c7605ff688b97b49659fbcca018dc7a79c5671a064117372177a6d4472ab57d7" Workload="localhost-k8s-coredns--7db6d8ff4d--c9bkn-eth0" Jan 30 13:57:28.444900 containerd[1541]: 2025-01-30 13:57:28.436 [INFO][5273] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:57:28.444900 containerd[1541]: 2025-01-30 13:57:28.437 [INFO][5273] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 30 13:57:28.444900 containerd[1541]: 2025-01-30 13:57:28.440 [WARNING][5273] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="c7605ff688b97b49659fbcca018dc7a79c5671a064117372177a6d4472ab57d7" HandleID="k8s-pod-network.c7605ff688b97b49659fbcca018dc7a79c5671a064117372177a6d4472ab57d7" Workload="localhost-k8s-coredns--7db6d8ff4d--c9bkn-eth0" Jan 30 13:57:28.444900 containerd[1541]: 2025-01-30 13:57:28.440 [INFO][5273] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c7605ff688b97b49659fbcca018dc7a79c5671a064117372177a6d4472ab57d7" HandleID="k8s-pod-network.c7605ff688b97b49659fbcca018dc7a79c5671a064117372177a6d4472ab57d7" Workload="localhost-k8s-coredns--7db6d8ff4d--c9bkn-eth0" Jan 30 13:57:28.444900 containerd[1541]: 2025-01-30 13:57:28.440 [INFO][5273] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 13:57:28.444900 containerd[1541]: 2025-01-30 13:57:28.442 [INFO][5267] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="c7605ff688b97b49659fbcca018dc7a79c5671a064117372177a6d4472ab57d7" Jan 30 13:57:28.446931 containerd[1541]: time="2025-01-30T13:57:28.444944177Z" level=info msg="TearDown network for sandbox \"c7605ff688b97b49659fbcca018dc7a79c5671a064117372177a6d4472ab57d7\" successfully" Jan 30 13:57:28.446931 containerd[1541]: time="2025-01-30T13:57:28.444962672Z" level=info msg="StopPodSandbox for \"c7605ff688b97b49659fbcca018dc7a79c5671a064117372177a6d4472ab57d7\" returns successfully" Jan 30 13:57:28.446931 containerd[1541]: time="2025-01-30T13:57:28.445389129Z" level=info msg="RemovePodSandbox for \"c7605ff688b97b49659fbcca018dc7a79c5671a064117372177a6d4472ab57d7\"" Jan 30 13:57:28.446931 containerd[1541]: time="2025-01-30T13:57:28.445403518Z" level=info msg="Forcibly stopping sandbox \"c7605ff688b97b49659fbcca018dc7a79c5671a064117372177a6d4472ab57d7\"" Jan 30 13:57:28.510351 containerd[1541]: 2025-01-30 13:57:28.480 [WARNING][5292] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="c7605ff688b97b49659fbcca018dc7a79c5671a064117372177a6d4472ab57d7" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--c9bkn-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"dad8a909-d142-4d1f-a2c5-4c37cc87955b", ResourceVersion:"854", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 56, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"ade1fd77eab8649e1d9ba2b0e5e57aa13d59910d5a34f3764059be5fad505b26", Pod:"coredns-7db6d8ff4d-c9bkn", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali2c3f72aba2d", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:57:28.510351 containerd[1541]: 2025-01-30 13:57:28.480 [INFO][5292] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="c7605ff688b97b49659fbcca018dc7a79c5671a064117372177a6d4472ab57d7" Jan 30 13:57:28.510351 containerd[1541]: 2025-01-30 13:57:28.480 [INFO][5292] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="c7605ff688b97b49659fbcca018dc7a79c5671a064117372177a6d4472ab57d7" iface="eth0" netns="" Jan 30 13:57:28.510351 containerd[1541]: 2025-01-30 13:57:28.480 [INFO][5292] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="c7605ff688b97b49659fbcca018dc7a79c5671a064117372177a6d4472ab57d7" Jan 30 13:57:28.510351 containerd[1541]: 2025-01-30 13:57:28.480 [INFO][5292] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="c7605ff688b97b49659fbcca018dc7a79c5671a064117372177a6d4472ab57d7" Jan 30 13:57:28.510351 containerd[1541]: 2025-01-30 13:57:28.504 [INFO][5298] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="c7605ff688b97b49659fbcca018dc7a79c5671a064117372177a6d4472ab57d7" HandleID="k8s-pod-network.c7605ff688b97b49659fbcca018dc7a79c5671a064117372177a6d4472ab57d7" Workload="localhost-k8s-coredns--7db6d8ff4d--c9bkn-eth0" Jan 30 13:57:28.510351 containerd[1541]: 2025-01-30 13:57:28.504 [INFO][5298] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:57:28.510351 containerd[1541]: 2025-01-30 13:57:28.504 [INFO][5298] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 30 13:57:28.510351 containerd[1541]: 2025-01-30 13:57:28.507 [WARNING][5298] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="c7605ff688b97b49659fbcca018dc7a79c5671a064117372177a6d4472ab57d7" HandleID="k8s-pod-network.c7605ff688b97b49659fbcca018dc7a79c5671a064117372177a6d4472ab57d7" Workload="localhost-k8s-coredns--7db6d8ff4d--c9bkn-eth0" Jan 30 13:57:28.510351 containerd[1541]: 2025-01-30 13:57:28.507 [INFO][5298] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="c7605ff688b97b49659fbcca018dc7a79c5671a064117372177a6d4472ab57d7" HandleID="k8s-pod-network.c7605ff688b97b49659fbcca018dc7a79c5671a064117372177a6d4472ab57d7" Workload="localhost-k8s-coredns--7db6d8ff4d--c9bkn-eth0" Jan 30 13:57:28.510351 containerd[1541]: 2025-01-30 13:57:28.508 [INFO][5298] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 13:57:28.510351 containerd[1541]: 2025-01-30 13:57:28.509 [INFO][5292] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="c7605ff688b97b49659fbcca018dc7a79c5671a064117372177a6d4472ab57d7" Jan 30 13:57:28.510351 containerd[1541]: time="2025-01-30T13:57:28.510110787Z" level=info msg="TearDown network for sandbox \"c7605ff688b97b49659fbcca018dc7a79c5671a064117372177a6d4472ab57d7\" successfully" Jan 30 13:57:28.540504 containerd[1541]: time="2025-01-30T13:57:28.540472838Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c7605ff688b97b49659fbcca018dc7a79c5671a064117372177a6d4472ab57d7\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 30 13:57:28.540679 containerd[1541]: time="2025-01-30T13:57:28.540523968Z" level=info msg="RemovePodSandbox \"c7605ff688b97b49659fbcca018dc7a79c5671a064117372177a6d4472ab57d7\" returns successfully" Jan 30 13:57:28.540901 containerd[1541]: time="2025-01-30T13:57:28.540838504Z" level=info msg="StopPodSandbox for \"50a9b3c28d70d1a436a7d96035e9cf3094699b4cbe8bf395b0b0e2c2f7fe11e9\"" Jan 30 13:57:28.640023 containerd[1541]: 2025-01-30 13:57:28.605 [WARNING][5316] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="50a9b3c28d70d1a436a7d96035e9cf3094699b4cbe8bf395b0b0e2c2f7fe11e9" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--5pf6d-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"7dc3c7ed-e794-482c-b0b8-b5cd489cc602", ResourceVersion:"819", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 56, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"7dd19db55d741cc2eefa42ddcaa6e607a95be05134c1d15949944cf62ab2ed5a", Pod:"coredns-7db6d8ff4d-5pf6d", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali7bb2ae87d01", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:57:28.640023 containerd[1541]: 2025-01-30 13:57:28.605 [INFO][5316] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="50a9b3c28d70d1a436a7d96035e9cf3094699b4cbe8bf395b0b0e2c2f7fe11e9" Jan 30 13:57:28.640023 containerd[1541]: 2025-01-30 13:57:28.605 [INFO][5316] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="50a9b3c28d70d1a436a7d96035e9cf3094699b4cbe8bf395b0b0e2c2f7fe11e9" iface="eth0" netns="" Jan 30 13:57:28.640023 containerd[1541]: 2025-01-30 13:57:28.605 [INFO][5316] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="50a9b3c28d70d1a436a7d96035e9cf3094699b4cbe8bf395b0b0e2c2f7fe11e9" Jan 30 13:57:28.640023 containerd[1541]: 2025-01-30 13:57:28.605 [INFO][5316] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="50a9b3c28d70d1a436a7d96035e9cf3094699b4cbe8bf395b0b0e2c2f7fe11e9" Jan 30 13:57:28.640023 containerd[1541]: 2025-01-30 13:57:28.634 [INFO][5322] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="50a9b3c28d70d1a436a7d96035e9cf3094699b4cbe8bf395b0b0e2c2f7fe11e9" HandleID="k8s-pod-network.50a9b3c28d70d1a436a7d96035e9cf3094699b4cbe8bf395b0b0e2c2f7fe11e9" Workload="localhost-k8s-coredns--7db6d8ff4d--5pf6d-eth0" Jan 30 13:57:28.640023 containerd[1541]: 2025-01-30 13:57:28.634 [INFO][5322] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:57:28.640023 containerd[1541]: 2025-01-30 13:57:28.634 [INFO][5322] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 30 13:57:28.640023 containerd[1541]: 2025-01-30 13:57:28.637 [WARNING][5322] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="50a9b3c28d70d1a436a7d96035e9cf3094699b4cbe8bf395b0b0e2c2f7fe11e9" HandleID="k8s-pod-network.50a9b3c28d70d1a436a7d96035e9cf3094699b4cbe8bf395b0b0e2c2f7fe11e9" Workload="localhost-k8s-coredns--7db6d8ff4d--5pf6d-eth0" Jan 30 13:57:28.640023 containerd[1541]: 2025-01-30 13:57:28.637 [INFO][5322] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="50a9b3c28d70d1a436a7d96035e9cf3094699b4cbe8bf395b0b0e2c2f7fe11e9" HandleID="k8s-pod-network.50a9b3c28d70d1a436a7d96035e9cf3094699b4cbe8bf395b0b0e2c2f7fe11e9" Workload="localhost-k8s-coredns--7db6d8ff4d--5pf6d-eth0" Jan 30 13:57:28.640023 containerd[1541]: 2025-01-30 13:57:28.638 [INFO][5322] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 13:57:28.640023 containerd[1541]: 2025-01-30 13:57:28.639 [INFO][5316] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="50a9b3c28d70d1a436a7d96035e9cf3094699b4cbe8bf395b0b0e2c2f7fe11e9" Jan 30 13:57:28.640479 containerd[1541]: time="2025-01-30T13:57:28.640046362Z" level=info msg="TearDown network for sandbox \"50a9b3c28d70d1a436a7d96035e9cf3094699b4cbe8bf395b0b0e2c2f7fe11e9\" successfully" Jan 30 13:57:28.640479 containerd[1541]: time="2025-01-30T13:57:28.640061399Z" level=info msg="StopPodSandbox for \"50a9b3c28d70d1a436a7d96035e9cf3094699b4cbe8bf395b0b0e2c2f7fe11e9\" returns successfully" Jan 30 13:57:28.640479 containerd[1541]: time="2025-01-30T13:57:28.640430224Z" level=info msg="RemovePodSandbox for \"50a9b3c28d70d1a436a7d96035e9cf3094699b4cbe8bf395b0b0e2c2f7fe11e9\"" Jan 30 13:57:28.640479 containerd[1541]: time="2025-01-30T13:57:28.640446337Z" level=info msg="Forcibly stopping sandbox \"50a9b3c28d70d1a436a7d96035e9cf3094699b4cbe8bf395b0b0e2c2f7fe11e9\"" Jan 30 13:57:28.682878 containerd[1541]: 2025-01-30 13:57:28.659 [WARNING][5340] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="50a9b3c28d70d1a436a7d96035e9cf3094699b4cbe8bf395b0b0e2c2f7fe11e9" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--7db6d8ff4d--5pf6d-eth0", GenerateName:"coredns-7db6d8ff4d-", Namespace:"kube-system", SelfLink:"", UID:"7dc3c7ed-e794-482c-b0b8-b5cd489cc602", ResourceVersion:"819", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 56, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7db6d8ff4d", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"7dd19db55d741cc2eefa42ddcaa6e607a95be05134c1d15949944cf62ab2ed5a", Pod:"coredns-7db6d8ff4d-5pf6d", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali7bb2ae87d01", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:57:28.682878 containerd[1541]: 2025-01-30 13:57:28.659 [INFO][5340] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="50a9b3c28d70d1a436a7d96035e9cf3094699b4cbe8bf395b0b0e2c2f7fe11e9" Jan 30 13:57:28.682878 containerd[1541]: 2025-01-30 13:57:28.659 [INFO][5340] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="50a9b3c28d70d1a436a7d96035e9cf3094699b4cbe8bf395b0b0e2c2f7fe11e9" iface="eth0" netns="" Jan 30 13:57:28.682878 containerd[1541]: 2025-01-30 13:57:28.659 [INFO][5340] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="50a9b3c28d70d1a436a7d96035e9cf3094699b4cbe8bf395b0b0e2c2f7fe11e9" Jan 30 13:57:28.682878 containerd[1541]: 2025-01-30 13:57:28.659 [INFO][5340] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="50a9b3c28d70d1a436a7d96035e9cf3094699b4cbe8bf395b0b0e2c2f7fe11e9" Jan 30 13:57:28.682878 containerd[1541]: 2025-01-30 13:57:28.676 [INFO][5346] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="50a9b3c28d70d1a436a7d96035e9cf3094699b4cbe8bf395b0b0e2c2f7fe11e9" HandleID="k8s-pod-network.50a9b3c28d70d1a436a7d96035e9cf3094699b4cbe8bf395b0b0e2c2f7fe11e9" Workload="localhost-k8s-coredns--7db6d8ff4d--5pf6d-eth0" Jan 30 13:57:28.682878 containerd[1541]: 2025-01-30 13:57:28.676 [INFO][5346] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:57:28.682878 containerd[1541]: 2025-01-30 13:57:28.676 [INFO][5346] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 30 13:57:28.682878 containerd[1541]: 2025-01-30 13:57:28.680 [WARNING][5346] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="50a9b3c28d70d1a436a7d96035e9cf3094699b4cbe8bf395b0b0e2c2f7fe11e9" HandleID="k8s-pod-network.50a9b3c28d70d1a436a7d96035e9cf3094699b4cbe8bf395b0b0e2c2f7fe11e9" Workload="localhost-k8s-coredns--7db6d8ff4d--5pf6d-eth0" Jan 30 13:57:28.682878 containerd[1541]: 2025-01-30 13:57:28.680 [INFO][5346] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="50a9b3c28d70d1a436a7d96035e9cf3094699b4cbe8bf395b0b0e2c2f7fe11e9" HandleID="k8s-pod-network.50a9b3c28d70d1a436a7d96035e9cf3094699b4cbe8bf395b0b0e2c2f7fe11e9" Workload="localhost-k8s-coredns--7db6d8ff4d--5pf6d-eth0" Jan 30 13:57:28.682878 containerd[1541]: 2025-01-30 13:57:28.680 [INFO][5346] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 13:57:28.682878 containerd[1541]: 2025-01-30 13:57:28.681 [INFO][5340] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="50a9b3c28d70d1a436a7d96035e9cf3094699b4cbe8bf395b0b0e2c2f7fe11e9" Jan 30 13:57:28.682878 containerd[1541]: time="2025-01-30T13:57:28.682624648Z" level=info msg="TearDown network for sandbox \"50a9b3c28d70d1a436a7d96035e9cf3094699b4cbe8bf395b0b0e2c2f7fe11e9\" successfully" Jan 30 13:57:28.700959 containerd[1541]: time="2025-01-30T13:57:28.700906611Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"50a9b3c28d70d1a436a7d96035e9cf3094699b4cbe8bf395b0b0e2c2f7fe11e9\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 30 13:57:28.700959 containerd[1541]: time="2025-01-30T13:57:28.700950045Z" level=info msg="RemovePodSandbox \"50a9b3c28d70d1a436a7d96035e9cf3094699b4cbe8bf395b0b0e2c2f7fe11e9\" returns successfully" Jan 30 13:57:28.701283 containerd[1541]: time="2025-01-30T13:57:28.701268816Z" level=info msg="StopPodSandbox for \"ac6b56d56d94abac2dba7237f48aedc79ec5bbcf241d73a82ed29aca3f6f8846\"" Jan 30 13:57:28.745812 containerd[1541]: 2025-01-30 13:57:28.723 [WARNING][5365] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="ac6b56d56d94abac2dba7237f48aedc79ec5bbcf241d73a82ed29aca3f6f8846" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--57d5fbb54b--5kv8d-eth0", GenerateName:"calico-apiserver-57d5fbb54b-", Namespace:"calico-apiserver", SelfLink:"", UID:"87735560-600e-4fca-8313-7ffee3249515", ResourceVersion:"851", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 56, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"57d5fbb54b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"7eed98dae984e5dfae11320987e4cc5f520621af470b662cdfc504f8eb9fdc8f", Pod:"calico-apiserver-57d5fbb54b-5kv8d", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali194e6286628", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:57:28.745812 containerd[1541]: 2025-01-30 13:57:28.723 [INFO][5365] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="ac6b56d56d94abac2dba7237f48aedc79ec5bbcf241d73a82ed29aca3f6f8846" Jan 30 13:57:28.745812 containerd[1541]: 2025-01-30 13:57:28.723 [INFO][5365] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="ac6b56d56d94abac2dba7237f48aedc79ec5bbcf241d73a82ed29aca3f6f8846" iface="eth0" netns="" Jan 30 13:57:28.745812 containerd[1541]: 2025-01-30 13:57:28.723 [INFO][5365] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="ac6b56d56d94abac2dba7237f48aedc79ec5bbcf241d73a82ed29aca3f6f8846" Jan 30 13:57:28.745812 containerd[1541]: 2025-01-30 13:57:28.723 [INFO][5365] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ac6b56d56d94abac2dba7237f48aedc79ec5bbcf241d73a82ed29aca3f6f8846" Jan 30 13:57:28.745812 containerd[1541]: 2025-01-30 13:57:28.737 [INFO][5371] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ac6b56d56d94abac2dba7237f48aedc79ec5bbcf241d73a82ed29aca3f6f8846" HandleID="k8s-pod-network.ac6b56d56d94abac2dba7237f48aedc79ec5bbcf241d73a82ed29aca3f6f8846" Workload="localhost-k8s-calico--apiserver--57d5fbb54b--5kv8d-eth0" Jan 30 13:57:28.745812 containerd[1541]: 2025-01-30 13:57:28.738 [INFO][5371] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:57:28.745812 containerd[1541]: 2025-01-30 13:57:28.738 [INFO][5371] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 13:57:28.745812 containerd[1541]: 2025-01-30 13:57:28.742 [WARNING][5371] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ac6b56d56d94abac2dba7237f48aedc79ec5bbcf241d73a82ed29aca3f6f8846" HandleID="k8s-pod-network.ac6b56d56d94abac2dba7237f48aedc79ec5bbcf241d73a82ed29aca3f6f8846" Workload="localhost-k8s-calico--apiserver--57d5fbb54b--5kv8d-eth0" Jan 30 13:57:28.745812 containerd[1541]: 2025-01-30 13:57:28.742 [INFO][5371] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ac6b56d56d94abac2dba7237f48aedc79ec5bbcf241d73a82ed29aca3f6f8846" HandleID="k8s-pod-network.ac6b56d56d94abac2dba7237f48aedc79ec5bbcf241d73a82ed29aca3f6f8846" Workload="localhost-k8s-calico--apiserver--57d5fbb54b--5kv8d-eth0" Jan 30 13:57:28.745812 containerd[1541]: 2025-01-30 13:57:28.743 [INFO][5371] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 13:57:28.745812 containerd[1541]: 2025-01-30 13:57:28.744 [INFO][5365] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="ac6b56d56d94abac2dba7237f48aedc79ec5bbcf241d73a82ed29aca3f6f8846" Jan 30 13:57:28.745812 containerd[1541]: time="2025-01-30T13:57:28.745548020Z" level=info msg="TearDown network for sandbox \"ac6b56d56d94abac2dba7237f48aedc79ec5bbcf241d73a82ed29aca3f6f8846\" successfully" Jan 30 13:57:28.745812 containerd[1541]: time="2025-01-30T13:57:28.745568733Z" level=info msg="StopPodSandbox for \"ac6b56d56d94abac2dba7237f48aedc79ec5bbcf241d73a82ed29aca3f6f8846\" returns successfully" Jan 30 13:57:28.754994 containerd[1541]: time="2025-01-30T13:57:28.745956120Z" level=info msg="RemovePodSandbox for \"ac6b56d56d94abac2dba7237f48aedc79ec5bbcf241d73a82ed29aca3f6f8846\"" Jan 30 13:57:28.754994 containerd[1541]: time="2025-01-30T13:57:28.745975823Z" level=info msg="Forcibly stopping sandbox \"ac6b56d56d94abac2dba7237f48aedc79ec5bbcf241d73a82ed29aca3f6f8846\"" Jan 30 13:57:28.820594 containerd[1541]: 2025-01-30 13:57:28.779 [WARNING][5389] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="ac6b56d56d94abac2dba7237f48aedc79ec5bbcf241d73a82ed29aca3f6f8846" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--57d5fbb54b--5kv8d-eth0", GenerateName:"calico-apiserver-57d5fbb54b-", Namespace:"calico-apiserver", SelfLink:"", UID:"87735560-600e-4fca-8313-7ffee3249515", ResourceVersion:"851", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 56, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"57d5fbb54b", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"7eed98dae984e5dfae11320987e4cc5f520621af470b662cdfc504f8eb9fdc8f", Pod:"calico-apiserver-57d5fbb54b-5kv8d", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali194e6286628", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:57:28.820594 containerd[1541]: 2025-01-30 13:57:28.780 [INFO][5389] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="ac6b56d56d94abac2dba7237f48aedc79ec5bbcf241d73a82ed29aca3f6f8846" Jan 30 13:57:28.820594 containerd[1541]: 2025-01-30 13:57:28.780 [INFO][5389] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="ac6b56d56d94abac2dba7237f48aedc79ec5bbcf241d73a82ed29aca3f6f8846" iface="eth0" netns="" Jan 30 13:57:28.820594 containerd[1541]: 2025-01-30 13:57:28.780 [INFO][5389] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="ac6b56d56d94abac2dba7237f48aedc79ec5bbcf241d73a82ed29aca3f6f8846" Jan 30 13:57:28.820594 containerd[1541]: 2025-01-30 13:57:28.780 [INFO][5389] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ac6b56d56d94abac2dba7237f48aedc79ec5bbcf241d73a82ed29aca3f6f8846" Jan 30 13:57:28.820594 containerd[1541]: 2025-01-30 13:57:28.796 [INFO][5396] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="ac6b56d56d94abac2dba7237f48aedc79ec5bbcf241d73a82ed29aca3f6f8846" HandleID="k8s-pod-network.ac6b56d56d94abac2dba7237f48aedc79ec5bbcf241d73a82ed29aca3f6f8846" Workload="localhost-k8s-calico--apiserver--57d5fbb54b--5kv8d-eth0" Jan 30 13:57:28.820594 containerd[1541]: 2025-01-30 13:57:28.796 [INFO][5396] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:57:28.820594 containerd[1541]: 2025-01-30 13:57:28.796 [INFO][5396] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 13:57:28.820594 containerd[1541]: 2025-01-30 13:57:28.817 [WARNING][5396] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="ac6b56d56d94abac2dba7237f48aedc79ec5bbcf241d73a82ed29aca3f6f8846" HandleID="k8s-pod-network.ac6b56d56d94abac2dba7237f48aedc79ec5bbcf241d73a82ed29aca3f6f8846" Workload="localhost-k8s-calico--apiserver--57d5fbb54b--5kv8d-eth0" Jan 30 13:57:28.820594 containerd[1541]: 2025-01-30 13:57:28.817 [INFO][5396] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="ac6b56d56d94abac2dba7237f48aedc79ec5bbcf241d73a82ed29aca3f6f8846" HandleID="k8s-pod-network.ac6b56d56d94abac2dba7237f48aedc79ec5bbcf241d73a82ed29aca3f6f8846" Workload="localhost-k8s-calico--apiserver--57d5fbb54b--5kv8d-eth0" Jan 30 13:57:28.820594 containerd[1541]: 2025-01-30 13:57:28.818 [INFO][5396] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 13:57:28.820594 containerd[1541]: 2025-01-30 13:57:28.819 [INFO][5389] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="ac6b56d56d94abac2dba7237f48aedc79ec5bbcf241d73a82ed29aca3f6f8846" Jan 30 13:57:28.821022 containerd[1541]: time="2025-01-30T13:57:28.820595723Z" level=info msg="TearDown network for sandbox \"ac6b56d56d94abac2dba7237f48aedc79ec5bbcf241d73a82ed29aca3f6f8846\" successfully" Jan 30 13:57:28.848889 containerd[1541]: time="2025-01-30T13:57:28.848859262Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"ac6b56d56d94abac2dba7237f48aedc79ec5bbcf241d73a82ed29aca3f6f8846\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 30 13:57:28.849148 containerd[1541]: time="2025-01-30T13:57:28.848905103Z" level=info msg="RemovePodSandbox \"ac6b56d56d94abac2dba7237f48aedc79ec5bbcf241d73a82ed29aca3f6f8846\" returns successfully" Jan 30 13:57:28.856295 containerd[1541]: time="2025-01-30T13:57:28.849171547Z" level=info msg="StopPodSandbox for \"e0b0a230491599d1ea528cfd166e38d33740092bcb1720c37657ae8f6a606737\"" Jan 30 13:57:28.922497 containerd[1541]: 2025-01-30 13:57:28.889 [WARNING][5415] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="e0b0a230491599d1ea528cfd166e38d33740092bcb1720c37657ae8f6a606737" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--768b4d69bb--4xhph-eth0", GenerateName:"calico-kube-controllers-768b4d69bb-", Namespace:"calico-system", SelfLink:"", UID:"f7a5bac7-0b52-4463-87d2-7adae530692a", ResourceVersion:"883", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 56, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"768b4d69bb", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"9df07dee17225ade1ee7fd94b63a025f95281c7857787ba1d6922b0db3ed3658", Pod:"calico-kube-controllers-768b4d69bb-4xhph", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali1de2acc5448", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:57:28.922497 containerd[1541]: 2025-01-30 13:57:28.890 [INFO][5415] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="e0b0a230491599d1ea528cfd166e38d33740092bcb1720c37657ae8f6a606737" Jan 30 13:57:28.922497 containerd[1541]: 2025-01-30 13:57:28.890 [INFO][5415] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="e0b0a230491599d1ea528cfd166e38d33740092bcb1720c37657ae8f6a606737" iface="eth0" netns="" Jan 30 13:57:28.922497 containerd[1541]: 2025-01-30 13:57:28.890 [INFO][5415] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="e0b0a230491599d1ea528cfd166e38d33740092bcb1720c37657ae8f6a606737" Jan 30 13:57:28.922497 containerd[1541]: 2025-01-30 13:57:28.890 [INFO][5415] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e0b0a230491599d1ea528cfd166e38d33740092bcb1720c37657ae8f6a606737" Jan 30 13:57:28.922497 containerd[1541]: 2025-01-30 13:57:28.914 [INFO][5422] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="e0b0a230491599d1ea528cfd166e38d33740092bcb1720c37657ae8f6a606737" HandleID="k8s-pod-network.e0b0a230491599d1ea528cfd166e38d33740092bcb1720c37657ae8f6a606737" Workload="localhost-k8s-calico--kube--controllers--768b4d69bb--4xhph-eth0" Jan 30 13:57:28.922497 containerd[1541]: 2025-01-30 13:57:28.915 [INFO][5422] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:57:28.922497 containerd[1541]: 2025-01-30 13:57:28.915 [INFO][5422] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 13:57:28.922497 containerd[1541]: 2025-01-30 13:57:28.919 [WARNING][5422] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e0b0a230491599d1ea528cfd166e38d33740092bcb1720c37657ae8f6a606737" HandleID="k8s-pod-network.e0b0a230491599d1ea528cfd166e38d33740092bcb1720c37657ae8f6a606737" Workload="localhost-k8s-calico--kube--controllers--768b4d69bb--4xhph-eth0" Jan 30 13:57:28.922497 containerd[1541]: 2025-01-30 13:57:28.919 [INFO][5422] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="e0b0a230491599d1ea528cfd166e38d33740092bcb1720c37657ae8f6a606737" HandleID="k8s-pod-network.e0b0a230491599d1ea528cfd166e38d33740092bcb1720c37657ae8f6a606737" Workload="localhost-k8s-calico--kube--controllers--768b4d69bb--4xhph-eth0" Jan 30 13:57:28.922497 containerd[1541]: 2025-01-30 13:57:28.920 [INFO][5422] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 13:57:28.922497 containerd[1541]: 2025-01-30 13:57:28.921 [INFO][5415] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="e0b0a230491599d1ea528cfd166e38d33740092bcb1720c37657ae8f6a606737" Jan 30 13:57:28.922974 containerd[1541]: time="2025-01-30T13:57:28.922548991Z" level=info msg="TearDown network for sandbox \"e0b0a230491599d1ea528cfd166e38d33740092bcb1720c37657ae8f6a606737\" successfully" Jan 30 13:57:28.922974 containerd[1541]: time="2025-01-30T13:57:28.922568342Z" level=info msg="StopPodSandbox for \"e0b0a230491599d1ea528cfd166e38d33740092bcb1720c37657ae8f6a606737\" returns successfully" Jan 30 13:57:28.923011 containerd[1541]: time="2025-01-30T13:57:28.922972337Z" level=info msg="RemovePodSandbox for \"e0b0a230491599d1ea528cfd166e38d33740092bcb1720c37657ae8f6a606737\"" Jan 30 13:57:28.923011 containerd[1541]: time="2025-01-30T13:57:28.922999724Z" level=info msg="Forcibly stopping sandbox \"e0b0a230491599d1ea528cfd166e38d33740092bcb1720c37657ae8f6a606737\"" Jan 30 13:57:28.966945 containerd[1541]: 2025-01-30 13:57:28.948 [WARNING][5440] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="e0b0a230491599d1ea528cfd166e38d33740092bcb1720c37657ae8f6a606737" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--768b4d69bb--4xhph-eth0", GenerateName:"calico-kube-controllers-768b4d69bb-", Namespace:"calico-system", SelfLink:"", UID:"f7a5bac7-0b52-4463-87d2-7adae530692a", ResourceVersion:"883", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 56, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"768b4d69bb", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"9df07dee17225ade1ee7fd94b63a025f95281c7857787ba1d6922b0db3ed3658", Pod:"calico-kube-controllers-768b4d69bb-4xhph", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali1de2acc5448", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:57:28.966945 containerd[1541]: 2025-01-30 13:57:28.948 [INFO][5440] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="e0b0a230491599d1ea528cfd166e38d33740092bcb1720c37657ae8f6a606737" Jan 30 13:57:28.966945 containerd[1541]: 2025-01-30 13:57:28.948 [INFO][5440] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="e0b0a230491599d1ea528cfd166e38d33740092bcb1720c37657ae8f6a606737" iface="eth0" netns="" Jan 30 13:57:28.966945 containerd[1541]: 2025-01-30 13:57:28.948 [INFO][5440] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="e0b0a230491599d1ea528cfd166e38d33740092bcb1720c37657ae8f6a606737" Jan 30 13:57:28.966945 containerd[1541]: 2025-01-30 13:57:28.948 [INFO][5440] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="e0b0a230491599d1ea528cfd166e38d33740092bcb1720c37657ae8f6a606737" Jan 30 13:57:28.966945 containerd[1541]: 2025-01-30 13:57:28.960 [INFO][5446] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="e0b0a230491599d1ea528cfd166e38d33740092bcb1720c37657ae8f6a606737" HandleID="k8s-pod-network.e0b0a230491599d1ea528cfd166e38d33740092bcb1720c37657ae8f6a606737" Workload="localhost-k8s-calico--kube--controllers--768b4d69bb--4xhph-eth0" Jan 30 13:57:28.966945 containerd[1541]: 2025-01-30 13:57:28.961 [INFO][5446] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:57:28.966945 containerd[1541]: 2025-01-30 13:57:28.961 [INFO][5446] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 13:57:28.966945 containerd[1541]: 2025-01-30 13:57:28.964 [WARNING][5446] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="e0b0a230491599d1ea528cfd166e38d33740092bcb1720c37657ae8f6a606737" HandleID="k8s-pod-network.e0b0a230491599d1ea528cfd166e38d33740092bcb1720c37657ae8f6a606737" Workload="localhost-k8s-calico--kube--controllers--768b4d69bb--4xhph-eth0" Jan 30 13:57:28.966945 containerd[1541]: 2025-01-30 13:57:28.964 [INFO][5446] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="e0b0a230491599d1ea528cfd166e38d33740092bcb1720c37657ae8f6a606737" HandleID="k8s-pod-network.e0b0a230491599d1ea528cfd166e38d33740092bcb1720c37657ae8f6a606737" Workload="localhost-k8s-calico--kube--controllers--768b4d69bb--4xhph-eth0" Jan 30 13:57:28.966945 containerd[1541]: 2025-01-30 13:57:28.965 [INFO][5446] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 13:57:28.966945 containerd[1541]: 2025-01-30 13:57:28.966 [INFO][5440] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="e0b0a230491599d1ea528cfd166e38d33740092bcb1720c37657ae8f6a606737" Jan 30 13:57:28.967291 containerd[1541]: time="2025-01-30T13:57:28.966980823Z" level=info msg="TearDown network for sandbox \"e0b0a230491599d1ea528cfd166e38d33740092bcb1720c37657ae8f6a606737\" successfully" Jan 30 13:57:28.972492 containerd[1541]: time="2025-01-30T13:57:28.972465996Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"e0b0a230491599d1ea528cfd166e38d33740092bcb1720c37657ae8f6a606737\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 30 13:57:28.972553 containerd[1541]: time="2025-01-30T13:57:28.972505837Z" level=info msg="RemovePodSandbox \"e0b0a230491599d1ea528cfd166e38d33740092bcb1720c37657ae8f6a606737\" returns successfully" Jan 30 13:57:28.972801 containerd[1541]: time="2025-01-30T13:57:28.972786684Z" level=info msg="StopPodSandbox for \"120d6885a6cc4ef3304b7e1f7e73f3c1516ac2726d6746a9f21b36d6a34ef864\"" Jan 30 13:57:29.019144 containerd[1541]: 2025-01-30 13:57:28.995 [WARNING][5464] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="120d6885a6cc4ef3304b7e1f7e73f3c1516ac2726d6746a9f21b36d6a34ef864" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--k9nfh-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"7bdeb187-27dc-4c7e-aa2a-c05d3d3268f5", ResourceVersion:"894", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 56, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65bf684474", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"4e075473d37f144f9e156c40487cba6e4ebe34fd80c76b1abe0a1fe7956dbcdf", Pod:"csi-node-driver-k9nfh", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali87f69a9b50f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:57:29.019144 containerd[1541]: 2025-01-30 13:57:28.995 [INFO][5464] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="120d6885a6cc4ef3304b7e1f7e73f3c1516ac2726d6746a9f21b36d6a34ef864" Jan 30 13:57:29.019144 containerd[1541]: 2025-01-30 13:57:28.995 [INFO][5464] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="120d6885a6cc4ef3304b7e1f7e73f3c1516ac2726d6746a9f21b36d6a34ef864" iface="eth0" netns="" Jan 30 13:57:29.019144 containerd[1541]: 2025-01-30 13:57:28.995 [INFO][5464] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="120d6885a6cc4ef3304b7e1f7e73f3c1516ac2726d6746a9f21b36d6a34ef864" Jan 30 13:57:29.019144 containerd[1541]: 2025-01-30 13:57:28.995 [INFO][5464] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="120d6885a6cc4ef3304b7e1f7e73f3c1516ac2726d6746a9f21b36d6a34ef864" Jan 30 13:57:29.019144 containerd[1541]: 2025-01-30 13:57:29.011 [INFO][5473] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="120d6885a6cc4ef3304b7e1f7e73f3c1516ac2726d6746a9f21b36d6a34ef864" HandleID="k8s-pod-network.120d6885a6cc4ef3304b7e1f7e73f3c1516ac2726d6746a9f21b36d6a34ef864" Workload="localhost-k8s-csi--node--driver--k9nfh-eth0" Jan 30 13:57:29.019144 containerd[1541]: 2025-01-30 13:57:29.011 [INFO][5473] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:57:29.019144 containerd[1541]: 2025-01-30 13:57:29.011 [INFO][5473] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 13:57:29.019144 containerd[1541]: 2025-01-30 13:57:29.015 [WARNING][5473] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="120d6885a6cc4ef3304b7e1f7e73f3c1516ac2726d6746a9f21b36d6a34ef864" HandleID="k8s-pod-network.120d6885a6cc4ef3304b7e1f7e73f3c1516ac2726d6746a9f21b36d6a34ef864" Workload="localhost-k8s-csi--node--driver--k9nfh-eth0" Jan 30 13:57:29.019144 containerd[1541]: 2025-01-30 13:57:29.015 [INFO][5473] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="120d6885a6cc4ef3304b7e1f7e73f3c1516ac2726d6746a9f21b36d6a34ef864" HandleID="k8s-pod-network.120d6885a6cc4ef3304b7e1f7e73f3c1516ac2726d6746a9f21b36d6a34ef864" Workload="localhost-k8s-csi--node--driver--k9nfh-eth0" Jan 30 13:57:29.019144 containerd[1541]: 2025-01-30 13:57:29.016 [INFO][5473] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 13:57:29.019144 containerd[1541]: 2025-01-30 13:57:29.018 [INFO][5464] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="120d6885a6cc4ef3304b7e1f7e73f3c1516ac2726d6746a9f21b36d6a34ef864" Jan 30 13:57:29.019463 containerd[1541]: time="2025-01-30T13:57:29.019180708Z" level=info msg="TearDown network for sandbox \"120d6885a6cc4ef3304b7e1f7e73f3c1516ac2726d6746a9f21b36d6a34ef864\" successfully" Jan 30 13:57:29.019463 containerd[1541]: time="2025-01-30T13:57:29.019197611Z" level=info msg="StopPodSandbox for \"120d6885a6cc4ef3304b7e1f7e73f3c1516ac2726d6746a9f21b36d6a34ef864\" returns successfully" Jan 30 13:57:29.019585 containerd[1541]: time="2025-01-30T13:57:29.019567022Z" level=info msg="RemovePodSandbox for \"120d6885a6cc4ef3304b7e1f7e73f3c1516ac2726d6746a9f21b36d6a34ef864\"" Jan 30 13:57:29.019649 containerd[1541]: time="2025-01-30T13:57:29.019632877Z" level=info msg="Forcibly stopping sandbox \"120d6885a6cc4ef3304b7e1f7e73f3c1516ac2726d6746a9f21b36d6a34ef864\"" Jan 30 13:57:29.075025 containerd[1541]: 2025-01-30 13:57:29.054 [WARNING][5496] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="120d6885a6cc4ef3304b7e1f7e73f3c1516ac2726d6746a9f21b36d6a34ef864" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--k9nfh-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"7bdeb187-27dc-4c7e-aa2a-c05d3d3268f5", ResourceVersion:"894", Generation:0, CreationTimestamp:time.Date(2025, time.January, 30, 13, 56, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65bf684474", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"4e075473d37f144f9e156c40487cba6e4ebe34fd80c76b1abe0a1fe7956dbcdf", Pod:"csi-node-driver-k9nfh", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali87f69a9b50f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 30 13:57:29.075025 containerd[1541]: 2025-01-30 13:57:29.054 [INFO][5496] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="120d6885a6cc4ef3304b7e1f7e73f3c1516ac2726d6746a9f21b36d6a34ef864" Jan 30 13:57:29.075025 containerd[1541]: 2025-01-30 13:57:29.054 [INFO][5496] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="120d6885a6cc4ef3304b7e1f7e73f3c1516ac2726d6746a9f21b36d6a34ef864" iface="eth0" netns="" Jan 30 13:57:29.075025 containerd[1541]: 2025-01-30 13:57:29.054 [INFO][5496] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="120d6885a6cc4ef3304b7e1f7e73f3c1516ac2726d6746a9f21b36d6a34ef864" Jan 30 13:57:29.075025 containerd[1541]: 2025-01-30 13:57:29.054 [INFO][5496] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="120d6885a6cc4ef3304b7e1f7e73f3c1516ac2726d6746a9f21b36d6a34ef864" Jan 30 13:57:29.075025 containerd[1541]: 2025-01-30 13:57:29.069 [INFO][5503] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="120d6885a6cc4ef3304b7e1f7e73f3c1516ac2726d6746a9f21b36d6a34ef864" HandleID="k8s-pod-network.120d6885a6cc4ef3304b7e1f7e73f3c1516ac2726d6746a9f21b36d6a34ef864" Workload="localhost-k8s-csi--node--driver--k9nfh-eth0" Jan 30 13:57:29.075025 containerd[1541]: 2025-01-30 13:57:29.069 [INFO][5503] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:57:29.075025 containerd[1541]: 2025-01-30 13:57:29.069 [INFO][5503] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 13:57:29.075025 containerd[1541]: 2025-01-30 13:57:29.072 [WARNING][5503] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="120d6885a6cc4ef3304b7e1f7e73f3c1516ac2726d6746a9f21b36d6a34ef864" HandleID="k8s-pod-network.120d6885a6cc4ef3304b7e1f7e73f3c1516ac2726d6746a9f21b36d6a34ef864" Workload="localhost-k8s-csi--node--driver--k9nfh-eth0" Jan 30 13:57:29.075025 containerd[1541]: 2025-01-30 13:57:29.072 [INFO][5503] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="120d6885a6cc4ef3304b7e1f7e73f3c1516ac2726d6746a9f21b36d6a34ef864" HandleID="k8s-pod-network.120d6885a6cc4ef3304b7e1f7e73f3c1516ac2726d6746a9f21b36d6a34ef864" Workload="localhost-k8s-csi--node--driver--k9nfh-eth0" Jan 30 13:57:29.075025 containerd[1541]: 2025-01-30 13:57:29.073 [INFO][5503] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 13:57:29.075025 containerd[1541]: 2025-01-30 13:57:29.074 [INFO][5496] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="120d6885a6cc4ef3304b7e1f7e73f3c1516ac2726d6746a9f21b36d6a34ef864" Jan 30 13:57:29.075025 containerd[1541]: time="2025-01-30T13:57:29.075004724Z" level=info msg="TearDown network for sandbox \"120d6885a6cc4ef3304b7e1f7e73f3c1516ac2726d6746a9f21b36d6a34ef864\" successfully" Jan 30 13:57:29.089939 containerd[1541]: time="2025-01-30T13:57:29.089888088Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"120d6885a6cc4ef3304b7e1f7e73f3c1516ac2726d6746a9f21b36d6a34ef864\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 30 13:57:29.089939 containerd[1541]: time="2025-01-30T13:57:29.089922444Z" level=info msg="RemovePodSandbox \"120d6885a6cc4ef3304b7e1f7e73f3c1516ac2726d6746a9f21b36d6a34ef864\" returns successfully" Jan 30 13:57:30.379986 systemd[1]: run-containerd-runc-k8s.io-0cf49a45e7485c5369704a8586bf3cf7050df04ed370c86ab135528e697350f5-runc.EWX2xI.mount: Deactivated successfully. Jan 30 13:57:42.144290 systemd[1]: Started sshd@7-139.178.70.103:22-139.178.68.195:48454.service - OpenSSH per-connection server daemon (139.178.68.195:48454). Jan 30 13:57:42.240376 sshd[5588]: Accepted publickey for core from 139.178.68.195 port 48454 ssh2: RSA SHA256:6nbEnXEl+18uydVNSXgyuQlkvzGTWxQuELikT+hTs2E Jan 30 13:57:42.241199 sshd[5588]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:57:42.244700 systemd-logind[1518]: New session 10 of user core. Jan 30 13:57:42.249297 systemd[1]: Started session-10.scope - Session 10 of User core. Jan 30 13:57:42.716985 sshd[5588]: pam_unix(sshd:session): session closed for user core Jan 30 13:57:42.718631 systemd[1]: sshd@7-139.178.70.103:22-139.178.68.195:48454.service: Deactivated successfully. Jan 30 13:57:42.724677 systemd[1]: session-10.scope: Deactivated successfully. Jan 30 13:57:42.726275 systemd-logind[1518]: Session 10 logged out. Waiting for processes to exit. Jan 30 13:57:42.726933 systemd-logind[1518]: Removed session 10. Jan 30 13:57:47.725723 systemd[1]: Started sshd@8-139.178.70.103:22-139.178.68.195:51206.service - OpenSSH per-connection server daemon (139.178.68.195:51206). Jan 30 13:57:47.769077 sshd[5606]: Accepted publickey for core from 139.178.68.195 port 51206 ssh2: RSA SHA256:6nbEnXEl+18uydVNSXgyuQlkvzGTWxQuELikT+hTs2E Jan 30 13:57:47.769998 sshd[5606]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:57:47.772763 systemd-logind[1518]: New session 11 of user core. Jan 30 13:57:47.779348 systemd[1]: Started session-11.scope - Session 11 of User core. 
Jan 30 13:57:47.907182 sshd[5606]: pam_unix(sshd:session): session closed for user core Jan 30 13:57:47.908880 systemd-logind[1518]: Session 11 logged out. Waiting for processes to exit. Jan 30 13:57:47.909999 systemd[1]: sshd@8-139.178.70.103:22-139.178.68.195:51206.service: Deactivated successfully. Jan 30 13:57:47.911758 systemd[1]: session-11.scope: Deactivated successfully. Jan 30 13:57:47.912196 systemd-logind[1518]: Removed session 11. Jan 30 13:57:51.159074 containerd[1541]: time="2025-01-30T13:57:51.159002635Z" level=info msg="StopContainer for \"34e3a4904204209c935c5e6e80f648100805b2e376fd7d1750995fcf907c8739\" with timeout 300 (s)" Jan 30 13:57:51.168766 containerd[1541]: time="2025-01-30T13:57:51.168732444Z" level=info msg="Stop container \"34e3a4904204209c935c5e6e80f648100805b2e376fd7d1750995fcf907c8739\" with signal terminated" Jan 30 13:57:51.196936 containerd[1541]: time="2025-01-30T13:57:51.196763032Z" level=info msg="StopContainer for \"0cf49a45e7485c5369704a8586bf3cf7050df04ed370c86ab135528e697350f5\" with timeout 30 (s)" Jan 30 13:57:51.197298 containerd[1541]: time="2025-01-30T13:57:51.197281889Z" level=info msg="Stop container \"0cf49a45e7485c5369704a8586bf3cf7050df04ed370c86ab135528e697350f5\" with signal terminated" Jan 30 13:57:51.236094 systemd[1]: cri-containerd-0cf49a45e7485c5369704a8586bf3cf7050df04ed370c86ab135528e697350f5.scope: Deactivated successfully. Jan 30 13:57:51.383810 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0cf49a45e7485c5369704a8586bf3cf7050df04ed370c86ab135528e697350f5-rootfs.mount: Deactivated successfully. Jan 30 13:57:51.475194 containerd[1541]: time="2025-01-30T13:57:51.460503334Z" level=info msg="shim disconnected" id=0cf49a45e7485c5369704a8586bf3cf7050df04ed370c86ab135528e697350f5 namespace=k8s.io Jan 30 13:57:51.520335 containerd[1541]: time="2025-01-30T13:57:51.520207074Z" level=warning msg="cleaning up after shim disconnected" id=0cf49a45e7485c5369704a8586bf3cf7050df04ed370c86ab135528e697350f5 namespace=k8s.io Jan 30 13:57:51.520335 containerd[1541]: time="2025-01-30T13:57:51.520248891Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 13:57:51.670285 containerd[1541]: time="2025-01-30T13:57:51.670190738Z" level=info msg="StopContainer for \"0cf49a45e7485c5369704a8586bf3cf7050df04ed370c86ab135528e697350f5\" returns successfully" Jan 30 13:57:51.841116 containerd[1541]: time="2025-01-30T13:57:51.840354604Z" level=info msg="StopPodSandbox for \"9df07dee17225ade1ee7fd94b63a025f95281c7857787ba1d6922b0db3ed3658\"" Jan 30 13:57:51.841116 containerd[1541]: time="2025-01-30T13:57:51.841103480Z" level=info msg="Container to stop \"0cf49a45e7485c5369704a8586bf3cf7050df04ed370c86ab135528e697350f5\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 30 13:57:51.846064 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-9df07dee17225ade1ee7fd94b63a025f95281c7857787ba1d6922b0db3ed3658-shm.mount: Deactivated successfully. Jan 30 13:57:51.851677 systemd[1]: cri-containerd-9df07dee17225ade1ee7fd94b63a025f95281c7857787ba1d6922b0db3ed3658.scope: Deactivated successfully. Jan 30 13:57:51.872116 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9df07dee17225ade1ee7fd94b63a025f95281c7857787ba1d6922b0db3ed3658-rootfs.mount: Deactivated successfully. 
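"StopContainer ... with timeout 300 (s)" followed by "Stop container ... with signal terminated" is the standard two-phase stop: deliver SIGTERM, wait out the grace period, and only then escalate to SIGKILL, after which the cri-containerd scope deactivates and the shim disconnects. A minimal sketch of that pattern against a plain OS process; the CRI implementation performs the same dance against the shim rather than a direct child.

package main

import (
	"fmt"
	"os/exec"
	"syscall"
	"time"
)

// stopWithTimeout sends SIGTERM, waits up to grace, then falls back to
// SIGKILL, mirroring containerd's StopContainer(timeout) behaviour in miniature.
func stopWithTimeout(cmd *exec.Cmd, grace time.Duration) {
	done := make(chan error, 1)
	go func() { done <- cmd.Wait() }()

	_ = cmd.Process.Signal(syscall.SIGTERM) // "Stop container ... with signal terminated"
	select {
	case <-done:
		fmt.Println("exited within grace period")
	case <-time.After(grace):
		_ = cmd.Process.Kill() // escalation once the timeout elapses
		<-done
		fmt.Println("killed after grace period")
	}
}

func main() {
	cmd := exec.Command("sleep", "60")
	if err := cmd.Start(); err != nil {
		panic(err)
	}
	stopWithTimeout(cmd, 2*time.Second) // the log's timeouts were 300s and 30s
}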
Jan 30 13:57:51.872466 containerd[1541]: time="2025-01-30T13:57:51.872427718Z" level=info msg="shim disconnected" id=9df07dee17225ade1ee7fd94b63a025f95281c7857787ba1d6922b0db3ed3658 namespace=k8s.io Jan 30 13:57:51.872466 containerd[1541]: time="2025-01-30T13:57:51.872460789Z" level=warning msg="cleaning up after shim disconnected" id=9df07dee17225ade1ee7fd94b63a025f95281c7857787ba1d6922b0db3ed3658 namespace=k8s.io Jan 30 13:57:51.872621 containerd[1541]: time="2025-01-30T13:57:51.872613964Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 13:57:51.883253 containerd[1541]: time="2025-01-30T13:57:51.881770157Z" level=warning msg="cleanup warnings time=\"2025-01-30T13:57:51Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Jan 30 13:57:52.158322 systemd-networkd[1437]: cali1de2acc5448: Link DOWN Jan 30 13:57:52.158327 systemd-networkd[1437]: cali1de2acc5448: Lost carrier Jan 30 13:57:52.257060 kubelet[2781]: I0130 13:57:52.117394 2781 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9df07dee17225ade1ee7fd94b63a025f95281c7857787ba1d6922b0db3ed3658" Jan 30 13:57:52.555553 containerd[1541]: 2025-01-30 13:57:52.107 [INFO][5701] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="9df07dee17225ade1ee7fd94b63a025f95281c7857787ba1d6922b0db3ed3658" Jan 30 13:57:52.555553 containerd[1541]: 2025-01-30 13:57:52.108 [INFO][5701] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="9df07dee17225ade1ee7fd94b63a025f95281c7857787ba1d6922b0db3ed3658" iface="eth0" netns="/var/run/netns/cni-8799cec2-b7d6-8873-13bf-a13c02a262be" Jan 30 13:57:52.555553 containerd[1541]: 2025-01-30 13:57:52.109 [INFO][5701] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="9df07dee17225ade1ee7fd94b63a025f95281c7857787ba1d6922b0db3ed3658" iface="eth0" netns="/var/run/netns/cni-8799cec2-b7d6-8873-13bf-a13c02a262be" Jan 30 13:57:52.555553 containerd[1541]: 2025-01-30 13:57:52.167 [INFO][5701] cni-plugin/dataplane_linux.go 604: Deleted device in netns. ContainerID="9df07dee17225ade1ee7fd94b63a025f95281c7857787ba1d6922b0db3ed3658" after=58.594311ms iface="eth0" netns="/var/run/netns/cni-8799cec2-b7d6-8873-13bf-a13c02a262be" Jan 30 13:57:52.555553 containerd[1541]: 2025-01-30 13:57:52.167 [INFO][5701] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="9df07dee17225ade1ee7fd94b63a025f95281c7857787ba1d6922b0db3ed3658" Jan 30 13:57:52.555553 containerd[1541]: 2025-01-30 13:57:52.167 [INFO][5701] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9df07dee17225ade1ee7fd94b63a025f95281c7857787ba1d6922b0db3ed3658" Jan 30 13:57:52.555553 containerd[1541]: 2025-01-30 13:57:52.371 [INFO][5710] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="9df07dee17225ade1ee7fd94b63a025f95281c7857787ba1d6922b0db3ed3658" HandleID="k8s-pod-network.9df07dee17225ade1ee7fd94b63a025f95281c7857787ba1d6922b0db3ed3658" Workload="localhost-k8s-calico--kube--controllers--768b4d69bb--4xhph-eth0" Jan 30 13:57:52.555553 containerd[1541]: 2025-01-30 13:57:52.371 [INFO][5710] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:57:52.555553 containerd[1541]: 2025-01-30 13:57:52.372 [INFO][5710] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
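The dataplane_linux.go sequence above, cali1de2acc5448 going link DOWN with lost carrier, then "Entered netns, deleting veth" and "Deleted device in netns ... after=58.594311ms", is one veth pair being destroyed from inside the pod's network namespace: deleting the workload's eth0 takes the host-side cali interface down with it. A sketch of that operation using the github.com/vishvananda/netns and netlink packages, reusing the cni- namespace path from the log; this is illustrative, not Calico's code.

package main

import (
	"log"
	"runtime"

	"github.com/vishvananda/netlink"
	"github.com/vishvananda/netns"
)

func main() {
	// Namespace switches are per-OS-thread, so pin the goroutine first.
	runtime.LockOSThread()
	defer runtime.UnlockOSThread()

	host, _ := netns.Get() // remember the host namespace to return to
	defer host.Close()

	// Path taken from the log; any /var/run/netns/cni-* handle works the same way.
	ns, err := netns.GetFromPath("/var/run/netns/cni-8799cec2-b7d6-8873-13bf-a13c02a262be")
	if err != nil {
		log.Fatal(err)
	}
	defer ns.Close()

	if err := netns.Set(ns); err != nil { // "Entered netns"
		log.Fatal(err)
	}
	defer netns.Set(host) // always switch back

	link, err := netlink.LinkByName("eth0") // the workload end of the veth pair
	if err != nil {
		log.Fatal(err)
	}
	// Deleting one end destroys the pair; the host-side cali interface
	// loses carrier and vanishes, as systemd-networkd reported above.
	if err := netlink.LinkDel(link); err != nil {
		log.Fatal(err)
	}
}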
Jan 30 13:57:52.555553 containerd[1541]: 2025-01-30 13:57:52.550 [INFO][5710] ipam/ipam_plugin.go 431: Released address using handleID ContainerID="9df07dee17225ade1ee7fd94b63a025f95281c7857787ba1d6922b0db3ed3658" HandleID="k8s-pod-network.9df07dee17225ade1ee7fd94b63a025f95281c7857787ba1d6922b0db3ed3658" Workload="localhost-k8s-calico--kube--controllers--768b4d69bb--4xhph-eth0" Jan 30 13:57:52.555553 containerd[1541]: 2025-01-30 13:57:52.550 [INFO][5710] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="9df07dee17225ade1ee7fd94b63a025f95281c7857787ba1d6922b0db3ed3658" HandleID="k8s-pod-network.9df07dee17225ade1ee7fd94b63a025f95281c7857787ba1d6922b0db3ed3658" Workload="localhost-k8s-calico--kube--controllers--768b4d69bb--4xhph-eth0" Jan 30 13:57:52.555553 containerd[1541]: 2025-01-30 13:57:52.552 [INFO][5710] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 13:57:52.555553 containerd[1541]: 2025-01-30 13:57:52.554 [INFO][5701] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="9df07dee17225ade1ee7fd94b63a025f95281c7857787ba1d6922b0db3ed3658" Jan 30 13:57:52.561558 containerd[1541]: time="2025-01-30T13:57:52.558403732Z" level=info msg="TearDown network for sandbox \"9df07dee17225ade1ee7fd94b63a025f95281c7857787ba1d6922b0db3ed3658\" successfully" Jan 30 13:57:52.561558 containerd[1541]: time="2025-01-30T13:57:52.558426012Z" level=info msg="StopPodSandbox for \"9df07dee17225ade1ee7fd94b63a025f95281c7857787ba1d6922b0db3ed3658\" returns successfully" Jan 30 13:57:52.559030 systemd[1]: run-netns-cni\x2d8799cec2\x2db7d6\x2d8873\x2d13bf\x2da13c02a262be.mount: Deactivated successfully. Jan 30 13:57:52.920399 systemd[1]: Started sshd@9-139.178.70.103:22-139.178.68.195:51208.service - OpenSSH per-connection server daemon (139.178.68.195:51208). Jan 30 13:57:53.096055 kubelet[2781]: I0130 13:57:53.095899 2781 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f7a5bac7-0b52-4463-87d2-7adae530692a-tigera-ca-bundle\") pod \"f7a5bac7-0b52-4463-87d2-7adae530692a\" (UID: \"f7a5bac7-0b52-4463-87d2-7adae530692a\") " Jan 30 13:57:53.096055 kubelet[2781]: I0130 13:57:53.095955 2781 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hlpxz\" (UniqueName: \"kubernetes.io/projected/f7a5bac7-0b52-4463-87d2-7adae530692a-kube-api-access-hlpxz\") pod \"f7a5bac7-0b52-4463-87d2-7adae530692a\" (UID: \"f7a5bac7-0b52-4463-87d2-7adae530692a\") " Jan 30 13:57:53.107924 systemd[1]: var-lib-kubelet-pods-f7a5bac7\x2d0b52\x2d4463\x2d87d2\x2d7adae530692a-volume\x2dsubpaths-tigera\x2dca\x2dbundle-calico\x2dkube\x2dcontrollers-1.mount: Deactivated successfully. Jan 30 13:57:53.111258 systemd[1]: var-lib-kubelet-pods-f7a5bac7\x2d0b52\x2d4463\x2d87d2\x2d7adae530692a-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dhlpxz.mount: Deactivated successfully. Jan 30 13:57:53.121156 kubelet[2781]: I0130 13:57:53.119380 2781 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f7a5bac7-0b52-4463-87d2-7adae530692a-kube-api-access-hlpxz" (OuterVolumeSpecName: "kube-api-access-hlpxz") pod "f7a5bac7-0b52-4463-87d2-7adae530692a" (UID: "f7a5bac7-0b52-4463-87d2-7adae530692a"). InnerVolumeSpecName "kube-api-access-hlpxz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:57:53.121319 kubelet[2781]: I0130 13:57:53.119327 2781 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f7a5bac7-0b52-4463-87d2-7adae530692a-tigera-ca-bundle" (OuterVolumeSpecName: "tigera-ca-bundle") pod "f7a5bac7-0b52-4463-87d2-7adae530692a" (UID: "f7a5bac7-0b52-4463-87d2-7adae530692a"). InnerVolumeSpecName "tigera-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:57:53.136925 sshd[5732]: Accepted publickey for core from 139.178.68.195 port 51208 ssh2: RSA SHA256:6nbEnXEl+18uydVNSXgyuQlkvzGTWxQuELikT+hTs2E Jan 30 13:57:53.137839 sshd[5732]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:57:53.140805 systemd-logind[1518]: New session 12 of user core. Jan 30 13:57:53.145373 systemd[1]: Started session-12.scope - Session 12 of User core. Jan 30 13:57:53.202944 kubelet[2781]: I0130 13:57:53.202837 2781 reconciler_common.go:289] "Volume detached for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f7a5bac7-0b52-4463-87d2-7adae530692a-tigera-ca-bundle\") on node \"localhost\" DevicePath \"\"" Jan 30 13:57:53.202944 kubelet[2781]: I0130 13:57:53.202878 2781 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-hlpxz\" (UniqueName: \"kubernetes.io/projected/f7a5bac7-0b52-4463-87d2-7adae530692a-kube-api-access-hlpxz\") on node \"localhost\" DevicePath \"\"" Jan 30 13:57:53.270102 sshd[5732]: pam_unix(sshd:session): session closed for user core Jan 30 13:57:53.275940 systemd[1]: sshd@9-139.178.70.103:22-139.178.68.195:51208.service: Deactivated successfully. Jan 30 13:57:53.277149 systemd[1]: session-12.scope: Deactivated successfully. Jan 30 13:57:53.278002 systemd-logind[1518]: Session 12 logged out. Waiting for processes to exit. Jan 30 13:57:53.282423 systemd[1]: Started sshd@10-139.178.70.103:22-139.178.68.195:51222.service - OpenSSH per-connection server daemon (139.178.68.195:51222). Jan 30 13:57:53.283799 systemd-logind[1518]: Removed session 12. Jan 30 13:57:53.312234 sshd[5753]: Accepted publickey for core from 139.178.68.195 port 51222 ssh2: RSA SHA256:6nbEnXEl+18uydVNSXgyuQlkvzGTWxQuELikT+hTs2E Jan 30 13:57:53.313029 sshd[5753]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:57:53.315543 systemd-logind[1518]: New session 13 of user core. Jan 30 13:57:53.321340 systemd[1]: Started session-13.scope - Session 13 of User core. Jan 30 13:57:53.434562 systemd[1]: Removed slice kubepods-besteffort-podf7a5bac7_0b52_4463_87d2_7adae530692a.slice - libcontainer container kubepods-besteffort-podf7a5bac7_0b52_4463_87d2_7adae530692a.slice. Jan 30 13:57:53.530461 sshd[5753]: pam_unix(sshd:session): session closed for user core Jan 30 13:57:53.537919 systemd[1]: sshd@10-139.178.70.103:22-139.178.68.195:51222.service: Deactivated successfully. Jan 30 13:57:53.538422 kubelet[2781]: I0130 13:57:53.538099 2781 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f7a5bac7-0b52-4463-87d2-7adae530692a" path="/var/lib/kubelet/pods/f7a5bac7-0b52-4463-87d2-7adae530692a/volumes" Jan 30 13:57:53.540167 systemd[1]: session-13.scope: Deactivated successfully. Jan 30 13:57:53.541502 systemd-logind[1518]: Session 13 logged out. Waiting for processes to exit. Jan 30 13:57:53.546749 systemd[1]: Started sshd@11-139.178.70.103:22-139.178.68.195:51232.service - OpenSSH per-connection server daemon (139.178.68.195:51232). 
Jan 30 13:57:53.551790 systemd-logind[1518]: Removed session 13. Jan 30 13:57:53.581947 sshd[5771]: Accepted publickey for core from 139.178.68.195 port 51232 ssh2: RSA SHA256:6nbEnXEl+18uydVNSXgyuQlkvzGTWxQuELikT+hTs2E Jan 30 13:57:53.583247 sshd[5771]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:57:53.588357 systemd-logind[1518]: New session 14 of user core. Jan 30 13:57:53.593114 systemd[1]: Started session-14.scope - Session 14 of User core. Jan 30 13:57:53.708726 sshd[5771]: pam_unix(sshd:session): session closed for user core Jan 30 13:57:53.711930 systemd[1]: sshd@11-139.178.70.103:22-139.178.68.195:51232.service: Deactivated successfully. Jan 30 13:57:53.714382 systemd[1]: session-14.scope: Deactivated successfully. Jan 30 13:57:53.716917 systemd-logind[1518]: Session 14 logged out. Waiting for processes to exit. Jan 30 13:57:53.718098 systemd-logind[1518]: Removed session 14. Jan 30 13:57:55.665066 systemd[1]: cri-containerd-34e3a4904204209c935c5e6e80f648100805b2e376fd7d1750995fcf907c8739.scope: Deactivated successfully. Jan 30 13:57:55.681743 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-34e3a4904204209c935c5e6e80f648100805b2e376fd7d1750995fcf907c8739-rootfs.mount: Deactivated successfully. Jan 30 13:57:55.682413 containerd[1541]: time="2025-01-30T13:57:55.682379561Z" level=info msg="shim disconnected" id=34e3a4904204209c935c5e6e80f648100805b2e376fd7d1750995fcf907c8739 namespace=k8s.io Jan 30 13:57:55.682413 containerd[1541]: time="2025-01-30T13:57:55.682411263Z" level=warning msg="cleaning up after shim disconnected" id=34e3a4904204209c935c5e6e80f648100805b2e376fd7d1750995fcf907c8739 namespace=k8s.io Jan 30 13:57:55.682688 containerd[1541]: time="2025-01-30T13:57:55.682418383Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 13:57:55.698300 containerd[1541]: time="2025-01-30T13:57:55.698269477Z" level=info msg="StopContainer for \"34e3a4904204209c935c5e6e80f648100805b2e376fd7d1750995fcf907c8739\" returns successfully" Jan 30 13:57:55.698633 containerd[1541]: time="2025-01-30T13:57:55.698616417Z" level=info msg="StopPodSandbox for \"aa2e9dc8992692dae98e07501a5bb9efe240075a12f77cdec37ece1413c886ed\"" Jan 30 13:57:55.698667 containerd[1541]: time="2025-01-30T13:57:55.698639036Z" level=info msg="Container to stop \"34e3a4904204209c935c5e6e80f648100805b2e376fd7d1750995fcf907c8739\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 30 13:57:55.702801 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-aa2e9dc8992692dae98e07501a5bb9efe240075a12f77cdec37ece1413c886ed-shm.mount: Deactivated successfully. Jan 30 13:57:55.710194 systemd[1]: cri-containerd-aa2e9dc8992692dae98e07501a5bb9efe240075a12f77cdec37ece1413c886ed.scope: Deactivated successfully. Jan 30 13:57:55.727020 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-aa2e9dc8992692dae98e07501a5bb9efe240075a12f77cdec37ece1413c886ed-rootfs.mount: Deactivated successfully. 
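Stopping the 34e3a490... container and its aa2e9dc8... sandbox above follows the usual CRI ordering: stop the workload container, stop the sandbox scope, then unmount the shim's shm and rootfs mounts; the sandbox's network teardown completes in the records that follow. The record stating the container "must be in running or unknown state, current state \"CONTAINER_EXITED\"" is informational: a container that has already exited needs no stop signal, so teardown simply proceeds. A hedged sketch of that state gate; the flow is illustrative, not containerd's implementation:

package main

import "fmt"

const (
	containerRunning = "CONTAINER_RUNNING"
	containerExited  = "CONTAINER_EXITED"
	containerUnknown = "CONTAINER_UNKNOWN"
)

// stopContainer only signals containers that might still be alive; anything
// already exited is reported and skipped, matching the log record above.
func stopContainer(id, state string) {
	if state != containerRunning && state != containerUnknown {
		fmt.Printf("container %q must be in running or unknown state, current state %q; nothing to stop\n", id, state)
		return
	}
	fmt.Printf("sending stop signal to %q\n", id)
}

func main() {
	stopContainer("34e3a4904204", containerExited)
}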
Jan 30 13:57:55.731529 containerd[1541]: time="2025-01-30T13:57:55.731492801Z" level=info msg="shim disconnected" id=aa2e9dc8992692dae98e07501a5bb9efe240075a12f77cdec37ece1413c886ed namespace=k8s.io Jan 30 13:57:55.731529 containerd[1541]: time="2025-01-30T13:57:55.731526799Z" level=warning msg="cleaning up after shim disconnected" id=aa2e9dc8992692dae98e07501a5bb9efe240075a12f77cdec37ece1413c886ed namespace=k8s.io Jan 30 13:57:55.731625 containerd[1541]: time="2025-01-30T13:57:55.731533291Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 30 13:57:55.745636 containerd[1541]: time="2025-01-30T13:57:55.745607606Z" level=info msg="TearDown network for sandbox \"aa2e9dc8992692dae98e07501a5bb9efe240075a12f77cdec37ece1413c886ed\" successfully" Jan 30 13:57:55.745636 containerd[1541]: time="2025-01-30T13:57:55.745633843Z" level=info msg="StopPodSandbox for \"aa2e9dc8992692dae98e07501a5bb9efe240075a12f77cdec37ece1413c886ed\" returns successfully" Jan 30 13:57:55.815745 kubelet[2781]: I0130 13:57:55.815712 2781 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/ecd4f075-4727-480c-95e0-1f433844e122-typha-certs\") pod \"ecd4f075-4727-480c-95e0-1f433844e122\" (UID: \"ecd4f075-4727-480c-95e0-1f433844e122\") " Jan 30 13:57:55.815745 kubelet[2781]: I0130 13:57:55.815736 2781 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ecd4f075-4727-480c-95e0-1f433844e122-tigera-ca-bundle\") pod \"ecd4f075-4727-480c-95e0-1f433844e122\" (UID: \"ecd4f075-4727-480c-95e0-1f433844e122\") " Jan 30 13:57:55.816429 kubelet[2781]: I0130 13:57:55.815753 2781 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pp4d4\" (UniqueName: \"kubernetes.io/projected/ecd4f075-4727-480c-95e0-1f433844e122-kube-api-access-pp4d4\") pod \"ecd4f075-4727-480c-95e0-1f433844e122\" (UID: \"ecd4f075-4727-480c-95e0-1f433844e122\") " Jan 30 13:57:55.820149 systemd[1]: var-lib-kubelet-pods-ecd4f075\x2d4727\x2d480c\x2d95e0\x2d1f433844e122-volume\x2dsubpaths-tigera\x2dca\x2dbundle-calico\x2dtypha-1.mount: Deactivated successfully. Jan 30 13:57:55.821858 systemd[1]: var-lib-kubelet-pods-ecd4f075\x2d4727\x2d480c\x2d95e0\x2d1f433844e122-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dpp4d4.mount: Deactivated successfully. Jan 30 13:57:55.823840 kubelet[2781]: I0130 13:57:55.823757 2781 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ecd4f075-4727-480c-95e0-1f433844e122-kube-api-access-pp4d4" (OuterVolumeSpecName: "kube-api-access-pp4d4") pod "ecd4f075-4727-480c-95e0-1f433844e122" (UID: "ecd4f075-4727-480c-95e0-1f433844e122"). InnerVolumeSpecName "kube-api-access-pp4d4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 30 13:57:55.824462 kubelet[2781]: I0130 13:57:55.824089 2781 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ecd4f075-4727-480c-95e0-1f433844e122-tigera-ca-bundle" (OuterVolumeSpecName: "tigera-ca-bundle") pod "ecd4f075-4727-480c-95e0-1f433844e122" (UID: "ecd4f075-4727-480c-95e0-1f433844e122"). InnerVolumeSpecName "tigera-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 30 13:57:55.831286 kubelet[2781]: I0130 13:57:55.831200 2781 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ecd4f075-4727-480c-95e0-1f433844e122-typha-certs" (OuterVolumeSpecName: "typha-certs") pod "ecd4f075-4727-480c-95e0-1f433844e122" (UID: "ecd4f075-4727-480c-95e0-1f433844e122"). InnerVolumeSpecName "typha-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 30 13:57:55.916085 kubelet[2781]: I0130 13:57:55.916007 2781 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-pp4d4\" (UniqueName: \"kubernetes.io/projected/ecd4f075-4727-480c-95e0-1f433844e122-kube-api-access-pp4d4\") on node \"localhost\" DevicePath \"\"" Jan 30 13:57:55.916085 kubelet[2781]: I0130 13:57:55.916029 2781 reconciler_common.go:289] "Volume detached for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/ecd4f075-4727-480c-95e0-1f433844e122-typha-certs\") on node \"localhost\" DevicePath \"\"" Jan 30 13:57:55.916085 kubelet[2781]: I0130 13:57:55.916038 2781 reconciler_common.go:289] "Volume detached for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ecd4f075-4727-480c-95e0-1f433844e122-tigera-ca-bundle\") on node \"localhost\" DevicePath \"\"" Jan 30 13:57:56.040155 systemd[1]: Removed slice kubepods-besteffort-podecd4f075_4727_480c_95e0_1f433844e122.slice - libcontainer container kubepods-besteffort-podecd4f075_4727_480c_95e0_1f433844e122.slice. Jan 30 13:57:56.050604 kubelet[2781]: I0130 13:57:56.050577 2781 scope.go:117] "RemoveContainer" containerID="34e3a4904204209c935c5e6e80f648100805b2e376fd7d1750995fcf907c8739" Jan 30 13:57:56.072011 containerd[1541]: time="2025-01-30T13:57:56.071958952Z" level=info msg="RemoveContainer for \"34e3a4904204209c935c5e6e80f648100805b2e376fd7d1750995fcf907c8739\"" Jan 30 13:57:56.074030 containerd[1541]: time="2025-01-30T13:57:56.073959726Z" level=info msg="RemoveContainer for \"34e3a4904204209c935c5e6e80f648100805b2e376fd7d1750995fcf907c8739\" returns successfully" Jan 30 13:57:56.078782 kubelet[2781]: I0130 13:57:56.078763 2781 scope.go:117] "RemoveContainer" containerID="34e3a4904204209c935c5e6e80f648100805b2e376fd7d1750995fcf907c8739" Jan 30 13:57:56.083726 containerd[1541]: time="2025-01-30T13:57:56.079148201Z" level=error msg="ContainerStatus for \"34e3a4904204209c935c5e6e80f648100805b2e376fd7d1750995fcf907c8739\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"34e3a4904204209c935c5e6e80f648100805b2e376fd7d1750995fcf907c8739\": not found" Jan 30 13:57:56.095278 kubelet[2781]: E0130 13:57:56.095232 2781 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"34e3a4904204209c935c5e6e80f648100805b2e376fd7d1750995fcf907c8739\": not found" containerID="34e3a4904204209c935c5e6e80f648100805b2e376fd7d1750995fcf907c8739" Jan 30 13:57:56.095278 kubelet[2781]: I0130 13:57:56.095255 2781 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"34e3a4904204209c935c5e6e80f648100805b2e376fd7d1750995fcf907c8739"} err="failed to get container status \"34e3a4904204209c935c5e6e80f648100805b2e376fd7d1750995fcf907c8739\": rpc error: code = NotFound desc = an error occurred when try to find container \"34e3a4904204209c935c5e6e80f648100805b2e376fd7d1750995fcf907c8739\": not found" Jan 30 13:57:56.514800 kubelet[2781]: I0130 13:57:56.514584 2781 
prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 30 13:57:56.681931 systemd[1]: var-lib-kubelet-pods-ecd4f075\x2d4727\x2d480c\x2d95e0\x2d1f433844e122-volumes-kubernetes.io\x7esecret-typha\x2dcerts.mount: Deactivated successfully. Jan 30 13:57:57.244091 kubelet[2781]: I0130 13:57:57.244060 2781 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 30 13:57:57.534294 kubelet[2781]: I0130 13:57:57.534079 2781 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ecd4f075-4727-480c-95e0-1f433844e122" path="/var/lib/kubelet/pods/ecd4f075-4727-480c-95e0-1f433844e122/volumes" Jan 30 13:57:58.721085 systemd[1]: Started sshd@12-139.178.70.103:22-139.178.68.195:46458.service - OpenSSH per-connection server daemon (139.178.68.195:46458). Jan 30 13:57:58.770531 sshd[5953]: Accepted publickey for core from 139.178.68.195 port 46458 ssh2: RSA SHA256:6nbEnXEl+18uydVNSXgyuQlkvzGTWxQuELikT+hTs2E Jan 30 13:57:58.771958 sshd[5953]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:57:58.775318 systemd-logind[1518]: New session 15 of user core. Jan 30 13:57:58.781386 systemd[1]: Started session-15.scope - Session 15 of User core. Jan 30 13:57:58.887138 sshd[5953]: pam_unix(sshd:session): session closed for user core Jan 30 13:57:58.889473 systemd[1]: sshd@12-139.178.70.103:22-139.178.68.195:46458.service: Deactivated successfully. Jan 30 13:57:58.891066 systemd[1]: session-15.scope: Deactivated successfully. Jan 30 13:57:58.892480 systemd-logind[1518]: Session 15 logged out. Waiting for processes to exit. Jan 30 13:57:58.893639 systemd-logind[1518]: Removed session 15. Jan 30 13:58:03.894694 systemd[1]: Started sshd@13-139.178.70.103:22-139.178.68.195:46460.service - OpenSSH per-connection server daemon (139.178.68.195:46460). Jan 30 13:58:03.923335 sshd[6069]: Accepted publickey for core from 139.178.68.195 port 46460 ssh2: RSA SHA256:6nbEnXEl+18uydVNSXgyuQlkvzGTWxQuELikT+hTs2E Jan 30 13:58:03.924270 sshd[6069]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:58:03.926570 systemd-logind[1518]: New session 16 of user core. Jan 30 13:58:03.932308 systemd[1]: Started session-16.scope - Session 16 of User core. Jan 30 13:58:04.034696 sshd[6069]: pam_unix(sshd:session): session closed for user core Jan 30 13:58:04.036242 systemd-logind[1518]: Session 16 logged out. Waiting for processes to exit. Jan 30 13:58:04.036420 systemd[1]: sshd@13-139.178.70.103:22-139.178.68.195:46460.service: Deactivated successfully. Jan 30 13:58:04.037512 systemd[1]: session-16.scope: Deactivated successfully. Jan 30 13:58:04.038463 systemd-logind[1518]: Removed session 16. Jan 30 13:58:09.044230 systemd[1]: Started sshd@14-139.178.70.103:22-139.178.68.195:53600.service - OpenSSH per-connection server daemon (139.178.68.195:53600). Jan 30 13:58:09.107366 sshd[6183]: Accepted publickey for core from 139.178.68.195 port 53600 ssh2: RSA SHA256:6nbEnXEl+18uydVNSXgyuQlkvzGTWxQuELikT+hTs2E Jan 30 13:58:09.108699 sshd[6183]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:58:09.114799 systemd-logind[1518]: New session 17 of user core. Jan 30 13:58:09.126435 systemd[1]: Started session-17.scope - Session 17 of User core. Jan 30 13:58:09.229996 sshd[6183]: pam_unix(sshd:session): session closed for user core Jan 30 13:58:09.232395 systemd[1]: sshd@14-139.178.70.103:22-139.178.68.195:53600.service: Deactivated successfully. 
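The two prober_manager "Failed to trigger a manual run" records above are likewise benign. When containers churn, the kubelet asks probe workers for an immediate readiness check using a non-blocking send, and the request is simply dropped when no worker is ready to take it; the next periodic probe covers the gap. A minimal sketch of that pattern, inferred from the log wording rather than taken from kubelet source:

package main

import "fmt"

// triggerManualRun requests an immediate probe without blocking; a full
// buffer means the request is dropped and only a log line is emitted.
func triggerManualRun(worker chan struct{}) {
	select {
	case worker <- struct{}{}:
		fmt.Println("manual readiness run queued")
	default:
		fmt.Println(`Failed to trigger a manual run probe="Readiness"`)
	}
}

func main() {
	worker := make(chan struct{}, 1)
	triggerManualRun(worker) // queued
	triggerManualRun(worker) // dropped: nothing has drained the buffer yet
}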
Jan 30 13:58:09.234012 systemd[1]: session-17.scope: Deactivated successfully. Jan 30 13:58:09.234609 systemd-logind[1518]: Session 17 logged out. Waiting for processes to exit. Jan 30 13:58:09.235269 systemd-logind[1518]: Removed session 17. Jan 30 13:58:14.238947 systemd[1]: Started sshd@15-139.178.70.103:22-139.178.68.195:53608.service - OpenSSH per-connection server daemon (139.178.68.195:53608). Jan 30 13:58:14.445700 sshd[6320]: Accepted publickey for core from 139.178.68.195 port 53608 ssh2: RSA SHA256:6nbEnXEl+18uydVNSXgyuQlkvzGTWxQuELikT+hTs2E Jan 30 13:58:14.446559 sshd[6320]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:58:14.449079 systemd-logind[1518]: New session 18 of user core. Jan 30 13:58:14.458340 systemd[1]: Started session-18.scope - Session 18 of User core. Jan 30 13:58:14.689339 sshd[6320]: pam_unix(sshd:session): session closed for user core Jan 30 13:58:14.696277 systemd[1]: sshd@15-139.178.70.103:22-139.178.68.195:53608.service: Deactivated successfully. Jan 30 13:58:14.697760 systemd[1]: session-18.scope: Deactivated successfully. Jan 30 13:58:14.699167 systemd-logind[1518]: Session 18 logged out. Waiting for processes to exit. Jan 30 13:58:14.703735 systemd[1]: Started sshd@16-139.178.70.103:22-139.178.68.195:42488.service - OpenSSH per-connection server daemon (139.178.68.195:42488). Jan 30 13:58:14.704710 systemd-logind[1518]: Removed session 18. Jan 30 13:58:14.729052 sshd[6342]: Accepted publickey for core from 139.178.68.195 port 42488 ssh2: RSA SHA256:6nbEnXEl+18uydVNSXgyuQlkvzGTWxQuELikT+hTs2E Jan 30 13:58:14.729958 sshd[6342]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:58:14.733460 systemd-logind[1518]: New session 19 of user core. Jan 30 13:58:14.746312 systemd[1]: Started session-19.scope - Session 19 of User core. Jan 30 13:58:15.589553 sshd[6342]: pam_unix(sshd:session): session closed for user core Jan 30 13:58:15.595202 systemd[1]: Started sshd@17-139.178.70.103:22-139.178.68.195:42490.service - OpenSSH per-connection server daemon (139.178.68.195:42490). Jan 30 13:58:15.599134 systemd[1]: sshd@16-139.178.70.103:22-139.178.68.195:42488.service: Deactivated successfully. Jan 30 13:58:15.600727 systemd[1]: session-19.scope: Deactivated successfully. Jan 30 13:58:15.601331 systemd-logind[1518]: Session 19 logged out. Waiting for processes to exit. Jan 30 13:58:15.601940 systemd-logind[1518]: Removed session 19. Jan 30 13:58:15.755645 sshd[6370]: Accepted publickey for core from 139.178.68.195 port 42490 ssh2: RSA SHA256:6nbEnXEl+18uydVNSXgyuQlkvzGTWxQuELikT+hTs2E Jan 30 13:58:15.756569 sshd[6370]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:58:15.759699 systemd-logind[1518]: New session 20 of user core. Jan 30 13:58:15.769312 systemd[1]: Started session-20.scope - Session 20 of User core. Jan 30 13:58:17.836382 sshd[6370]: pam_unix(sshd:session): session closed for user core Jan 30 13:58:17.843012 systemd[1]: Started sshd@18-139.178.70.103:22-139.178.68.195:42506.service - OpenSSH per-connection server daemon (139.178.68.195:42506). Jan 30 13:58:17.847087 systemd[1]: sshd@17-139.178.70.103:22-139.178.68.195:42490.service: Deactivated successfully. Jan 30 13:58:17.850183 systemd[1]: session-20.scope: Deactivated successfully. Jan 30 13:58:17.852584 systemd-logind[1518]: Session 20 logged out. Waiting for processes to exit. Jan 30 13:58:17.854074 systemd-logind[1518]: Removed session 20. 
Jan 30 13:58:17.921614 sshd[6463]: Accepted publickey for core from 139.178.68.195 port 42506 ssh2: RSA SHA256:6nbEnXEl+18uydVNSXgyuQlkvzGTWxQuELikT+hTs2E Jan 30 13:58:17.921888 sshd[6463]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:58:17.927628 systemd-logind[1518]: New session 21 of user core. Jan 30 13:58:17.936395 systemd[1]: Started session-21.scope - Session 21 of User core. Jan 30 13:58:18.460281 sshd[6463]: pam_unix(sshd:session): session closed for user core Jan 30 13:58:18.466000 systemd[1]: sshd@18-139.178.70.103:22-139.178.68.195:42506.service: Deactivated successfully. Jan 30 13:58:18.467208 systemd[1]: session-21.scope: Deactivated successfully. Jan 30 13:58:18.468304 systemd-logind[1518]: Session 21 logged out. Waiting for processes to exit. Jan 30 13:58:18.472524 systemd[1]: Started sshd@19-139.178.70.103:22-139.178.68.195:42520.service - OpenSSH per-connection server daemon (139.178.68.195:42520). Jan 30 13:58:18.473538 systemd-logind[1518]: Removed session 21. Jan 30 13:58:18.513531 sshd[6483]: Accepted publickey for core from 139.178.68.195 port 42520 ssh2: RSA SHA256:6nbEnXEl+18uydVNSXgyuQlkvzGTWxQuELikT+hTs2E Jan 30 13:58:18.514416 sshd[6483]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:58:18.517614 systemd-logind[1518]: New session 22 of user core. Jan 30 13:58:18.520350 systemd[1]: Started session-22.scope - Session 22 of User core. Jan 30 13:58:18.632782 sshd[6483]: pam_unix(sshd:session): session closed for user core Jan 30 13:58:18.634474 systemd[1]: sshd@19-139.178.70.103:22-139.178.68.195:42520.service: Deactivated successfully. Jan 30 13:58:18.636077 systemd[1]: session-22.scope: Deactivated successfully. Jan 30 13:58:18.637872 systemd-logind[1518]: Session 22 logged out. Waiting for processes to exit. Jan 30 13:58:18.638779 systemd-logind[1518]: Removed session 22. Jan 30 13:58:23.641290 systemd[1]: Started sshd@20-139.178.70.103:22-139.178.68.195:42532.service - OpenSSH per-connection server daemon (139.178.68.195:42532). Jan 30 13:58:23.803208 sshd[6591]: Accepted publickey for core from 139.178.68.195 port 42532 ssh2: RSA SHA256:6nbEnXEl+18uydVNSXgyuQlkvzGTWxQuELikT+hTs2E Jan 30 13:58:23.809862 sshd[6591]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:58:23.819988 systemd-logind[1518]: New session 23 of user core. Jan 30 13:58:23.826317 systemd[1]: Started session-23.scope - Session 23 of User core. Jan 30 13:58:23.990361 sshd[6591]: pam_unix(sshd:session): session closed for user core Jan 30 13:58:23.992190 systemd[1]: sshd@20-139.178.70.103:22-139.178.68.195:42532.service: Deactivated successfully. Jan 30 13:58:23.993438 systemd[1]: session-23.scope: Deactivated successfully. Jan 30 13:58:23.994334 systemd-logind[1518]: Session 23 logged out. Waiting for processes to exit. Jan 30 13:58:23.994941 systemd-logind[1518]: Removed session 23. Jan 30 13:58:28.997839 systemd[1]: Started sshd@21-139.178.70.103:22-139.178.68.195:47846.service - OpenSSH per-connection server daemon (139.178.68.195:47846). Jan 30 13:58:29.042893 sshd[6713]: Accepted publickey for core from 139.178.68.195 port 47846 ssh2: RSA SHA256:6nbEnXEl+18uydVNSXgyuQlkvzGTWxQuELikT+hTs2E Jan 30 13:58:29.043923 sshd[6713]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:58:29.046922 systemd-logind[1518]: New session 24 of user core. Jan 30 13:58:29.053384 systemd[1]: Started session-24.scope - Session 24 of User core. 
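Sessions 12 through 23 above all run the same five-step cycle: sshd accepts the publickey, PAM opens the session, logind registers it and systemd starts session-N.scope, then the same records appear in reverse on logout. When reading long excerpts like this, a small helper that pairs "New session N" with "Removed session N" and reports the duration can replace manual timestamp arithmetic; this sketch assumes the timestamp layout used throughout this journal:

package main

import (
	"fmt"
	"regexp"
	"time"
)

const stamp = "Jan 2 15:04:05.000000" // journal prefix layout, no year

var (
	opened = regexp.MustCompile(`^(\w+ \d+ [\d:.]+) .*New session (\d+) `)
	closed = regexp.MustCompile(`^(\w+ \d+ [\d:.]+) .*Removed session (\d+)\.`)
)

func main() {
	lines := []string{
		"Jan 30 13:58:17.927628 systemd-logind[1518]: New session 21 of user core.",
		"Jan 30 13:58:18.473538 systemd-logind[1518]: Removed session 21.",
	}
	starts := map[string]time.Time{}
	for _, l := range lines {
		if m := opened.FindStringSubmatch(l); m != nil {
			t, _ := time.Parse(stamp, m[1])
			starts[m[2]] = t
		} else if m := closed.FindStringSubmatch(l); m != nil {
			t, _ := time.Parse(stamp, m[1])
			fmt.Printf("session %s lasted %s\n", m[2], t.Sub(starts[m[2]]))
		}
	}
}

On the records above, session 21 lasted roughly half a second, which is consistent with automated rather than interactive use of the account.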
Jan 30 13:58:29.113891 kubelet[2781]: I0130 13:58:29.113861 2781 scope.go:117] "RemoveContainer" containerID="0cf49a45e7485c5369704a8586bf3cf7050df04ed370c86ab135528e697350f5" Jan 30 13:58:29.125734 containerd[1541]: time="2025-01-30T13:58:29.120165689Z" level=info msg="RemoveContainer for \"0cf49a45e7485c5369704a8586bf3cf7050df04ed370c86ab135528e697350f5\"" Jan 30 13:58:29.129799 containerd[1541]: time="2025-01-30T13:58:29.129768319Z" level=info msg="RemoveContainer for \"0cf49a45e7485c5369704a8586bf3cf7050df04ed370c86ab135528e697350f5\" returns successfully" Jan 30 13:58:29.130852 containerd[1541]: time="2025-01-30T13:58:29.130831224Z" level=info msg="StopPodSandbox for \"9df07dee17225ade1ee7fd94b63a025f95281c7857787ba1d6922b0db3ed3658\"" Jan 30 13:58:29.200171 sshd[6713]: pam_unix(sshd:session): session closed for user core Jan 30 13:58:29.202249 systemd-logind[1518]: Session 24 logged out. Waiting for processes to exit. Jan 30 13:58:29.202659 systemd[1]: sshd@21-139.178.70.103:22-139.178.68.195:47846.service: Deactivated successfully. Jan 30 13:58:29.203829 systemd[1]: session-24.scope: Deactivated successfully. Jan 30 13:58:29.204457 systemd-logind[1518]: Removed session 24. Jan 30 13:58:29.461595 containerd[1541]: 2025-01-30 13:58:29.389 [WARNING][6734] cni-plugin/k8s.go 566: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="9df07dee17225ade1ee7fd94b63a025f95281c7857787ba1d6922b0db3ed3658" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--768b4d69bb--4xhph-eth0" Jan 30 13:58:29.461595 containerd[1541]: 2025-01-30 13:58:29.390 [INFO][6734] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="9df07dee17225ade1ee7fd94b63a025f95281c7857787ba1d6922b0db3ed3658" Jan 30 13:58:29.461595 containerd[1541]: 2025-01-30 13:58:29.390 [INFO][6734] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="9df07dee17225ade1ee7fd94b63a025f95281c7857787ba1d6922b0db3ed3658" iface="eth0" netns="" Jan 30 13:58:29.461595 containerd[1541]: 2025-01-30 13:58:29.390 [INFO][6734] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="9df07dee17225ade1ee7fd94b63a025f95281c7857787ba1d6922b0db3ed3658" Jan 30 13:58:29.461595 containerd[1541]: 2025-01-30 13:58:29.390 [INFO][6734] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9df07dee17225ade1ee7fd94b63a025f95281c7857787ba1d6922b0db3ed3658" Jan 30 13:58:29.461595 containerd[1541]: 2025-01-30 13:58:29.452 [INFO][6742] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="9df07dee17225ade1ee7fd94b63a025f95281c7857787ba1d6922b0db3ed3658" HandleID="k8s-pod-network.9df07dee17225ade1ee7fd94b63a025f95281c7857787ba1d6922b0db3ed3658" Workload="localhost-k8s-calico--kube--controllers--768b4d69bb--4xhph-eth0" Jan 30 13:58:29.461595 containerd[1541]: 2025-01-30 13:58:29.452 [INFO][6742] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 30 13:58:29.461595 containerd[1541]: 2025-01-30 13:58:29.452 [INFO][6742] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 13:58:29.461595 containerd[1541]: 2025-01-30 13:58:29.458 [WARNING][6742] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="9df07dee17225ade1ee7fd94b63a025f95281c7857787ba1d6922b0db3ed3658" HandleID="k8s-pod-network.9df07dee17225ade1ee7fd94b63a025f95281c7857787ba1d6922b0db3ed3658" Workload="localhost-k8s-calico--kube--controllers--768b4d69bb--4xhph-eth0" Jan 30 13:58:29.461595 containerd[1541]: 2025-01-30 13:58:29.458 [INFO][6742] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="9df07dee17225ade1ee7fd94b63a025f95281c7857787ba1d6922b0db3ed3658" HandleID="k8s-pod-network.9df07dee17225ade1ee7fd94b63a025f95281c7857787ba1d6922b0db3ed3658" Workload="localhost-k8s-calico--kube--controllers--768b4d69bb--4xhph-eth0" Jan 30 13:58:29.461595 containerd[1541]: 2025-01-30 13:58:29.459 [INFO][6742] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 13:58:29.461595 containerd[1541]: 2025-01-30 13:58:29.460 [INFO][6734] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="9df07dee17225ade1ee7fd94b63a025f95281c7857787ba1d6922b0db3ed3658" Jan 30 13:58:29.465601 containerd[1541]: time="2025-01-30T13:58:29.465573483Z" level=info msg="TearDown network for sandbox \"9df07dee17225ade1ee7fd94b63a025f95281c7857787ba1d6922b0db3ed3658\" successfully" Jan 30 13:58:29.465694 containerd[1541]: time="2025-01-30T13:58:29.465683686Z" level=info msg="StopPodSandbox for \"9df07dee17225ade1ee7fd94b63a025f95281c7857787ba1d6922b0db3ed3658\" returns successfully" Jan 30 13:58:29.466143 containerd[1541]: time="2025-01-30T13:58:29.466125103Z" level=info msg="RemovePodSandbox for \"9df07dee17225ade1ee7fd94b63a025f95281c7857787ba1d6922b0db3ed3658\"" Jan 30 13:58:29.470786 containerd[1541]: time="2025-01-30T13:58:29.470770433Z" level=info msg="Forcibly stopping sandbox \"9df07dee17225ade1ee7fd94b63a025f95281c7857787ba1d6922b0db3ed3658\"" Jan 30 13:58:29.522303 containerd[1541]: 2025-01-30 13:58:29.499 [WARNING][6760] cni-plugin/k8s.go 566: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="9df07dee17225ade1ee7fd94b63a025f95281c7857787ba1d6922b0db3ed3658" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--768b4d69bb--4xhph-eth0" Jan 30 13:58:29.522303 containerd[1541]: 2025-01-30 13:58:29.499 [INFO][6760] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="9df07dee17225ade1ee7fd94b63a025f95281c7857787ba1d6922b0db3ed3658" Jan 30 13:58:29.522303 containerd[1541]: 2025-01-30 13:58:29.499 [INFO][6760] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="9df07dee17225ade1ee7fd94b63a025f95281c7857787ba1d6922b0db3ed3658" iface="eth0" netns="" Jan 30 13:58:29.522303 containerd[1541]: 2025-01-30 13:58:29.499 [INFO][6760] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="9df07dee17225ade1ee7fd94b63a025f95281c7857787ba1d6922b0db3ed3658" Jan 30 13:58:29.522303 containerd[1541]: 2025-01-30 13:58:29.499 [INFO][6760] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="9df07dee17225ade1ee7fd94b63a025f95281c7857787ba1d6922b0db3ed3658" Jan 30 13:58:29.522303 containerd[1541]: 2025-01-30 13:58:29.513 [INFO][6767] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="9df07dee17225ade1ee7fd94b63a025f95281c7857787ba1d6922b0db3ed3658" HandleID="k8s-pod-network.9df07dee17225ade1ee7fd94b63a025f95281c7857787ba1d6922b0db3ed3658" Workload="localhost-k8s-calico--kube--controllers--768b4d69bb--4xhph-eth0" Jan 30 13:58:29.522303 containerd[1541]: 2025-01-30 13:58:29.513 [INFO][6767] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Jan 30 13:58:29.522303 containerd[1541]: 2025-01-30 13:58:29.513 [INFO][6767] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 30 13:58:29.522303 containerd[1541]: 2025-01-30 13:58:29.518 [WARNING][6767] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="9df07dee17225ade1ee7fd94b63a025f95281c7857787ba1d6922b0db3ed3658" HandleID="k8s-pod-network.9df07dee17225ade1ee7fd94b63a025f95281c7857787ba1d6922b0db3ed3658" Workload="localhost-k8s-calico--kube--controllers--768b4d69bb--4xhph-eth0" Jan 30 13:58:29.522303 containerd[1541]: 2025-01-30 13:58:29.518 [INFO][6767] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="9df07dee17225ade1ee7fd94b63a025f95281c7857787ba1d6922b0db3ed3658" HandleID="k8s-pod-network.9df07dee17225ade1ee7fd94b63a025f95281c7857787ba1d6922b0db3ed3658" Workload="localhost-k8s-calico--kube--controllers--768b4d69bb--4xhph-eth0" Jan 30 13:58:29.522303 containerd[1541]: 2025-01-30 13:58:29.519 [INFO][6767] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 30 13:58:29.522303 containerd[1541]: 2025-01-30 13:58:29.521 [INFO][6760] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="9df07dee17225ade1ee7fd94b63a025f95281c7857787ba1d6922b0db3ed3658" Jan 30 13:58:29.524086 containerd[1541]: time="2025-01-30T13:58:29.522333311Z" level=info msg="TearDown network for sandbox \"9df07dee17225ade1ee7fd94b63a025f95281c7857787ba1d6922b0db3ed3658\" successfully" Jan 30 13:58:29.571359 containerd[1541]: time="2025-01-30T13:58:29.571325740Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"9df07dee17225ade1ee7fd94b63a025f95281c7857787ba1d6922b0db3ed3658\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 30 13:58:29.571487 containerd[1541]: time="2025-01-30T13:58:29.571381513Z" level=info msg="RemovePodSandbox \"9df07dee17225ade1ee7fd94b63a025f95281c7857787ba1d6922b0db3ed3658\" returns successfully" Jan 30 13:58:29.575322 containerd[1541]: time="2025-01-30T13:58:29.571784478Z" level=info msg="StopPodSandbox for \"aa2e9dc8992692dae98e07501a5bb9efe240075a12f77cdec37ece1413c886ed\"" Jan 30 13:58:29.575322 containerd[1541]: time="2025-01-30T13:58:29.571840190Z" level=info msg="TearDown network for sandbox \"aa2e9dc8992692dae98e07501a5bb9efe240075a12f77cdec37ece1413c886ed\" successfully" Jan 30 13:58:29.575322 containerd[1541]: time="2025-01-30T13:58:29.571849201Z" level=info msg="StopPodSandbox for \"aa2e9dc8992692dae98e07501a5bb9efe240075a12f77cdec37ece1413c886ed\" returns successfully" Jan 30 13:58:29.575322 containerd[1541]: time="2025-01-30T13:58:29.571995500Z" level=info msg="RemovePodSandbox for \"aa2e9dc8992692dae98e07501a5bb9efe240075a12f77cdec37ece1413c886ed\"" Jan 30 13:58:29.575322 containerd[1541]: time="2025-01-30T13:58:29.572009857Z" level=info msg="Forcibly stopping sandbox \"aa2e9dc8992692dae98e07501a5bb9efe240075a12f77cdec37ece1413c886ed\"" Jan 30 13:58:29.575322 containerd[1541]: time="2025-01-30T13:58:29.572047928Z" level=info msg="TearDown network for sandbox \"aa2e9dc8992692dae98e07501a5bb9efe240075a12f77cdec37ece1413c886ed\" successfully" Jan 30 13:58:29.589509 containerd[1541]: time="2025-01-30T13:58:29.589492299Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"aa2e9dc8992692dae98e07501a5bb9efe240075a12f77cdec37ece1413c886ed\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
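The WARNINGs above are the expected result of replaying StopPodSandbox for the 9df07dee... sandbox that was already torn down at 13:57:52: its IPAM handle is gone ("Asked to release address but it doesn't exist. Ignoring") and the sandbox status lookup returns NotFound, yet, as the next records show, RemovePodSandbox still returns successfully for both sandboxes. Treating not-found as already-done is what makes the cleanup safe to repeat; a minimal sketch of that idempotent-delete pattern:

package main

import "fmt"

// remove reports whether id was actually present in the store.
func remove(store map[string]bool, id string) bool {
	if !store[id] {
		return false
	}
	delete(store, id)
	return true
}

// removeIdempotent treats "already gone" as success, so a replayed teardown
// converges instead of failing, as in the records above.
func removeIdempotent(store map[string]bool, id string) {
	if !remove(store, id) {
		fmt.Printf("asked to remove %q but it doesn't exist; ignoring\n", id)
		return
	}
	fmt.Printf("removed %q\n", id)
}

func main() {
	sandboxes := map[string]bool{"9df07dee1722": true}
	removeIdempotent(sandboxes, "9df07dee1722") // first pass removes it
	removeIdempotent(sandboxes, "9df07dee1722") // replay still converges
}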
Jan 30 13:58:29.589665 containerd[1541]: time="2025-01-30T13:58:29.589522863Z" level=info msg="RemovePodSandbox \"aa2e9dc8992692dae98e07501a5bb9efe240075a12f77cdec37ece1413c886ed\" returns successfully" Jan 30 13:58:34.209753 systemd[1]: Started sshd@22-139.178.70.103:22-139.178.68.195:47854.service - OpenSSH per-connection server daemon (139.178.68.195:47854). Jan 30 13:58:34.263755 sshd[6861]: Accepted publickey for core from 139.178.68.195 port 47854 ssh2: RSA SHA256:6nbEnXEl+18uydVNSXgyuQlkvzGTWxQuELikT+hTs2E Jan 30 13:58:34.264692 sshd[6861]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:58:34.267197 systemd-logind[1518]: New session 25 of user core. Jan 30 13:58:34.280406 systemd[1]: Started session-25.scope - Session 25 of User core. Jan 30 13:58:34.596269 sshd[6861]: pam_unix(sshd:session): session closed for user core Jan 30 13:58:34.597949 systemd[1]: sshd@22-139.178.70.103:22-139.178.68.195:47854.service: Deactivated successfully. Jan 30 13:58:34.599209 systemd[1]: session-25.scope: Deactivated successfully. Jan 30 13:58:34.600424 systemd-logind[1518]: Session 25 logged out. Waiting for processes to exit. Jan 30 13:58:34.601057 systemd-logind[1518]: Removed session 25. Jan 30 13:58:39.608320 systemd[1]: Started sshd@23-139.178.70.103:22-139.178.68.195:47888.service - OpenSSH per-connection server daemon (139.178.68.195:47888). Jan 30 13:58:39.659849 sshd[6966]: Accepted publickey for core from 139.178.68.195 port 47888 ssh2: RSA SHA256:6nbEnXEl+18uydVNSXgyuQlkvzGTWxQuELikT+hTs2E Jan 30 13:58:39.660890 sshd[6966]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:58:39.664175 systemd-logind[1518]: New session 26 of user core. Jan 30 13:58:39.669345 systemd[1]: Started session-26.scope - Session 26 of User core. Jan 30 13:58:39.809048 sshd[6966]: pam_unix(sshd:session): session closed for user core Jan 30 13:58:39.812093 systemd[1]: sshd@23-139.178.70.103:22-139.178.68.195:47888.service: Deactivated successfully. Jan 30 13:58:39.813958 systemd[1]: session-26.scope: Deactivated successfully. Jan 30 13:58:39.814994 systemd-logind[1518]: Session 26 logged out. Waiting for processes to exit. Jan 30 13:58:39.816158 systemd-logind[1518]: Removed session 26. Jan 30 13:58:44.819137 systemd[1]: Started sshd@24-139.178.70.103:22-139.178.68.195:54018.service - OpenSSH per-connection server daemon (139.178.68.195:54018). Jan 30 13:58:45.753858 sshd[7096]: Accepted publickey for core from 139.178.68.195 port 54018 ssh2: RSA SHA256:6nbEnXEl+18uydVNSXgyuQlkvzGTWxQuELikT+hTs2E Jan 30 13:58:45.760963 sshd[7096]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:58:45.775702 systemd-logind[1518]: New session 27 of user core. Jan 30 13:58:45.780385 systemd[1]: Started session-27.scope - Session 27 of User core. Jan 30 13:58:46.026271 sshd[7096]: pam_unix(sshd:session): session closed for user core Jan 30 13:58:46.028983 systemd-logind[1518]: Session 27 logged out. Waiting for processes to exit. Jan 30 13:58:46.029725 systemd[1]: sshd@24-139.178.70.103:22-139.178.68.195:54018.service: Deactivated successfully. Jan 30 13:58:46.031346 systemd[1]: session-27.scope: Deactivated successfully. Jan 30 13:58:46.032578 systemd-logind[1518]: Removed session 27. Jan 30 13:58:51.033445 systemd[1]: Started sshd@25-139.178.70.103:22-139.178.68.195:54020.service - OpenSSH per-connection server daemon (139.178.68.195:54020). 
Jan 30 13:58:51.092563 sshd[7247]: Accepted publickey for core from 139.178.68.195 port 54020 ssh2: RSA SHA256:6nbEnXEl+18uydVNSXgyuQlkvzGTWxQuELikT+hTs2E Jan 30 13:58:51.092913 sshd[7247]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 30 13:58:51.096094 systemd-logind[1518]: New session 28 of user core. Jan 30 13:58:51.101336 systemd[1]: Started session-28.scope - Session 28 of User core. Jan 30 13:58:51.192500 sshd[7247]: pam_unix(sshd:session): session closed for user core Jan 30 13:58:51.196088 systemd[1]: sshd@25-139.178.70.103:22-139.178.68.195:54020.service: Deactivated successfully. Jan 30 13:58:51.198019 systemd[1]: session-28.scope: Deactivated successfully. Jan 30 13:58:51.198648 systemd-logind[1518]: Session 28 logged out. Waiting for processes to exit. Jan 30 13:58:51.199346 systemd-logind[1518]: Removed session 28.