Jan 29 11:26:40.735345 kernel: Linux version 6.6.74-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p1) 13.3.1 20240614, GNU ld (Gentoo 2.42 p6) 2.42.0) #1 SMP PREEMPT_DYNAMIC Wed Jan 29 09:36:13 -00 2025 Jan 29 11:26:40.735362 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=vmware flatcar.autologin verity.usrhash=519b8fded83181f8e61f734d5291f916d7548bfba9487c78bcb50d002d81719d Jan 29 11:26:40.735368 kernel: Disabled fast string operations Jan 29 11:26:40.735372 kernel: BIOS-provided physical RAM map: Jan 29 11:26:40.735376 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ebff] usable Jan 29 11:26:40.735380 kernel: BIOS-e820: [mem 0x000000000009ec00-0x000000000009ffff] reserved Jan 29 11:26:40.735386 kernel: BIOS-e820: [mem 0x00000000000dc000-0x00000000000fffff] reserved Jan 29 11:26:40.735391 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007fedffff] usable Jan 29 11:26:40.735395 kernel: BIOS-e820: [mem 0x000000007fee0000-0x000000007fefefff] ACPI data Jan 29 11:26:40.735399 kernel: BIOS-e820: [mem 0x000000007feff000-0x000000007fefffff] ACPI NVS Jan 29 11:26:40.735403 kernel: BIOS-e820: [mem 0x000000007ff00000-0x000000007fffffff] usable Jan 29 11:26:40.735407 kernel: BIOS-e820: [mem 0x00000000f0000000-0x00000000f7ffffff] reserved Jan 29 11:26:40.735411 kernel: BIOS-e820: [mem 0x00000000fec00000-0x00000000fec0ffff] reserved Jan 29 11:26:40.735416 kernel: BIOS-e820: [mem 0x00000000fee00000-0x00000000fee00fff] reserved Jan 29 11:26:40.735422 kernel: BIOS-e820: [mem 0x00000000fffe0000-0x00000000ffffffff] reserved Jan 29 11:26:40.735427 kernel: NX (Execute Disable) protection: active Jan 29 11:26:40.735432 kernel: APIC: Static calls initialized Jan 29 11:26:40.735436 kernel: SMBIOS 2.7 present. Jan 29 11:26:40.735441 kernel: DMI: VMware, Inc. 
VMware Virtual Platform/440BX Desktop Reference Platform, BIOS 6.00 05/28/2020 Jan 29 11:26:40.735446 kernel: vmware: hypercall mode: 0x00 Jan 29 11:26:40.735451 kernel: Hypervisor detected: VMware Jan 29 11:26:40.735455 kernel: vmware: TSC freq read from hypervisor : 3408.000 MHz Jan 29 11:26:40.735461 kernel: vmware: Host bus clock speed read from hypervisor : 66000000 Hz Jan 29 11:26:40.735466 kernel: vmware: using clock offset of 2745653214 ns Jan 29 11:26:40.735471 kernel: tsc: Detected 3408.000 MHz processor Jan 29 11:26:40.735476 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Jan 29 11:26:40.735481 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Jan 29 11:26:40.735486 kernel: last_pfn = 0x80000 max_arch_pfn = 0x400000000 Jan 29 11:26:40.735491 kernel: total RAM covered: 3072M Jan 29 11:26:40.735495 kernel: Found optimal setting for mtrr clean up Jan 29 11:26:40.735501 kernel: gran_size: 64K chunk_size: 64K num_reg: 2 lose cover RAM: 0G Jan 29 11:26:40.735507 kernel: MTRR map: 6 entries (5 fixed + 1 variable; max 21), built from 8 variable MTRRs Jan 29 11:26:40.735511 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Jan 29 11:26:40.735516 kernel: Using GB pages for direct mapping Jan 29 11:26:40.735521 kernel: ACPI: Early table checksum verification disabled Jan 29 11:26:40.735542 kernel: ACPI: RSDP 0x00000000000F6A00 000024 (v02 PTLTD ) Jan 29 11:26:40.735547 kernel: ACPI: XSDT 0x000000007FEE965B 00005C (v01 INTEL 440BX 06040000 VMW 01324272) Jan 29 11:26:40.735552 kernel: ACPI: FACP 0x000000007FEFEE73 0000F4 (v04 INTEL 440BX 06040000 PTL 000F4240) Jan 29 11:26:40.735556 kernel: ACPI: DSDT 0x000000007FEEAD55 01411E (v01 PTLTD Custom 06040000 MSFT 03000001) Jan 29 11:26:40.735561 kernel: ACPI: FACS 0x000000007FEFFFC0 000040 Jan 29 11:26:40.735568 kernel: ACPI: FACS 0x000000007FEFFFC0 000040 Jan 29 11:26:40.735573 kernel: ACPI: BOOT 0x000000007FEEAD2D 000028 (v01 PTLTD $SBFTBL$ 06040000 LTP 00000001) Jan 29 11:26:40.735578 kernel: ACPI: APIC 0x000000007FEEA5EB 000742 (v01 PTLTD ? 
APIC 06040000 LTP 00000000) Jan 29 11:26:40.735584 kernel: ACPI: MCFG 0x000000007FEEA5AF 00003C (v01 PTLTD $PCITBL$ 06040000 LTP 00000001) Jan 29 11:26:40.735588 kernel: ACPI: SRAT 0x000000007FEE9757 0008A8 (v02 VMWARE MEMPLUG 06040000 VMW 00000001) Jan 29 11:26:40.735595 kernel: ACPI: HPET 0x000000007FEE971F 000038 (v01 VMWARE VMW HPET 06040000 VMW 00000001) Jan 29 11:26:40.735600 kernel: ACPI: WAET 0x000000007FEE96F7 000028 (v01 VMWARE VMW WAET 06040000 VMW 00000001) Jan 29 11:26:40.735605 kernel: ACPI: Reserving FACP table memory at [mem 0x7fefee73-0x7fefef66] Jan 29 11:26:40.735610 kernel: ACPI: Reserving DSDT table memory at [mem 0x7feead55-0x7fefee72] Jan 29 11:26:40.735615 kernel: ACPI: Reserving FACS table memory at [mem 0x7fefffc0-0x7fefffff] Jan 29 11:26:40.735619 kernel: ACPI: Reserving FACS table memory at [mem 0x7fefffc0-0x7fefffff] Jan 29 11:26:40.735624 kernel: ACPI: Reserving BOOT table memory at [mem 0x7feead2d-0x7feead54] Jan 29 11:26:40.735630 kernel: ACPI: Reserving APIC table memory at [mem 0x7feea5eb-0x7feead2c] Jan 29 11:26:40.735634 kernel: ACPI: Reserving MCFG table memory at [mem 0x7feea5af-0x7feea5ea] Jan 29 11:26:40.735639 kernel: ACPI: Reserving SRAT table memory at [mem 0x7fee9757-0x7fee9ffe] Jan 29 11:26:40.735645 kernel: ACPI: Reserving HPET table memory at [mem 0x7fee971f-0x7fee9756] Jan 29 11:26:40.735650 kernel: ACPI: Reserving WAET table memory at [mem 0x7fee96f7-0x7fee971e] Jan 29 11:26:40.735665 kernel: system APIC only can use physical flat Jan 29 11:26:40.735671 kernel: APIC: Switched APIC routing to: physical flat Jan 29 11:26:40.735676 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0 Jan 29 11:26:40.735681 kernel: SRAT: PXM 0 -> APIC 0x02 -> Node 0 Jan 29 11:26:40.735686 kernel: SRAT: PXM 0 -> APIC 0x04 -> Node 0 Jan 29 11:26:40.735691 kernel: SRAT: PXM 0 -> APIC 0x06 -> Node 0 Jan 29 11:26:40.735696 kernel: SRAT: PXM 0 -> APIC 0x08 -> Node 0 Jan 29 11:26:40.735703 kernel: SRAT: PXM 0 -> APIC 0x0a -> Node 0 Jan 29 11:26:40.735708 kernel: SRAT: PXM 0 -> APIC 0x0c -> Node 0 Jan 29 11:26:40.735713 kernel: SRAT: PXM 0 -> APIC 0x0e -> Node 0 Jan 29 11:26:40.735718 kernel: SRAT: PXM 0 -> APIC 0x10 -> Node 0 Jan 29 11:26:40.735723 kernel: SRAT: PXM 0 -> APIC 0x12 -> Node 0 Jan 29 11:26:40.735727 kernel: SRAT: PXM 0 -> APIC 0x14 -> Node 0 Jan 29 11:26:40.735732 kernel: SRAT: PXM 0 -> APIC 0x16 -> Node 0 Jan 29 11:26:40.735737 kernel: SRAT: PXM 0 -> APIC 0x18 -> Node 0 Jan 29 11:26:40.735742 kernel: SRAT: PXM 0 -> APIC 0x1a -> Node 0 Jan 29 11:26:40.735747 kernel: SRAT: PXM 0 -> APIC 0x1c -> Node 0 Jan 29 11:26:40.735753 kernel: SRAT: PXM 0 -> APIC 0x1e -> Node 0 Jan 29 11:26:40.735758 kernel: SRAT: PXM 0 -> APIC 0x20 -> Node 0 Jan 29 11:26:40.735763 kernel: SRAT: PXM 0 -> APIC 0x22 -> Node 0 Jan 29 11:26:40.735773 kernel: SRAT: PXM 0 -> APIC 0x24 -> Node 0 Jan 29 11:26:40.735778 kernel: SRAT: PXM 0 -> APIC 0x26 -> Node 0 Jan 29 11:26:40.735783 kernel: SRAT: PXM 0 -> APIC 0x28 -> Node 0 Jan 29 11:26:40.735788 kernel: SRAT: PXM 0 -> APIC 0x2a -> Node 0 Jan 29 11:26:40.735793 kernel: SRAT: PXM 0 -> APIC 0x2c -> Node 0 Jan 29 11:26:40.735798 kernel: SRAT: PXM 0 -> APIC 0x2e -> Node 0 Jan 29 11:26:40.735803 kernel: SRAT: PXM 0 -> APIC 0x30 -> Node 0 Jan 29 11:26:40.735808 kernel: SRAT: PXM 0 -> APIC 0x32 -> Node 0 Jan 29 11:26:40.735814 kernel: SRAT: PXM 0 -> APIC 0x34 -> Node 0 Jan 29 11:26:40.735819 kernel: SRAT: PXM 0 -> APIC 0x36 -> Node 0 Jan 29 11:26:40.735824 kernel: SRAT: PXM 0 -> APIC 0x38 -> Node 0 Jan 29 11:26:40.735829 kernel: SRAT: PXM 0 -> APIC 0x3a -> 
Node 0 Jan 29 11:26:40.735834 kernel: SRAT: PXM 0 -> APIC 0x3c -> Node 0 Jan 29 11:26:40.735839 kernel: SRAT: PXM 0 -> APIC 0x3e -> Node 0 Jan 29 11:26:40.735844 kernel: SRAT: PXM 0 -> APIC 0x40 -> Node 0 Jan 29 11:26:40.735848 kernel: SRAT: PXM 0 -> APIC 0x42 -> Node 0 Jan 29 11:26:40.735853 kernel: SRAT: PXM 0 -> APIC 0x44 -> Node 0 Jan 29 11:26:40.735858 kernel: SRAT: PXM 0 -> APIC 0x46 -> Node 0 Jan 29 11:26:40.735864 kernel: SRAT: PXM 0 -> APIC 0x48 -> Node 0 Jan 29 11:26:40.735869 kernel: SRAT: PXM 0 -> APIC 0x4a -> Node 0 Jan 29 11:26:40.735874 kernel: SRAT: PXM 0 -> APIC 0x4c -> Node 0 Jan 29 11:26:40.735879 kernel: SRAT: PXM 0 -> APIC 0x4e -> Node 0 Jan 29 11:26:40.735884 kernel: SRAT: PXM 0 -> APIC 0x50 -> Node 0 Jan 29 11:26:40.735889 kernel: SRAT: PXM 0 -> APIC 0x52 -> Node 0 Jan 29 11:26:40.735893 kernel: SRAT: PXM 0 -> APIC 0x54 -> Node 0 Jan 29 11:26:40.735898 kernel: SRAT: PXM 0 -> APIC 0x56 -> Node 0 Jan 29 11:26:40.735903 kernel: SRAT: PXM 0 -> APIC 0x58 -> Node 0 Jan 29 11:26:40.735908 kernel: SRAT: PXM 0 -> APIC 0x5a -> Node 0 Jan 29 11:26:40.735914 kernel: SRAT: PXM 0 -> APIC 0x5c -> Node 0 Jan 29 11:26:40.735919 kernel: SRAT: PXM 0 -> APIC 0x5e -> Node 0 Jan 29 11:26:40.735923 kernel: SRAT: PXM 0 -> APIC 0x60 -> Node 0 Jan 29 11:26:40.735928 kernel: SRAT: PXM 0 -> APIC 0x62 -> Node 0 Jan 29 11:26:40.735933 kernel: SRAT: PXM 0 -> APIC 0x64 -> Node 0 Jan 29 11:26:40.735938 kernel: SRAT: PXM 0 -> APIC 0x66 -> Node 0 Jan 29 11:26:40.735943 kernel: SRAT: PXM 0 -> APIC 0x68 -> Node 0 Jan 29 11:26:40.735947 kernel: SRAT: PXM 0 -> APIC 0x6a -> Node 0 Jan 29 11:26:40.735953 kernel: SRAT: PXM 0 -> APIC 0x6c -> Node 0 Jan 29 11:26:40.735957 kernel: SRAT: PXM 0 -> APIC 0x6e -> Node 0 Jan 29 11:26:40.735963 kernel: SRAT: PXM 0 -> APIC 0x70 -> Node 0 Jan 29 11:26:40.735968 kernel: SRAT: PXM 0 -> APIC 0x72 -> Node 0 Jan 29 11:26:40.735973 kernel: SRAT: PXM 0 -> APIC 0x74 -> Node 0 Jan 29 11:26:40.735982 kernel: SRAT: PXM 0 -> APIC 0x76 -> Node 0 Jan 29 11:26:40.735988 kernel: SRAT: PXM 0 -> APIC 0x78 -> Node 0 Jan 29 11:26:40.735993 kernel: SRAT: PXM 0 -> APIC 0x7a -> Node 0 Jan 29 11:26:40.735998 kernel: SRAT: PXM 0 -> APIC 0x7c -> Node 0 Jan 29 11:26:40.736003 kernel: SRAT: PXM 0 -> APIC 0x7e -> Node 0 Jan 29 11:26:40.736009 kernel: SRAT: PXM 0 -> APIC 0x80 -> Node 0 Jan 29 11:26:40.736015 kernel: SRAT: PXM 0 -> APIC 0x82 -> Node 0 Jan 29 11:26:40.736020 kernel: SRAT: PXM 0 -> APIC 0x84 -> Node 0 Jan 29 11:26:40.736025 kernel: SRAT: PXM 0 -> APIC 0x86 -> Node 0 Jan 29 11:26:40.736030 kernel: SRAT: PXM 0 -> APIC 0x88 -> Node 0 Jan 29 11:26:40.736035 kernel: SRAT: PXM 0 -> APIC 0x8a -> Node 0 Jan 29 11:26:40.736041 kernel: SRAT: PXM 0 -> APIC 0x8c -> Node 0 Jan 29 11:26:40.736046 kernel: SRAT: PXM 0 -> APIC 0x8e -> Node 0 Jan 29 11:26:40.736051 kernel: SRAT: PXM 0 -> APIC 0x90 -> Node 0 Jan 29 11:26:40.736056 kernel: SRAT: PXM 0 -> APIC 0x92 -> Node 0 Jan 29 11:26:40.736061 kernel: SRAT: PXM 0 -> APIC 0x94 -> Node 0 Jan 29 11:26:40.736068 kernel: SRAT: PXM 0 -> APIC 0x96 -> Node 0 Jan 29 11:26:40.736073 kernel: SRAT: PXM 0 -> APIC 0x98 -> Node 0 Jan 29 11:26:40.736078 kernel: SRAT: PXM 0 -> APIC 0x9a -> Node 0 Jan 29 11:26:40.736084 kernel: SRAT: PXM 0 -> APIC 0x9c -> Node 0 Jan 29 11:26:40.736089 kernel: SRAT: PXM 0 -> APIC 0x9e -> Node 0 Jan 29 11:26:40.736094 kernel: SRAT: PXM 0 -> APIC 0xa0 -> Node 0 Jan 29 11:26:40.736099 kernel: SRAT: PXM 0 -> APIC 0xa2 -> Node 0 Jan 29 11:26:40.736104 kernel: SRAT: PXM 0 -> APIC 0xa4 -> Node 0 Jan 29 11:26:40.736109 kernel: SRAT: PXM 0 -> 
APIC 0xa6 -> Node 0 Jan 29 11:26:40.736115 kernel: SRAT: PXM 0 -> APIC 0xa8 -> Node 0 Jan 29 11:26:40.736121 kernel: SRAT: PXM 0 -> APIC 0xaa -> Node 0 Jan 29 11:26:40.736126 kernel: SRAT: PXM 0 -> APIC 0xac -> Node 0 Jan 29 11:26:40.736131 kernel: SRAT: PXM 0 -> APIC 0xae -> Node 0 Jan 29 11:26:40.736137 kernel: SRAT: PXM 0 -> APIC 0xb0 -> Node 0 Jan 29 11:26:40.736142 kernel: SRAT: PXM 0 -> APIC 0xb2 -> Node 0 Jan 29 11:26:40.736147 kernel: SRAT: PXM 0 -> APIC 0xb4 -> Node 0 Jan 29 11:26:40.736152 kernel: SRAT: PXM 0 -> APIC 0xb6 -> Node 0 Jan 29 11:26:40.736157 kernel: SRAT: PXM 0 -> APIC 0xb8 -> Node 0 Jan 29 11:26:40.736162 kernel: SRAT: PXM 0 -> APIC 0xba -> Node 0 Jan 29 11:26:40.736168 kernel: SRAT: PXM 0 -> APIC 0xbc -> Node 0 Jan 29 11:26:40.736173 kernel: SRAT: PXM 0 -> APIC 0xbe -> Node 0 Jan 29 11:26:40.736179 kernel: SRAT: PXM 0 -> APIC 0xc0 -> Node 0 Jan 29 11:26:40.736184 kernel: SRAT: PXM 0 -> APIC 0xc2 -> Node 0 Jan 29 11:26:40.736189 kernel: SRAT: PXM 0 -> APIC 0xc4 -> Node 0 Jan 29 11:26:40.736195 kernel: SRAT: PXM 0 -> APIC 0xc6 -> Node 0 Jan 29 11:26:40.736200 kernel: SRAT: PXM 0 -> APIC 0xc8 -> Node 0 Jan 29 11:26:40.736205 kernel: SRAT: PXM 0 -> APIC 0xca -> Node 0 Jan 29 11:26:40.736210 kernel: SRAT: PXM 0 -> APIC 0xcc -> Node 0 Jan 29 11:26:40.736215 kernel: SRAT: PXM 0 -> APIC 0xce -> Node 0 Jan 29 11:26:40.736221 kernel: SRAT: PXM 0 -> APIC 0xd0 -> Node 0 Jan 29 11:26:40.736226 kernel: SRAT: PXM 0 -> APIC 0xd2 -> Node 0 Jan 29 11:26:40.736232 kernel: SRAT: PXM 0 -> APIC 0xd4 -> Node 0 Jan 29 11:26:40.736237 kernel: SRAT: PXM 0 -> APIC 0xd6 -> Node 0 Jan 29 11:26:40.736243 kernel: SRAT: PXM 0 -> APIC 0xd8 -> Node 0 Jan 29 11:26:40.736248 kernel: SRAT: PXM 0 -> APIC 0xda -> Node 0 Jan 29 11:26:40.736253 kernel: SRAT: PXM 0 -> APIC 0xdc -> Node 0 Jan 29 11:26:40.736258 kernel: SRAT: PXM 0 -> APIC 0xde -> Node 0 Jan 29 11:26:40.736263 kernel: SRAT: PXM 0 -> APIC 0xe0 -> Node 0 Jan 29 11:26:40.736268 kernel: SRAT: PXM 0 -> APIC 0xe2 -> Node 0 Jan 29 11:26:40.736274 kernel: SRAT: PXM 0 -> APIC 0xe4 -> Node 0 Jan 29 11:26:40.736279 kernel: SRAT: PXM 0 -> APIC 0xe6 -> Node 0 Jan 29 11:26:40.736285 kernel: SRAT: PXM 0 -> APIC 0xe8 -> Node 0 Jan 29 11:26:40.736290 kernel: SRAT: PXM 0 -> APIC 0xea -> Node 0 Jan 29 11:26:40.736295 kernel: SRAT: PXM 0 -> APIC 0xec -> Node 0 Jan 29 11:26:40.736300 kernel: SRAT: PXM 0 -> APIC 0xee -> Node 0 Jan 29 11:26:40.736306 kernel: SRAT: PXM 0 -> APIC 0xf0 -> Node 0 Jan 29 11:26:40.736311 kernel: SRAT: PXM 0 -> APIC 0xf2 -> Node 0 Jan 29 11:26:40.736316 kernel: SRAT: PXM 0 -> APIC 0xf4 -> Node 0 Jan 29 11:26:40.736321 kernel: SRAT: PXM 0 -> APIC 0xf6 -> Node 0 Jan 29 11:26:40.736326 kernel: SRAT: PXM 0 -> APIC 0xf8 -> Node 0 Jan 29 11:26:40.736332 kernel: SRAT: PXM 0 -> APIC 0xfa -> Node 0 Jan 29 11:26:40.736337 kernel: SRAT: PXM 0 -> APIC 0xfc -> Node 0 Jan 29 11:26:40.736343 kernel: SRAT: PXM 0 -> APIC 0xfe -> Node 0 Jan 29 11:26:40.736348 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff] Jan 29 11:26:40.736354 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff] Jan 29 11:26:40.736359 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000-0xbfffffff] hotplug Jan 29 11:26:40.736364 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0x7fffffff] -> [mem 0x00000000-0x7fffffff] Jan 29 11:26:40.736370 kernel: NODE_DATA(0) allocated [mem 0x7fffa000-0x7fffffff] Jan 29 11:26:40.736375 kernel: Zone ranges: Jan 29 11:26:40.736381 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Jan 29 11:26:40.736386 kernel: 
DMA32 [mem 0x0000000001000000-0x000000007fffffff] Jan 29 11:26:40.736393 kernel: Normal empty Jan 29 11:26:40.736398 kernel: Movable zone start for each node Jan 29 11:26:40.736403 kernel: Early memory node ranges Jan 29 11:26:40.736409 kernel: node 0: [mem 0x0000000000001000-0x000000000009dfff] Jan 29 11:26:40.736414 kernel: node 0: [mem 0x0000000000100000-0x000000007fedffff] Jan 29 11:26:40.736419 kernel: node 0: [mem 0x000000007ff00000-0x000000007fffffff] Jan 29 11:26:40.736425 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007fffffff] Jan 29 11:26:40.736430 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Jan 29 11:26:40.736435 kernel: On node 0, zone DMA: 98 pages in unavailable ranges Jan 29 11:26:40.736441 kernel: On node 0, zone DMA32: 32 pages in unavailable ranges Jan 29 11:26:40.736447 kernel: ACPI: PM-Timer IO Port: 0x1008 Jan 29 11:26:40.736452 kernel: system APIC only can use physical flat Jan 29 11:26:40.736457 kernel: ACPI: LAPIC_NMI (acpi_id[0x00] high edge lint[0x1]) Jan 29 11:26:40.736463 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] high edge lint[0x1]) Jan 29 11:26:40.736468 kernel: ACPI: LAPIC_NMI (acpi_id[0x02] high edge lint[0x1]) Jan 29 11:26:40.736473 kernel: ACPI: LAPIC_NMI (acpi_id[0x03] high edge lint[0x1]) Jan 29 11:26:40.736478 kernel: ACPI: LAPIC_NMI (acpi_id[0x04] high edge lint[0x1]) Jan 29 11:26:40.736484 kernel: ACPI: LAPIC_NMI (acpi_id[0x05] high edge lint[0x1]) Jan 29 11:26:40.736489 kernel: ACPI: LAPIC_NMI (acpi_id[0x06] high edge lint[0x1]) Jan 29 11:26:40.736495 kernel: ACPI: LAPIC_NMI (acpi_id[0x07] high edge lint[0x1]) Jan 29 11:26:40.736501 kernel: ACPI: LAPIC_NMI (acpi_id[0x08] high edge lint[0x1]) Jan 29 11:26:40.736506 kernel: ACPI: LAPIC_NMI (acpi_id[0x09] high edge lint[0x1]) Jan 29 11:26:40.736511 kernel: ACPI: LAPIC_NMI (acpi_id[0x0a] high edge lint[0x1]) Jan 29 11:26:40.736516 kernel: ACPI: LAPIC_NMI (acpi_id[0x0b] high edge lint[0x1]) Jan 29 11:26:40.736521 kernel: ACPI: LAPIC_NMI (acpi_id[0x0c] high edge lint[0x1]) Jan 29 11:26:40.736527 kernel: ACPI: LAPIC_NMI (acpi_id[0x0d] high edge lint[0x1]) Jan 29 11:26:40.736532 kernel: ACPI: LAPIC_NMI (acpi_id[0x0e] high edge lint[0x1]) Jan 29 11:26:40.736537 kernel: ACPI: LAPIC_NMI (acpi_id[0x0f] high edge lint[0x1]) Jan 29 11:26:40.736542 kernel: ACPI: LAPIC_NMI (acpi_id[0x10] high edge lint[0x1]) Jan 29 11:26:40.736549 kernel: ACPI: LAPIC_NMI (acpi_id[0x11] high edge lint[0x1]) Jan 29 11:26:40.736554 kernel: ACPI: LAPIC_NMI (acpi_id[0x12] high edge lint[0x1]) Jan 29 11:26:40.736559 kernel: ACPI: LAPIC_NMI (acpi_id[0x13] high edge lint[0x1]) Jan 29 11:26:40.736564 kernel: ACPI: LAPIC_NMI (acpi_id[0x14] high edge lint[0x1]) Jan 29 11:26:40.736569 kernel: ACPI: LAPIC_NMI (acpi_id[0x15] high edge lint[0x1]) Jan 29 11:26:40.736575 kernel: ACPI: LAPIC_NMI (acpi_id[0x16] high edge lint[0x1]) Jan 29 11:26:40.736580 kernel: ACPI: LAPIC_NMI (acpi_id[0x17] high edge lint[0x1]) Jan 29 11:26:40.736585 kernel: ACPI: LAPIC_NMI (acpi_id[0x18] high edge lint[0x1]) Jan 29 11:26:40.736590 kernel: ACPI: LAPIC_NMI (acpi_id[0x19] high edge lint[0x1]) Jan 29 11:26:40.736596 kernel: ACPI: LAPIC_NMI (acpi_id[0x1a] high edge lint[0x1]) Jan 29 11:26:40.736602 kernel: ACPI: LAPIC_NMI (acpi_id[0x1b] high edge lint[0x1]) Jan 29 11:26:40.736607 kernel: ACPI: LAPIC_NMI (acpi_id[0x1c] high edge lint[0x1]) Jan 29 11:26:40.736612 kernel: ACPI: LAPIC_NMI (acpi_id[0x1d] high edge lint[0x1]) Jan 29 11:26:40.736617 kernel: ACPI: LAPIC_NMI (acpi_id[0x1e] high edge lint[0x1]) Jan 29 11:26:40.736623 kernel: ACPI: 
LAPIC_NMI (acpi_id[0x1f] high edge lint[0x1]) Jan 29 11:26:40.736628 kernel: ACPI: LAPIC_NMI (acpi_id[0x20] high edge lint[0x1]) Jan 29 11:26:40.736633 kernel: ACPI: LAPIC_NMI (acpi_id[0x21] high edge lint[0x1]) Jan 29 11:26:40.736638 kernel: ACPI: LAPIC_NMI (acpi_id[0x22] high edge lint[0x1]) Jan 29 11:26:40.736644 kernel: ACPI: LAPIC_NMI (acpi_id[0x23] high edge lint[0x1]) Jan 29 11:26:40.736650 kernel: ACPI: LAPIC_NMI (acpi_id[0x24] high edge lint[0x1]) Jan 29 11:26:40.736975 kernel: ACPI: LAPIC_NMI (acpi_id[0x25] high edge lint[0x1]) Jan 29 11:26:40.738958 kernel: ACPI: LAPIC_NMI (acpi_id[0x26] high edge lint[0x1]) Jan 29 11:26:40.738966 kernel: ACPI: LAPIC_NMI (acpi_id[0x27] high edge lint[0x1]) Jan 29 11:26:40.738972 kernel: ACPI: LAPIC_NMI (acpi_id[0x28] high edge lint[0x1]) Jan 29 11:26:40.738977 kernel: ACPI: LAPIC_NMI (acpi_id[0x29] high edge lint[0x1]) Jan 29 11:26:40.738983 kernel: ACPI: LAPIC_NMI (acpi_id[0x2a] high edge lint[0x1]) Jan 29 11:26:40.738988 kernel: ACPI: LAPIC_NMI (acpi_id[0x2b] high edge lint[0x1]) Jan 29 11:26:40.738993 kernel: ACPI: LAPIC_NMI (acpi_id[0x2c] high edge lint[0x1]) Jan 29 11:26:40.738998 kernel: ACPI: LAPIC_NMI (acpi_id[0x2d] high edge lint[0x1]) Jan 29 11:26:40.739006 kernel: ACPI: LAPIC_NMI (acpi_id[0x2e] high edge lint[0x1]) Jan 29 11:26:40.739012 kernel: ACPI: LAPIC_NMI (acpi_id[0x2f] high edge lint[0x1]) Jan 29 11:26:40.739017 kernel: ACPI: LAPIC_NMI (acpi_id[0x30] high edge lint[0x1]) Jan 29 11:26:40.739022 kernel: ACPI: LAPIC_NMI (acpi_id[0x31] high edge lint[0x1]) Jan 29 11:26:40.739027 kernel: ACPI: LAPIC_NMI (acpi_id[0x32] high edge lint[0x1]) Jan 29 11:26:40.739033 kernel: ACPI: LAPIC_NMI (acpi_id[0x33] high edge lint[0x1]) Jan 29 11:26:40.739038 kernel: ACPI: LAPIC_NMI (acpi_id[0x34] high edge lint[0x1]) Jan 29 11:26:40.739043 kernel: ACPI: LAPIC_NMI (acpi_id[0x35] high edge lint[0x1]) Jan 29 11:26:40.739048 kernel: ACPI: LAPIC_NMI (acpi_id[0x36] high edge lint[0x1]) Jan 29 11:26:40.739055 kernel: ACPI: LAPIC_NMI (acpi_id[0x37] high edge lint[0x1]) Jan 29 11:26:40.739060 kernel: ACPI: LAPIC_NMI (acpi_id[0x38] high edge lint[0x1]) Jan 29 11:26:40.739065 kernel: ACPI: LAPIC_NMI (acpi_id[0x39] high edge lint[0x1]) Jan 29 11:26:40.739070 kernel: ACPI: LAPIC_NMI (acpi_id[0x3a] high edge lint[0x1]) Jan 29 11:26:40.739076 kernel: ACPI: LAPIC_NMI (acpi_id[0x3b] high edge lint[0x1]) Jan 29 11:26:40.739081 kernel: ACPI: LAPIC_NMI (acpi_id[0x3c] high edge lint[0x1]) Jan 29 11:26:40.739086 kernel: ACPI: LAPIC_NMI (acpi_id[0x3d] high edge lint[0x1]) Jan 29 11:26:40.739091 kernel: ACPI: LAPIC_NMI (acpi_id[0x3e] high edge lint[0x1]) Jan 29 11:26:40.739097 kernel: ACPI: LAPIC_NMI (acpi_id[0x3f] high edge lint[0x1]) Jan 29 11:26:40.739102 kernel: ACPI: LAPIC_NMI (acpi_id[0x40] high edge lint[0x1]) Jan 29 11:26:40.739108 kernel: ACPI: LAPIC_NMI (acpi_id[0x41] high edge lint[0x1]) Jan 29 11:26:40.739114 kernel: ACPI: LAPIC_NMI (acpi_id[0x42] high edge lint[0x1]) Jan 29 11:26:40.739119 kernel: ACPI: LAPIC_NMI (acpi_id[0x43] high edge lint[0x1]) Jan 29 11:26:40.739124 kernel: ACPI: LAPIC_NMI (acpi_id[0x44] high edge lint[0x1]) Jan 29 11:26:40.739129 kernel: ACPI: LAPIC_NMI (acpi_id[0x45] high edge lint[0x1]) Jan 29 11:26:40.739134 kernel: ACPI: LAPIC_NMI (acpi_id[0x46] high edge lint[0x1]) Jan 29 11:26:40.739140 kernel: ACPI: LAPIC_NMI (acpi_id[0x47] high edge lint[0x1]) Jan 29 11:26:40.739145 kernel: ACPI: LAPIC_NMI (acpi_id[0x48] high edge lint[0x1]) Jan 29 11:26:40.739150 kernel: ACPI: LAPIC_NMI (acpi_id[0x49] high edge lint[0x1]) Jan 29 11:26:40.739155 
kernel: ACPI: LAPIC_NMI (acpi_id[0x4a] high edge lint[0x1]) Jan 29 11:26:40.739162 kernel: ACPI: LAPIC_NMI (acpi_id[0x4b] high edge lint[0x1]) Jan 29 11:26:40.739167 kernel: ACPI: LAPIC_NMI (acpi_id[0x4c] high edge lint[0x1]) Jan 29 11:26:40.739172 kernel: ACPI: LAPIC_NMI (acpi_id[0x4d] high edge lint[0x1]) Jan 29 11:26:40.739178 kernel: ACPI: LAPIC_NMI (acpi_id[0x4e] high edge lint[0x1]) Jan 29 11:26:40.739183 kernel: ACPI: LAPIC_NMI (acpi_id[0x4f] high edge lint[0x1]) Jan 29 11:26:40.739188 kernel: ACPI: LAPIC_NMI (acpi_id[0x50] high edge lint[0x1]) Jan 29 11:26:40.739193 kernel: ACPI: LAPIC_NMI (acpi_id[0x51] high edge lint[0x1]) Jan 29 11:26:40.739198 kernel: ACPI: LAPIC_NMI (acpi_id[0x52] high edge lint[0x1]) Jan 29 11:26:40.739203 kernel: ACPI: LAPIC_NMI (acpi_id[0x53] high edge lint[0x1]) Jan 29 11:26:40.739209 kernel: ACPI: LAPIC_NMI (acpi_id[0x54] high edge lint[0x1]) Jan 29 11:26:40.739215 kernel: ACPI: LAPIC_NMI (acpi_id[0x55] high edge lint[0x1]) Jan 29 11:26:40.739220 kernel: ACPI: LAPIC_NMI (acpi_id[0x56] high edge lint[0x1]) Jan 29 11:26:40.739226 kernel: ACPI: LAPIC_NMI (acpi_id[0x57] high edge lint[0x1]) Jan 29 11:26:40.739231 kernel: ACPI: LAPIC_NMI (acpi_id[0x58] high edge lint[0x1]) Jan 29 11:26:40.739236 kernel: ACPI: LAPIC_NMI (acpi_id[0x59] high edge lint[0x1]) Jan 29 11:26:40.739241 kernel: ACPI: LAPIC_NMI (acpi_id[0x5a] high edge lint[0x1]) Jan 29 11:26:40.739247 kernel: ACPI: LAPIC_NMI (acpi_id[0x5b] high edge lint[0x1]) Jan 29 11:26:40.739252 kernel: ACPI: LAPIC_NMI (acpi_id[0x5c] high edge lint[0x1]) Jan 29 11:26:40.739257 kernel: ACPI: LAPIC_NMI (acpi_id[0x5d] high edge lint[0x1]) Jan 29 11:26:40.739263 kernel: ACPI: LAPIC_NMI (acpi_id[0x5e] high edge lint[0x1]) Jan 29 11:26:40.739268 kernel: ACPI: LAPIC_NMI (acpi_id[0x5f] high edge lint[0x1]) Jan 29 11:26:40.739274 kernel: ACPI: LAPIC_NMI (acpi_id[0x60] high edge lint[0x1]) Jan 29 11:26:40.739279 kernel: ACPI: LAPIC_NMI (acpi_id[0x61] high edge lint[0x1]) Jan 29 11:26:40.739284 kernel: ACPI: LAPIC_NMI (acpi_id[0x62] high edge lint[0x1]) Jan 29 11:26:40.739289 kernel: ACPI: LAPIC_NMI (acpi_id[0x63] high edge lint[0x1]) Jan 29 11:26:40.739295 kernel: ACPI: LAPIC_NMI (acpi_id[0x64] high edge lint[0x1]) Jan 29 11:26:40.739300 kernel: ACPI: LAPIC_NMI (acpi_id[0x65] high edge lint[0x1]) Jan 29 11:26:40.739305 kernel: ACPI: LAPIC_NMI (acpi_id[0x66] high edge lint[0x1]) Jan 29 11:26:40.739310 kernel: ACPI: LAPIC_NMI (acpi_id[0x67] high edge lint[0x1]) Jan 29 11:26:40.739316 kernel: ACPI: LAPIC_NMI (acpi_id[0x68] high edge lint[0x1]) Jan 29 11:26:40.739322 kernel: ACPI: LAPIC_NMI (acpi_id[0x69] high edge lint[0x1]) Jan 29 11:26:40.739327 kernel: ACPI: LAPIC_NMI (acpi_id[0x6a] high edge lint[0x1]) Jan 29 11:26:40.739332 kernel: ACPI: LAPIC_NMI (acpi_id[0x6b] high edge lint[0x1]) Jan 29 11:26:40.739337 kernel: ACPI: LAPIC_NMI (acpi_id[0x6c] high edge lint[0x1]) Jan 29 11:26:40.739343 kernel: ACPI: LAPIC_NMI (acpi_id[0x6d] high edge lint[0x1]) Jan 29 11:26:40.739348 kernel: ACPI: LAPIC_NMI (acpi_id[0x6e] high edge lint[0x1]) Jan 29 11:26:40.739353 kernel: ACPI: LAPIC_NMI (acpi_id[0x6f] high edge lint[0x1]) Jan 29 11:26:40.739358 kernel: ACPI: LAPIC_NMI (acpi_id[0x70] high edge lint[0x1]) Jan 29 11:26:40.739365 kernel: ACPI: LAPIC_NMI (acpi_id[0x71] high edge lint[0x1]) Jan 29 11:26:40.739370 kernel: ACPI: LAPIC_NMI (acpi_id[0x72] high edge lint[0x1]) Jan 29 11:26:40.739376 kernel: ACPI: LAPIC_NMI (acpi_id[0x73] high edge lint[0x1]) Jan 29 11:26:40.739381 kernel: ACPI: LAPIC_NMI (acpi_id[0x74] high edge lint[0x1]) Jan 29 
11:26:40.739386 kernel: ACPI: LAPIC_NMI (acpi_id[0x75] high edge lint[0x1]) Jan 29 11:26:40.739392 kernel: ACPI: LAPIC_NMI (acpi_id[0x76] high edge lint[0x1]) Jan 29 11:26:40.739397 kernel: ACPI: LAPIC_NMI (acpi_id[0x77] high edge lint[0x1]) Jan 29 11:26:40.739402 kernel: ACPI: LAPIC_NMI (acpi_id[0x78] high edge lint[0x1]) Jan 29 11:26:40.739407 kernel: ACPI: LAPIC_NMI (acpi_id[0x79] high edge lint[0x1]) Jan 29 11:26:40.739412 kernel: ACPI: LAPIC_NMI (acpi_id[0x7a] high edge lint[0x1]) Jan 29 11:26:40.739419 kernel: ACPI: LAPIC_NMI (acpi_id[0x7b] high edge lint[0x1]) Jan 29 11:26:40.739424 kernel: ACPI: LAPIC_NMI (acpi_id[0x7c] high edge lint[0x1]) Jan 29 11:26:40.739429 kernel: ACPI: LAPIC_NMI (acpi_id[0x7d] high edge lint[0x1]) Jan 29 11:26:40.739435 kernel: ACPI: LAPIC_NMI (acpi_id[0x7e] high edge lint[0x1]) Jan 29 11:26:40.739440 kernel: ACPI: LAPIC_NMI (acpi_id[0x7f] high edge lint[0x1]) Jan 29 11:26:40.739446 kernel: IOAPIC[0]: apic_id 1, version 17, address 0xfec00000, GSI 0-23 Jan 29 11:26:40.739451 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 high edge) Jan 29 11:26:40.739456 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Jan 29 11:26:40.739462 kernel: ACPI: HPET id: 0x8086af01 base: 0xfed00000 Jan 29 11:26:40.739467 kernel: TSC deadline timer available Jan 29 11:26:40.739474 kernel: smpboot: Allowing 128 CPUs, 126 hotplug CPUs Jan 29 11:26:40.739479 kernel: [mem 0x80000000-0xefffffff] available for PCI devices Jan 29 11:26:40.739484 kernel: Booting paravirtualized kernel on VMware hypervisor Jan 29 11:26:40.739490 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Jan 29 11:26:40.739495 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:128 nr_cpu_ids:128 nr_node_ids:1 Jan 29 11:26:40.739501 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u262144 Jan 29 11:26:40.739507 kernel: pcpu-alloc: s197032 r8192 d32344 u262144 alloc=1*2097152 Jan 29 11:26:40.739512 kernel: pcpu-alloc: [0] 000 001 002 003 004 005 006 007 Jan 29 11:26:40.739517 kernel: pcpu-alloc: [0] 008 009 010 011 012 013 014 015 Jan 29 11:26:40.739523 kernel: pcpu-alloc: [0] 016 017 018 019 020 021 022 023 Jan 29 11:26:40.739529 kernel: pcpu-alloc: [0] 024 025 026 027 028 029 030 031 Jan 29 11:26:40.739534 kernel: pcpu-alloc: [0] 032 033 034 035 036 037 038 039 Jan 29 11:26:40.739547 kernel: pcpu-alloc: [0] 040 041 042 043 044 045 046 047 Jan 29 11:26:40.739553 kernel: pcpu-alloc: [0] 048 049 050 051 052 053 054 055 Jan 29 11:26:40.739559 kernel: pcpu-alloc: [0] 056 057 058 059 060 061 062 063 Jan 29 11:26:40.739564 kernel: pcpu-alloc: [0] 064 065 066 067 068 069 070 071 Jan 29 11:26:40.739570 kernel: pcpu-alloc: [0] 072 073 074 075 076 077 078 079 Jan 29 11:26:40.739577 kernel: pcpu-alloc: [0] 080 081 082 083 084 085 086 087 Jan 29 11:26:40.739582 kernel: pcpu-alloc: [0] 088 089 090 091 092 093 094 095 Jan 29 11:26:40.739588 kernel: pcpu-alloc: [0] 096 097 098 099 100 101 102 103 Jan 29 11:26:40.739593 kernel: pcpu-alloc: [0] 104 105 106 107 108 109 110 111 Jan 29 11:26:40.739599 kernel: pcpu-alloc: [0] 112 113 114 115 116 117 118 119 Jan 29 11:26:40.739605 kernel: pcpu-alloc: [0] 120 121 122 123 124 125 126 127 Jan 29 11:26:40.739611 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 
flatcar.first_boot=detected flatcar.oem.id=vmware flatcar.autologin verity.usrhash=519b8fded83181f8e61f734d5291f916d7548bfba9487c78bcb50d002d81719d Jan 29 11:26:40.739617 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Jan 29 11:26:40.739624 kernel: random: crng init done Jan 29 11:26:40.739630 kernel: printk: log_buf_len individual max cpu contribution: 4096 bytes Jan 29 11:26:40.739635 kernel: printk: log_buf_len total cpu_extra contributions: 520192 bytes Jan 29 11:26:40.739641 kernel: printk: log_buf_len min size: 262144 bytes Jan 29 11:26:40.739646 kernel: printk: log_buf_len: 1048576 bytes Jan 29 11:26:40.739652 kernel: printk: early log buf free: 239648(91%) Jan 29 11:26:40.739668 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jan 29 11:26:40.739674 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear) Jan 29 11:26:40.739680 kernel: Fallback order for Node 0: 0 Jan 29 11:26:40.739688 kernel: Built 1 zonelists, mobility grouping on. Total pages: 515808 Jan 29 11:26:40.739694 kernel: Policy zone: DMA32 Jan 29 11:26:40.739699 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jan 29 11:26:40.739705 kernel: Memory: 1936352K/2096628K available (12288K kernel code, 2301K rwdata, 22736K rodata, 42972K init, 2220K bss, 160016K reserved, 0K cma-reserved) Jan 29 11:26:40.739712 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=128, Nodes=1 Jan 29 11:26:40.739718 kernel: ftrace: allocating 37923 entries in 149 pages Jan 29 11:26:40.739725 kernel: ftrace: allocated 149 pages with 4 groups Jan 29 11:26:40.739730 kernel: Dynamic Preempt: voluntary Jan 29 11:26:40.739736 kernel: rcu: Preemptible hierarchical RCU implementation. Jan 29 11:26:40.739742 kernel: rcu: RCU event tracing is enabled. Jan 29 11:26:40.739748 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=128. Jan 29 11:26:40.739754 kernel: Trampoline variant of Tasks RCU enabled. Jan 29 11:26:40.739760 kernel: Rude variant of Tasks RCU enabled. Jan 29 11:26:40.739765 kernel: Tracing variant of Tasks RCU enabled. Jan 29 11:26:40.739771 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Jan 29 11:26:40.739778 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=128 Jan 29 11:26:40.739784 kernel: NR_IRQS: 33024, nr_irqs: 1448, preallocated irqs: 16 Jan 29 11:26:40.739789 kernel: rcu: srcu_init: Setting srcu_struct sizes to big. Jan 29 11:26:40.739795 kernel: Console: colour VGA+ 80x25 Jan 29 11:26:40.739800 kernel: printk: console [tty0] enabled Jan 29 11:26:40.739806 kernel: printk: console [ttyS0] enabled Jan 29 11:26:40.739812 kernel: ACPI: Core revision 20230628 Jan 29 11:26:40.739818 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 133484882848 ns Jan 29 11:26:40.739824 kernel: APIC: Switch to symmetric I/O mode setup Jan 29 11:26:40.739829 kernel: x2apic enabled Jan 29 11:26:40.739836 kernel: APIC: Switched APIC routing to: physical x2apic Jan 29 11:26:40.739842 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Jan 29 11:26:40.739848 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x311fd3cd494, max_idle_ns: 440795223879 ns Jan 29 11:26:40.739853 kernel: Calibrating delay loop (skipped) preset value.. 
6816.00 BogoMIPS (lpj=3408000) Jan 29 11:26:40.739859 kernel: Disabled fast string operations Jan 29 11:26:40.739865 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8 Jan 29 11:26:40.739871 kernel: Last level dTLB entries: 4KB 64, 2MB 0, 4MB 0, 1GB 4 Jan 29 11:26:40.739876 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Jan 29 11:26:40.739886 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on vm exit Jan 29 11:26:40.739893 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall Jan 29 11:26:40.739899 kernel: Spectre V2 : Mitigation: Enhanced / Automatic IBRS Jan 29 11:26:40.739904 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Jan 29 11:26:40.739910 kernel: Spectre V2 : Spectre v2 / PBRSB-eIBRS: Retire a single CALL on VMEXIT Jan 29 11:26:40.739916 kernel: RETBleed: Mitigation: Enhanced IBRS Jan 29 11:26:40.739922 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Jan 29 11:26:40.739928 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Jan 29 11:26:40.739933 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Jan 29 11:26:40.739941 kernel: SRBDS: Unknown: Dependent on hypervisor status Jan 29 11:26:40.739946 kernel: GDS: Unknown: Dependent on hypervisor status Jan 29 11:26:40.739952 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Jan 29 11:26:40.739958 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Jan 29 11:26:40.739963 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Jan 29 11:26:40.739969 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Jan 29 11:26:40.739975 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format. Jan 29 11:26:40.739980 kernel: Freeing SMP alternatives memory: 32K Jan 29 11:26:40.739986 kernel: pid_max: default: 131072 minimum: 1024 Jan 29 11:26:40.739993 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Jan 29 11:26:40.739999 kernel: landlock: Up and running. Jan 29 11:26:40.740004 kernel: SELinux: Initializing. Jan 29 11:26:40.740010 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Jan 29 11:26:40.740016 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Jan 29 11:26:40.740022 kernel: smpboot: CPU0: Intel(R) Xeon(R) E-2278G CPU @ 3.40GHz (family: 0x6, model: 0x9e, stepping: 0xd) Jan 29 11:26:40.740028 kernel: RCU Tasks: Setting shift to 7 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=128. Jan 29 11:26:40.740033 kernel: RCU Tasks Rude: Setting shift to 7 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=128. Jan 29 11:26:40.740039 kernel: RCU Tasks Trace: Setting shift to 7 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=128. Jan 29 11:26:40.740046 kernel: Performance Events: Skylake events, core PMU driver. 
Jan 29 11:26:40.740052 kernel: core: CPUID marked event: 'cpu cycles' unavailable Jan 29 11:26:40.740058 kernel: core: CPUID marked event: 'instructions' unavailable Jan 29 11:26:40.740063 kernel: core: CPUID marked event: 'bus cycles' unavailable Jan 29 11:26:40.740069 kernel: core: CPUID marked event: 'cache references' unavailable Jan 29 11:26:40.740074 kernel: core: CPUID marked event: 'cache misses' unavailable Jan 29 11:26:40.740080 kernel: core: CPUID marked event: 'branch instructions' unavailable Jan 29 11:26:40.740087 kernel: core: CPUID marked event: 'branch misses' unavailable Jan 29 11:26:40.740093 kernel: ... version: 1 Jan 29 11:26:40.740099 kernel: ... bit width: 48 Jan 29 11:26:40.740105 kernel: ... generic registers: 4 Jan 29 11:26:40.740110 kernel: ... value mask: 0000ffffffffffff Jan 29 11:26:40.740116 kernel: ... max period: 000000007fffffff Jan 29 11:26:40.740122 kernel: ... fixed-purpose events: 0 Jan 29 11:26:40.740127 kernel: ... event mask: 000000000000000f Jan 29 11:26:40.740133 kernel: signal: max sigframe size: 1776 Jan 29 11:26:40.740138 kernel: rcu: Hierarchical SRCU implementation. Jan 29 11:26:40.740144 kernel: rcu: Max phase no-delay instances is 400. Jan 29 11:26:40.740151 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Jan 29 11:26:40.740157 kernel: smp: Bringing up secondary CPUs ... Jan 29 11:26:40.740163 kernel: smpboot: x86: Booting SMP configuration: Jan 29 11:26:40.740168 kernel: .... node #0, CPUs: #1 Jan 29 11:26:40.740174 kernel: Disabled fast string operations Jan 29 11:26:40.740180 kernel: smpboot: CPU 1 Converting physical 2 to logical package 1 Jan 29 11:26:40.740185 kernel: smpboot: CPU 1 Converting physical 0 to logical die 1 Jan 29 11:26:40.740191 kernel: smp: Brought up 1 node, 2 CPUs Jan 29 11:26:40.740197 kernel: smpboot: Max logical packages: 128 Jan 29 11:26:40.740202 kernel: smpboot: Total of 2 processors activated (13632.00 BogoMIPS) Jan 29 11:26:40.740210 kernel: devtmpfs: initialized Jan 29 11:26:40.740216 kernel: x86/mm: Memory block size: 128MB Jan 29 11:26:40.740221 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x7feff000-0x7fefffff] (4096 bytes) Jan 29 11:26:40.740227 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jan 29 11:26:40.740233 kernel: futex hash table entries: 32768 (order: 9, 2097152 bytes, linear) Jan 29 11:26:40.740239 kernel: pinctrl core: initialized pinctrl subsystem Jan 29 11:26:40.740244 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jan 29 11:26:40.740250 kernel: audit: initializing netlink subsys (disabled) Jan 29 11:26:40.740279 kernel: audit: type=2000 audit(1738149999.066:1): state=initialized audit_enabled=0 res=1 Jan 29 11:26:40.740286 kernel: thermal_sys: Registered thermal governor 'step_wise' Jan 29 11:26:40.740292 kernel: thermal_sys: Registered thermal governor 'user_space' Jan 29 11:26:40.740298 kernel: cpuidle: using governor menu Jan 29 11:26:40.740320 kernel: Simple Boot Flag at 0x36 set to 0x80 Jan 29 11:26:40.740326 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jan 29 11:26:40.740332 kernel: dca service started, version 1.12.1 Jan 29 11:26:40.740337 kernel: PCI: MMCONFIG for domain 0000 [bus 00-7f] at [mem 0xf0000000-0xf7ffffff] (base 0xf0000000) Jan 29 11:26:40.740343 kernel: PCI: Using configuration type 1 for base access Jan 29 11:26:40.740350 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Jan 29 11:26:40.740356 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jan 29 11:26:40.740361 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Jan 29 11:26:40.740367 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jan 29 11:26:40.740373 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Jan 29 11:26:40.740378 kernel: ACPI: Added _OSI(Module Device) Jan 29 11:26:40.740384 kernel: ACPI: Added _OSI(Processor Device) Jan 29 11:26:40.740390 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Jan 29 11:26:40.740395 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jan 29 11:26:40.740402 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jan 29 11:26:40.740408 kernel: ACPI: [Firmware Bug]: BIOS _OSI(Linux) query ignored Jan 29 11:26:40.740414 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Jan 29 11:26:40.740419 kernel: ACPI: Interpreter enabled Jan 29 11:26:40.740425 kernel: ACPI: PM: (supports S0 S1 S5) Jan 29 11:26:40.740431 kernel: ACPI: Using IOAPIC for interrupt routing Jan 29 11:26:40.740436 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Jan 29 11:26:40.740442 kernel: PCI: Using E820 reservations for host bridge windows Jan 29 11:26:40.740448 kernel: ACPI: Enabled 4 GPEs in block 00 to 0F Jan 29 11:26:40.740455 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-7f]) Jan 29 11:26:40.740529 kernel: acpi PNP0A03:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Jan 29 11:26:40.740584 kernel: acpi PNP0A03:00: _OSC: platform does not support [AER LTR] Jan 29 11:26:40.740634 kernel: acpi PNP0A03:00: _OSC: OS now controls [PCIeHotplug PME PCIeCapability] Jan 29 11:26:40.740643 kernel: PCI host bridge to bus 0000:00 Jan 29 11:26:40.740709 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Jan 29 11:26:40.740755 kernel: pci_bus 0000:00: root bus resource [mem 0x000cc000-0x000dbfff window] Jan 29 11:26:40.740801 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] Jan 29 11:26:40.740845 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Jan 29 11:26:40.740889 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xfeff window] Jan 29 11:26:40.740932 kernel: pci_bus 0000:00: root bus resource [bus 00-7f] Jan 29 11:26:40.740989 kernel: pci 0000:00:00.0: [8086:7190] type 00 class 0x060000 Jan 29 11:26:40.741043 kernel: pci 0000:00:01.0: [8086:7191] type 01 class 0x060400 Jan 29 11:26:40.741103 kernel: pci 0000:00:07.0: [8086:7110] type 00 class 0x060100 Jan 29 11:26:40.741157 kernel: pci 0000:00:07.1: [8086:7111] type 00 class 0x01018a Jan 29 11:26:40.741206 kernel: pci 0000:00:07.1: reg 0x20: [io 0x1060-0x106f] Jan 29 11:26:40.741254 kernel: pci 0000:00:07.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7] Jan 29 11:26:40.741338 kernel: pci 0000:00:07.1: legacy IDE quirk: reg 0x14: [io 0x03f6] Jan 29 11:26:40.741386 kernel: pci 0000:00:07.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177] Jan 29 11:26:40.741434 kernel: pci 0000:00:07.1: legacy IDE quirk: reg 0x1c: [io 0x0376] Jan 29 11:26:40.741490 kernel: pci 0000:00:07.3: [8086:7113] type 00 class 0x068000 Jan 29 11:26:40.741539 kernel: pci 0000:00:07.3: quirk: [io 0x1000-0x103f] claimed by PIIX4 ACPI Jan 29 11:26:40.741606 kernel: pci 0000:00:07.3: quirk: [io 0x1040-0x104f] claimed by PIIX4 SMB Jan 29 11:26:40.741913 kernel: pci 0000:00:07.7: [15ad:0740] type 00 class 0x088000 Jan 29 11:26:40.741967 
kernel: pci 0000:00:07.7: reg 0x10: [io 0x1080-0x10bf] Jan 29 11:26:40.742018 kernel: pci 0000:00:07.7: reg 0x14: [mem 0xfebfe000-0xfebfffff 64bit] Jan 29 11:26:40.742075 kernel: pci 0000:00:0f.0: [15ad:0405] type 00 class 0x030000 Jan 29 11:26:40.742124 kernel: pci 0000:00:0f.0: reg 0x10: [io 0x1070-0x107f] Jan 29 11:26:40.742173 kernel: pci 0000:00:0f.0: reg 0x14: [mem 0xe8000000-0xefffffff pref] Jan 29 11:26:40.742221 kernel: pci 0000:00:0f.0: reg 0x18: [mem 0xfe000000-0xfe7fffff] Jan 29 11:26:40.742269 kernel: pci 0000:00:0f.0: reg 0x30: [mem 0x00000000-0x00007fff pref] Jan 29 11:26:40.742318 kernel: pci 0000:00:0f.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Jan 29 11:26:40.742371 kernel: pci 0000:00:11.0: [15ad:0790] type 01 class 0x060401 Jan 29 11:26:40.742427 kernel: pci 0000:00:15.0: [15ad:07a0] type 01 class 0x060400 Jan 29 11:26:40.742478 kernel: pci 0000:00:15.0: PME# supported from D0 D3hot D3cold Jan 29 11:26:40.742531 kernel: pci 0000:00:15.1: [15ad:07a0] type 01 class 0x060400 Jan 29 11:26:40.742581 kernel: pci 0000:00:15.1: PME# supported from D0 D3hot D3cold Jan 29 11:26:40.742634 kernel: pci 0000:00:15.2: [15ad:07a0] type 01 class 0x060400 Jan 29 11:26:40.743757 kernel: pci 0000:00:15.2: PME# supported from D0 D3hot D3cold Jan 29 11:26:40.743823 kernel: pci 0000:00:15.3: [15ad:07a0] type 01 class 0x060400 Jan 29 11:26:40.743875 kernel: pci 0000:00:15.3: PME# supported from D0 D3hot D3cold Jan 29 11:26:40.743929 kernel: pci 0000:00:15.4: [15ad:07a0] type 01 class 0x060400 Jan 29 11:26:40.743979 kernel: pci 0000:00:15.4: PME# supported from D0 D3hot D3cold Jan 29 11:26:40.744031 kernel: pci 0000:00:15.5: [15ad:07a0] type 01 class 0x060400 Jan 29 11:26:40.744080 kernel: pci 0000:00:15.5: PME# supported from D0 D3hot D3cold Jan 29 11:26:40.744135 kernel: pci 0000:00:15.6: [15ad:07a0] type 01 class 0x060400 Jan 29 11:26:40.744183 kernel: pci 0000:00:15.6: PME# supported from D0 D3hot D3cold Jan 29 11:26:40.744236 kernel: pci 0000:00:15.7: [15ad:07a0] type 01 class 0x060400 Jan 29 11:26:40.744285 kernel: pci 0000:00:15.7: PME# supported from D0 D3hot D3cold Jan 29 11:26:40.744337 kernel: pci 0000:00:16.0: [15ad:07a0] type 01 class 0x060400 Jan 29 11:26:40.744385 kernel: pci 0000:00:16.0: PME# supported from D0 D3hot D3cold Jan 29 11:26:40.744440 kernel: pci 0000:00:16.1: [15ad:07a0] type 01 class 0x060400 Jan 29 11:26:40.744489 kernel: pci 0000:00:16.1: PME# supported from D0 D3hot D3cold Jan 29 11:26:40.744541 kernel: pci 0000:00:16.2: [15ad:07a0] type 01 class 0x060400 Jan 29 11:26:40.744590 kernel: pci 0000:00:16.2: PME# supported from D0 D3hot D3cold Jan 29 11:26:40.744646 kernel: pci 0000:00:16.3: [15ad:07a0] type 01 class 0x060400 Jan 29 11:26:40.744708 kernel: pci 0000:00:16.3: PME# supported from D0 D3hot D3cold Jan 29 11:26:40.744765 kernel: pci 0000:00:16.4: [15ad:07a0] type 01 class 0x060400 Jan 29 11:26:40.744815 kernel: pci 0000:00:16.4: PME# supported from D0 D3hot D3cold Jan 29 11:26:40.744867 kernel: pci 0000:00:16.5: [15ad:07a0] type 01 class 0x060400 Jan 29 11:26:40.744916 kernel: pci 0000:00:16.5: PME# supported from D0 D3hot D3cold Jan 29 11:26:40.744968 kernel: pci 0000:00:16.6: [15ad:07a0] type 01 class 0x060400 Jan 29 11:26:40.745017 kernel: pci 0000:00:16.6: PME# supported from D0 D3hot D3cold Jan 29 11:26:40.745092 kernel: pci 0000:00:16.7: [15ad:07a0] type 01 class 0x060400 Jan 29 11:26:40.745158 kernel: pci 0000:00:16.7: PME# supported from D0 D3hot D3cold Jan 29 11:26:40.745210 kernel: pci 0000:00:17.0: [15ad:07a0] type 01 class 
0x060400 Jan 29 11:26:40.745277 kernel: pci 0000:00:17.0: PME# supported from D0 D3hot D3cold Jan 29 11:26:40.745346 kernel: pci 0000:00:17.1: [15ad:07a0] type 01 class 0x060400 Jan 29 11:26:40.745395 kernel: pci 0000:00:17.1: PME# supported from D0 D3hot D3cold Jan 29 11:26:40.745450 kernel: pci 0000:00:17.2: [15ad:07a0] type 01 class 0x060400 Jan 29 11:26:40.745500 kernel: pci 0000:00:17.2: PME# supported from D0 D3hot D3cold Jan 29 11:26:40.745551 kernel: pci 0000:00:17.3: [15ad:07a0] type 01 class 0x060400 Jan 29 11:26:40.745600 kernel: pci 0000:00:17.3: PME# supported from D0 D3hot D3cold Jan 29 11:26:40.748668 kernel: pci 0000:00:17.4: [15ad:07a0] type 01 class 0x060400 Jan 29 11:26:40.748738 kernel: pci 0000:00:17.4: PME# supported from D0 D3hot D3cold Jan 29 11:26:40.748795 kernel: pci 0000:00:17.5: [15ad:07a0] type 01 class 0x060400 Jan 29 11:26:40.748851 kernel: pci 0000:00:17.5: PME# supported from D0 D3hot D3cold Jan 29 11:26:40.748911 kernel: pci 0000:00:17.6: [15ad:07a0] type 01 class 0x060400 Jan 29 11:26:40.748962 kernel: pci 0000:00:17.6: PME# supported from D0 D3hot D3cold Jan 29 11:26:40.749015 kernel: pci 0000:00:17.7: [15ad:07a0] type 01 class 0x060400 Jan 29 11:26:40.749064 kernel: pci 0000:00:17.7: PME# supported from D0 D3hot D3cold Jan 29 11:26:40.749117 kernel: pci 0000:00:18.0: [15ad:07a0] type 01 class 0x060400 Jan 29 11:26:40.749169 kernel: pci 0000:00:18.0: PME# supported from D0 D3hot D3cold Jan 29 11:26:40.749222 kernel: pci 0000:00:18.1: [15ad:07a0] type 01 class 0x060400 Jan 29 11:26:40.749293 kernel: pci 0000:00:18.1: PME# supported from D0 D3hot D3cold Jan 29 11:26:40.749361 kernel: pci 0000:00:18.2: [15ad:07a0] type 01 class 0x060400 Jan 29 11:26:40.749410 kernel: pci 0000:00:18.2: PME# supported from D0 D3hot D3cold Jan 29 11:26:40.749461 kernel: pci 0000:00:18.3: [15ad:07a0] type 01 class 0x060400 Jan 29 11:26:40.749512 kernel: pci 0000:00:18.3: PME# supported from D0 D3hot D3cold Jan 29 11:26:40.749567 kernel: pci 0000:00:18.4: [15ad:07a0] type 01 class 0x060400 Jan 29 11:26:40.749617 kernel: pci 0000:00:18.4: PME# supported from D0 D3hot D3cold Jan 29 11:26:40.749678 kernel: pci 0000:00:18.5: [15ad:07a0] type 01 class 0x060400 Jan 29 11:26:40.749728 kernel: pci 0000:00:18.5: PME# supported from D0 D3hot D3cold Jan 29 11:26:40.749781 kernel: pci 0000:00:18.6: [15ad:07a0] type 01 class 0x060400 Jan 29 11:26:40.749833 kernel: pci 0000:00:18.6: PME# supported from D0 D3hot D3cold Jan 29 11:26:40.749885 kernel: pci 0000:00:18.7: [15ad:07a0] type 01 class 0x060400 Jan 29 11:26:40.749934 kernel: pci 0000:00:18.7: PME# supported from D0 D3hot D3cold Jan 29 11:26:40.749986 kernel: pci_bus 0000:01: extended config space not accessible Jan 29 11:26:40.750036 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] Jan 29 11:26:40.750087 kernel: pci_bus 0000:02: extended config space not accessible Jan 29 11:26:40.750096 kernel: acpiphp: Slot [32] registered Jan 29 11:26:40.750104 kernel: acpiphp: Slot [33] registered Jan 29 11:26:40.750110 kernel: acpiphp: Slot [34] registered Jan 29 11:26:40.750115 kernel: acpiphp: Slot [35] registered Jan 29 11:26:40.750121 kernel: acpiphp: Slot [36] registered Jan 29 11:26:40.750127 kernel: acpiphp: Slot [37] registered Jan 29 11:26:40.750133 kernel: acpiphp: Slot [38] registered Jan 29 11:26:40.750138 kernel: acpiphp: Slot [39] registered Jan 29 11:26:40.750144 kernel: acpiphp: Slot [40] registered Jan 29 11:26:40.750150 kernel: acpiphp: Slot [41] registered Jan 29 11:26:40.750156 kernel: acpiphp: Slot [42] registered Jan 29 
11:26:40.750162 kernel: acpiphp: Slot [43] registered Jan 29 11:26:40.750168 kernel: acpiphp: Slot [44] registered Jan 29 11:26:40.750174 kernel: acpiphp: Slot [45] registered Jan 29 11:26:40.750179 kernel: acpiphp: Slot [46] registered Jan 29 11:26:40.750185 kernel: acpiphp: Slot [47] registered Jan 29 11:26:40.750191 kernel: acpiphp: Slot [48] registered Jan 29 11:26:40.750196 kernel: acpiphp: Slot [49] registered Jan 29 11:26:40.750202 kernel: acpiphp: Slot [50] registered Jan 29 11:26:40.750209 kernel: acpiphp: Slot [51] registered Jan 29 11:26:40.750215 kernel: acpiphp: Slot [52] registered Jan 29 11:26:40.750220 kernel: acpiphp: Slot [53] registered Jan 29 11:26:40.750226 kernel: acpiphp: Slot [54] registered Jan 29 11:26:40.750232 kernel: acpiphp: Slot [55] registered Jan 29 11:26:40.750237 kernel: acpiphp: Slot [56] registered Jan 29 11:26:40.750243 kernel: acpiphp: Slot [57] registered Jan 29 11:26:40.750249 kernel: acpiphp: Slot [58] registered Jan 29 11:26:40.750258 kernel: acpiphp: Slot [59] registered Jan 29 11:26:40.750265 kernel: acpiphp: Slot [60] registered Jan 29 11:26:40.750273 kernel: acpiphp: Slot [61] registered Jan 29 11:26:40.750279 kernel: acpiphp: Slot [62] registered Jan 29 11:26:40.750285 kernel: acpiphp: Slot [63] registered Jan 29 11:26:40.750338 kernel: pci 0000:00:11.0: PCI bridge to [bus 02] (subtractive decode) Jan 29 11:26:40.750388 kernel: pci 0000:00:11.0: bridge window [io 0x2000-0x3fff] Jan 29 11:26:40.750436 kernel: pci 0000:00:11.0: bridge window [mem 0xfd600000-0xfdffffff] Jan 29 11:26:40.750484 kernel: pci 0000:00:11.0: bridge window [mem 0xe7b00000-0xe7ffffff 64bit pref] Jan 29 11:26:40.750533 kernel: pci 0000:00:11.0: bridge window [mem 0x000a0000-0x000bffff window] (subtractive decode) Jan 29 11:26:40.750584 kernel: pci 0000:00:11.0: bridge window [mem 0x000cc000-0x000dbfff window] (subtractive decode) Jan 29 11:26:40.750633 kernel: pci 0000:00:11.0: bridge window [mem 0xc0000000-0xfebfffff window] (subtractive decode) Jan 29 11:26:40.751736 kernel: pci 0000:00:11.0: bridge window [io 0x0000-0x0cf7 window] (subtractive decode) Jan 29 11:26:40.751787 kernel: pci 0000:00:11.0: bridge window [io 0x0d00-0xfeff window] (subtractive decode) Jan 29 11:26:40.751842 kernel: pci 0000:03:00.0: [15ad:07c0] type 00 class 0x010700 Jan 29 11:26:40.751893 kernel: pci 0000:03:00.0: reg 0x10: [io 0x4000-0x4007] Jan 29 11:26:40.751943 kernel: pci 0000:03:00.0: reg 0x14: [mem 0xfd5f8000-0xfd5fffff 64bit] Jan 29 11:26:40.751996 kernel: pci 0000:03:00.0: reg 0x30: [mem 0x00000000-0x0000ffff pref] Jan 29 11:26:40.752045 kernel: pci 0000:03:00.0: PME# supported from D0 D3hot D3cold Jan 29 11:26:40.752095 kernel: pci 0000:03:00.0: disabling ASPM on pre-1.1 PCIe device. 
You can enable it with 'pcie_aspm=force' Jan 29 11:26:40.752144 kernel: pci 0000:00:15.0: PCI bridge to [bus 03] Jan 29 11:26:40.752192 kernel: pci 0000:00:15.0: bridge window [io 0x4000-0x4fff] Jan 29 11:26:40.752241 kernel: pci 0000:00:15.0: bridge window [mem 0xfd500000-0xfd5fffff] Jan 29 11:26:40.752289 kernel: pci 0000:00:15.1: PCI bridge to [bus 04] Jan 29 11:26:40.752338 kernel: pci 0000:00:15.1: bridge window [io 0x8000-0x8fff] Jan 29 11:26:40.752389 kernel: pci 0000:00:15.1: bridge window [mem 0xfd100000-0xfd1fffff] Jan 29 11:26:40.752438 kernel: pci 0000:00:15.1: bridge window [mem 0xe7800000-0xe78fffff 64bit pref] Jan 29 11:26:40.752487 kernel: pci 0000:00:15.2: PCI bridge to [bus 05] Jan 29 11:26:40.752535 kernel: pci 0000:00:15.2: bridge window [io 0xc000-0xcfff] Jan 29 11:26:40.752584 kernel: pci 0000:00:15.2: bridge window [mem 0xfcd00000-0xfcdfffff] Jan 29 11:26:40.752633 kernel: pci 0000:00:15.2: bridge window [mem 0xe7400000-0xe74fffff 64bit pref] Jan 29 11:26:40.752690 kernel: pci 0000:00:15.3: PCI bridge to [bus 06] Jan 29 11:26:40.752741 kernel: pci 0000:00:15.3: bridge window [mem 0xfc900000-0xfc9fffff] Jan 29 11:26:40.752790 kernel: pci 0000:00:15.3: bridge window [mem 0xe7000000-0xe70fffff 64bit pref] Jan 29 11:26:40.752838 kernel: pci 0000:00:15.4: PCI bridge to [bus 07] Jan 29 11:26:40.752886 kernel: pci 0000:00:15.4: bridge window [mem 0xfc500000-0xfc5fffff] Jan 29 11:26:40.752934 kernel: pci 0000:00:15.4: bridge window [mem 0xe6c00000-0xe6cfffff 64bit pref] Jan 29 11:26:40.752986 kernel: pci 0000:00:15.5: PCI bridge to [bus 08] Jan 29 11:26:40.753035 kernel: pci 0000:00:15.5: bridge window [mem 0xfc100000-0xfc1fffff] Jan 29 11:26:40.753084 kernel: pci 0000:00:15.5: bridge window [mem 0xe6800000-0xe68fffff 64bit pref] Jan 29 11:26:40.753133 kernel: pci 0000:00:15.6: PCI bridge to [bus 09] Jan 29 11:26:40.753181 kernel: pci 0000:00:15.6: bridge window [mem 0xfbd00000-0xfbdfffff] Jan 29 11:26:40.753230 kernel: pci 0000:00:15.6: bridge window [mem 0xe6400000-0xe64fffff 64bit pref] Jan 29 11:26:40.753283 kernel: pci 0000:00:15.7: PCI bridge to [bus 0a] Jan 29 11:26:40.753332 kernel: pci 0000:00:15.7: bridge window [mem 0xfb900000-0xfb9fffff] Jan 29 11:26:40.753382 kernel: pci 0000:00:15.7: bridge window [mem 0xe6000000-0xe60fffff 64bit pref] Jan 29 11:26:40.753437 kernel: pci 0000:0b:00.0: [15ad:07b0] type 00 class 0x020000 Jan 29 11:26:40.753488 kernel: pci 0000:0b:00.0: reg 0x10: [mem 0xfd4fc000-0xfd4fcfff] Jan 29 11:26:40.753538 kernel: pci 0000:0b:00.0: reg 0x14: [mem 0xfd4fd000-0xfd4fdfff] Jan 29 11:26:40.753587 kernel: pci 0000:0b:00.0: reg 0x18: [mem 0xfd4fe000-0xfd4fffff] Jan 29 11:26:40.753636 kernel: pci 0000:0b:00.0: reg 0x1c: [io 0x5000-0x500f] Jan 29 11:26:40.755996 kernel: pci 0000:0b:00.0: reg 0x30: [mem 0x00000000-0x0000ffff pref] Jan 29 11:26:40.756055 kernel: pci 0000:0b:00.0: supports D1 D2 Jan 29 11:26:40.756106 kernel: pci 0000:0b:00.0: PME# supported from D0 D1 D2 D3hot D3cold Jan 29 11:26:40.756155 kernel: pci 0000:0b:00.0: disabling ASPM on pre-1.1 PCIe device. 
You can enable it with 'pcie_aspm=force' Jan 29 11:26:40.756204 kernel: pci 0000:00:16.0: PCI bridge to [bus 0b] Jan 29 11:26:40.756252 kernel: pci 0000:00:16.0: bridge window [io 0x5000-0x5fff] Jan 29 11:26:40.756305 kernel: pci 0000:00:16.0: bridge window [mem 0xfd400000-0xfd4fffff] Jan 29 11:26:40.756355 kernel: pci 0000:00:16.1: PCI bridge to [bus 0c] Jan 29 11:26:40.756405 kernel: pci 0000:00:16.1: bridge window [io 0x9000-0x9fff] Jan 29 11:26:40.756455 kernel: pci 0000:00:16.1: bridge window [mem 0xfd000000-0xfd0fffff] Jan 29 11:26:40.756504 kernel: pci 0000:00:16.1: bridge window [mem 0xe7700000-0xe77fffff 64bit pref] Jan 29 11:26:40.756554 kernel: pci 0000:00:16.2: PCI bridge to [bus 0d] Jan 29 11:26:40.756603 kernel: pci 0000:00:16.2: bridge window [io 0xd000-0xdfff] Jan 29 11:26:40.756651 kernel: pci 0000:00:16.2: bridge window [mem 0xfcc00000-0xfccfffff] Jan 29 11:26:40.758830 kernel: pci 0000:00:16.2: bridge window [mem 0xe7300000-0xe73fffff 64bit pref] Jan 29 11:26:40.758883 kernel: pci 0000:00:16.3: PCI bridge to [bus 0e] Jan 29 11:26:40.758936 kernel: pci 0000:00:16.3: bridge window [mem 0xfc800000-0xfc8fffff] Jan 29 11:26:40.758986 kernel: pci 0000:00:16.3: bridge window [mem 0xe6f00000-0xe6ffffff 64bit pref] Jan 29 11:26:40.759036 kernel: pci 0000:00:16.4: PCI bridge to [bus 0f] Jan 29 11:26:40.759086 kernel: pci 0000:00:16.4: bridge window [mem 0xfc400000-0xfc4fffff] Jan 29 11:26:40.759134 kernel: pci 0000:00:16.4: bridge window [mem 0xe6b00000-0xe6bfffff 64bit pref] Jan 29 11:26:40.759183 kernel: pci 0000:00:16.5: PCI bridge to [bus 10] Jan 29 11:26:40.759231 kernel: pci 0000:00:16.5: bridge window [mem 0xfc000000-0xfc0fffff] Jan 29 11:26:40.759298 kernel: pci 0000:00:16.5: bridge window [mem 0xe6700000-0xe67fffff 64bit pref] Jan 29 11:26:40.759367 kernel: pci 0000:00:16.6: PCI bridge to [bus 11] Jan 29 11:26:40.759416 kernel: pci 0000:00:16.6: bridge window [mem 0xfbc00000-0xfbcfffff] Jan 29 11:26:40.759465 kernel: pci 0000:00:16.6: bridge window [mem 0xe6300000-0xe63fffff 64bit pref] Jan 29 11:26:40.759514 kernel: pci 0000:00:16.7: PCI bridge to [bus 12] Jan 29 11:26:40.759562 kernel: pci 0000:00:16.7: bridge window [mem 0xfb800000-0xfb8fffff] Jan 29 11:26:40.759610 kernel: pci 0000:00:16.7: bridge window [mem 0xe5f00000-0xe5ffffff 64bit pref] Jan 29 11:26:40.759681 kernel: pci 0000:00:17.0: PCI bridge to [bus 13] Jan 29 11:26:40.759731 kernel: pci 0000:00:17.0: bridge window [io 0x6000-0x6fff] Jan 29 11:26:40.759783 kernel: pci 0000:00:17.0: bridge window [mem 0xfd300000-0xfd3fffff] Jan 29 11:26:40.759831 kernel: pci 0000:00:17.0: bridge window [mem 0xe7a00000-0xe7afffff 64bit pref] Jan 29 11:26:40.759881 kernel: pci 0000:00:17.1: PCI bridge to [bus 14] Jan 29 11:26:40.759929 kernel: pci 0000:00:17.1: bridge window [io 0xa000-0xafff] Jan 29 11:26:40.759977 kernel: pci 0000:00:17.1: bridge window [mem 0xfcf00000-0xfcffffff] Jan 29 11:26:40.760024 kernel: pci 0000:00:17.1: bridge window [mem 0xe7600000-0xe76fffff 64bit pref] Jan 29 11:26:40.760074 kernel: pci 0000:00:17.2: PCI bridge to [bus 15] Jan 29 11:26:40.760122 kernel: pci 0000:00:17.2: bridge window [io 0xe000-0xefff] Jan 29 11:26:40.760172 kernel: pci 0000:00:17.2: bridge window [mem 0xfcb00000-0xfcbfffff] Jan 29 11:26:40.760220 kernel: pci 0000:00:17.2: bridge window [mem 0xe7200000-0xe72fffff 64bit pref] Jan 29 11:26:40.760288 kernel: pci 0000:00:17.3: PCI bridge to [bus 16] Jan 29 11:26:40.760353 kernel: pci 0000:00:17.3: bridge window [mem 0xfc700000-0xfc7fffff] Jan 29 11:26:40.760400 kernel: pci 
0000:00:17.3: bridge window [mem 0xe6e00000-0xe6efffff 64bit pref] Jan 29 11:26:40.760449 kernel: pci 0000:00:17.4: PCI bridge to [bus 17] Jan 29 11:26:40.760497 kernel: pci 0000:00:17.4: bridge window [mem 0xfc300000-0xfc3fffff] Jan 29 11:26:40.760544 kernel: pci 0000:00:17.4: bridge window [mem 0xe6a00000-0xe6afffff 64bit pref] Jan 29 11:26:40.760596 kernel: pci 0000:00:17.5: PCI bridge to [bus 18] Jan 29 11:26:40.760677 kernel: pci 0000:00:17.5: bridge window [mem 0xfbf00000-0xfbffffff] Jan 29 11:26:40.760730 kernel: pci 0000:00:17.5: bridge window [mem 0xe6600000-0xe66fffff 64bit pref] Jan 29 11:26:40.760779 kernel: pci 0000:00:17.6: PCI bridge to [bus 19] Jan 29 11:26:40.760828 kernel: pci 0000:00:17.6: bridge window [mem 0xfbb00000-0xfbbfffff] Jan 29 11:26:40.760876 kernel: pci 0000:00:17.6: bridge window [mem 0xe6200000-0xe62fffff 64bit pref] Jan 29 11:26:40.760926 kernel: pci 0000:00:17.7: PCI bridge to [bus 1a] Jan 29 11:26:40.760975 kernel: pci 0000:00:17.7: bridge window [mem 0xfb700000-0xfb7fffff] Jan 29 11:26:40.761027 kernel: pci 0000:00:17.7: bridge window [mem 0xe5e00000-0xe5efffff 64bit pref] Jan 29 11:26:40.761076 kernel: pci 0000:00:18.0: PCI bridge to [bus 1b] Jan 29 11:26:40.761124 kernel: pci 0000:00:18.0: bridge window [io 0x7000-0x7fff] Jan 29 11:26:40.761172 kernel: pci 0000:00:18.0: bridge window [mem 0xfd200000-0xfd2fffff] Jan 29 11:26:40.761220 kernel: pci 0000:00:18.0: bridge window [mem 0xe7900000-0xe79fffff 64bit pref] Jan 29 11:26:40.761292 kernel: pci 0000:00:18.1: PCI bridge to [bus 1c] Jan 29 11:26:40.761341 kernel: pci 0000:00:18.1: bridge window [io 0xb000-0xbfff] Jan 29 11:26:40.761393 kernel: pci 0000:00:18.1: bridge window [mem 0xfce00000-0xfcefffff] Jan 29 11:26:40.761458 kernel: pci 0000:00:18.1: bridge window [mem 0xe7500000-0xe75fffff 64bit pref] Jan 29 11:26:40.761508 kernel: pci 0000:00:18.2: PCI bridge to [bus 1d] Jan 29 11:26:40.761556 kernel: pci 0000:00:18.2: bridge window [mem 0xfca00000-0xfcafffff] Jan 29 11:26:40.761605 kernel: pci 0000:00:18.2: bridge window [mem 0xe7100000-0xe71fffff 64bit pref] Jan 29 11:26:40.761935 kernel: pci 0000:00:18.3: PCI bridge to [bus 1e] Jan 29 11:26:40.761998 kernel: pci 0000:00:18.3: bridge window [mem 0xfc600000-0xfc6fffff] Jan 29 11:26:40.762050 kernel: pci 0000:00:18.3: bridge window [mem 0xe6d00000-0xe6dfffff 64bit pref] Jan 29 11:26:40.762104 kernel: pci 0000:00:18.4: PCI bridge to [bus 1f] Jan 29 11:26:40.762153 kernel: pci 0000:00:18.4: bridge window [mem 0xfc200000-0xfc2fffff] Jan 29 11:26:40.762202 kernel: pci 0000:00:18.4: bridge window [mem 0xe6900000-0xe69fffff 64bit pref] Jan 29 11:26:40.762251 kernel: pci 0000:00:18.5: PCI bridge to [bus 20] Jan 29 11:26:40.762300 kernel: pci 0000:00:18.5: bridge window [mem 0xfbe00000-0xfbefffff] Jan 29 11:26:40.762348 kernel: pci 0000:00:18.5: bridge window [mem 0xe6500000-0xe65fffff 64bit pref] Jan 29 11:26:40.762397 kernel: pci 0000:00:18.6: PCI bridge to [bus 21] Jan 29 11:26:40.762445 kernel: pci 0000:00:18.6: bridge window [mem 0xfba00000-0xfbafffff] Jan 29 11:26:40.762495 kernel: pci 0000:00:18.6: bridge window [mem 0xe6100000-0xe61fffff 64bit pref] Jan 29 11:26:40.762545 kernel: pci 0000:00:18.7: PCI bridge to [bus 22] Jan 29 11:26:40.762593 kernel: pci 0000:00:18.7: bridge window [mem 0xfb600000-0xfb6fffff] Jan 29 11:26:40.762642 kernel: pci 0000:00:18.7: bridge window [mem 0xe5d00000-0xe5dfffff 64bit pref] Jan 29 11:26:40.762650 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 9 Jan 29 11:26:40.762694 kernel: ACPI: PCI: Interrupt link 
LNKB configured for IRQ 0 Jan 29 11:26:40.762701 kernel: ACPI: PCI: Interrupt link LNKB disabled Jan 29 11:26:40.762708 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Jan 29 11:26:40.762713 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 10 Jan 29 11:26:40.762722 kernel: iommu: Default domain type: Translated Jan 29 11:26:40.762728 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Jan 29 11:26:40.762734 kernel: PCI: Using ACPI for IRQ routing Jan 29 11:26:40.762740 kernel: PCI: pci_cache_line_size set to 64 bytes Jan 29 11:26:40.762746 kernel: e820: reserve RAM buffer [mem 0x0009ec00-0x0009ffff] Jan 29 11:26:40.762751 kernel: e820: reserve RAM buffer [mem 0x7fee0000-0x7fffffff] Jan 29 11:26:40.762806 kernel: pci 0000:00:0f.0: vgaarb: setting as boot VGA device Jan 29 11:26:40.762856 kernel: pci 0000:00:0f.0: vgaarb: bridge control possible Jan 29 11:26:40.762903 kernel: pci 0000:00:0f.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Jan 29 11:26:40.762914 kernel: vgaarb: loaded Jan 29 11:26:40.762920 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 Jan 29 11:26:40.762926 kernel: hpet0: 16 comparators, 64-bit 14.318180 MHz counter Jan 29 11:26:40.762932 kernel: clocksource: Switched to clocksource tsc-early Jan 29 11:26:40.762938 kernel: VFS: Disk quotas dquot_6.6.0 Jan 29 11:26:40.762944 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jan 29 11:26:40.762950 kernel: pnp: PnP ACPI init Jan 29 11:26:40.763003 kernel: system 00:00: [io 0x1000-0x103f] has been reserved Jan 29 11:26:40.763051 kernel: system 00:00: [io 0x1040-0x104f] has been reserved Jan 29 11:26:40.763095 kernel: system 00:00: [io 0x0cf0-0x0cf1] has been reserved Jan 29 11:26:40.763143 kernel: system 00:04: [mem 0xfed00000-0xfed003ff] has been reserved Jan 29 11:26:40.763191 kernel: pnp 00:06: [dma 2] Jan 29 11:26:40.763238 kernel: system 00:07: [io 0xfce0-0xfcff] has been reserved Jan 29 11:26:40.763302 kernel: system 00:07: [mem 0xf0000000-0xf7ffffff] has been reserved Jan 29 11:26:40.763364 kernel: system 00:07: [mem 0xfe800000-0xfe9fffff] has been reserved Jan 29 11:26:40.763372 kernel: pnp: PnP ACPI: found 8 devices Jan 29 11:26:40.763379 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Jan 29 11:26:40.763385 kernel: NET: Registered PF_INET protocol family Jan 29 11:26:40.763390 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear) Jan 29 11:26:40.763396 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear) Jan 29 11:26:40.763402 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jan 29 11:26:40.763408 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear) Jan 29 11:26:40.763413 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear) Jan 29 11:26:40.763421 kernel: TCP: Hash tables configured (established 16384 bind 16384) Jan 29 11:26:40.763427 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear) Jan 29 11:26:40.763433 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear) Jan 29 11:26:40.763439 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jan 29 11:26:40.763444 kernel: NET: Registered PF_XDP protocol family Jan 29 11:26:40.763493 kernel: pci 0000:00:15.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 03] add_size 200000 add_align 100000 Jan 29 11:26:40.763543 kernel: pci 
0000:00:15.3: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000 Jan 29 11:26:40.763592 kernel: pci 0000:00:15.4: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000 Jan 29 11:26:40.763644 kernel: pci 0000:00:15.5: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000 Jan 29 11:26:40.763706 kernel: pci 0000:00:15.6: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000 Jan 29 11:26:40.763758 kernel: pci 0000:00:15.7: bridge window [io 0x1000-0x0fff] to [bus 0a] add_size 1000 Jan 29 11:26:40.763819 kernel: pci 0000:00:16.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 0b] add_size 200000 add_align 100000 Jan 29 11:26:40.763873 kernel: pci 0000:00:16.3: bridge window [io 0x1000-0x0fff] to [bus 0e] add_size 1000 Jan 29 11:26:40.763922 kernel: pci 0000:00:16.4: bridge window [io 0x1000-0x0fff] to [bus 0f] add_size 1000 Jan 29 11:26:40.763975 kernel: pci 0000:00:16.5: bridge window [io 0x1000-0x0fff] to [bus 10] add_size 1000 Jan 29 11:26:40.764025 kernel: pci 0000:00:16.6: bridge window [io 0x1000-0x0fff] to [bus 11] add_size 1000 Jan 29 11:26:40.764074 kernel: pci 0000:00:16.7: bridge window [io 0x1000-0x0fff] to [bus 12] add_size 1000 Jan 29 11:26:40.764124 kernel: pci 0000:00:17.3: bridge window [io 0x1000-0x0fff] to [bus 16] add_size 1000 Jan 29 11:26:40.764173 kernel: pci 0000:00:17.4: bridge window [io 0x1000-0x0fff] to [bus 17] add_size 1000 Jan 29 11:26:40.764225 kernel: pci 0000:00:17.5: bridge window [io 0x1000-0x0fff] to [bus 18] add_size 1000 Jan 29 11:26:40.764279 kernel: pci 0000:00:17.6: bridge window [io 0x1000-0x0fff] to [bus 19] add_size 1000 Jan 29 11:26:40.764328 kernel: pci 0000:00:17.7: bridge window [io 0x1000-0x0fff] to [bus 1a] add_size 1000 Jan 29 11:26:40.764376 kernel: pci 0000:00:18.2: bridge window [io 0x1000-0x0fff] to [bus 1d] add_size 1000 Jan 29 11:26:40.764425 kernel: pci 0000:00:18.3: bridge window [io 0x1000-0x0fff] to [bus 1e] add_size 1000 Jan 29 11:26:40.764474 kernel: pci 0000:00:18.4: bridge window [io 0x1000-0x0fff] to [bus 1f] add_size 1000 Jan 29 11:26:40.764526 kernel: pci 0000:00:18.5: bridge window [io 0x1000-0x0fff] to [bus 20] add_size 1000 Jan 29 11:26:40.764575 kernel: pci 0000:00:18.6: bridge window [io 0x1000-0x0fff] to [bus 21] add_size 1000 Jan 29 11:26:40.764624 kernel: pci 0000:00:18.7: bridge window [io 0x1000-0x0fff] to [bus 22] add_size 1000 Jan 29 11:26:40.764706 kernel: pci 0000:00:15.0: BAR 15: assigned [mem 0xc0000000-0xc01fffff 64bit pref] Jan 29 11:26:40.764756 kernel: pci 0000:00:16.0: BAR 15: assigned [mem 0xc0200000-0xc03fffff 64bit pref] Jan 29 11:26:40.764805 kernel: pci 0000:00:15.3: BAR 13: no space for [io size 0x1000] Jan 29 11:26:40.764853 kernel: pci 0000:00:15.3: BAR 13: failed to assign [io size 0x1000] Jan 29 11:26:40.764905 kernel: pci 0000:00:15.4: BAR 13: no space for [io size 0x1000] Jan 29 11:26:40.764953 kernel: pci 0000:00:15.4: BAR 13: failed to assign [io size 0x1000] Jan 29 11:26:40.765002 kernel: pci 0000:00:15.5: BAR 13: no space for [io size 0x1000] Jan 29 11:26:40.765051 kernel: pci 0000:00:15.5: BAR 13: failed to assign [io size 0x1000] Jan 29 11:26:40.765123 kernel: pci 0000:00:15.6: BAR 13: no space for [io size 0x1000] Jan 29 11:26:40.765176 kernel: pci 0000:00:15.6: BAR 13: failed to assign [io size 0x1000] Jan 29 11:26:40.765224 kernel: pci 0000:00:15.7: BAR 13: no space for [io size 0x1000] Jan 29 11:26:40.765293 kernel: pci 0000:00:15.7: BAR 13: failed to assign [io size 0x1000] Jan 29 11:26:40.765362 kernel: pci 0000:00:16.3: BAR 13: no space for [io 
size 0x1000] Jan 29 11:26:40.765411 kernel: pci 0000:00:16.3: BAR 13: failed to assign [io size 0x1000] Jan 29 11:26:40.765460 kernel: pci 0000:00:16.4: BAR 13: no space for [io size 0x1000] Jan 29 11:26:40.765509 kernel: pci 0000:00:16.4: BAR 13: failed to assign [io size 0x1000] Jan 29 11:26:40.765557 kernel: pci 0000:00:16.5: BAR 13: no space for [io size 0x1000] Jan 29 11:26:40.765607 kernel: pci 0000:00:16.5: BAR 13: failed to assign [io size 0x1000] Jan 29 11:26:40.765711 kernel: pci 0000:00:16.6: BAR 13: no space for [io size 0x1000] Jan 29 11:26:40.765764 kernel: pci 0000:00:16.6: BAR 13: failed to assign [io size 0x1000] Jan 29 11:26:40.765816 kernel: pci 0000:00:16.7: BAR 13: no space for [io size 0x1000] Jan 29 11:26:40.765865 kernel: pci 0000:00:16.7: BAR 13: failed to assign [io size 0x1000] Jan 29 11:26:40.765913 kernel: pci 0000:00:17.3: BAR 13: no space for [io size 0x1000] Jan 29 11:26:40.765961 kernel: pci 0000:00:17.3: BAR 13: failed to assign [io size 0x1000] Jan 29 11:26:40.766008 kernel: pci 0000:00:17.4: BAR 13: no space for [io size 0x1000] Jan 29 11:26:40.766057 kernel: pci 0000:00:17.4: BAR 13: failed to assign [io size 0x1000] Jan 29 11:26:40.766105 kernel: pci 0000:00:17.5: BAR 13: no space for [io size 0x1000] Jan 29 11:26:40.766153 kernel: pci 0000:00:17.5: BAR 13: failed to assign [io size 0x1000] Jan 29 11:26:40.766204 kernel: pci 0000:00:17.6: BAR 13: no space for [io size 0x1000] Jan 29 11:26:40.766252 kernel: pci 0000:00:17.6: BAR 13: failed to assign [io size 0x1000] Jan 29 11:26:40.766338 kernel: pci 0000:00:17.7: BAR 13: no space for [io size 0x1000] Jan 29 11:26:40.766386 kernel: pci 0000:00:17.7: BAR 13: failed to assign [io size 0x1000] Jan 29 11:26:40.766435 kernel: pci 0000:00:18.2: BAR 13: no space for [io size 0x1000] Jan 29 11:26:40.766483 kernel: pci 0000:00:18.2: BAR 13: failed to assign [io size 0x1000] Jan 29 11:26:40.766531 kernel: pci 0000:00:18.3: BAR 13: no space for [io size 0x1000] Jan 29 11:26:40.766578 kernel: pci 0000:00:18.3: BAR 13: failed to assign [io size 0x1000] Jan 29 11:26:40.766629 kernel: pci 0000:00:18.4: BAR 13: no space for [io size 0x1000] Jan 29 11:26:40.766684 kernel: pci 0000:00:18.4: BAR 13: failed to assign [io size 0x1000] Jan 29 11:26:40.766733 kernel: pci 0000:00:18.5: BAR 13: no space for [io size 0x1000] Jan 29 11:26:40.766781 kernel: pci 0000:00:18.5: BAR 13: failed to assign [io size 0x1000] Jan 29 11:26:40.766829 kernel: pci 0000:00:18.6: BAR 13: no space for [io size 0x1000] Jan 29 11:26:40.766877 kernel: pci 0000:00:18.6: BAR 13: failed to assign [io size 0x1000] Jan 29 11:26:40.766924 kernel: pci 0000:00:18.7: BAR 13: no space for [io size 0x1000] Jan 29 11:26:40.766972 kernel: pci 0000:00:18.7: BAR 13: failed to assign [io size 0x1000] Jan 29 11:26:40.767023 kernel: pci 0000:00:18.7: BAR 13: no space for [io size 0x1000] Jan 29 11:26:40.767071 kernel: pci 0000:00:18.7: BAR 13: failed to assign [io size 0x1000] Jan 29 11:26:40.767119 kernel: pci 0000:00:18.6: BAR 13: no space for [io size 0x1000] Jan 29 11:26:40.767167 kernel: pci 0000:00:18.6: BAR 13: failed to assign [io size 0x1000] Jan 29 11:26:40.767216 kernel: pci 0000:00:18.5: BAR 13: no space for [io size 0x1000] Jan 29 11:26:40.767270 kernel: pci 0000:00:18.5: BAR 13: failed to assign [io size 0x1000] Jan 29 11:26:40.767318 kernel: pci 0000:00:18.4: BAR 13: no space for [io size 0x1000] Jan 29 11:26:40.767366 kernel: pci 0000:00:18.4: BAR 13: failed to assign [io size 0x1000] Jan 29 11:26:40.767428 kernel: pci 0000:00:18.3: BAR 13: no space 
for [io size 0x1000] Jan 29 11:26:40.767478 kernel: pci 0000:00:18.3: BAR 13: failed to assign [io size 0x1000] Jan 29 11:26:40.767529 kernel: pci 0000:00:18.2: BAR 13: no space for [io size 0x1000] Jan 29 11:26:40.767578 kernel: pci 0000:00:18.2: BAR 13: failed to assign [io size 0x1000] Jan 29 11:26:40.767626 kernel: pci 0000:00:17.7: BAR 13: no space for [io size 0x1000] Jan 29 11:26:40.767728 kernel: pci 0000:00:17.7: BAR 13: failed to assign [io size 0x1000] Jan 29 11:26:40.767779 kernel: pci 0000:00:17.6: BAR 13: no space for [io size 0x1000] Jan 29 11:26:40.767827 kernel: pci 0000:00:17.6: BAR 13: failed to assign [io size 0x1000] Jan 29 11:26:40.767898 kernel: pci 0000:00:17.5: BAR 13: no space for [io size 0x1000] Jan 29 11:26:40.767948 kernel: pci 0000:00:17.5: BAR 13: failed to assign [io size 0x1000] Jan 29 11:26:40.767996 kernel: pci 0000:00:17.4: BAR 13: no space for [io size 0x1000] Jan 29 11:26:40.768047 kernel: pci 0000:00:17.4: BAR 13: failed to assign [io size 0x1000] Jan 29 11:26:40.768095 kernel: pci 0000:00:17.3: BAR 13: no space for [io size 0x1000] Jan 29 11:26:40.768143 kernel: pci 0000:00:17.3: BAR 13: failed to assign [io size 0x1000] Jan 29 11:26:40.768192 kernel: pci 0000:00:16.7: BAR 13: no space for [io size 0x1000] Jan 29 11:26:40.768240 kernel: pci 0000:00:16.7: BAR 13: failed to assign [io size 0x1000] Jan 29 11:26:40.768289 kernel: pci 0000:00:16.6: BAR 13: no space for [io size 0x1000] Jan 29 11:26:40.768337 kernel: pci 0000:00:16.6: BAR 13: failed to assign [io size 0x1000] Jan 29 11:26:40.768385 kernel: pci 0000:00:16.5: BAR 13: no space for [io size 0x1000] Jan 29 11:26:40.768433 kernel: pci 0000:00:16.5: BAR 13: failed to assign [io size 0x1000] Jan 29 11:26:40.768481 kernel: pci 0000:00:16.4: BAR 13: no space for [io size 0x1000] Jan 29 11:26:40.768532 kernel: pci 0000:00:16.4: BAR 13: failed to assign [io size 0x1000] Jan 29 11:26:40.768580 kernel: pci 0000:00:16.3: BAR 13: no space for [io size 0x1000] Jan 29 11:26:40.768628 kernel: pci 0000:00:16.3: BAR 13: failed to assign [io size 0x1000] Jan 29 11:26:40.768696 kernel: pci 0000:00:15.7: BAR 13: no space for [io size 0x1000] Jan 29 11:26:40.768746 kernel: pci 0000:00:15.7: BAR 13: failed to assign [io size 0x1000] Jan 29 11:26:40.768794 kernel: pci 0000:00:15.6: BAR 13: no space for [io size 0x1000] Jan 29 11:26:40.768842 kernel: pci 0000:00:15.6: BAR 13: failed to assign [io size 0x1000] Jan 29 11:26:40.768890 kernel: pci 0000:00:15.5: BAR 13: no space for [io size 0x1000] Jan 29 11:26:40.768938 kernel: pci 0000:00:15.5: BAR 13: failed to assign [io size 0x1000] Jan 29 11:26:40.768990 kernel: pci 0000:00:15.4: BAR 13: no space for [io size 0x1000] Jan 29 11:26:40.769038 kernel: pci 0000:00:15.4: BAR 13: failed to assign [io size 0x1000] Jan 29 11:26:40.769086 kernel: pci 0000:00:15.3: BAR 13: no space for [io size 0x1000] Jan 29 11:26:40.769135 kernel: pci 0000:00:15.3: BAR 13: failed to assign [io size 0x1000] Jan 29 11:26:40.769183 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] Jan 29 11:26:40.769232 kernel: pci 0000:00:11.0: PCI bridge to [bus 02] Jan 29 11:26:40.769302 kernel: pci 0000:00:11.0: bridge window [io 0x2000-0x3fff] Jan 29 11:26:40.769367 kernel: pci 0000:00:11.0: bridge window [mem 0xfd600000-0xfdffffff] Jan 29 11:26:40.769415 kernel: pci 0000:00:11.0: bridge window [mem 0xe7b00000-0xe7ffffff 64bit pref] Jan 29 11:26:40.769471 kernel: pci 0000:03:00.0: BAR 6: assigned [mem 0xfd500000-0xfd50ffff pref] Jan 29 11:26:40.769520 kernel: pci 0000:00:15.0: PCI bridge to [bus 03] Jan 
29 11:26:40.769568 kernel: pci 0000:00:15.0: bridge window [io 0x4000-0x4fff] Jan 29 11:26:40.769617 kernel: pci 0000:00:15.0: bridge window [mem 0xfd500000-0xfd5fffff] Jan 29 11:26:40.769683 kernel: pci 0000:00:15.0: bridge window [mem 0xc0000000-0xc01fffff 64bit pref] Jan 29 11:26:40.769733 kernel: pci 0000:00:15.1: PCI bridge to [bus 04] Jan 29 11:26:40.769782 kernel: pci 0000:00:15.1: bridge window [io 0x8000-0x8fff] Jan 29 11:26:40.769831 kernel: pci 0000:00:15.1: bridge window [mem 0xfd100000-0xfd1fffff] Jan 29 11:26:40.769883 kernel: pci 0000:00:15.1: bridge window [mem 0xe7800000-0xe78fffff 64bit pref] Jan 29 11:26:40.769933 kernel: pci 0000:00:15.2: PCI bridge to [bus 05] Jan 29 11:26:40.769981 kernel: pci 0000:00:15.2: bridge window [io 0xc000-0xcfff] Jan 29 11:26:40.770031 kernel: pci 0000:00:15.2: bridge window [mem 0xfcd00000-0xfcdfffff] Jan 29 11:26:40.770080 kernel: pci 0000:00:15.2: bridge window [mem 0xe7400000-0xe74fffff 64bit pref] Jan 29 11:26:40.770129 kernel: pci 0000:00:15.3: PCI bridge to [bus 06] Jan 29 11:26:40.770177 kernel: pci 0000:00:15.3: bridge window [mem 0xfc900000-0xfc9fffff] Jan 29 11:26:40.770226 kernel: pci 0000:00:15.3: bridge window [mem 0xe7000000-0xe70fffff 64bit pref] Jan 29 11:26:40.770279 kernel: pci 0000:00:15.4: PCI bridge to [bus 07] Jan 29 11:26:40.770328 kernel: pci 0000:00:15.4: bridge window [mem 0xfc500000-0xfc5fffff] Jan 29 11:26:40.770378 kernel: pci 0000:00:15.4: bridge window [mem 0xe6c00000-0xe6cfffff 64bit pref] Jan 29 11:26:40.770429 kernel: pci 0000:00:15.5: PCI bridge to [bus 08] Jan 29 11:26:40.770480 kernel: pci 0000:00:15.5: bridge window [mem 0xfc100000-0xfc1fffff] Jan 29 11:26:40.770528 kernel: pci 0000:00:15.5: bridge window [mem 0xe6800000-0xe68fffff 64bit pref] Jan 29 11:26:40.770576 kernel: pci 0000:00:15.6: PCI bridge to [bus 09] Jan 29 11:26:40.770624 kernel: pci 0000:00:15.6: bridge window [mem 0xfbd00000-0xfbdfffff] Jan 29 11:26:40.770713 kernel: pci 0000:00:15.6: bridge window [mem 0xe6400000-0xe64fffff 64bit pref] Jan 29 11:26:40.770764 kernel: pci 0000:00:15.7: PCI bridge to [bus 0a] Jan 29 11:26:40.770812 kernel: pci 0000:00:15.7: bridge window [mem 0xfb900000-0xfb9fffff] Jan 29 11:26:40.770861 kernel: pci 0000:00:15.7: bridge window [mem 0xe6000000-0xe60fffff 64bit pref] Jan 29 11:26:40.770911 kernel: pci 0000:0b:00.0: BAR 6: assigned [mem 0xfd400000-0xfd40ffff pref] Jan 29 11:26:40.770960 kernel: pci 0000:00:16.0: PCI bridge to [bus 0b] Jan 29 11:26:40.771009 kernel: pci 0000:00:16.0: bridge window [io 0x5000-0x5fff] Jan 29 11:26:40.771057 kernel: pci 0000:00:16.0: bridge window [mem 0xfd400000-0xfd4fffff] Jan 29 11:26:40.771105 kernel: pci 0000:00:16.0: bridge window [mem 0xc0200000-0xc03fffff 64bit pref] Jan 29 11:26:40.771156 kernel: pci 0000:00:16.1: PCI bridge to [bus 0c] Jan 29 11:26:40.771205 kernel: pci 0000:00:16.1: bridge window [io 0x9000-0x9fff] Jan 29 11:26:40.771254 kernel: pci 0000:00:16.1: bridge window [mem 0xfd000000-0xfd0fffff] Jan 29 11:26:40.771302 kernel: pci 0000:00:16.1: bridge window [mem 0xe7700000-0xe77fffff 64bit pref] Jan 29 11:26:40.771351 kernel: pci 0000:00:16.2: PCI bridge to [bus 0d] Jan 29 11:26:40.771399 kernel: pci 0000:00:16.2: bridge window [io 0xd000-0xdfff] Jan 29 11:26:40.771446 kernel: pci 0000:00:16.2: bridge window [mem 0xfcc00000-0xfccfffff] Jan 29 11:26:40.771495 kernel: pci 0000:00:16.2: bridge window [mem 0xe7300000-0xe73fffff 64bit pref] Jan 29 11:26:40.771543 kernel: pci 0000:00:16.3: PCI bridge to [bus 0e] Jan 29 11:26:40.771594 kernel: pci 0000:00:16.3: 
bridge window [mem 0xfc800000-0xfc8fffff] Jan 29 11:26:40.771642 kernel: pci 0000:00:16.3: bridge window [mem 0xe6f00000-0xe6ffffff 64bit pref] Jan 29 11:26:40.771718 kernel: pci 0000:00:16.4: PCI bridge to [bus 0f] Jan 29 11:26:40.771767 kernel: pci 0000:00:16.4: bridge window [mem 0xfc400000-0xfc4fffff] Jan 29 11:26:40.771814 kernel: pci 0000:00:16.4: bridge window [mem 0xe6b00000-0xe6bfffff 64bit pref] Jan 29 11:26:40.771862 kernel: pci 0000:00:16.5: PCI bridge to [bus 10] Jan 29 11:26:40.771910 kernel: pci 0000:00:16.5: bridge window [mem 0xfc000000-0xfc0fffff] Jan 29 11:26:40.771958 kernel: pci 0000:00:16.5: bridge window [mem 0xe6700000-0xe67fffff 64bit pref] Jan 29 11:26:40.772006 kernel: pci 0000:00:16.6: PCI bridge to [bus 11] Jan 29 11:26:40.772054 kernel: pci 0000:00:16.6: bridge window [mem 0xfbc00000-0xfbcfffff] Jan 29 11:26:40.772105 kernel: pci 0000:00:16.6: bridge window [mem 0xe6300000-0xe63fffff 64bit pref] Jan 29 11:26:40.772153 kernel: pci 0000:00:16.7: PCI bridge to [bus 12] Jan 29 11:26:40.772202 kernel: pci 0000:00:16.7: bridge window [mem 0xfb800000-0xfb8fffff] Jan 29 11:26:40.772250 kernel: pci 0000:00:16.7: bridge window [mem 0xe5f00000-0xe5ffffff 64bit pref] Jan 29 11:26:40.772335 kernel: pci 0000:00:17.0: PCI bridge to [bus 13] Jan 29 11:26:40.772383 kernel: pci 0000:00:17.0: bridge window [io 0x6000-0x6fff] Jan 29 11:26:40.772431 kernel: pci 0000:00:17.0: bridge window [mem 0xfd300000-0xfd3fffff] Jan 29 11:26:40.772479 kernel: pci 0000:00:17.0: bridge window [mem 0xe7a00000-0xe7afffff 64bit pref] Jan 29 11:26:40.772527 kernel: pci 0000:00:17.1: PCI bridge to [bus 14] Jan 29 11:26:40.772579 kernel: pci 0000:00:17.1: bridge window [io 0xa000-0xafff] Jan 29 11:26:40.772627 kernel: pci 0000:00:17.1: bridge window [mem 0xfcf00000-0xfcffffff] Jan 29 11:26:40.772713 kernel: pci 0000:00:17.1: bridge window [mem 0xe7600000-0xe76fffff 64bit pref] Jan 29 11:26:40.772765 kernel: pci 0000:00:17.2: PCI bridge to [bus 15] Jan 29 11:26:40.772814 kernel: pci 0000:00:17.2: bridge window [io 0xe000-0xefff] Jan 29 11:26:40.772862 kernel: pci 0000:00:17.2: bridge window [mem 0xfcb00000-0xfcbfffff] Jan 29 11:26:40.772910 kernel: pci 0000:00:17.2: bridge window [mem 0xe7200000-0xe72fffff 64bit pref] Jan 29 11:26:40.772958 kernel: pci 0000:00:17.3: PCI bridge to [bus 16] Jan 29 11:26:40.773006 kernel: pci 0000:00:17.3: bridge window [mem 0xfc700000-0xfc7fffff] Jan 29 11:26:40.773055 kernel: pci 0000:00:17.3: bridge window [mem 0xe6e00000-0xe6efffff 64bit pref] Jan 29 11:26:40.773107 kernel: pci 0000:00:17.4: PCI bridge to [bus 17] Jan 29 11:26:40.773156 kernel: pci 0000:00:17.4: bridge window [mem 0xfc300000-0xfc3fffff] Jan 29 11:26:40.773205 kernel: pci 0000:00:17.4: bridge window [mem 0xe6a00000-0xe6afffff 64bit pref] Jan 29 11:26:40.773253 kernel: pci 0000:00:17.5: PCI bridge to [bus 18] Jan 29 11:26:40.773327 kernel: pci 0000:00:17.5: bridge window [mem 0xfbf00000-0xfbffffff] Jan 29 11:26:40.773377 kernel: pci 0000:00:17.5: bridge window [mem 0xe6600000-0xe66fffff 64bit pref] Jan 29 11:26:40.773443 kernel: pci 0000:00:17.6: PCI bridge to [bus 19] Jan 29 11:26:40.773491 kernel: pci 0000:00:17.6: bridge window [mem 0xfbb00000-0xfbbfffff] Jan 29 11:26:40.773540 kernel: pci 0000:00:17.6: bridge window [mem 0xe6200000-0xe62fffff 64bit pref] Jan 29 11:26:40.773590 kernel: pci 0000:00:17.7: PCI bridge to [bus 1a] Jan 29 11:26:40.773639 kernel: pci 0000:00:17.7: bridge window [mem 0xfb700000-0xfb7fffff] Jan 29 11:26:40.773711 kernel: pci 0000:00:17.7: bridge window [mem 
0xe5e00000-0xe5efffff 64bit pref] Jan 29 11:26:40.773761 kernel: pci 0000:00:18.0: PCI bridge to [bus 1b] Jan 29 11:26:40.773808 kernel: pci 0000:00:18.0: bridge window [io 0x7000-0x7fff] Jan 29 11:26:40.773856 kernel: pci 0000:00:18.0: bridge window [mem 0xfd200000-0xfd2fffff] Jan 29 11:26:40.773904 kernel: pci 0000:00:18.0: bridge window [mem 0xe7900000-0xe79fffff 64bit pref] Jan 29 11:26:40.773961 kernel: pci 0000:00:18.1: PCI bridge to [bus 1c] Jan 29 11:26:40.774010 kernel: pci 0000:00:18.1: bridge window [io 0xb000-0xbfff] Jan 29 11:26:40.774059 kernel: pci 0000:00:18.1: bridge window [mem 0xfce00000-0xfcefffff] Jan 29 11:26:40.774110 kernel: pci 0000:00:18.1: bridge window [mem 0xe7500000-0xe75fffff 64bit pref] Jan 29 11:26:40.774158 kernel: pci 0000:00:18.2: PCI bridge to [bus 1d] Jan 29 11:26:40.774206 kernel: pci 0000:00:18.2: bridge window [mem 0xfca00000-0xfcafffff] Jan 29 11:26:40.774255 kernel: pci 0000:00:18.2: bridge window [mem 0xe7100000-0xe71fffff 64bit pref] Jan 29 11:26:40.775732 kernel: pci 0000:00:18.3: PCI bridge to [bus 1e] Jan 29 11:26:40.775787 kernel: pci 0000:00:18.3: bridge window [mem 0xfc600000-0xfc6fffff] Jan 29 11:26:40.775839 kernel: pci 0000:00:18.3: bridge window [mem 0xe6d00000-0xe6dfffff 64bit pref] Jan 29 11:26:40.775889 kernel: pci 0000:00:18.4: PCI bridge to [bus 1f] Jan 29 11:26:40.775940 kernel: pci 0000:00:18.4: bridge window [mem 0xfc200000-0xfc2fffff] Jan 29 11:26:40.775993 kernel: pci 0000:00:18.4: bridge window [mem 0xe6900000-0xe69fffff 64bit pref] Jan 29 11:26:40.776042 kernel: pci 0000:00:18.5: PCI bridge to [bus 20] Jan 29 11:26:40.776091 kernel: pci 0000:00:18.5: bridge window [mem 0xfbe00000-0xfbefffff] Jan 29 11:26:40.776139 kernel: pci 0000:00:18.5: bridge window [mem 0xe6500000-0xe65fffff 64bit pref] Jan 29 11:26:40.776189 kernel: pci 0000:00:18.6: PCI bridge to [bus 21] Jan 29 11:26:40.776238 kernel: pci 0000:00:18.6: bridge window [mem 0xfba00000-0xfbafffff] Jan 29 11:26:40.776291 kernel: pci 0000:00:18.6: bridge window [mem 0xe6100000-0xe61fffff 64bit pref] Jan 29 11:26:40.776339 kernel: pci 0000:00:18.7: PCI bridge to [bus 22] Jan 29 11:26:40.776388 kernel: pci 0000:00:18.7: bridge window [mem 0xfb600000-0xfb6fffff] Jan 29 11:26:40.776436 kernel: pci 0000:00:18.7: bridge window [mem 0xe5d00000-0xe5dfffff 64bit pref] Jan 29 11:26:40.776488 kernel: pci_bus 0000:00: resource 4 [mem 0x000a0000-0x000bffff window] Jan 29 11:26:40.776533 kernel: pci_bus 0000:00: resource 5 [mem 0x000cc000-0x000dbfff window] Jan 29 11:26:40.776576 kernel: pci_bus 0000:00: resource 6 [mem 0xc0000000-0xfebfffff window] Jan 29 11:26:40.776619 kernel: pci_bus 0000:00: resource 7 [io 0x0000-0x0cf7 window] Jan 29 11:26:40.777272 kernel: pci_bus 0000:00: resource 8 [io 0x0d00-0xfeff window] Jan 29 11:26:40.777328 kernel: pci_bus 0000:02: resource 0 [io 0x2000-0x3fff] Jan 29 11:26:40.777376 kernel: pci_bus 0000:02: resource 1 [mem 0xfd600000-0xfdffffff] Jan 29 11:26:40.777425 kernel: pci_bus 0000:02: resource 2 [mem 0xe7b00000-0xe7ffffff 64bit pref] Jan 29 11:26:40.777469 kernel: pci_bus 0000:02: resource 4 [mem 0x000a0000-0x000bffff window] Jan 29 11:26:40.777514 kernel: pci_bus 0000:02: resource 5 [mem 0x000cc000-0x000dbfff window] Jan 29 11:26:40.777559 kernel: pci_bus 0000:02: resource 6 [mem 0xc0000000-0xfebfffff window] Jan 29 11:26:40.777602 kernel: pci_bus 0000:02: resource 7 [io 0x0000-0x0cf7 window] Jan 29 11:26:40.777646 kernel: pci_bus 0000:02: resource 8 [io 0x0d00-0xfeff window] Jan 29 11:26:40.777704 kernel: pci_bus 0000:03: resource 0 [io 
0x4000-0x4fff] Jan 29 11:26:40.777750 kernel: pci_bus 0000:03: resource 1 [mem 0xfd500000-0xfd5fffff] Jan 29 11:26:40.777797 kernel: pci_bus 0000:03: resource 2 [mem 0xc0000000-0xc01fffff 64bit pref] Jan 29 11:26:40.777846 kernel: pci_bus 0000:04: resource 0 [io 0x8000-0x8fff] Jan 29 11:26:40.778199 kernel: pci_bus 0000:04: resource 1 [mem 0xfd100000-0xfd1fffff] Jan 29 11:26:40.778249 kernel: pci_bus 0000:04: resource 2 [mem 0xe7800000-0xe78fffff 64bit pref] Jan 29 11:26:40.778301 kernel: pci_bus 0000:05: resource 0 [io 0xc000-0xcfff] Jan 29 11:26:40.778347 kernel: pci_bus 0000:05: resource 1 [mem 0xfcd00000-0xfcdfffff] Jan 29 11:26:40.778395 kernel: pci_bus 0000:05: resource 2 [mem 0xe7400000-0xe74fffff 64bit pref] Jan 29 11:26:40.778445 kernel: pci_bus 0000:06: resource 1 [mem 0xfc900000-0xfc9fffff] Jan 29 11:26:40.778491 kernel: pci_bus 0000:06: resource 2 [mem 0xe7000000-0xe70fffff 64bit pref] Jan 29 11:26:40.778540 kernel: pci_bus 0000:07: resource 1 [mem 0xfc500000-0xfc5fffff] Jan 29 11:26:40.778585 kernel: pci_bus 0000:07: resource 2 [mem 0xe6c00000-0xe6cfffff 64bit pref] Jan 29 11:26:40.778634 kernel: pci_bus 0000:08: resource 1 [mem 0xfc100000-0xfc1fffff] Jan 29 11:26:40.778948 kernel: pci_bus 0000:08: resource 2 [mem 0xe6800000-0xe68fffff 64bit pref] Jan 29 11:26:40.779004 kernel: pci_bus 0000:09: resource 1 [mem 0xfbd00000-0xfbdfffff] Jan 29 11:26:40.779051 kernel: pci_bus 0000:09: resource 2 [mem 0xe6400000-0xe64fffff 64bit pref] Jan 29 11:26:40.779102 kernel: pci_bus 0000:0a: resource 1 [mem 0xfb900000-0xfb9fffff] Jan 29 11:26:40.779156 kernel: pci_bus 0000:0a: resource 2 [mem 0xe6000000-0xe60fffff 64bit pref] Jan 29 11:26:40.779215 kernel: pci_bus 0000:0b: resource 0 [io 0x5000-0x5fff] Jan 29 11:26:40.779290 kernel: pci_bus 0000:0b: resource 1 [mem 0xfd400000-0xfd4fffff] Jan 29 11:26:40.779352 kernel: pci_bus 0000:0b: resource 2 [mem 0xc0200000-0xc03fffff 64bit pref] Jan 29 11:26:40.779401 kernel: pci_bus 0000:0c: resource 0 [io 0x9000-0x9fff] Jan 29 11:26:40.779447 kernel: pci_bus 0000:0c: resource 1 [mem 0xfd000000-0xfd0fffff] Jan 29 11:26:40.779493 kernel: pci_bus 0000:0c: resource 2 [mem 0xe7700000-0xe77fffff 64bit pref] Jan 29 11:26:40.779547 kernel: pci_bus 0000:0d: resource 0 [io 0xd000-0xdfff] Jan 29 11:26:40.779600 kernel: pci_bus 0000:0d: resource 1 [mem 0xfcc00000-0xfccfffff] Jan 29 11:26:40.780723 kernel: pci_bus 0000:0d: resource 2 [mem 0xe7300000-0xe73fffff 64bit pref] Jan 29 11:26:40.780785 kernel: pci_bus 0000:0e: resource 1 [mem 0xfc800000-0xfc8fffff] Jan 29 11:26:40.780833 kernel: pci_bus 0000:0e: resource 2 [mem 0xe6f00000-0xe6ffffff 64bit pref] Jan 29 11:26:40.780883 kernel: pci_bus 0000:0f: resource 1 [mem 0xfc400000-0xfc4fffff] Jan 29 11:26:40.780929 kernel: pci_bus 0000:0f: resource 2 [mem 0xe6b00000-0xe6bfffff 64bit pref] Jan 29 11:26:40.780978 kernel: pci_bus 0000:10: resource 1 [mem 0xfc000000-0xfc0fffff] Jan 29 11:26:40.781027 kernel: pci_bus 0000:10: resource 2 [mem 0xe6700000-0xe67fffff 64bit pref] Jan 29 11:26:40.781076 kernel: pci_bus 0000:11: resource 1 [mem 0xfbc00000-0xfbcfffff] Jan 29 11:26:40.781122 kernel: pci_bus 0000:11: resource 2 [mem 0xe6300000-0xe63fffff 64bit pref] Jan 29 11:26:40.781171 kernel: pci_bus 0000:12: resource 1 [mem 0xfb800000-0xfb8fffff] Jan 29 11:26:40.781217 kernel: pci_bus 0000:12: resource 2 [mem 0xe5f00000-0xe5ffffff 64bit pref] Jan 29 11:26:40.781288 kernel: pci_bus 0000:13: resource 0 [io 0x6000-0x6fff] Jan 29 11:26:40.781351 kernel: pci_bus 0000:13: resource 1 [mem 0xfd300000-0xfd3fffff] Jan 29 11:26:40.781399 
kernel: pci_bus 0000:13: resource 2 [mem 0xe7a00000-0xe7afffff 64bit pref] Jan 29 11:26:40.781728 kernel: pci_bus 0000:14: resource 0 [io 0xa000-0xafff] Jan 29 11:26:40.781779 kernel: pci_bus 0000:14: resource 1 [mem 0xfcf00000-0xfcffffff] Jan 29 11:26:40.781825 kernel: pci_bus 0000:14: resource 2 [mem 0xe7600000-0xe76fffff 64bit pref] Jan 29 11:26:40.781874 kernel: pci_bus 0000:15: resource 0 [io 0xe000-0xefff] Jan 29 11:26:40.781920 kernel: pci_bus 0000:15: resource 1 [mem 0xfcb00000-0xfcbfffff] Jan 29 11:26:40.781968 kernel: pci_bus 0000:15: resource 2 [mem 0xe7200000-0xe72fffff 64bit pref] Jan 29 11:26:40.782020 kernel: pci_bus 0000:16: resource 1 [mem 0xfc700000-0xfc7fffff] Jan 29 11:26:40.782066 kernel: pci_bus 0000:16: resource 2 [mem 0xe6e00000-0xe6efffff 64bit pref] Jan 29 11:26:40.782114 kernel: pci_bus 0000:17: resource 1 [mem 0xfc300000-0xfc3fffff] Jan 29 11:26:40.782160 kernel: pci_bus 0000:17: resource 2 [mem 0xe6a00000-0xe6afffff 64bit pref] Jan 29 11:26:40.782208 kernel: pci_bus 0000:18: resource 1 [mem 0xfbf00000-0xfbffffff] Jan 29 11:26:40.782254 kernel: pci_bus 0000:18: resource 2 [mem 0xe6600000-0xe66fffff 64bit pref] Jan 29 11:26:40.782307 kernel: pci_bus 0000:19: resource 1 [mem 0xfbb00000-0xfbbfffff] Jan 29 11:26:40.782353 kernel: pci_bus 0000:19: resource 2 [mem 0xe6200000-0xe62fffff 64bit pref] Jan 29 11:26:40.782402 kernel: pci_bus 0000:1a: resource 1 [mem 0xfb700000-0xfb7fffff] Jan 29 11:26:40.782448 kernel: pci_bus 0000:1a: resource 2 [mem 0xe5e00000-0xe5efffff 64bit pref] Jan 29 11:26:40.782500 kernel: pci_bus 0000:1b: resource 0 [io 0x7000-0x7fff] Jan 29 11:26:40.782549 kernel: pci_bus 0000:1b: resource 1 [mem 0xfd200000-0xfd2fffff] Jan 29 11:26:40.782595 kernel: pci_bus 0000:1b: resource 2 [mem 0xe7900000-0xe79fffff 64bit pref] Jan 29 11:26:40.782644 kernel: pci_bus 0000:1c: resource 0 [io 0xb000-0xbfff] Jan 29 11:26:40.783737 kernel: pci_bus 0000:1c: resource 1 [mem 0xfce00000-0xfcefffff] Jan 29 11:26:40.783789 kernel: pci_bus 0000:1c: resource 2 [mem 0xe7500000-0xe75fffff 64bit pref] Jan 29 11:26:40.783840 kernel: pci_bus 0000:1d: resource 1 [mem 0xfca00000-0xfcafffff] Jan 29 11:26:40.783886 kernel: pci_bus 0000:1d: resource 2 [mem 0xe7100000-0xe71fffff 64bit pref] Jan 29 11:26:40.783943 kernel: pci_bus 0000:1e: resource 1 [mem 0xfc600000-0xfc6fffff] Jan 29 11:26:40.783990 kernel: pci_bus 0000:1e: resource 2 [mem 0xe6d00000-0xe6dfffff 64bit pref] Jan 29 11:26:40.784040 kernel: pci_bus 0000:1f: resource 1 [mem 0xfc200000-0xfc2fffff] Jan 29 11:26:40.784086 kernel: pci_bus 0000:1f: resource 2 [mem 0xe6900000-0xe69fffff 64bit pref] Jan 29 11:26:40.784136 kernel: pci_bus 0000:20: resource 1 [mem 0xfbe00000-0xfbefffff] Jan 29 11:26:40.784182 kernel: pci_bus 0000:20: resource 2 [mem 0xe6500000-0xe65fffff 64bit pref] Jan 29 11:26:40.784233 kernel: pci_bus 0000:21: resource 1 [mem 0xfba00000-0xfbafffff] Jan 29 11:26:40.784299 kernel: pci_bus 0000:21: resource 2 [mem 0xe6100000-0xe61fffff 64bit pref] Jan 29 11:26:40.784349 kernel: pci_bus 0000:22: resource 1 [mem 0xfb600000-0xfb6fffff] Jan 29 11:26:40.784395 kernel: pci_bus 0000:22: resource 2 [mem 0xe5d00000-0xe5dfffff 64bit pref] Jan 29 11:26:40.784451 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers Jan 29 11:26:40.784461 kernel: PCI: CLS 32 bytes, default 64 Jan 29 11:26:40.784468 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Jan 29 11:26:40.784477 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x311fd3cd494, max_idle_ns: 440795223879 ns Jan 29 
11:26:40.784483 kernel: clocksource: Switched to clocksource tsc Jan 29 11:26:40.784489 kernel: Initialise system trusted keyrings Jan 29 11:26:40.784496 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Jan 29 11:26:40.784502 kernel: Key type asymmetric registered Jan 29 11:26:40.784509 kernel: Asymmetric key parser 'x509' registered Jan 29 11:26:40.784515 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Jan 29 11:26:40.784522 kernel: io scheduler mq-deadline registered Jan 29 11:26:40.784528 kernel: io scheduler kyber registered Jan 29 11:26:40.784537 kernel: io scheduler bfq registered Jan 29 11:26:40.784589 kernel: pcieport 0000:00:15.0: PME: Signaling with IRQ 24 Jan 29 11:26:40.784641 kernel: pcieport 0000:00:15.0: pciehp: Slot #160 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jan 29 11:26:40.784702 kernel: pcieport 0000:00:15.1: PME: Signaling with IRQ 25 Jan 29 11:26:40.784754 kernel: pcieport 0000:00:15.1: pciehp: Slot #161 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jan 29 11:26:40.784805 kernel: pcieport 0000:00:15.2: PME: Signaling with IRQ 26 Jan 29 11:26:40.784859 kernel: pcieport 0000:00:15.2: pciehp: Slot #162 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jan 29 11:26:40.785732 kernel: pcieport 0000:00:15.3: PME: Signaling with IRQ 27 Jan 29 11:26:40.785794 kernel: pcieport 0000:00:15.3: pciehp: Slot #163 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jan 29 11:26:40.785849 kernel: pcieport 0000:00:15.4: PME: Signaling with IRQ 28 Jan 29 11:26:40.785917 kernel: pcieport 0000:00:15.4: pciehp: Slot #164 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jan 29 11:26:40.785969 kernel: pcieport 0000:00:15.5: PME: Signaling with IRQ 29 Jan 29 11:26:40.786019 kernel: pcieport 0000:00:15.5: pciehp: Slot #165 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jan 29 11:26:40.786072 kernel: pcieport 0000:00:15.6: PME: Signaling with IRQ 30 Jan 29 11:26:40.786123 kernel: pcieport 0000:00:15.6: pciehp: Slot #166 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jan 29 11:26:40.786174 kernel: pcieport 0000:00:15.7: PME: Signaling with IRQ 31 Jan 29 11:26:40.786225 kernel: pcieport 0000:00:15.7: pciehp: Slot #167 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jan 29 11:26:40.786281 kernel: pcieport 0000:00:16.0: PME: Signaling with IRQ 32 Jan 29 11:26:40.786334 kernel: pcieport 0000:00:16.0: pciehp: Slot #192 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jan 29 11:26:40.786384 kernel: pcieport 0000:00:16.1: PME: Signaling with IRQ 33 Jan 29 11:26:40.786434 kernel: pcieport 0000:00:16.1: pciehp: Slot #193 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jan 29 11:26:40.786485 kernel: pcieport 0000:00:16.2: PME: Signaling with IRQ 34 Jan 29 11:26:40.786535 kernel: pcieport 0000:00:16.2: pciehp: Slot #194 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jan 29 11:26:40.786584 kernel: pcieport 0000:00:16.3: PME: Signaling with IRQ 35 Jan 29 11:26:40.786635 kernel: pcieport 
0000:00:16.3: pciehp: Slot #195 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jan 29 11:26:40.786740 kernel: pcieport 0000:00:16.4: PME: Signaling with IRQ 36 Jan 29 11:26:40.786792 kernel: pcieport 0000:00:16.4: pciehp: Slot #196 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jan 29 11:26:40.786843 kernel: pcieport 0000:00:16.5: PME: Signaling with IRQ 37 Jan 29 11:26:40.786893 kernel: pcieport 0000:00:16.5: pciehp: Slot #197 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jan 29 11:26:40.786952 kernel: pcieport 0000:00:16.6: PME: Signaling with IRQ 38 Jan 29 11:26:40.787006 kernel: pcieport 0000:00:16.6: pciehp: Slot #198 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jan 29 11:26:40.787058 kernel: pcieport 0000:00:16.7: PME: Signaling with IRQ 39 Jan 29 11:26:40.787109 kernel: pcieport 0000:00:16.7: pciehp: Slot #199 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jan 29 11:26:40.787160 kernel: pcieport 0000:00:17.0: PME: Signaling with IRQ 40 Jan 29 11:26:40.787211 kernel: pcieport 0000:00:17.0: pciehp: Slot #224 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jan 29 11:26:40.787286 kernel: pcieport 0000:00:17.1: PME: Signaling with IRQ 41 Jan 29 11:26:40.787355 kernel: pcieport 0000:00:17.1: pciehp: Slot #225 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jan 29 11:26:40.787408 kernel: pcieport 0000:00:17.2: PME: Signaling with IRQ 42 Jan 29 11:26:40.787458 kernel: pcieport 0000:00:17.2: pciehp: Slot #226 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jan 29 11:26:40.787509 kernel: pcieport 0000:00:17.3: PME: Signaling with IRQ 43 Jan 29 11:26:40.787559 kernel: pcieport 0000:00:17.3: pciehp: Slot #227 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jan 29 11:26:40.787610 kernel: pcieport 0000:00:17.4: PME: Signaling with IRQ 44 Jan 29 11:26:40.787691 kernel: pcieport 0000:00:17.4: pciehp: Slot #228 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jan 29 11:26:40.787750 kernel: pcieport 0000:00:17.5: PME: Signaling with IRQ 45 Jan 29 11:26:40.787800 kernel: pcieport 0000:00:17.5: pciehp: Slot #229 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jan 29 11:26:40.787851 kernel: pcieport 0000:00:17.6: PME: Signaling with IRQ 46 Jan 29 11:26:40.787901 kernel: pcieport 0000:00:17.6: pciehp: Slot #230 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jan 29 11:26:40.787951 kernel: pcieport 0000:00:17.7: PME: Signaling with IRQ 47 Jan 29 11:26:40.788003 kernel: pcieport 0000:00:17.7: pciehp: Slot #231 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jan 29 11:26:40.788054 kernel: pcieport 0000:00:18.0: PME: Signaling with IRQ 48 Jan 29 11:26:40.788104 kernel: pcieport 0000:00:18.0: pciehp: Slot #256 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jan 29 11:26:40.788155 kernel: pcieport 0000:00:18.1: PME: Signaling with IRQ 49 Jan 29 11:26:40.788204 kernel: pcieport 
0000:00:18.1: pciehp: Slot #257 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jan 29 11:26:40.788254 kernel: pcieport 0000:00:18.2: PME: Signaling with IRQ 50 Jan 29 11:26:40.788306 kernel: pcieport 0000:00:18.2: pciehp: Slot #258 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jan 29 11:26:40.788356 kernel: pcieport 0000:00:18.3: PME: Signaling with IRQ 51 Jan 29 11:26:40.788405 kernel: pcieport 0000:00:18.3: pciehp: Slot #259 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jan 29 11:26:40.788455 kernel: pcieport 0000:00:18.4: PME: Signaling with IRQ 52 Jan 29 11:26:40.788505 kernel: pcieport 0000:00:18.4: pciehp: Slot #260 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jan 29 11:26:40.788555 kernel: pcieport 0000:00:18.5: PME: Signaling with IRQ 53 Jan 29 11:26:40.788608 kernel: pcieport 0000:00:18.5: pciehp: Slot #261 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jan 29 11:26:40.788671 kernel: pcieport 0000:00:18.6: PME: Signaling with IRQ 54 Jan 29 11:26:40.788722 kernel: pcieport 0000:00:18.6: pciehp: Slot #262 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jan 29 11:26:40.788772 kernel: pcieport 0000:00:18.7: PME: Signaling with IRQ 55 Jan 29 11:26:40.788821 kernel: pcieport 0000:00:18.7: pciehp: Slot #263 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jan 29 11:26:40.788833 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Jan 29 11:26:40.788840 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jan 29 11:26:40.788846 kernel: 00:05: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Jan 29 11:26:40.788852 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBC,PNP0f13:MOUS] at 0x60,0x64 irq 1,12 Jan 29 11:26:40.788859 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Jan 29 11:26:40.788865 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Jan 29 11:26:40.788915 kernel: rtc_cmos 00:01: registered as rtc0 Jan 29 11:26:40.788962 kernel: rtc_cmos 00:01: setting system clock to 2025-01-29T11:26:40 UTC (1738150000) Jan 29 11:26:40.789010 kernel: rtc_cmos 00:01: alarms up to one month, y3k, 114 bytes nvram Jan 29 11:26:40.789019 kernel: intel_pstate: CPU model not supported Jan 29 11:26:40.789025 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Jan 29 11:26:40.789032 kernel: NET: Registered PF_INET6 protocol family Jan 29 11:26:40.789038 kernel: Segment Routing with IPv6 Jan 29 11:26:40.789044 kernel: In-situ OAM (IOAM) with IPv6 Jan 29 11:26:40.789050 kernel: NET: Registered PF_PACKET protocol family Jan 29 11:26:40.789057 kernel: Key type dns_resolver registered Jan 29 11:26:40.789063 kernel: IPI shorthand broadcast: enabled Jan 29 11:26:40.789071 kernel: sched_clock: Marking stable (876003630, 221215751)->(1149471547, -52252166) Jan 29 11:26:40.789077 kernel: registered taskstats version 1 Jan 29 11:26:40.789083 kernel: Loading compiled-in X.509 certificates Jan 29 11:26:40.789089 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.74-flatcar: de92a621108c58f5771c86c5c3ccb1aa0728ed55' Jan 29 11:26:40.789095 kernel: Key type .fscrypt registered Jan 29 11:26:40.789101 kernel: Key type fscrypt-provisioning registered Jan 29 11:26:40.789107 
kernel: ima: No TPM chip found, activating TPM-bypass! Jan 29 11:26:40.789114 kernel: ima: Allocated hash algorithm: sha1 Jan 29 11:26:40.789121 kernel: ima: No architecture policies found Jan 29 11:26:40.789127 kernel: clk: Disabling unused clocks Jan 29 11:26:40.789133 kernel: Freeing unused kernel image (initmem) memory: 42972K Jan 29 11:26:40.789140 kernel: Write protecting the kernel read-only data: 36864k Jan 29 11:26:40.789146 kernel: Freeing unused kernel image (rodata/data gap) memory: 1840K Jan 29 11:26:40.789152 kernel: Run /init as init process Jan 29 11:26:40.789158 kernel: with arguments: Jan 29 11:26:40.789165 kernel: /init Jan 29 11:26:40.789171 kernel: with environment: Jan 29 11:26:40.789178 kernel: HOME=/ Jan 29 11:26:40.789184 kernel: TERM=linux Jan 29 11:26:40.789190 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jan 29 11:26:40.789197 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 29 11:26:40.789205 systemd[1]: Detected virtualization vmware. Jan 29 11:26:40.789212 systemd[1]: Detected architecture x86-64. Jan 29 11:26:40.789218 systemd[1]: Running in initrd. Jan 29 11:26:40.789224 systemd[1]: No hostname configured, using default hostname. Jan 29 11:26:40.789232 systemd[1]: Hostname set to . Jan 29 11:26:40.789239 systemd[1]: Initializing machine ID from random generator. Jan 29 11:26:40.789245 systemd[1]: Queued start job for default target initrd.target. Jan 29 11:26:40.789251 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 29 11:26:40.789257 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 29 11:26:40.789286 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jan 29 11:26:40.789294 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 29 11:26:40.789301 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jan 29 11:26:40.789324 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jan 29 11:26:40.789331 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jan 29 11:26:40.789338 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jan 29 11:26:40.789344 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 29 11:26:40.789350 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 29 11:26:40.789356 systemd[1]: Reached target paths.target - Path Units. Jan 29 11:26:40.789363 systemd[1]: Reached target slices.target - Slice Units. Jan 29 11:26:40.789370 systemd[1]: Reached target swap.target - Swaps. Jan 29 11:26:40.789377 systemd[1]: Reached target timers.target - Timer Units. Jan 29 11:26:40.789383 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jan 29 11:26:40.789390 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 29 11:26:40.789396 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). 
Jan 29 11:26:40.789402 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Jan 29 11:26:40.789408 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 29 11:26:40.789415 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 29 11:26:40.789422 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 29 11:26:40.789428 systemd[1]: Reached target sockets.target - Socket Units. Jan 29 11:26:40.789435 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jan 29 11:26:40.789441 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 29 11:26:40.789448 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jan 29 11:26:40.789454 systemd[1]: Starting systemd-fsck-usr.service... Jan 29 11:26:40.789460 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 29 11:26:40.789467 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 29 11:26:40.789484 systemd-journald[217]: Collecting audit messages is disabled. Jan 29 11:26:40.789502 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 29 11:26:40.789508 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jan 29 11:26:40.789515 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 29 11:26:40.789521 systemd[1]: Finished systemd-fsck-usr.service. Jan 29 11:26:40.789529 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jan 29 11:26:40.789536 kernel: Bridge firewalling registered Jan 29 11:26:40.789542 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 29 11:26:40.789549 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 29 11:26:40.789557 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 29 11:26:40.789563 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 29 11:26:40.789570 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 29 11:26:40.789576 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 29 11:26:40.789582 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 29 11:26:40.789589 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 29 11:26:40.789595 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 29 11:26:40.789602 systemd-journald[217]: Journal started Jan 29 11:26:40.789617 systemd-journald[217]: Runtime Journal (/run/log/journal/060a5e949f6e4fdc803f7906ee631692) is 4.8M, max 38.7M, 33.8M free. Jan 29 11:26:40.732929 systemd-modules-load[218]: Inserted module 'overlay' Jan 29 11:26:40.790798 systemd[1]: Started systemd-journald.service - Journal Service. Jan 29 11:26:40.755946 systemd-modules-load[218]: Inserted module 'br_netfilter' Jan 29 11:26:40.795770 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 29 11:26:40.795951 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 29 11:26:40.798735 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... 
Jan 29 11:26:40.800758 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 29 11:26:40.805720 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 29 11:26:40.808772 dracut-cmdline[249]: dracut-dracut-053 Jan 29 11:26:40.810277 dracut-cmdline[249]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=vmware flatcar.autologin verity.usrhash=519b8fded83181f8e61f734d5291f916d7548bfba9487c78bcb50d002d81719d Jan 29 11:26:40.823152 systemd-resolved[253]: Positive Trust Anchors: Jan 29 11:26:40.823159 systemd-resolved[253]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 29 11:26:40.823180 systemd-resolved[253]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 29 11:26:40.825820 systemd-resolved[253]: Defaulting to hostname 'linux'. Jan 29 11:26:40.826355 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 29 11:26:40.826501 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 29 11:26:40.853673 kernel: SCSI subsystem initialized Jan 29 11:26:40.860667 kernel: Loading iSCSI transport class v2.0-870. Jan 29 11:26:40.866668 kernel: iscsi: registered transport (tcp) Jan 29 11:26:40.879667 kernel: iscsi: registered transport (qla4xxx) Jan 29 11:26:40.879690 kernel: QLogic iSCSI HBA Driver Jan 29 11:26:40.899441 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jan 29 11:26:40.902764 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jan 29 11:26:40.917678 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jan 29 11:26:40.917708 kernel: device-mapper: uevent: version 1.0.3 Jan 29 11:26:40.917717 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Jan 29 11:26:40.951717 kernel: raid6: avx2x4 gen() 52145 MB/s Jan 29 11:26:40.966668 kernel: raid6: avx2x2 gen() 53834 MB/s Jan 29 11:26:40.983870 kernel: raid6: avx2x1 gen() 45208 MB/s Jan 29 11:26:40.983890 kernel: raid6: using algorithm avx2x2 gen() 53834 MB/s Jan 29 11:26:41.001860 kernel: raid6: .... xor() 31631 MB/s, rmw enabled Jan 29 11:26:41.001882 kernel: raid6: using avx2x2 recovery algorithm Jan 29 11:26:41.015666 kernel: xor: automatically using best checksumming function avx Jan 29 11:26:41.116681 kernel: Btrfs loaded, zoned=no, fsverity=no Jan 29 11:26:41.122367 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jan 29 11:26:41.126754 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 29 11:26:41.134292 systemd-udevd[435]: Using default interface naming scheme 'v255'. 
Jan 29 11:26:41.136841 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 29 11:26:41.141787 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jan 29 11:26:41.149156 dracut-pre-trigger[436]: rd.md=0: removing MD RAID activation Jan 29 11:26:41.164599 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jan 29 11:26:41.168910 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 29 11:26:41.240845 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 29 11:26:41.244759 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jan 29 11:26:41.253093 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jan 29 11:26:41.253549 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jan 29 11:26:41.253760 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 29 11:26:41.254702 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 29 11:26:41.259848 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jan 29 11:26:41.267274 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jan 29 11:26:41.310685 kernel: VMware vmxnet3 virtual NIC driver - version 1.7.0.0-k-NAPI Jan 29 11:26:41.315764 kernel: VMware PVSCSI driver - version 1.0.7.0-k Jan 29 11:26:41.315787 kernel: vmxnet3 0000:0b:00.0: # of Tx queues : 2, # of Rx queues : 2 Jan 29 11:26:41.336245 kernel: vmw_pvscsi: using 64bit dma Jan 29 11:26:41.336266 kernel: vmw_pvscsi: max_id: 16 Jan 29 11:26:41.336279 kernel: vmw_pvscsi: setting ring_pages to 8 Jan 29 11:26:41.336287 kernel: vmw_pvscsi: enabling reqCallThreshold Jan 29 11:26:41.336295 kernel: vmw_pvscsi: driver-based request coalescing enabled Jan 29 11:26:41.336302 kernel: vmw_pvscsi: using MSI-X Jan 29 11:26:41.336309 kernel: vmxnet3 0000:0b:00.0 eth0: NIC Link is Up 10000 Mbps Jan 29 11:26:41.336401 kernel: scsi host0: VMware PVSCSI storage adapter rev 2, req/cmp/msg rings: 8/8/1 pages, cmd_per_lun=254 Jan 29 11:26:41.336481 kernel: cryptd: max_cpu_qlen set to 1000 Jan 29 11:26:41.336489 kernel: vmw_pvscsi 0000:03:00.0: VMware PVSCSI rev 2 host #0 Jan 29 11:26:41.336563 kernel: scsi 0:0:0:0: Direct-Access VMware Virtual disk 2.0 PQ: 0 ANSI: 6 Jan 29 11:26:41.337136 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 29 11:26:41.337379 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 29 11:26:41.337732 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 29 11:26:41.338008 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 29 11:26:41.338217 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 29 11:26:41.338474 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jan 29 11:26:41.343682 kernel: AVX2 version of gcm_enc/dec engaged. Jan 29 11:26:41.343738 kernel: AES CTR mode by8 optimization enabled Jan 29 11:26:41.344673 kernel: libata version 3.00 loaded. Jan 29 11:26:41.345824 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
Jan 29 11:26:41.348690 kernel: ata_piix 0000:00:07.1: version 2.13 Jan 29 11:26:41.353860 kernel: scsi host1: ata_piix Jan 29 11:26:41.353940 kernel: vmxnet3 0000:0b:00.0 ens192: renamed from eth0 Jan 29 11:26:41.354013 kernel: scsi host2: ata_piix Jan 29 11:26:41.354082 kernel: ata1: PATA max UDMA/33 cmd 0x1f0 ctl 0x3f6 bmdma 0x1060 irq 14 Jan 29 11:26:41.354091 kernel: ata2: PATA max UDMA/33 cmd 0x170 ctl 0x376 bmdma 0x1068 irq 15 Jan 29 11:26:41.361283 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 29 11:26:41.365797 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 29 11:26:41.378287 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 29 11:26:41.524674 kernel: ata2.00: ATAPI: VMware Virtual IDE CDROM Drive, 00000001, max UDMA/33 Jan 29 11:26:41.530688 kernel: scsi 2:0:0:0: CD-ROM NECVMWar VMware IDE CDR10 1.00 PQ: 0 ANSI: 5 Jan 29 11:26:41.541767 kernel: sd 0:0:0:0: [sda] 17805312 512-byte logical blocks: (9.12 GB/8.49 GiB) Jan 29 11:26:41.547331 kernel: sd 0:0:0:0: [sda] Write Protect is off Jan 29 11:26:41.547420 kernel: sd 0:0:0:0: [sda] Mode Sense: 31 00 00 00 Jan 29 11:26:41.547488 kernel: sd 0:0:0:0: [sda] Cache data unavailable Jan 29 11:26:41.547550 kernel: sd 0:0:0:0: [sda] Assuming drive cache: write through Jan 29 11:26:41.547609 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 29 11:26:41.547619 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Jan 29 11:26:41.557695 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 1x/1x writer dvd-ram cd/rw xa/form2 cdda tray Jan 29 11:26:41.569604 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Jan 29 11:26:41.569628 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Jan 29 11:26:41.573669 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by (udev-worker) (490) Jan 29 11:26:41.575515 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_disk EFI-SYSTEM. Jan 29 11:26:41.579577 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_disk ROOT. Jan 29 11:26:41.579762 kernel: BTRFS: device fsid 5ba3c9ea-61f2-4fe6-a507-2966757f6d44 devid 1 transid 38 /dev/sda3 scanned by (udev-worker) (487) Jan 29 11:26:41.582562 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_disk OEM. Jan 29 11:26:41.584980 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_disk USR-A. Jan 29 11:26:41.585264 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_disk USR-A. Jan 29 11:26:41.592841 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jan 29 11:26:41.617676 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 29 11:26:42.625701 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 29 11:26:42.625750 disk-uuid[587]: The operation has completed successfully. Jan 29 11:26:42.664404 systemd[1]: disk-uuid.service: Deactivated successfully. Jan 29 11:26:42.664476 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jan 29 11:26:42.669761 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jan 29 11:26:42.674581 sh[605]: Success Jan 29 11:26:42.690670 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Jan 29 11:26:42.770860 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jan 29 11:26:42.780591 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... 
Jan 29 11:26:42.780855 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jan 29 11:26:42.818431 kernel: BTRFS info (device dm-0): first mount of filesystem 5ba3c9ea-61f2-4fe6-a507-2966757f6d44 Jan 29 11:26:42.818491 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jan 29 11:26:42.818502 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jan 29 11:26:42.819525 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 29 11:26:42.820332 kernel: BTRFS info (device dm-0): using free space tree Jan 29 11:26:42.827687 kernel: BTRFS info (device dm-0): enabling ssd optimizations Jan 29 11:26:42.830318 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jan 29 11:26:42.834800 systemd[1]: Starting afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments... Jan 29 11:26:42.835951 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jan 29 11:26:42.884846 kernel: BTRFS info (device sda6): first mount of filesystem 46e45d4d-e07d-4ebc-bafb-221646b0ed58 Jan 29 11:26:42.884889 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jan 29 11:26:42.886672 kernel: BTRFS info (device sda6): using free space tree Jan 29 11:26:42.894566 kernel: BTRFS info (device sda6): enabling ssd optimizations Jan 29 11:26:42.898877 systemd[1]: mnt-oem.mount: Deactivated successfully. Jan 29 11:26:42.900723 kernel: BTRFS info (device sda6): last unmount of filesystem 46e45d4d-e07d-4ebc-bafb-221646b0ed58 Jan 29 11:26:42.906565 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jan 29 11:26:42.915970 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jan 29 11:26:42.936052 systemd[1]: Finished afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments. Jan 29 11:26:42.940777 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jan 29 11:26:43.022211 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 29 11:26:43.022895 ignition[665]: Ignition 2.20.0 Jan 29 11:26:43.022899 ignition[665]: Stage: fetch-offline Jan 29 11:26:43.022918 ignition[665]: no configs at "/usr/lib/ignition/base.d" Jan 29 11:26:43.022923 ignition[665]: no config dir at "/usr/lib/ignition/base.platform.d/vmware" Jan 29 11:26:43.022973 ignition[665]: parsed url from cmdline: "" Jan 29 11:26:43.022975 ignition[665]: no config URL provided Jan 29 11:26:43.022978 ignition[665]: reading system config file "/usr/lib/ignition/user.ign" Jan 29 11:26:43.022982 ignition[665]: no config at "/usr/lib/ignition/user.ign" Jan 29 11:26:43.023355 ignition[665]: config successfully fetched Jan 29 11:26:43.023371 ignition[665]: parsing config with SHA512: 2097afec7ba2fdd3c665c36c830442456cfb5eb53856f569f4c0ebfe44702751146a6195cd979dcf7d0e5af4b72e3a8e518c588efc9f8ac5392578edc6ca29b4 Jan 29 11:26:43.025894 unknown[665]: fetched base config from "system" Jan 29 11:26:43.026112 ignition[665]: fetch-offline: fetch-offline passed Jan 29 11:26:43.025900 unknown[665]: fetched user config from "vmware" Jan 29 11:26:43.026152 ignition[665]: Ignition finished successfully Jan 29 11:26:43.026785 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 29 11:26:43.027196 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). 
Jan 29 11:26:43.041759 systemd-networkd[797]: lo: Link UP Jan 29 11:26:43.041766 systemd-networkd[797]: lo: Gained carrier Jan 29 11:26:43.042483 systemd-networkd[797]: Enumeration completed Jan 29 11:26:43.042556 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 29 11:26:43.042739 systemd[1]: Reached target network.target - Network. Jan 29 11:26:43.042810 systemd-networkd[797]: ens192: Configuring with /etc/systemd/network/10-dracut-cmdline-99.network. Jan 29 11:26:43.046489 kernel: vmxnet3 0000:0b:00.0 ens192: intr type 3, mode 0, 3 vectors allocated Jan 29 11:26:43.046611 kernel: vmxnet3 0000:0b:00.0 ens192: NIC Link is Up 10000 Mbps Jan 29 11:26:43.042837 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Jan 29 11:26:43.046275 systemd-networkd[797]: ens192: Link UP Jan 29 11:26:43.046277 systemd-networkd[797]: ens192: Gained carrier Jan 29 11:26:43.047803 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jan 29 11:26:43.057092 ignition[800]: Ignition 2.20.0 Jan 29 11:26:43.057103 ignition[800]: Stage: kargs Jan 29 11:26:43.057227 ignition[800]: no configs at "/usr/lib/ignition/base.d" Jan 29 11:26:43.057234 ignition[800]: no config dir at "/usr/lib/ignition/base.platform.d/vmware" Jan 29 11:26:43.057817 ignition[800]: kargs: kargs passed Jan 29 11:26:43.057843 ignition[800]: Ignition finished successfully Jan 29 11:26:43.059030 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jan 29 11:26:43.064832 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jan 29 11:26:43.071839 ignition[807]: Ignition 2.20.0 Jan 29 11:26:43.071846 ignition[807]: Stage: disks Jan 29 11:26:43.071950 ignition[807]: no configs at "/usr/lib/ignition/base.d" Jan 29 11:26:43.071956 ignition[807]: no config dir at "/usr/lib/ignition/base.platform.d/vmware" Jan 29 11:26:43.072452 ignition[807]: disks: disks passed Jan 29 11:26:43.072486 ignition[807]: Ignition finished successfully Jan 29 11:26:43.073208 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jan 29 11:26:43.073410 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jan 29 11:26:43.073508 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 29 11:26:43.073609 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 29 11:26:43.073704 systemd[1]: Reached target sysinit.target - System Initialization. Jan 29 11:26:43.073889 systemd[1]: Reached target basic.target - Basic System. Jan 29 11:26:43.079834 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jan 29 11:26:43.103235 systemd-fsck[815]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks Jan 29 11:26:43.104234 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jan 29 11:26:43.108798 systemd[1]: Mounting sysroot.mount - /sysroot... Jan 29 11:26:43.170573 systemd[1]: Mounted sysroot.mount - /sysroot. Jan 29 11:26:43.170831 kernel: EXT4-fs (sda9): mounted filesystem 2fbf9359-701e-4995-b3f7-74280bd2b1c9 r/w with ordered data mode. Quota mode: none. Jan 29 11:26:43.171046 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jan 29 11:26:43.175736 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 29 11:26:43.176715 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... 
Jan 29 11:26:43.177850 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Jan 29 11:26:43.177918 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jan 29 11:26:43.177934 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jan 29 11:26:43.182053 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jan 29 11:26:43.184771 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jan 29 11:26:43.186640 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by mount (823) Jan 29 11:26:43.186706 kernel: BTRFS info (device sda6): first mount of filesystem 46e45d4d-e07d-4ebc-bafb-221646b0ed58 Jan 29 11:26:43.186717 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jan 29 11:26:43.187291 kernel: BTRFS info (device sda6): using free space tree Jan 29 11:26:43.190680 kernel: BTRFS info (device sda6): enabling ssd optimizations Jan 29 11:26:43.191431 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 29 11:26:43.242186 initrd-setup-root[847]: cut: /sysroot/etc/passwd: No such file or directory Jan 29 11:26:43.245055 initrd-setup-root[854]: cut: /sysroot/etc/group: No such file or directory Jan 29 11:26:43.247640 initrd-setup-root[861]: cut: /sysroot/etc/shadow: No such file or directory Jan 29 11:26:43.250086 initrd-setup-root[868]: cut: /sysroot/etc/gshadow: No such file or directory Jan 29 11:26:43.332781 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 29 11:26:43.340781 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 29 11:26:43.343187 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jan 29 11:26:43.346667 kernel: BTRFS info (device sda6): last unmount of filesystem 46e45d4d-e07d-4ebc-bafb-221646b0ed58 Jan 29 11:26:43.362669 ignition[935]: INFO : Ignition 2.20.0 Jan 29 11:26:43.362669 ignition[935]: INFO : Stage: mount Jan 29 11:26:43.362669 ignition[935]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 29 11:26:43.362669 ignition[935]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/vmware" Jan 29 11:26:43.364390 ignition[935]: INFO : mount: mount passed Jan 29 11:26:43.364390 ignition[935]: INFO : Ignition finished successfully Jan 29 11:26:43.365125 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 29 11:26:43.369746 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 29 11:26:43.369980 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jan 29 11:26:43.816802 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jan 29 11:26:43.821761 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 29 11:26:43.829704 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/sda6 scanned by mount (948) Jan 29 11:26:43.832104 kernel: BTRFS info (device sda6): first mount of filesystem 46e45d4d-e07d-4ebc-bafb-221646b0ed58 Jan 29 11:26:43.832124 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jan 29 11:26:43.832132 kernel: BTRFS info (device sda6): using free space tree Jan 29 11:26:43.835669 kernel: BTRFS info (device sda6): enabling ssd optimizations Jan 29 11:26:43.836956 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jan 29 11:26:43.854525 ignition[965]: INFO : Ignition 2.20.0 Jan 29 11:26:43.854829 ignition[965]: INFO : Stage: files Jan 29 11:26:43.855220 ignition[965]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 29 11:26:43.855359 ignition[965]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/vmware" Jan 29 11:26:43.856283 ignition[965]: DEBUG : files: compiled without relabeling support, skipping Jan 29 11:26:43.857075 ignition[965]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 29 11:26:43.857075 ignition[965]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 29 11:26:43.859248 ignition[965]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 29 11:26:43.859384 ignition[965]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 29 11:26:43.859521 ignition[965]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 29 11:26:43.859465 unknown[965]: wrote ssh authorized keys file for user: core Jan 29 11:26:43.872798 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Jan 29 11:26:43.872964 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1 Jan 29 11:26:43.908189 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jan 29 11:26:44.000303 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Jan 29 11:26:44.000536 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Jan 29 11:26:44.000536 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Jan 29 11:26:44.000536 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Jan 29 11:26:44.000536 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Jan 29 11:26:44.000536 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 29 11:26:44.001259 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 29 11:26:44.001259 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 29 11:26:44.001259 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 29 11:26:44.003965 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 29 11:26:44.004113 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 29 11:26:44.004113 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw" Jan 29 11:26:44.004453 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link 
"/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw" Jan 29 11:26:44.004453 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw" Jan 29 11:26:44.004453 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.32.0-x86-64.raw: attempt #1 Jan 29 11:26:44.370352 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Jan 29 11:26:44.570136 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw" Jan 29 11:26:44.570457 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/etc/systemd/network/00-vmware.network" Jan 29 11:26:44.570457 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/etc/systemd/network/00-vmware.network" Jan 29 11:26:44.570457 ignition[965]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Jan 29 11:26:44.571017 ignition[965]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 29 11:26:44.571215 ignition[965]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 29 11:26:44.571215 ignition[965]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Jan 29 11:26:44.571215 ignition[965]: INFO : files: op(e): [started] processing unit "coreos-metadata.service" Jan 29 11:26:44.571215 ignition[965]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jan 29 11:26:44.571215 ignition[965]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jan 29 11:26:44.571215 ignition[965]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service" Jan 29 11:26:44.571215 ignition[965]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service" Jan 29 11:26:44.616256 ignition[965]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service" Jan 29 11:26:44.619422 ignition[965]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service" Jan 29 11:26:44.619613 ignition[965]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service" Jan 29 11:26:44.619613 ignition[965]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service" Jan 29 11:26:44.619613 ignition[965]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service" Jan 29 11:26:44.620006 ignition[965]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 29 11:26:44.620006 ignition[965]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 29 11:26:44.620006 ignition[965]: INFO : files: files passed Jan 29 11:26:44.620006 ignition[965]: INFO : Ignition finished successfully Jan 29 11:26:44.620794 systemd[1]: Finished ignition-files.service - Ignition 
(files). Jan 29 11:26:44.624803 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 29 11:26:44.626453 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 29 11:26:44.627004 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 29 11:26:44.627177 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jan 29 11:26:44.633744 initrd-setup-root-after-ignition[996]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 29 11:26:44.633744 initrd-setup-root-after-ignition[996]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 29 11:26:44.634132 initrd-setup-root-after-ignition[1000]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 29 11:26:44.634719 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 29 11:26:44.635049 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 29 11:26:44.638810 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 29 11:26:44.650816 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 29 11:26:44.650879 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 29 11:26:44.651289 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 29 11:26:44.651410 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 29 11:26:44.651610 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 29 11:26:44.652059 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 29 11:26:44.661119 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 29 11:26:44.664814 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 29 11:26:44.670252 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 29 11:26:44.670431 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 29 11:26:44.670661 systemd[1]: Stopped target timers.target - Timer Units. Jan 29 11:26:44.670844 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 29 11:26:44.670918 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 29 11:26:44.671176 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 29 11:26:44.671415 systemd[1]: Stopped target basic.target - Basic System. Jan 29 11:26:44.671597 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 29 11:26:44.671797 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 29 11:26:44.672002 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 29 11:26:44.672215 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 29 11:26:44.672566 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 29 11:26:44.672810 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 29 11:26:44.673021 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 29 11:26:44.673215 systemd[1]: Stopped target swap.target - Swaps. Jan 29 11:26:44.673376 systemd[1]: dracut-pre-mount.service: Deactivated successfully. 
Jan 29 11:26:44.673436 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 29 11:26:44.673684 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 29 11:26:44.673834 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 29 11:26:44.674019 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 29 11:26:44.674065 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 29 11:26:44.674232 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 29 11:26:44.674289 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 29 11:26:44.674526 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 29 11:26:44.674592 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 29 11:26:44.674886 systemd[1]: Stopped target paths.target - Path Units. Jan 29 11:26:44.675025 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 29 11:26:44.678725 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 29 11:26:44.678890 systemd[1]: Stopped target slices.target - Slice Units. Jan 29 11:26:44.679085 systemd[1]: Stopped target sockets.target - Socket Units. Jan 29 11:26:44.679268 systemd[1]: iscsid.socket: Deactivated successfully. Jan 29 11:26:44.679336 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 29 11:26:44.679543 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 29 11:26:44.679588 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 29 11:26:44.679733 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 29 11:26:44.679793 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 29 11:26:44.680029 systemd[1]: ignition-files.service: Deactivated successfully. Jan 29 11:26:44.680084 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 29 11:26:44.687781 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 29 11:26:44.687880 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 29 11:26:44.687967 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 29 11:26:44.689775 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 29 11:26:44.689880 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 29 11:26:44.689968 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 29 11:26:44.690153 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 29 11:26:44.690230 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 29 11:26:44.692779 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 29 11:26:44.692846 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 29 11:26:44.696781 ignition[1020]: INFO : Ignition 2.20.0 Jan 29 11:26:44.702922 ignition[1020]: INFO : Stage: umount Jan 29 11:26:44.702922 ignition[1020]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 29 11:26:44.702922 ignition[1020]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/vmware" Jan 29 11:26:44.702922 ignition[1020]: INFO : umount: umount passed Jan 29 11:26:44.702922 ignition[1020]: INFO : Ignition finished successfully Jan 29 11:26:44.703816 systemd[1]: ignition-mount.service: Deactivated successfully. 
Jan 29 11:26:44.703909 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 29 11:26:44.704207 systemd[1]: Stopped target network.target - Network. Jan 29 11:26:44.704312 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 29 11:26:44.704348 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 29 11:26:44.704481 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 29 11:26:44.704509 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 29 11:26:44.704646 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 29 11:26:44.704691 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 29 11:26:44.704815 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 29 11:26:44.704836 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 29 11:26:44.705057 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 29 11:26:44.706701 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 29 11:26:44.711721 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 29 11:26:44.712828 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 29 11:26:44.712895 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 29 11:26:44.715189 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 29 11:26:44.715267 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 29 11:26:44.716622 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 29 11:26:44.716765 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 29 11:26:44.720760 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 29 11:26:44.720863 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 29 11:26:44.720898 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 29 11:26:44.721148 systemd[1]: afterburn-network-kargs.service: Deactivated successfully. Jan 29 11:26:44.721171 systemd[1]: Stopped afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments. Jan 29 11:26:44.721736 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 29 11:26:44.721762 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 29 11:26:44.722026 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 29 11:26:44.722049 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 29 11:26:44.722151 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 29 11:26:44.722173 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 29 11:26:44.722333 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 29 11:26:44.730508 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 29 11:26:44.730586 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 29 11:26:44.737260 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 29 11:26:44.737358 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 29 11:26:44.737743 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 29 11:26:44.737776 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 29 11:26:44.737993 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. 
Jan 29 11:26:44.738015 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 29 11:26:44.738176 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 29 11:26:44.738201 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 29 11:26:44.738515 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 29 11:26:44.738548 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 29 11:26:44.738866 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 29 11:26:44.738899 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 29 11:26:44.743795 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 29 11:26:44.743934 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 29 11:26:44.743967 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 29 11:26:44.744791 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 29 11:26:44.744831 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 29 11:26:44.747788 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 29 11:26:44.747869 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 29 11:26:44.781933 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 29 11:26:44.782019 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 29 11:26:44.782287 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 29 11:26:44.782414 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 29 11:26:44.782439 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 29 11:26:44.786795 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 29 11:26:44.798803 systemd[1]: Switching root. 
Jan 29 11:26:44.823290 systemd-journald[217]: Journal stopped Jan 29 11:26:40.735345 kernel: Linux version 6.6.74-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p1) 13.3.1 20240614, GNU ld (Gentoo 2.42 p6) 2.42.0) #1 SMP PREEMPT_DYNAMIC Wed Jan 29 09:36:13 -00 2025 Jan 29 11:26:40.735362 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=vmware flatcar.autologin verity.usrhash=519b8fded83181f8e61f734d5291f916d7548bfba9487c78bcb50d002d81719d Jan 29 11:26:40.735368 kernel: Disabled fast string operations Jan 29 11:26:40.735372 kernel: BIOS-provided physical RAM map: Jan 29 11:26:40.735376 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ebff] usable Jan 29 11:26:40.735380 kernel: BIOS-e820: [mem 0x000000000009ec00-0x000000000009ffff] reserved Jan 29 11:26:40.735386 kernel: BIOS-e820: [mem 0x00000000000dc000-0x00000000000fffff] reserved Jan 29 11:26:40.735391 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007fedffff] usable Jan 29 11:26:40.735395 kernel: BIOS-e820: [mem 0x000000007fee0000-0x000000007fefefff] ACPI data Jan 29 11:26:40.735399 kernel: BIOS-e820: [mem 0x000000007feff000-0x000000007fefffff] ACPI NVS Jan 29 11:26:40.735403 kernel: BIOS-e820: [mem 0x000000007ff00000-0x000000007fffffff] usable Jan 29 11:26:40.735407 kernel: BIOS-e820: [mem 0x00000000f0000000-0x00000000f7ffffff] reserved Jan 29 11:26:40.735411 kernel: BIOS-e820: [mem 0x00000000fec00000-0x00000000fec0ffff] reserved Jan 29 11:26:40.735416 kernel: BIOS-e820: [mem 0x00000000fee00000-0x00000000fee00fff] reserved Jan 29 11:26:40.735422 kernel: BIOS-e820: [mem 0x00000000fffe0000-0x00000000ffffffff] reserved Jan 29 11:26:40.735427 kernel: NX (Execute Disable) protection: active Jan 29 11:26:40.735432 kernel: APIC: Static calls initialized Jan 29 11:26:40.735436 kernel: SMBIOS 2.7 present. Jan 29 11:26:40.735441 kernel: DMI: VMware, Inc. 
VMware Virtual Platform/440BX Desktop Reference Platform, BIOS 6.00 05/28/2020 Jan 29 11:26:40.735446 kernel: vmware: hypercall mode: 0x00 Jan 29 11:26:40.735451 kernel: Hypervisor detected: VMware Jan 29 11:26:40.735455 kernel: vmware: TSC freq read from hypervisor : 3408.000 MHz Jan 29 11:26:40.735461 kernel: vmware: Host bus clock speed read from hypervisor : 66000000 Hz Jan 29 11:26:40.735466 kernel: vmware: using clock offset of 2745653214 ns Jan 29 11:26:40.735471 kernel: tsc: Detected 3408.000 MHz processor Jan 29 11:26:40.735476 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Jan 29 11:26:40.735481 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Jan 29 11:26:40.735486 kernel: last_pfn = 0x80000 max_arch_pfn = 0x400000000 Jan 29 11:26:40.735491 kernel: total RAM covered: 3072M Jan 29 11:26:40.735495 kernel: Found optimal setting for mtrr clean up Jan 29 11:26:40.735501 kernel: gran_size: 64K chunk_size: 64K num_reg: 2 lose cover RAM: 0G Jan 29 11:26:40.735507 kernel: MTRR map: 6 entries (5 fixed + 1 variable; max 21), built from 8 variable MTRRs Jan 29 11:26:40.735511 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Jan 29 11:26:40.735516 kernel: Using GB pages for direct mapping Jan 29 11:26:40.735521 kernel: ACPI: Early table checksum verification disabled Jan 29 11:26:40.735542 kernel: ACPI: RSDP 0x00000000000F6A00 000024 (v02 PTLTD ) Jan 29 11:26:40.735547 kernel: ACPI: XSDT 0x000000007FEE965B 00005C (v01 INTEL 440BX 06040000 VMW 01324272) Jan 29 11:26:40.735552 kernel: ACPI: FACP 0x000000007FEFEE73 0000F4 (v04 INTEL 440BX 06040000 PTL 000F4240) Jan 29 11:26:40.735556 kernel: ACPI: DSDT 0x000000007FEEAD55 01411E (v01 PTLTD Custom 06040000 MSFT 03000001) Jan 29 11:26:40.735561 kernel: ACPI: FACS 0x000000007FEFFFC0 000040 Jan 29 11:26:40.735568 kernel: ACPI: FACS 0x000000007FEFFFC0 000040 Jan 29 11:26:40.735573 kernel: ACPI: BOOT 0x000000007FEEAD2D 000028 (v01 PTLTD $SBFTBL$ 06040000 LTP 00000001) Jan 29 11:26:40.735578 kernel: ACPI: APIC 0x000000007FEEA5EB 000742 (v01 PTLTD ? 
APIC 06040000 LTP 00000000) Jan 29 11:26:40.735584 kernel: ACPI: MCFG 0x000000007FEEA5AF 00003C (v01 PTLTD $PCITBL$ 06040000 LTP 00000001) Jan 29 11:26:40.735588 kernel: ACPI: SRAT 0x000000007FEE9757 0008A8 (v02 VMWARE MEMPLUG 06040000 VMW 00000001) Jan 29 11:26:40.735595 kernel: ACPI: HPET 0x000000007FEE971F 000038 (v01 VMWARE VMW HPET 06040000 VMW 00000001) Jan 29 11:26:40.735600 kernel: ACPI: WAET 0x000000007FEE96F7 000028 (v01 VMWARE VMW WAET 06040000 VMW 00000001) Jan 29 11:26:40.735605 kernel: ACPI: Reserving FACP table memory at [mem 0x7fefee73-0x7fefef66] Jan 29 11:26:40.735610 kernel: ACPI: Reserving DSDT table memory at [mem 0x7feead55-0x7fefee72] Jan 29 11:26:40.735615 kernel: ACPI: Reserving FACS table memory at [mem 0x7fefffc0-0x7fefffff] Jan 29 11:26:40.735619 kernel: ACPI: Reserving FACS table memory at [mem 0x7fefffc0-0x7fefffff] Jan 29 11:26:40.735624 kernel: ACPI: Reserving BOOT table memory at [mem 0x7feead2d-0x7feead54] Jan 29 11:26:40.735630 kernel: ACPI: Reserving APIC table memory at [mem 0x7feea5eb-0x7feead2c] Jan 29 11:26:40.735634 kernel: ACPI: Reserving MCFG table memory at [mem 0x7feea5af-0x7feea5ea] Jan 29 11:26:40.735639 kernel: ACPI: Reserving SRAT table memory at [mem 0x7fee9757-0x7fee9ffe] Jan 29 11:26:40.735645 kernel: ACPI: Reserving HPET table memory at [mem 0x7fee971f-0x7fee9756] Jan 29 11:26:40.735650 kernel: ACPI: Reserving WAET table memory at [mem 0x7fee96f7-0x7fee971e] Jan 29 11:26:40.735665 kernel: system APIC only can use physical flat Jan 29 11:26:40.735671 kernel: APIC: Switched APIC routing to: physical flat Jan 29 11:26:40.735676 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0 Jan 29 11:26:40.735681 kernel: SRAT: PXM 0 -> APIC 0x02 -> Node 0 Jan 29 11:26:40.735686 kernel: SRAT: PXM 0 -> APIC 0x04 -> Node 0 Jan 29 11:26:40.735691 kernel: SRAT: PXM 0 -> APIC 0x06 -> Node 0 Jan 29 11:26:40.735696 kernel: SRAT: PXM 0 -> APIC 0x08 -> Node 0 Jan 29 11:26:40.735703 kernel: SRAT: PXM 0 -> APIC 0x0a -> Node 0 Jan 29 11:26:40.735708 kernel: SRAT: PXM 0 -> APIC 0x0c -> Node 0 Jan 29 11:26:40.735713 kernel: SRAT: PXM 0 -> APIC 0x0e -> Node 0 Jan 29 11:26:40.735718 kernel: SRAT: PXM 0 -> APIC 0x10 -> Node 0 Jan 29 11:26:40.735723 kernel: SRAT: PXM 0 -> APIC 0x12 -> Node 0 Jan 29 11:26:40.735727 kernel: SRAT: PXM 0 -> APIC 0x14 -> Node 0 Jan 29 11:26:40.735732 kernel: SRAT: PXM 0 -> APIC 0x16 -> Node 0 Jan 29 11:26:40.735737 kernel: SRAT: PXM 0 -> APIC 0x18 -> Node 0 Jan 29 11:26:40.735742 kernel: SRAT: PXM 0 -> APIC 0x1a -> Node 0 Jan 29 11:26:40.735747 kernel: SRAT: PXM 0 -> APIC 0x1c -> Node 0 Jan 29 11:26:40.735753 kernel: SRAT: PXM 0 -> APIC 0x1e -> Node 0 Jan 29 11:26:40.735758 kernel: SRAT: PXM 0 -> APIC 0x20 -> Node 0 Jan 29 11:26:40.735763 kernel: SRAT: PXM 0 -> APIC 0x22 -> Node 0 Jan 29 11:26:40.735773 kernel: SRAT: PXM 0 -> APIC 0x24 -> Node 0 Jan 29 11:26:40.735778 kernel: SRAT: PXM 0 -> APIC 0x26 -> Node 0 Jan 29 11:26:40.735783 kernel: SRAT: PXM 0 -> APIC 0x28 -> Node 0 Jan 29 11:26:40.735788 kernel: SRAT: PXM 0 -> APIC 0x2a -> Node 0 Jan 29 11:26:40.735793 kernel: SRAT: PXM 0 -> APIC 0x2c -> Node 0 Jan 29 11:26:40.735798 kernel: SRAT: PXM 0 -> APIC 0x2e -> Node 0 Jan 29 11:26:40.735803 kernel: SRAT: PXM 0 -> APIC 0x30 -> Node 0 Jan 29 11:26:40.735808 kernel: SRAT: PXM 0 -> APIC 0x32 -> Node 0 Jan 29 11:26:40.735814 kernel: SRAT: PXM 0 -> APIC 0x34 -> Node 0 Jan 29 11:26:40.735819 kernel: SRAT: PXM 0 -> APIC 0x36 -> Node 0 Jan 29 11:26:40.735824 kernel: SRAT: PXM 0 -> APIC 0x38 -> Node 0 Jan 29 11:26:40.735829 kernel: SRAT: PXM 0 -> APIC 0x3a -> 
Node 0 Jan 29 11:26:40.735834 kernel: SRAT: PXM 0 -> APIC 0x3c -> Node 0 Jan 29 11:26:40.735839 kernel: SRAT: PXM 0 -> APIC 0x3e -> Node 0 Jan 29 11:26:40.735844 kernel: SRAT: PXM 0 -> APIC 0x40 -> Node 0 Jan 29 11:26:40.735848 kernel: SRAT: PXM 0 -> APIC 0x42 -> Node 0 Jan 29 11:26:40.735853 kernel: SRAT: PXM 0 -> APIC 0x44 -> Node 0 Jan 29 11:26:40.735858 kernel: SRAT: PXM 0 -> APIC 0x46 -> Node 0 Jan 29 11:26:40.735864 kernel: SRAT: PXM 0 -> APIC 0x48 -> Node 0 Jan 29 11:26:40.735869 kernel: SRAT: PXM 0 -> APIC 0x4a -> Node 0 Jan 29 11:26:40.735874 kernel: SRAT: PXM 0 -> APIC 0x4c -> Node 0 Jan 29 11:26:40.735879 kernel: SRAT: PXM 0 -> APIC 0x4e -> Node 0 Jan 29 11:26:40.735884 kernel: SRAT: PXM 0 -> APIC 0x50 -> Node 0 Jan 29 11:26:40.735889 kernel: SRAT: PXM 0 -> APIC 0x52 -> Node 0 Jan 29 11:26:40.735893 kernel: SRAT: PXM 0 -> APIC 0x54 -> Node 0 Jan 29 11:26:40.735898 kernel: SRAT: PXM 0 -> APIC 0x56 -> Node 0 Jan 29 11:26:40.735903 kernel: SRAT: PXM 0 -> APIC 0x58 -> Node 0 Jan 29 11:26:40.735908 kernel: SRAT: PXM 0 -> APIC 0x5a -> Node 0 Jan 29 11:26:40.735914 kernel: SRAT: PXM 0 -> APIC 0x5c -> Node 0 Jan 29 11:26:40.735919 kernel: SRAT: PXM 0 -> APIC 0x5e -> Node 0 Jan 29 11:26:40.735923 kernel: SRAT: PXM 0 -> APIC 0x60 -> Node 0 Jan 29 11:26:40.735928 kernel: SRAT: PXM 0 -> APIC 0x62 -> Node 0 Jan 29 11:26:40.735933 kernel: SRAT: PXM 0 -> APIC 0x64 -> Node 0 Jan 29 11:26:40.735938 kernel: SRAT: PXM 0 -> APIC 0x66 -> Node 0 Jan 29 11:26:40.735943 kernel: SRAT: PXM 0 -> APIC 0x68 -> Node 0 Jan 29 11:26:40.735947 kernel: SRAT: PXM 0 -> APIC 0x6a -> Node 0 Jan 29 11:26:40.735953 kernel: SRAT: PXM 0 -> APIC 0x6c -> Node 0 Jan 29 11:26:40.735957 kernel: SRAT: PXM 0 -> APIC 0x6e -> Node 0 Jan 29 11:26:40.735963 kernel: SRAT: PXM 0 -> APIC 0x70 -> Node 0 Jan 29 11:26:40.735968 kernel: SRAT: PXM 0 -> APIC 0x72 -> Node 0 Jan 29 11:26:40.735973 kernel: SRAT: PXM 0 -> APIC 0x74 -> Node 0 Jan 29 11:26:40.735982 kernel: SRAT: PXM 0 -> APIC 0x76 -> Node 0 Jan 29 11:26:40.735988 kernel: SRAT: PXM 0 -> APIC 0x78 -> Node 0 Jan 29 11:26:40.735993 kernel: SRAT: PXM 0 -> APIC 0x7a -> Node 0 Jan 29 11:26:40.735998 kernel: SRAT: PXM 0 -> APIC 0x7c -> Node 0 Jan 29 11:26:40.736003 kernel: SRAT: PXM 0 -> APIC 0x7e -> Node 0 Jan 29 11:26:40.736009 kernel: SRAT: PXM 0 -> APIC 0x80 -> Node 0 Jan 29 11:26:40.736015 kernel: SRAT: PXM 0 -> APIC 0x82 -> Node 0 Jan 29 11:26:40.736020 kernel: SRAT: PXM 0 -> APIC 0x84 -> Node 0 Jan 29 11:26:40.736025 kernel: SRAT: PXM 0 -> APIC 0x86 -> Node 0 Jan 29 11:26:40.736030 kernel: SRAT: PXM 0 -> APIC 0x88 -> Node 0 Jan 29 11:26:40.736035 kernel: SRAT: PXM 0 -> APIC 0x8a -> Node 0 Jan 29 11:26:40.736041 kernel: SRAT: PXM 0 -> APIC 0x8c -> Node 0 Jan 29 11:26:40.736046 kernel: SRAT: PXM 0 -> APIC 0x8e -> Node 0 Jan 29 11:26:40.736051 kernel: SRAT: PXM 0 -> APIC 0x90 -> Node 0 Jan 29 11:26:40.736056 kernel: SRAT: PXM 0 -> APIC 0x92 -> Node 0 Jan 29 11:26:40.736061 kernel: SRAT: PXM 0 -> APIC 0x94 -> Node 0 Jan 29 11:26:40.736068 kernel: SRAT: PXM 0 -> APIC 0x96 -> Node 0 Jan 29 11:26:40.736073 kernel: SRAT: PXM 0 -> APIC 0x98 -> Node 0 Jan 29 11:26:40.736078 kernel: SRAT: PXM 0 -> APIC 0x9a -> Node 0 Jan 29 11:26:40.736084 kernel: SRAT: PXM 0 -> APIC 0x9c -> Node 0 Jan 29 11:26:40.736089 kernel: SRAT: PXM 0 -> APIC 0x9e -> Node 0 Jan 29 11:26:40.736094 kernel: SRAT: PXM 0 -> APIC 0xa0 -> Node 0 Jan 29 11:26:40.736099 kernel: SRAT: PXM 0 -> APIC 0xa2 -> Node 0 Jan 29 11:26:40.736104 kernel: SRAT: PXM 0 -> APIC 0xa4 -> Node 0 Jan 29 11:26:40.736109 kernel: SRAT: PXM 0 -> 
APIC 0xa6 -> Node 0 Jan 29 11:26:40.736115 kernel: SRAT: PXM 0 -> APIC 0xa8 -> Node 0 Jan 29 11:26:40.736121 kernel: SRAT: PXM 0 -> APIC 0xaa -> Node 0 Jan 29 11:26:40.736126 kernel: SRAT: PXM 0 -> APIC 0xac -> Node 0 Jan 29 11:26:40.736131 kernel: SRAT: PXM 0 -> APIC 0xae -> Node 0 Jan 29 11:26:40.736137 kernel: SRAT: PXM 0 -> APIC 0xb0 -> Node 0 Jan 29 11:26:40.736142 kernel: SRAT: PXM 0 -> APIC 0xb2 -> Node 0 Jan 29 11:26:40.736147 kernel: SRAT: PXM 0 -> APIC 0xb4 -> Node 0 Jan 29 11:26:40.736152 kernel: SRAT: PXM 0 -> APIC 0xb6 -> Node 0 Jan 29 11:26:40.736157 kernel: SRAT: PXM 0 -> APIC 0xb8 -> Node 0 Jan 29 11:26:40.736162 kernel: SRAT: PXM 0 -> APIC 0xba -> Node 0 Jan 29 11:26:40.736168 kernel: SRAT: PXM 0 -> APIC 0xbc -> Node 0 Jan 29 11:26:40.736173 kernel: SRAT: PXM 0 -> APIC 0xbe -> Node 0 Jan 29 11:26:40.736179 kernel: SRAT: PXM 0 -> APIC 0xc0 -> Node 0 Jan 29 11:26:40.736184 kernel: SRAT: PXM 0 -> APIC 0xc2 -> Node 0 Jan 29 11:26:40.736189 kernel: SRAT: PXM 0 -> APIC 0xc4 -> Node 0 Jan 29 11:26:40.736195 kernel: SRAT: PXM 0 -> APIC 0xc6 -> Node 0 Jan 29 11:26:40.736200 kernel: SRAT: PXM 0 -> APIC 0xc8 -> Node 0 Jan 29 11:26:40.736205 kernel: SRAT: PXM 0 -> APIC 0xca -> Node 0 Jan 29 11:26:40.736210 kernel: SRAT: PXM 0 -> APIC 0xcc -> Node 0 Jan 29 11:26:40.736215 kernel: SRAT: PXM 0 -> APIC 0xce -> Node 0 Jan 29 11:26:40.736221 kernel: SRAT: PXM 0 -> APIC 0xd0 -> Node 0 Jan 29 11:26:40.736226 kernel: SRAT: PXM 0 -> APIC 0xd2 -> Node 0 Jan 29 11:26:40.736232 kernel: SRAT: PXM 0 -> APIC 0xd4 -> Node 0 Jan 29 11:26:40.736237 kernel: SRAT: PXM 0 -> APIC 0xd6 -> Node 0 Jan 29 11:26:40.736243 kernel: SRAT: PXM 0 -> APIC 0xd8 -> Node 0 Jan 29 11:26:40.736248 kernel: SRAT: PXM 0 -> APIC 0xda -> Node 0 Jan 29 11:26:40.736253 kernel: SRAT: PXM 0 -> APIC 0xdc -> Node 0 Jan 29 11:26:40.736258 kernel: SRAT: PXM 0 -> APIC 0xde -> Node 0 Jan 29 11:26:40.736263 kernel: SRAT: PXM 0 -> APIC 0xe0 -> Node 0 Jan 29 11:26:40.736268 kernel: SRAT: PXM 0 -> APIC 0xe2 -> Node 0 Jan 29 11:26:40.736274 kernel: SRAT: PXM 0 -> APIC 0xe4 -> Node 0 Jan 29 11:26:40.736279 kernel: SRAT: PXM 0 -> APIC 0xe6 -> Node 0 Jan 29 11:26:40.736285 kernel: SRAT: PXM 0 -> APIC 0xe8 -> Node 0 Jan 29 11:26:40.736290 kernel: SRAT: PXM 0 -> APIC 0xea -> Node 0 Jan 29 11:26:40.736295 kernel: SRAT: PXM 0 -> APIC 0xec -> Node 0 Jan 29 11:26:40.736300 kernel: SRAT: PXM 0 -> APIC 0xee -> Node 0 Jan 29 11:26:40.736306 kernel: SRAT: PXM 0 -> APIC 0xf0 -> Node 0 Jan 29 11:26:40.736311 kernel: SRAT: PXM 0 -> APIC 0xf2 -> Node 0 Jan 29 11:26:40.736316 kernel: SRAT: PXM 0 -> APIC 0xf4 -> Node 0 Jan 29 11:26:40.736321 kernel: SRAT: PXM 0 -> APIC 0xf6 -> Node 0 Jan 29 11:26:40.736326 kernel: SRAT: PXM 0 -> APIC 0xf8 -> Node 0 Jan 29 11:26:40.736332 kernel: SRAT: PXM 0 -> APIC 0xfa -> Node 0 Jan 29 11:26:40.736337 kernel: SRAT: PXM 0 -> APIC 0xfc -> Node 0 Jan 29 11:26:40.736343 kernel: SRAT: PXM 0 -> APIC 0xfe -> Node 0 Jan 29 11:26:40.736348 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff] Jan 29 11:26:40.736354 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff] Jan 29 11:26:40.736359 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000-0xbfffffff] hotplug Jan 29 11:26:40.736364 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0x7fffffff] -> [mem 0x00000000-0x7fffffff] Jan 29 11:26:40.736370 kernel: NODE_DATA(0) allocated [mem 0x7fffa000-0x7fffffff] Jan 29 11:26:40.736375 kernel: Zone ranges: Jan 29 11:26:40.736381 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Jan 29 11:26:40.736386 kernel: 
DMA32 [mem 0x0000000001000000-0x000000007fffffff] Jan 29 11:26:40.736393 kernel: Normal empty Jan 29 11:26:40.736398 kernel: Movable zone start for each node Jan 29 11:26:40.736403 kernel: Early memory node ranges Jan 29 11:26:40.736409 kernel: node 0: [mem 0x0000000000001000-0x000000000009dfff] Jan 29 11:26:40.736414 kernel: node 0: [mem 0x0000000000100000-0x000000007fedffff] Jan 29 11:26:40.736419 kernel: node 0: [mem 0x000000007ff00000-0x000000007fffffff] Jan 29 11:26:40.736425 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007fffffff] Jan 29 11:26:40.736430 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Jan 29 11:26:40.736435 kernel: On node 0, zone DMA: 98 pages in unavailable ranges Jan 29 11:26:40.736441 kernel: On node 0, zone DMA32: 32 pages in unavailable ranges Jan 29 11:26:40.736447 kernel: ACPI: PM-Timer IO Port: 0x1008 Jan 29 11:26:40.736452 kernel: system APIC only can use physical flat Jan 29 11:26:40.736457 kernel: ACPI: LAPIC_NMI (acpi_id[0x00] high edge lint[0x1]) Jan 29 11:26:40.736463 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] high edge lint[0x1]) Jan 29 11:26:40.736468 kernel: ACPI: LAPIC_NMI (acpi_id[0x02] high edge lint[0x1]) Jan 29 11:26:40.736473 kernel: ACPI: LAPIC_NMI (acpi_id[0x03] high edge lint[0x1]) Jan 29 11:26:40.736478 kernel: ACPI: LAPIC_NMI (acpi_id[0x04] high edge lint[0x1]) Jan 29 11:26:40.736484 kernel: ACPI: LAPIC_NMI (acpi_id[0x05] high edge lint[0x1]) Jan 29 11:26:40.736489 kernel: ACPI: LAPIC_NMI (acpi_id[0x06] high edge lint[0x1]) Jan 29 11:26:40.736495 kernel: ACPI: LAPIC_NMI (acpi_id[0x07] high edge lint[0x1]) Jan 29 11:26:40.736501 kernel: ACPI: LAPIC_NMI (acpi_id[0x08] high edge lint[0x1]) Jan 29 11:26:40.736506 kernel: ACPI: LAPIC_NMI (acpi_id[0x09] high edge lint[0x1]) Jan 29 11:26:40.736511 kernel: ACPI: LAPIC_NMI (acpi_id[0x0a] high edge lint[0x1]) Jan 29 11:26:40.736516 kernel: ACPI: LAPIC_NMI (acpi_id[0x0b] high edge lint[0x1]) Jan 29 11:26:40.736521 kernel: ACPI: LAPIC_NMI (acpi_id[0x0c] high edge lint[0x1]) Jan 29 11:26:40.736527 kernel: ACPI: LAPIC_NMI (acpi_id[0x0d] high edge lint[0x1]) Jan 29 11:26:40.736532 kernel: ACPI: LAPIC_NMI (acpi_id[0x0e] high edge lint[0x1]) Jan 29 11:26:40.736537 kernel: ACPI: LAPIC_NMI (acpi_id[0x0f] high edge lint[0x1]) Jan 29 11:26:40.736542 kernel: ACPI: LAPIC_NMI (acpi_id[0x10] high edge lint[0x1]) Jan 29 11:26:40.736549 kernel: ACPI: LAPIC_NMI (acpi_id[0x11] high edge lint[0x1]) Jan 29 11:26:40.736554 kernel: ACPI: LAPIC_NMI (acpi_id[0x12] high edge lint[0x1]) Jan 29 11:26:40.736559 kernel: ACPI: LAPIC_NMI (acpi_id[0x13] high edge lint[0x1]) Jan 29 11:26:40.736564 kernel: ACPI: LAPIC_NMI (acpi_id[0x14] high edge lint[0x1]) Jan 29 11:26:40.736569 kernel: ACPI: LAPIC_NMI (acpi_id[0x15] high edge lint[0x1]) Jan 29 11:26:40.736575 kernel: ACPI: LAPIC_NMI (acpi_id[0x16] high edge lint[0x1]) Jan 29 11:26:40.736580 kernel: ACPI: LAPIC_NMI (acpi_id[0x17] high edge lint[0x1]) Jan 29 11:26:40.736585 kernel: ACPI: LAPIC_NMI (acpi_id[0x18] high edge lint[0x1]) Jan 29 11:26:40.736590 kernel: ACPI: LAPIC_NMI (acpi_id[0x19] high edge lint[0x1]) Jan 29 11:26:40.736596 kernel: ACPI: LAPIC_NMI (acpi_id[0x1a] high edge lint[0x1]) Jan 29 11:26:40.736602 kernel: ACPI: LAPIC_NMI (acpi_id[0x1b] high edge lint[0x1]) Jan 29 11:26:40.736607 kernel: ACPI: LAPIC_NMI (acpi_id[0x1c] high edge lint[0x1]) Jan 29 11:26:40.736612 kernel: ACPI: LAPIC_NMI (acpi_id[0x1d] high edge lint[0x1]) Jan 29 11:26:40.736617 kernel: ACPI: LAPIC_NMI (acpi_id[0x1e] high edge lint[0x1]) Jan 29 11:26:40.736623 kernel: ACPI: 
LAPIC_NMI (acpi_id[0x1f] high edge lint[0x1]) Jan 29 11:26:40.736628 kernel: ACPI: LAPIC_NMI (acpi_id[0x20] high edge lint[0x1]) Jan 29 11:26:40.736633 kernel: ACPI: LAPIC_NMI (acpi_id[0x21] high edge lint[0x1]) Jan 29 11:26:40.736638 kernel: ACPI: LAPIC_NMI (acpi_id[0x22] high edge lint[0x1]) Jan 29 11:26:40.736644 kernel: ACPI: LAPIC_NMI (acpi_id[0x23] high edge lint[0x1]) Jan 29 11:26:40.736650 kernel: ACPI: LAPIC_NMI (acpi_id[0x24] high edge lint[0x1]) Jan 29 11:26:40.736975 kernel: ACPI: LAPIC_NMI (acpi_id[0x25] high edge lint[0x1]) Jan 29 11:26:40.738958 kernel: ACPI: LAPIC_NMI (acpi_id[0x26] high edge lint[0x1]) Jan 29 11:26:40.738966 kernel: ACPI: LAPIC_NMI (acpi_id[0x27] high edge lint[0x1]) Jan 29 11:26:40.738972 kernel: ACPI: LAPIC_NMI (acpi_id[0x28] high edge lint[0x1]) Jan 29 11:26:40.738977 kernel: ACPI: LAPIC_NMI (acpi_id[0x29] high edge lint[0x1]) Jan 29 11:26:40.738983 kernel: ACPI: LAPIC_NMI (acpi_id[0x2a] high edge lint[0x1]) Jan 29 11:26:40.738988 kernel: ACPI: LAPIC_NMI (acpi_id[0x2b] high edge lint[0x1]) Jan 29 11:26:40.738993 kernel: ACPI: LAPIC_NMI (acpi_id[0x2c] high edge lint[0x1]) Jan 29 11:26:40.738998 kernel: ACPI: LAPIC_NMI (acpi_id[0x2d] high edge lint[0x1]) Jan 29 11:26:40.739006 kernel: ACPI: LAPIC_NMI (acpi_id[0x2e] high edge lint[0x1]) Jan 29 11:26:40.739012 kernel: ACPI: LAPIC_NMI (acpi_id[0x2f] high edge lint[0x1]) Jan 29 11:26:40.739017 kernel: ACPI: LAPIC_NMI (acpi_id[0x30] high edge lint[0x1]) Jan 29 11:26:40.739022 kernel: ACPI: LAPIC_NMI (acpi_id[0x31] high edge lint[0x1]) Jan 29 11:26:40.739027 kernel: ACPI: LAPIC_NMI (acpi_id[0x32] high edge lint[0x1]) Jan 29 11:26:40.739033 kernel: ACPI: LAPIC_NMI (acpi_id[0x33] high edge lint[0x1]) Jan 29 11:26:40.739038 kernel: ACPI: LAPIC_NMI (acpi_id[0x34] high edge lint[0x1]) Jan 29 11:26:40.739043 kernel: ACPI: LAPIC_NMI (acpi_id[0x35] high edge lint[0x1]) Jan 29 11:26:40.739048 kernel: ACPI: LAPIC_NMI (acpi_id[0x36] high edge lint[0x1]) Jan 29 11:26:40.739055 kernel: ACPI: LAPIC_NMI (acpi_id[0x37] high edge lint[0x1]) Jan 29 11:26:40.739060 kernel: ACPI: LAPIC_NMI (acpi_id[0x38] high edge lint[0x1]) Jan 29 11:26:40.739065 kernel: ACPI: LAPIC_NMI (acpi_id[0x39] high edge lint[0x1]) Jan 29 11:26:40.739070 kernel: ACPI: LAPIC_NMI (acpi_id[0x3a] high edge lint[0x1]) Jan 29 11:26:40.739076 kernel: ACPI: LAPIC_NMI (acpi_id[0x3b] high edge lint[0x1]) Jan 29 11:26:40.739081 kernel: ACPI: LAPIC_NMI (acpi_id[0x3c] high edge lint[0x1]) Jan 29 11:26:40.739086 kernel: ACPI: LAPIC_NMI (acpi_id[0x3d] high edge lint[0x1]) Jan 29 11:26:40.739091 kernel: ACPI: LAPIC_NMI (acpi_id[0x3e] high edge lint[0x1]) Jan 29 11:26:40.739097 kernel: ACPI: LAPIC_NMI (acpi_id[0x3f] high edge lint[0x1]) Jan 29 11:26:40.739102 kernel: ACPI: LAPIC_NMI (acpi_id[0x40] high edge lint[0x1]) Jan 29 11:26:40.739108 kernel: ACPI: LAPIC_NMI (acpi_id[0x41] high edge lint[0x1]) Jan 29 11:26:40.739114 kernel: ACPI: LAPIC_NMI (acpi_id[0x42] high edge lint[0x1]) Jan 29 11:26:40.739119 kernel: ACPI: LAPIC_NMI (acpi_id[0x43] high edge lint[0x1]) Jan 29 11:26:40.739124 kernel: ACPI: LAPIC_NMI (acpi_id[0x44] high edge lint[0x1]) Jan 29 11:26:40.739129 kernel: ACPI: LAPIC_NMI (acpi_id[0x45] high edge lint[0x1]) Jan 29 11:26:40.739134 kernel: ACPI: LAPIC_NMI (acpi_id[0x46] high edge lint[0x1]) Jan 29 11:26:40.739140 kernel: ACPI: LAPIC_NMI (acpi_id[0x47] high edge lint[0x1]) Jan 29 11:26:40.739145 kernel: ACPI: LAPIC_NMI (acpi_id[0x48] high edge lint[0x1]) Jan 29 11:26:40.739150 kernel: ACPI: LAPIC_NMI (acpi_id[0x49] high edge lint[0x1]) Jan 29 11:26:40.739155 
kernel: ACPI: LAPIC_NMI (acpi_id[0x4a] high edge lint[0x1]) Jan 29 11:26:40.739162 kernel: ACPI: LAPIC_NMI (acpi_id[0x4b] high edge lint[0x1]) Jan 29 11:26:40.739167 kernel: ACPI: LAPIC_NMI (acpi_id[0x4c] high edge lint[0x1]) Jan 29 11:26:40.739172 kernel: ACPI: LAPIC_NMI (acpi_id[0x4d] high edge lint[0x1]) Jan 29 11:26:40.739178 kernel: ACPI: LAPIC_NMI (acpi_id[0x4e] high edge lint[0x1]) Jan 29 11:26:40.739183 kernel: ACPI: LAPIC_NMI (acpi_id[0x4f] high edge lint[0x1]) Jan 29 11:26:40.739188 kernel: ACPI: LAPIC_NMI (acpi_id[0x50] high edge lint[0x1]) Jan 29 11:26:40.739193 kernel: ACPI: LAPIC_NMI (acpi_id[0x51] high edge lint[0x1]) Jan 29 11:26:40.739198 kernel: ACPI: LAPIC_NMI (acpi_id[0x52] high edge lint[0x1]) Jan 29 11:26:40.739203 kernel: ACPI: LAPIC_NMI (acpi_id[0x53] high edge lint[0x1]) Jan 29 11:26:40.739209 kernel: ACPI: LAPIC_NMI (acpi_id[0x54] high edge lint[0x1]) Jan 29 11:26:40.739215 kernel: ACPI: LAPIC_NMI (acpi_id[0x55] high edge lint[0x1]) Jan 29 11:26:40.739220 kernel: ACPI: LAPIC_NMI (acpi_id[0x56] high edge lint[0x1]) Jan 29 11:26:40.739226 kernel: ACPI: LAPIC_NMI (acpi_id[0x57] high edge lint[0x1]) Jan 29 11:26:40.739231 kernel: ACPI: LAPIC_NMI (acpi_id[0x58] high edge lint[0x1]) Jan 29 11:26:40.739236 kernel: ACPI: LAPIC_NMI (acpi_id[0x59] high edge lint[0x1]) Jan 29 11:26:40.739241 kernel: ACPI: LAPIC_NMI (acpi_id[0x5a] high edge lint[0x1]) Jan 29 11:26:40.739247 kernel: ACPI: LAPIC_NMI (acpi_id[0x5b] high edge lint[0x1]) Jan 29 11:26:40.739252 kernel: ACPI: LAPIC_NMI (acpi_id[0x5c] high edge lint[0x1]) Jan 29 11:26:40.739257 kernel: ACPI: LAPIC_NMI (acpi_id[0x5d] high edge lint[0x1]) Jan 29 11:26:40.739263 kernel: ACPI: LAPIC_NMI (acpi_id[0x5e] high edge lint[0x1]) Jan 29 11:26:40.739268 kernel: ACPI: LAPIC_NMI (acpi_id[0x5f] high edge lint[0x1]) Jan 29 11:26:40.739274 kernel: ACPI: LAPIC_NMI (acpi_id[0x60] high edge lint[0x1]) Jan 29 11:26:40.739279 kernel: ACPI: LAPIC_NMI (acpi_id[0x61] high edge lint[0x1]) Jan 29 11:26:40.739284 kernel: ACPI: LAPIC_NMI (acpi_id[0x62] high edge lint[0x1]) Jan 29 11:26:40.739289 kernel: ACPI: LAPIC_NMI (acpi_id[0x63] high edge lint[0x1]) Jan 29 11:26:40.739295 kernel: ACPI: LAPIC_NMI (acpi_id[0x64] high edge lint[0x1]) Jan 29 11:26:40.739300 kernel: ACPI: LAPIC_NMI (acpi_id[0x65] high edge lint[0x1]) Jan 29 11:26:40.739305 kernel: ACPI: LAPIC_NMI (acpi_id[0x66] high edge lint[0x1]) Jan 29 11:26:40.739310 kernel: ACPI: LAPIC_NMI (acpi_id[0x67] high edge lint[0x1]) Jan 29 11:26:40.739316 kernel: ACPI: LAPIC_NMI (acpi_id[0x68] high edge lint[0x1]) Jan 29 11:26:40.739322 kernel: ACPI: LAPIC_NMI (acpi_id[0x69] high edge lint[0x1]) Jan 29 11:26:40.739327 kernel: ACPI: LAPIC_NMI (acpi_id[0x6a] high edge lint[0x1]) Jan 29 11:26:40.739332 kernel: ACPI: LAPIC_NMI (acpi_id[0x6b] high edge lint[0x1]) Jan 29 11:26:40.739337 kernel: ACPI: LAPIC_NMI (acpi_id[0x6c] high edge lint[0x1]) Jan 29 11:26:40.739343 kernel: ACPI: LAPIC_NMI (acpi_id[0x6d] high edge lint[0x1]) Jan 29 11:26:40.739348 kernel: ACPI: LAPIC_NMI (acpi_id[0x6e] high edge lint[0x1]) Jan 29 11:26:40.739353 kernel: ACPI: LAPIC_NMI (acpi_id[0x6f] high edge lint[0x1]) Jan 29 11:26:40.739358 kernel: ACPI: LAPIC_NMI (acpi_id[0x70] high edge lint[0x1]) Jan 29 11:26:40.739365 kernel: ACPI: LAPIC_NMI (acpi_id[0x71] high edge lint[0x1]) Jan 29 11:26:40.739370 kernel: ACPI: LAPIC_NMI (acpi_id[0x72] high edge lint[0x1]) Jan 29 11:26:40.739376 kernel: ACPI: LAPIC_NMI (acpi_id[0x73] high edge lint[0x1]) Jan 29 11:26:40.739381 kernel: ACPI: LAPIC_NMI (acpi_id[0x74] high edge lint[0x1]) Jan 29 
11:26:40.739386 kernel: ACPI: LAPIC_NMI (acpi_id[0x75] high edge lint[0x1]) Jan 29 11:26:40.739392 kernel: ACPI: LAPIC_NMI (acpi_id[0x76] high edge lint[0x1]) Jan 29 11:26:40.739397 kernel: ACPI: LAPIC_NMI (acpi_id[0x77] high edge lint[0x1]) Jan 29 11:26:40.739402 kernel: ACPI: LAPIC_NMI (acpi_id[0x78] high edge lint[0x1]) Jan 29 11:26:40.739407 kernel: ACPI: LAPIC_NMI (acpi_id[0x79] high edge lint[0x1]) Jan 29 11:26:40.739412 kernel: ACPI: LAPIC_NMI (acpi_id[0x7a] high edge lint[0x1]) Jan 29 11:26:40.739419 kernel: ACPI: LAPIC_NMI (acpi_id[0x7b] high edge lint[0x1]) Jan 29 11:26:40.739424 kernel: ACPI: LAPIC_NMI (acpi_id[0x7c] high edge lint[0x1]) Jan 29 11:26:40.739429 kernel: ACPI: LAPIC_NMI (acpi_id[0x7d] high edge lint[0x1]) Jan 29 11:26:40.739435 kernel: ACPI: LAPIC_NMI (acpi_id[0x7e] high edge lint[0x1]) Jan 29 11:26:40.739440 kernel: ACPI: LAPIC_NMI (acpi_id[0x7f] high edge lint[0x1]) Jan 29 11:26:40.739446 kernel: IOAPIC[0]: apic_id 1, version 17, address 0xfec00000, GSI 0-23 Jan 29 11:26:40.739451 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 high edge) Jan 29 11:26:40.739456 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Jan 29 11:26:40.739462 kernel: ACPI: HPET id: 0x8086af01 base: 0xfed00000 Jan 29 11:26:40.739467 kernel: TSC deadline timer available Jan 29 11:26:40.739474 kernel: smpboot: Allowing 128 CPUs, 126 hotplug CPUs Jan 29 11:26:40.739479 kernel: [mem 0x80000000-0xefffffff] available for PCI devices Jan 29 11:26:40.739484 kernel: Booting paravirtualized kernel on VMware hypervisor Jan 29 11:26:40.739490 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Jan 29 11:26:40.739495 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:128 nr_cpu_ids:128 nr_node_ids:1 Jan 29 11:26:40.739501 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u262144 Jan 29 11:26:40.739507 kernel: pcpu-alloc: s197032 r8192 d32344 u262144 alloc=1*2097152 Jan 29 11:26:40.739512 kernel: pcpu-alloc: [0] 000 001 002 003 004 005 006 007 Jan 29 11:26:40.739517 kernel: pcpu-alloc: [0] 008 009 010 011 012 013 014 015 Jan 29 11:26:40.739523 kernel: pcpu-alloc: [0] 016 017 018 019 020 021 022 023 Jan 29 11:26:40.739529 kernel: pcpu-alloc: [0] 024 025 026 027 028 029 030 031 Jan 29 11:26:40.739534 kernel: pcpu-alloc: [0] 032 033 034 035 036 037 038 039 Jan 29 11:26:40.739547 kernel: pcpu-alloc: [0] 040 041 042 043 044 045 046 047 Jan 29 11:26:40.739553 kernel: pcpu-alloc: [0] 048 049 050 051 052 053 054 055 Jan 29 11:26:40.739559 kernel: pcpu-alloc: [0] 056 057 058 059 060 061 062 063 Jan 29 11:26:40.739564 kernel: pcpu-alloc: [0] 064 065 066 067 068 069 070 071 Jan 29 11:26:40.739570 kernel: pcpu-alloc: [0] 072 073 074 075 076 077 078 079 Jan 29 11:26:40.739577 kernel: pcpu-alloc: [0] 080 081 082 083 084 085 086 087 Jan 29 11:26:40.739582 kernel: pcpu-alloc: [0] 088 089 090 091 092 093 094 095 Jan 29 11:26:40.739588 kernel: pcpu-alloc: [0] 096 097 098 099 100 101 102 103 Jan 29 11:26:40.739593 kernel: pcpu-alloc: [0] 104 105 106 107 108 109 110 111 Jan 29 11:26:40.739599 kernel: pcpu-alloc: [0] 112 113 114 115 116 117 118 119 Jan 29 11:26:40.739605 kernel: pcpu-alloc: [0] 120 121 122 123 124 125 126 127 Jan 29 11:26:40.739611 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 
flatcar.first_boot=detected flatcar.oem.id=vmware flatcar.autologin verity.usrhash=519b8fded83181f8e61f734d5291f916d7548bfba9487c78bcb50d002d81719d Jan 29 11:26:40.739617 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Jan 29 11:26:40.739624 kernel: random: crng init done Jan 29 11:26:40.739630 kernel: printk: log_buf_len individual max cpu contribution: 4096 bytes Jan 29 11:26:40.739635 kernel: printk: log_buf_len total cpu_extra contributions: 520192 bytes Jan 29 11:26:40.739641 kernel: printk: log_buf_len min size: 262144 bytes Jan 29 11:26:40.739646 kernel: printk: log_buf_len: 1048576 bytes Jan 29 11:26:40.739652 kernel: printk: early log buf free: 239648(91%) Jan 29 11:26:40.739668 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jan 29 11:26:40.739674 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear) Jan 29 11:26:40.739680 kernel: Fallback order for Node 0: 0 Jan 29 11:26:40.739688 kernel: Built 1 zonelists, mobility grouping on. Total pages: 515808 Jan 29 11:26:40.739694 kernel: Policy zone: DMA32 Jan 29 11:26:40.739699 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jan 29 11:26:40.739705 kernel: Memory: 1936352K/2096628K available (12288K kernel code, 2301K rwdata, 22736K rodata, 42972K init, 2220K bss, 160016K reserved, 0K cma-reserved) Jan 29 11:26:40.739712 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=128, Nodes=1 Jan 29 11:26:40.739718 kernel: ftrace: allocating 37923 entries in 149 pages Jan 29 11:26:40.739725 kernel: ftrace: allocated 149 pages with 4 groups Jan 29 11:26:40.739730 kernel: Dynamic Preempt: voluntary Jan 29 11:26:40.739736 kernel: rcu: Preemptible hierarchical RCU implementation. Jan 29 11:26:40.739742 kernel: rcu: RCU event tracing is enabled. Jan 29 11:26:40.739748 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=128. Jan 29 11:26:40.739754 kernel: Trampoline variant of Tasks RCU enabled. Jan 29 11:26:40.739760 kernel: Rude variant of Tasks RCU enabled. Jan 29 11:26:40.739765 kernel: Tracing variant of Tasks RCU enabled. Jan 29 11:26:40.739771 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Jan 29 11:26:40.739778 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=128 Jan 29 11:26:40.739784 kernel: NR_IRQS: 33024, nr_irqs: 1448, preallocated irqs: 16 Jan 29 11:26:40.739789 kernel: rcu: srcu_init: Setting srcu_struct sizes to big. Jan 29 11:26:40.739795 kernel: Console: colour VGA+ 80x25 Jan 29 11:26:40.739800 kernel: printk: console [tty0] enabled Jan 29 11:26:40.739806 kernel: printk: console [ttyS0] enabled Jan 29 11:26:40.739812 kernel: ACPI: Core revision 20230628 Jan 29 11:26:40.739818 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 133484882848 ns Jan 29 11:26:40.739824 kernel: APIC: Switch to symmetric I/O mode setup Jan 29 11:26:40.739829 kernel: x2apic enabled Jan 29 11:26:40.739836 kernel: APIC: Switched APIC routing to: physical x2apic Jan 29 11:26:40.739842 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Jan 29 11:26:40.739848 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x311fd3cd494, max_idle_ns: 440795223879 ns Jan 29 11:26:40.739853 kernel: Calibrating delay loop (skipped) preset value.. 
6816.00 BogoMIPS (lpj=3408000) Jan 29 11:26:40.739859 kernel: Disabled fast string operations Jan 29 11:26:40.739865 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8 Jan 29 11:26:40.739871 kernel: Last level dTLB entries: 4KB 64, 2MB 0, 4MB 0, 1GB 4 Jan 29 11:26:40.739876 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Jan 29 11:26:40.739886 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on vm exit Jan 29 11:26:40.739893 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall Jan 29 11:26:40.739899 kernel: Spectre V2 : Mitigation: Enhanced / Automatic IBRS Jan 29 11:26:40.739904 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Jan 29 11:26:40.739910 kernel: Spectre V2 : Spectre v2 / PBRSB-eIBRS: Retire a single CALL on VMEXIT Jan 29 11:26:40.739916 kernel: RETBleed: Mitigation: Enhanced IBRS Jan 29 11:26:40.739922 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Jan 29 11:26:40.739928 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Jan 29 11:26:40.739933 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Jan 29 11:26:40.739941 kernel: SRBDS: Unknown: Dependent on hypervisor status Jan 29 11:26:40.739946 kernel: GDS: Unknown: Dependent on hypervisor status Jan 29 11:26:40.739952 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Jan 29 11:26:40.739958 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Jan 29 11:26:40.739963 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Jan 29 11:26:40.739969 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Jan 29 11:26:40.739975 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format. Jan 29 11:26:40.739980 kernel: Freeing SMP alternatives memory: 32K Jan 29 11:26:40.739986 kernel: pid_max: default: 131072 minimum: 1024 Jan 29 11:26:40.739993 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Jan 29 11:26:40.739999 kernel: landlock: Up and running. Jan 29 11:26:40.740004 kernel: SELinux: Initializing. Jan 29 11:26:40.740010 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Jan 29 11:26:40.740016 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Jan 29 11:26:40.740022 kernel: smpboot: CPU0: Intel(R) Xeon(R) E-2278G CPU @ 3.40GHz (family: 0x6, model: 0x9e, stepping: 0xd) Jan 29 11:26:40.740028 kernel: RCU Tasks: Setting shift to 7 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=128. Jan 29 11:26:40.740033 kernel: RCU Tasks Rude: Setting shift to 7 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=128. Jan 29 11:26:40.740039 kernel: RCU Tasks Trace: Setting shift to 7 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=128. Jan 29 11:26:40.740046 kernel: Performance Events: Skylake events, core PMU driver. 
Jan 29 11:26:40.740052 kernel: core: CPUID marked event: 'cpu cycles' unavailable Jan 29 11:26:40.740058 kernel: core: CPUID marked event: 'instructions' unavailable Jan 29 11:26:40.740063 kernel: core: CPUID marked event: 'bus cycles' unavailable Jan 29 11:26:40.740069 kernel: core: CPUID marked event: 'cache references' unavailable Jan 29 11:26:40.740074 kernel: core: CPUID marked event: 'cache misses' unavailable Jan 29 11:26:40.740080 kernel: core: CPUID marked event: 'branch instructions' unavailable Jan 29 11:26:40.740087 kernel: core: CPUID marked event: 'branch misses' unavailable Jan 29 11:26:40.740093 kernel: ... version: 1 Jan 29 11:26:40.740099 kernel: ... bit width: 48 Jan 29 11:26:40.740105 kernel: ... generic registers: 4 Jan 29 11:26:40.740110 kernel: ... value mask: 0000ffffffffffff Jan 29 11:26:40.740116 kernel: ... max period: 000000007fffffff Jan 29 11:26:40.740122 kernel: ... fixed-purpose events: 0 Jan 29 11:26:40.740127 kernel: ... event mask: 000000000000000f Jan 29 11:26:40.740133 kernel: signal: max sigframe size: 1776 Jan 29 11:26:40.740138 kernel: rcu: Hierarchical SRCU implementation. Jan 29 11:26:40.740144 kernel: rcu: Max phase no-delay instances is 400. Jan 29 11:26:40.740151 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Jan 29 11:26:40.740157 kernel: smp: Bringing up secondary CPUs ... Jan 29 11:26:40.740163 kernel: smpboot: x86: Booting SMP configuration: Jan 29 11:26:40.740168 kernel: .... node #0, CPUs: #1 Jan 29 11:26:40.740174 kernel: Disabled fast string operations Jan 29 11:26:40.740180 kernel: smpboot: CPU 1 Converting physical 2 to logical package 1 Jan 29 11:26:40.740185 kernel: smpboot: CPU 1 Converting physical 0 to logical die 1 Jan 29 11:26:40.740191 kernel: smp: Brought up 1 node, 2 CPUs Jan 29 11:26:40.740197 kernel: smpboot: Max logical packages: 128 Jan 29 11:26:40.740202 kernel: smpboot: Total of 2 processors activated (13632.00 BogoMIPS) Jan 29 11:26:40.740210 kernel: devtmpfs: initialized Jan 29 11:26:40.740216 kernel: x86/mm: Memory block size: 128MB Jan 29 11:26:40.740221 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x7feff000-0x7fefffff] (4096 bytes) Jan 29 11:26:40.740227 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jan 29 11:26:40.740233 kernel: futex hash table entries: 32768 (order: 9, 2097152 bytes, linear) Jan 29 11:26:40.740239 kernel: pinctrl core: initialized pinctrl subsystem Jan 29 11:26:40.740244 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jan 29 11:26:40.740250 kernel: audit: initializing netlink subsys (disabled) Jan 29 11:26:40.740279 kernel: audit: type=2000 audit(1738149999.066:1): state=initialized audit_enabled=0 res=1 Jan 29 11:26:40.740286 kernel: thermal_sys: Registered thermal governor 'step_wise' Jan 29 11:26:40.740292 kernel: thermal_sys: Registered thermal governor 'user_space' Jan 29 11:26:40.740298 kernel: cpuidle: using governor menu Jan 29 11:26:40.740320 kernel: Simple Boot Flag at 0x36 set to 0x80 Jan 29 11:26:40.740326 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jan 29 11:26:40.740332 kernel: dca service started, version 1.12.1 Jan 29 11:26:40.740337 kernel: PCI: MMCONFIG for domain 0000 [bus 00-7f] at [mem 0xf0000000-0xf7ffffff] (base 0xf0000000) Jan 29 11:26:40.740343 kernel: PCI: Using configuration type 1 for base access Jan 29 11:26:40.740350 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Jan 29 11:26:40.740356 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jan 29 11:26:40.740361 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Jan 29 11:26:40.740367 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jan 29 11:26:40.740373 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Jan 29 11:26:40.740378 kernel: ACPI: Added _OSI(Module Device) Jan 29 11:26:40.740384 kernel: ACPI: Added _OSI(Processor Device) Jan 29 11:26:40.740390 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Jan 29 11:26:40.740395 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jan 29 11:26:40.740402 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jan 29 11:26:40.740408 kernel: ACPI: [Firmware Bug]: BIOS _OSI(Linux) query ignored Jan 29 11:26:40.740414 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Jan 29 11:26:40.740419 kernel: ACPI: Interpreter enabled Jan 29 11:26:40.740425 kernel: ACPI: PM: (supports S0 S1 S5) Jan 29 11:26:40.740431 kernel: ACPI: Using IOAPIC for interrupt routing Jan 29 11:26:40.740436 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Jan 29 11:26:40.740442 kernel: PCI: Using E820 reservations for host bridge windows Jan 29 11:26:40.740448 kernel: ACPI: Enabled 4 GPEs in block 00 to 0F Jan 29 11:26:40.740455 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-7f]) Jan 29 11:26:40.740529 kernel: acpi PNP0A03:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Jan 29 11:26:40.740584 kernel: acpi PNP0A03:00: _OSC: platform does not support [AER LTR] Jan 29 11:26:40.740634 kernel: acpi PNP0A03:00: _OSC: OS now controls [PCIeHotplug PME PCIeCapability] Jan 29 11:26:40.740643 kernel: PCI host bridge to bus 0000:00 Jan 29 11:26:40.740709 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Jan 29 11:26:40.740755 kernel: pci_bus 0000:00: root bus resource [mem 0x000cc000-0x000dbfff window] Jan 29 11:26:40.740801 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] Jan 29 11:26:40.740845 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Jan 29 11:26:40.740889 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xfeff window] Jan 29 11:26:40.740932 kernel: pci_bus 0000:00: root bus resource [bus 00-7f] Jan 29 11:26:40.740989 kernel: pci 0000:00:00.0: [8086:7190] type 00 class 0x060000 Jan 29 11:26:40.741043 kernel: pci 0000:00:01.0: [8086:7191] type 01 class 0x060400 Jan 29 11:26:40.741103 kernel: pci 0000:00:07.0: [8086:7110] type 00 class 0x060100 Jan 29 11:26:40.741157 kernel: pci 0000:00:07.1: [8086:7111] type 00 class 0x01018a Jan 29 11:26:40.741206 kernel: pci 0000:00:07.1: reg 0x20: [io 0x1060-0x106f] Jan 29 11:26:40.741254 kernel: pci 0000:00:07.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7] Jan 29 11:26:40.741338 kernel: pci 0000:00:07.1: legacy IDE quirk: reg 0x14: [io 0x03f6] Jan 29 11:26:40.741386 kernel: pci 0000:00:07.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177] Jan 29 11:26:40.741434 kernel: pci 0000:00:07.1: legacy IDE quirk: reg 0x1c: [io 0x0376] Jan 29 11:26:40.741490 kernel: pci 0000:00:07.3: [8086:7113] type 00 class 0x068000 Jan 29 11:26:40.741539 kernel: pci 0000:00:07.3: quirk: [io 0x1000-0x103f] claimed by PIIX4 ACPI Jan 29 11:26:40.741606 kernel: pci 0000:00:07.3: quirk: [io 0x1040-0x104f] claimed by PIIX4 SMB Jan 29 11:26:40.741913 kernel: pci 0000:00:07.7: [15ad:0740] type 00 class 0x088000 Jan 29 11:26:40.741967 
kernel: pci 0000:00:07.7: reg 0x10: [io 0x1080-0x10bf] Jan 29 11:26:40.742018 kernel: pci 0000:00:07.7: reg 0x14: [mem 0xfebfe000-0xfebfffff 64bit] Jan 29 11:26:40.742075 kernel: pci 0000:00:0f.0: [15ad:0405] type 00 class 0x030000 Jan 29 11:26:40.742124 kernel: pci 0000:00:0f.0: reg 0x10: [io 0x1070-0x107f] Jan 29 11:26:40.742173 kernel: pci 0000:00:0f.0: reg 0x14: [mem 0xe8000000-0xefffffff pref] Jan 29 11:26:40.742221 kernel: pci 0000:00:0f.0: reg 0x18: [mem 0xfe000000-0xfe7fffff] Jan 29 11:26:40.742269 kernel: pci 0000:00:0f.0: reg 0x30: [mem 0x00000000-0x00007fff pref] Jan 29 11:26:40.742318 kernel: pci 0000:00:0f.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Jan 29 11:26:40.742371 kernel: pci 0000:00:11.0: [15ad:0790] type 01 class 0x060401 Jan 29 11:26:40.742427 kernel: pci 0000:00:15.0: [15ad:07a0] type 01 class 0x060400 Jan 29 11:26:40.742478 kernel: pci 0000:00:15.0: PME# supported from D0 D3hot D3cold Jan 29 11:26:40.742531 kernel: pci 0000:00:15.1: [15ad:07a0] type 01 class 0x060400 Jan 29 11:26:40.742581 kernel: pci 0000:00:15.1: PME# supported from D0 D3hot D3cold Jan 29 11:26:40.742634 kernel: pci 0000:00:15.2: [15ad:07a0] type 01 class 0x060400 Jan 29 11:26:40.743757 kernel: pci 0000:00:15.2: PME# supported from D0 D3hot D3cold Jan 29 11:26:40.743823 kernel: pci 0000:00:15.3: [15ad:07a0] type 01 class 0x060400 Jan 29 11:26:40.743875 kernel: pci 0000:00:15.3: PME# supported from D0 D3hot D3cold Jan 29 11:26:40.743929 kernel: pci 0000:00:15.4: [15ad:07a0] type 01 class 0x060400 Jan 29 11:26:40.743979 kernel: pci 0000:00:15.4: PME# supported from D0 D3hot D3cold Jan 29 11:26:40.744031 kernel: pci 0000:00:15.5: [15ad:07a0] type 01 class 0x060400 Jan 29 11:26:40.744080 kernel: pci 0000:00:15.5: PME# supported from D0 D3hot D3cold Jan 29 11:26:40.744135 kernel: pci 0000:00:15.6: [15ad:07a0] type 01 class 0x060400 Jan 29 11:26:40.744183 kernel: pci 0000:00:15.6: PME# supported from D0 D3hot D3cold Jan 29 11:26:40.744236 kernel: pci 0000:00:15.7: [15ad:07a0] type 01 class 0x060400 Jan 29 11:26:40.744285 kernel: pci 0000:00:15.7: PME# supported from D0 D3hot D3cold Jan 29 11:26:40.744337 kernel: pci 0000:00:16.0: [15ad:07a0] type 01 class 0x060400 Jan 29 11:26:40.744385 kernel: pci 0000:00:16.0: PME# supported from D0 D3hot D3cold Jan 29 11:26:40.744440 kernel: pci 0000:00:16.1: [15ad:07a0] type 01 class 0x060400 Jan 29 11:26:40.744489 kernel: pci 0000:00:16.1: PME# supported from D0 D3hot D3cold Jan 29 11:26:40.744541 kernel: pci 0000:00:16.2: [15ad:07a0] type 01 class 0x060400 Jan 29 11:26:40.744590 kernel: pci 0000:00:16.2: PME# supported from D0 D3hot D3cold Jan 29 11:26:40.744646 kernel: pci 0000:00:16.3: [15ad:07a0] type 01 class 0x060400 Jan 29 11:26:40.744708 kernel: pci 0000:00:16.3: PME# supported from D0 D3hot D3cold Jan 29 11:26:40.744765 kernel: pci 0000:00:16.4: [15ad:07a0] type 01 class 0x060400 Jan 29 11:26:40.744815 kernel: pci 0000:00:16.4: PME# supported from D0 D3hot D3cold Jan 29 11:26:40.744867 kernel: pci 0000:00:16.5: [15ad:07a0] type 01 class 0x060400 Jan 29 11:26:40.744916 kernel: pci 0000:00:16.5: PME# supported from D0 D3hot D3cold Jan 29 11:26:40.744968 kernel: pci 0000:00:16.6: [15ad:07a0] type 01 class 0x060400 Jan 29 11:26:40.745017 kernel: pci 0000:00:16.6: PME# supported from D0 D3hot D3cold Jan 29 11:26:40.745092 kernel: pci 0000:00:16.7: [15ad:07a0] type 01 class 0x060400 Jan 29 11:26:40.745158 kernel: pci 0000:00:16.7: PME# supported from D0 D3hot D3cold Jan 29 11:26:40.745210 kernel: pci 0000:00:17.0: [15ad:07a0] type 01 class 
0x060400 Jan 29 11:26:40.745277 kernel: pci 0000:00:17.0: PME# supported from D0 D3hot D3cold Jan 29 11:26:40.745346 kernel: pci 0000:00:17.1: [15ad:07a0] type 01 class 0x060400 Jan 29 11:26:40.745395 kernel: pci 0000:00:17.1: PME# supported from D0 D3hot D3cold Jan 29 11:26:40.745450 kernel: pci 0000:00:17.2: [15ad:07a0] type 01 class 0x060400 Jan 29 11:26:40.745500 kernel: pci 0000:00:17.2: PME# supported from D0 D3hot D3cold Jan 29 11:26:40.745551 kernel: pci 0000:00:17.3: [15ad:07a0] type 01 class 0x060400 Jan 29 11:26:40.745600 kernel: pci 0000:00:17.3: PME# supported from D0 D3hot D3cold Jan 29 11:26:40.748668 kernel: pci 0000:00:17.4: [15ad:07a0] type 01 class 0x060400 Jan 29 11:26:40.748738 kernel: pci 0000:00:17.4: PME# supported from D0 D3hot D3cold Jan 29 11:26:40.748795 kernel: pci 0000:00:17.5: [15ad:07a0] type 01 class 0x060400 Jan 29 11:26:40.748851 kernel: pci 0000:00:17.5: PME# supported from D0 D3hot D3cold Jan 29 11:26:40.748911 kernel: pci 0000:00:17.6: [15ad:07a0] type 01 class 0x060400 Jan 29 11:26:40.748962 kernel: pci 0000:00:17.6: PME# supported from D0 D3hot D3cold Jan 29 11:26:40.749015 kernel: pci 0000:00:17.7: [15ad:07a0] type 01 class 0x060400 Jan 29 11:26:40.749064 kernel: pci 0000:00:17.7: PME# supported from D0 D3hot D3cold Jan 29 11:26:40.749117 kernel: pci 0000:00:18.0: [15ad:07a0] type 01 class 0x060400 Jan 29 11:26:40.749169 kernel: pci 0000:00:18.0: PME# supported from D0 D3hot D3cold Jan 29 11:26:40.749222 kernel: pci 0000:00:18.1: [15ad:07a0] type 01 class 0x060400 Jan 29 11:26:40.749293 kernel: pci 0000:00:18.1: PME# supported from D0 D3hot D3cold Jan 29 11:26:40.749361 kernel: pci 0000:00:18.2: [15ad:07a0] type 01 class 0x060400 Jan 29 11:26:40.749410 kernel: pci 0000:00:18.2: PME# supported from D0 D3hot D3cold Jan 29 11:26:40.749461 kernel: pci 0000:00:18.3: [15ad:07a0] type 01 class 0x060400 Jan 29 11:26:40.749512 kernel: pci 0000:00:18.3: PME# supported from D0 D3hot D3cold Jan 29 11:26:40.749567 kernel: pci 0000:00:18.4: [15ad:07a0] type 01 class 0x060400 Jan 29 11:26:40.749617 kernel: pci 0000:00:18.4: PME# supported from D0 D3hot D3cold Jan 29 11:26:40.749678 kernel: pci 0000:00:18.5: [15ad:07a0] type 01 class 0x060400 Jan 29 11:26:40.749728 kernel: pci 0000:00:18.5: PME# supported from D0 D3hot D3cold Jan 29 11:26:40.749781 kernel: pci 0000:00:18.6: [15ad:07a0] type 01 class 0x060400 Jan 29 11:26:40.749833 kernel: pci 0000:00:18.6: PME# supported from D0 D3hot D3cold Jan 29 11:26:40.749885 kernel: pci 0000:00:18.7: [15ad:07a0] type 01 class 0x060400 Jan 29 11:26:40.749934 kernel: pci 0000:00:18.7: PME# supported from D0 D3hot D3cold Jan 29 11:26:40.749986 kernel: pci_bus 0000:01: extended config space not accessible Jan 29 11:26:40.750036 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] Jan 29 11:26:40.750087 kernel: pci_bus 0000:02: extended config space not accessible Jan 29 11:26:40.750096 kernel: acpiphp: Slot [32] registered Jan 29 11:26:40.750104 kernel: acpiphp: Slot [33] registered Jan 29 11:26:40.750110 kernel: acpiphp: Slot [34] registered Jan 29 11:26:40.750115 kernel: acpiphp: Slot [35] registered Jan 29 11:26:40.750121 kernel: acpiphp: Slot [36] registered Jan 29 11:26:40.750127 kernel: acpiphp: Slot [37] registered Jan 29 11:26:40.750133 kernel: acpiphp: Slot [38] registered Jan 29 11:26:40.750138 kernel: acpiphp: Slot [39] registered Jan 29 11:26:40.750144 kernel: acpiphp: Slot [40] registered Jan 29 11:26:40.750150 kernel: acpiphp: Slot [41] registered Jan 29 11:26:40.750156 kernel: acpiphp: Slot [42] registered Jan 29 
11:26:40.750162 kernel: acpiphp: Slot [43] registered Jan 29 11:26:40.750168 kernel: acpiphp: Slot [44] registered Jan 29 11:26:40.750174 kernel: acpiphp: Slot [45] registered Jan 29 11:26:40.750179 kernel: acpiphp: Slot [46] registered Jan 29 11:26:40.750185 kernel: acpiphp: Slot [47] registered Jan 29 11:26:40.750191 kernel: acpiphp: Slot [48] registered Jan 29 11:26:40.750196 kernel: acpiphp: Slot [49] registered Jan 29 11:26:40.750202 kernel: acpiphp: Slot [50] registered Jan 29 11:26:40.750209 kernel: acpiphp: Slot [51] registered Jan 29 11:26:40.750215 kernel: acpiphp: Slot [52] registered Jan 29 11:26:40.750220 kernel: acpiphp: Slot [53] registered Jan 29 11:26:40.750226 kernel: acpiphp: Slot [54] registered Jan 29 11:26:40.750232 kernel: acpiphp: Slot [55] registered Jan 29 11:26:40.750237 kernel: acpiphp: Slot [56] registered Jan 29 11:26:40.750243 kernel: acpiphp: Slot [57] registered Jan 29 11:26:40.750249 kernel: acpiphp: Slot [58] registered Jan 29 11:26:40.750258 kernel: acpiphp: Slot [59] registered Jan 29 11:26:40.750265 kernel: acpiphp: Slot [60] registered Jan 29 11:26:40.750273 kernel: acpiphp: Slot [61] registered Jan 29 11:26:40.750279 kernel: acpiphp: Slot [62] registered Jan 29 11:26:40.750285 kernel: acpiphp: Slot [63] registered Jan 29 11:26:40.750338 kernel: pci 0000:00:11.0: PCI bridge to [bus 02] (subtractive decode) Jan 29 11:26:40.750388 kernel: pci 0000:00:11.0: bridge window [io 0x2000-0x3fff] Jan 29 11:26:40.750436 kernel: pci 0000:00:11.0: bridge window [mem 0xfd600000-0xfdffffff] Jan 29 11:26:40.750484 kernel: pci 0000:00:11.0: bridge window [mem 0xe7b00000-0xe7ffffff 64bit pref] Jan 29 11:26:40.750533 kernel: pci 0000:00:11.0: bridge window [mem 0x000a0000-0x000bffff window] (subtractive decode) Jan 29 11:26:40.750584 kernel: pci 0000:00:11.0: bridge window [mem 0x000cc000-0x000dbfff window] (subtractive decode) Jan 29 11:26:40.750633 kernel: pci 0000:00:11.0: bridge window [mem 0xc0000000-0xfebfffff window] (subtractive decode) Jan 29 11:26:40.751736 kernel: pci 0000:00:11.0: bridge window [io 0x0000-0x0cf7 window] (subtractive decode) Jan 29 11:26:40.751787 kernel: pci 0000:00:11.0: bridge window [io 0x0d00-0xfeff window] (subtractive decode) Jan 29 11:26:40.751842 kernel: pci 0000:03:00.0: [15ad:07c0] type 00 class 0x010700 Jan 29 11:26:40.751893 kernel: pci 0000:03:00.0: reg 0x10: [io 0x4000-0x4007] Jan 29 11:26:40.751943 kernel: pci 0000:03:00.0: reg 0x14: [mem 0xfd5f8000-0xfd5fffff 64bit] Jan 29 11:26:40.751996 kernel: pci 0000:03:00.0: reg 0x30: [mem 0x00000000-0x0000ffff pref] Jan 29 11:26:40.752045 kernel: pci 0000:03:00.0: PME# supported from D0 D3hot D3cold Jan 29 11:26:40.752095 kernel: pci 0000:03:00.0: disabling ASPM on pre-1.1 PCIe device. 
You can enable it with 'pcie_aspm=force' Jan 29 11:26:40.752144 kernel: pci 0000:00:15.0: PCI bridge to [bus 03] Jan 29 11:26:40.752192 kernel: pci 0000:00:15.0: bridge window [io 0x4000-0x4fff] Jan 29 11:26:40.752241 kernel: pci 0000:00:15.0: bridge window [mem 0xfd500000-0xfd5fffff] Jan 29 11:26:40.752289 kernel: pci 0000:00:15.1: PCI bridge to [bus 04] Jan 29 11:26:40.752338 kernel: pci 0000:00:15.1: bridge window [io 0x8000-0x8fff] Jan 29 11:26:40.752389 kernel: pci 0000:00:15.1: bridge window [mem 0xfd100000-0xfd1fffff] Jan 29 11:26:40.752438 kernel: pci 0000:00:15.1: bridge window [mem 0xe7800000-0xe78fffff 64bit pref] Jan 29 11:26:40.752487 kernel: pci 0000:00:15.2: PCI bridge to [bus 05] Jan 29 11:26:40.752535 kernel: pci 0000:00:15.2: bridge window [io 0xc000-0xcfff] Jan 29 11:26:40.752584 kernel: pci 0000:00:15.2: bridge window [mem 0xfcd00000-0xfcdfffff] Jan 29 11:26:40.752633 kernel: pci 0000:00:15.2: bridge window [mem 0xe7400000-0xe74fffff 64bit pref] Jan 29 11:26:40.752690 kernel: pci 0000:00:15.3: PCI bridge to [bus 06] Jan 29 11:26:40.752741 kernel: pci 0000:00:15.3: bridge window [mem 0xfc900000-0xfc9fffff] Jan 29 11:26:40.752790 kernel: pci 0000:00:15.3: bridge window [mem 0xe7000000-0xe70fffff 64bit pref] Jan 29 11:26:40.752838 kernel: pci 0000:00:15.4: PCI bridge to [bus 07] Jan 29 11:26:40.752886 kernel: pci 0000:00:15.4: bridge window [mem 0xfc500000-0xfc5fffff] Jan 29 11:26:40.752934 kernel: pci 0000:00:15.4: bridge window [mem 0xe6c00000-0xe6cfffff 64bit pref] Jan 29 11:26:40.752986 kernel: pci 0000:00:15.5: PCI bridge to [bus 08] Jan 29 11:26:40.753035 kernel: pci 0000:00:15.5: bridge window [mem 0xfc100000-0xfc1fffff] Jan 29 11:26:40.753084 kernel: pci 0000:00:15.5: bridge window [mem 0xe6800000-0xe68fffff 64bit pref] Jan 29 11:26:40.753133 kernel: pci 0000:00:15.6: PCI bridge to [bus 09] Jan 29 11:26:40.753181 kernel: pci 0000:00:15.6: bridge window [mem 0xfbd00000-0xfbdfffff] Jan 29 11:26:40.753230 kernel: pci 0000:00:15.6: bridge window [mem 0xe6400000-0xe64fffff 64bit pref] Jan 29 11:26:40.753283 kernel: pci 0000:00:15.7: PCI bridge to [bus 0a] Jan 29 11:26:40.753332 kernel: pci 0000:00:15.7: bridge window [mem 0xfb900000-0xfb9fffff] Jan 29 11:26:40.753382 kernel: pci 0000:00:15.7: bridge window [mem 0xe6000000-0xe60fffff 64bit pref] Jan 29 11:26:40.753437 kernel: pci 0000:0b:00.0: [15ad:07b0] type 00 class 0x020000 Jan 29 11:26:40.753488 kernel: pci 0000:0b:00.0: reg 0x10: [mem 0xfd4fc000-0xfd4fcfff] Jan 29 11:26:40.753538 kernel: pci 0000:0b:00.0: reg 0x14: [mem 0xfd4fd000-0xfd4fdfff] Jan 29 11:26:40.753587 kernel: pci 0000:0b:00.0: reg 0x18: [mem 0xfd4fe000-0xfd4fffff] Jan 29 11:26:40.753636 kernel: pci 0000:0b:00.0: reg 0x1c: [io 0x5000-0x500f] Jan 29 11:26:40.755996 kernel: pci 0000:0b:00.0: reg 0x30: [mem 0x00000000-0x0000ffff pref] Jan 29 11:26:40.756055 kernel: pci 0000:0b:00.0: supports D1 D2 Jan 29 11:26:40.756106 kernel: pci 0000:0b:00.0: PME# supported from D0 D1 D2 D3hot D3cold Jan 29 11:26:40.756155 kernel: pci 0000:0b:00.0: disabling ASPM on pre-1.1 PCIe device. 
You can enable it with 'pcie_aspm=force' Jan 29 11:26:40.756204 kernel: pci 0000:00:16.0: PCI bridge to [bus 0b] Jan 29 11:26:40.756252 kernel: pci 0000:00:16.0: bridge window [io 0x5000-0x5fff] Jan 29 11:26:40.756305 kernel: pci 0000:00:16.0: bridge window [mem 0xfd400000-0xfd4fffff] Jan 29 11:26:40.756355 kernel: pci 0000:00:16.1: PCI bridge to [bus 0c] Jan 29 11:26:40.756405 kernel: pci 0000:00:16.1: bridge window [io 0x9000-0x9fff] Jan 29 11:26:40.756455 kernel: pci 0000:00:16.1: bridge window [mem 0xfd000000-0xfd0fffff] Jan 29 11:26:40.756504 kernel: pci 0000:00:16.1: bridge window [mem 0xe7700000-0xe77fffff 64bit pref] Jan 29 11:26:40.756554 kernel: pci 0000:00:16.2: PCI bridge to [bus 0d] Jan 29 11:26:40.756603 kernel: pci 0000:00:16.2: bridge window [io 0xd000-0xdfff] Jan 29 11:26:40.756651 kernel: pci 0000:00:16.2: bridge window [mem 0xfcc00000-0xfccfffff] Jan 29 11:26:40.758830 kernel: pci 0000:00:16.2: bridge window [mem 0xe7300000-0xe73fffff 64bit pref] Jan 29 11:26:40.758883 kernel: pci 0000:00:16.3: PCI bridge to [bus 0e] Jan 29 11:26:40.758936 kernel: pci 0000:00:16.3: bridge window [mem 0xfc800000-0xfc8fffff] Jan 29 11:26:40.758986 kernel: pci 0000:00:16.3: bridge window [mem 0xe6f00000-0xe6ffffff 64bit pref] Jan 29 11:26:40.759036 kernel: pci 0000:00:16.4: PCI bridge to [bus 0f] Jan 29 11:26:40.759086 kernel: pci 0000:00:16.4: bridge window [mem 0xfc400000-0xfc4fffff] Jan 29 11:26:40.759134 kernel: pci 0000:00:16.4: bridge window [mem 0xe6b00000-0xe6bfffff 64bit pref] Jan 29 11:26:40.759183 kernel: pci 0000:00:16.5: PCI bridge to [bus 10] Jan 29 11:26:40.759231 kernel: pci 0000:00:16.5: bridge window [mem 0xfc000000-0xfc0fffff] Jan 29 11:26:40.759298 kernel: pci 0000:00:16.5: bridge window [mem 0xe6700000-0xe67fffff 64bit pref] Jan 29 11:26:40.759367 kernel: pci 0000:00:16.6: PCI bridge to [bus 11] Jan 29 11:26:40.759416 kernel: pci 0000:00:16.6: bridge window [mem 0xfbc00000-0xfbcfffff] Jan 29 11:26:40.759465 kernel: pci 0000:00:16.6: bridge window [mem 0xe6300000-0xe63fffff 64bit pref] Jan 29 11:26:40.759514 kernel: pci 0000:00:16.7: PCI bridge to [bus 12] Jan 29 11:26:40.759562 kernel: pci 0000:00:16.7: bridge window [mem 0xfb800000-0xfb8fffff] Jan 29 11:26:40.759610 kernel: pci 0000:00:16.7: bridge window [mem 0xe5f00000-0xe5ffffff 64bit pref] Jan 29 11:26:40.759681 kernel: pci 0000:00:17.0: PCI bridge to [bus 13] Jan 29 11:26:40.759731 kernel: pci 0000:00:17.0: bridge window [io 0x6000-0x6fff] Jan 29 11:26:40.759783 kernel: pci 0000:00:17.0: bridge window [mem 0xfd300000-0xfd3fffff] Jan 29 11:26:40.759831 kernel: pci 0000:00:17.0: bridge window [mem 0xe7a00000-0xe7afffff 64bit pref] Jan 29 11:26:40.759881 kernel: pci 0000:00:17.1: PCI bridge to [bus 14] Jan 29 11:26:40.759929 kernel: pci 0000:00:17.1: bridge window [io 0xa000-0xafff] Jan 29 11:26:40.759977 kernel: pci 0000:00:17.1: bridge window [mem 0xfcf00000-0xfcffffff] Jan 29 11:26:40.760024 kernel: pci 0000:00:17.1: bridge window [mem 0xe7600000-0xe76fffff 64bit pref] Jan 29 11:26:40.760074 kernel: pci 0000:00:17.2: PCI bridge to [bus 15] Jan 29 11:26:40.760122 kernel: pci 0000:00:17.2: bridge window [io 0xe000-0xefff] Jan 29 11:26:40.760172 kernel: pci 0000:00:17.2: bridge window [mem 0xfcb00000-0xfcbfffff] Jan 29 11:26:40.760220 kernel: pci 0000:00:17.2: bridge window [mem 0xe7200000-0xe72fffff 64bit pref] Jan 29 11:26:40.760288 kernel: pci 0000:00:17.3: PCI bridge to [bus 16] Jan 29 11:26:40.760353 kernel: pci 0000:00:17.3: bridge window [mem 0xfc700000-0xfc7fffff] Jan 29 11:26:40.760400 kernel: pci 
0000:00:17.3: bridge window [mem 0xe6e00000-0xe6efffff 64bit pref] Jan 29 11:26:40.760449 kernel: pci 0000:00:17.4: PCI bridge to [bus 17] Jan 29 11:26:40.760497 kernel: pci 0000:00:17.4: bridge window [mem 0xfc300000-0xfc3fffff] Jan 29 11:26:40.760544 kernel: pci 0000:00:17.4: bridge window [mem 0xe6a00000-0xe6afffff 64bit pref] Jan 29 11:26:40.760596 kernel: pci 0000:00:17.5: PCI bridge to [bus 18] Jan 29 11:26:40.760677 kernel: pci 0000:00:17.5: bridge window [mem 0xfbf00000-0xfbffffff] Jan 29 11:26:40.760730 kernel: pci 0000:00:17.5: bridge window [mem 0xe6600000-0xe66fffff 64bit pref] Jan 29 11:26:40.760779 kernel: pci 0000:00:17.6: PCI bridge to [bus 19] Jan 29 11:26:40.760828 kernel: pci 0000:00:17.6: bridge window [mem 0xfbb00000-0xfbbfffff] Jan 29 11:26:40.760876 kernel: pci 0000:00:17.6: bridge window [mem 0xe6200000-0xe62fffff 64bit pref] Jan 29 11:26:40.760926 kernel: pci 0000:00:17.7: PCI bridge to [bus 1a] Jan 29 11:26:40.760975 kernel: pci 0000:00:17.7: bridge window [mem 0xfb700000-0xfb7fffff] Jan 29 11:26:40.761027 kernel: pci 0000:00:17.7: bridge window [mem 0xe5e00000-0xe5efffff 64bit pref] Jan 29 11:26:40.761076 kernel: pci 0000:00:18.0: PCI bridge to [bus 1b] Jan 29 11:26:40.761124 kernel: pci 0000:00:18.0: bridge window [io 0x7000-0x7fff] Jan 29 11:26:40.761172 kernel: pci 0000:00:18.0: bridge window [mem 0xfd200000-0xfd2fffff] Jan 29 11:26:40.761220 kernel: pci 0000:00:18.0: bridge window [mem 0xe7900000-0xe79fffff 64bit pref] Jan 29 11:26:40.761292 kernel: pci 0000:00:18.1: PCI bridge to [bus 1c] Jan 29 11:26:40.761341 kernel: pci 0000:00:18.1: bridge window [io 0xb000-0xbfff] Jan 29 11:26:40.761393 kernel: pci 0000:00:18.1: bridge window [mem 0xfce00000-0xfcefffff] Jan 29 11:26:40.761458 kernel: pci 0000:00:18.1: bridge window [mem 0xe7500000-0xe75fffff 64bit pref] Jan 29 11:26:40.761508 kernel: pci 0000:00:18.2: PCI bridge to [bus 1d] Jan 29 11:26:40.761556 kernel: pci 0000:00:18.2: bridge window [mem 0xfca00000-0xfcafffff] Jan 29 11:26:40.761605 kernel: pci 0000:00:18.2: bridge window [mem 0xe7100000-0xe71fffff 64bit pref] Jan 29 11:26:40.761935 kernel: pci 0000:00:18.3: PCI bridge to [bus 1e] Jan 29 11:26:40.761998 kernel: pci 0000:00:18.3: bridge window [mem 0xfc600000-0xfc6fffff] Jan 29 11:26:40.762050 kernel: pci 0000:00:18.3: bridge window [mem 0xe6d00000-0xe6dfffff 64bit pref] Jan 29 11:26:40.762104 kernel: pci 0000:00:18.4: PCI bridge to [bus 1f] Jan 29 11:26:40.762153 kernel: pci 0000:00:18.4: bridge window [mem 0xfc200000-0xfc2fffff] Jan 29 11:26:40.762202 kernel: pci 0000:00:18.4: bridge window [mem 0xe6900000-0xe69fffff 64bit pref] Jan 29 11:26:40.762251 kernel: pci 0000:00:18.5: PCI bridge to [bus 20] Jan 29 11:26:40.762300 kernel: pci 0000:00:18.5: bridge window [mem 0xfbe00000-0xfbefffff] Jan 29 11:26:40.762348 kernel: pci 0000:00:18.5: bridge window [mem 0xe6500000-0xe65fffff 64bit pref] Jan 29 11:26:40.762397 kernel: pci 0000:00:18.6: PCI bridge to [bus 21] Jan 29 11:26:40.762445 kernel: pci 0000:00:18.6: bridge window [mem 0xfba00000-0xfbafffff] Jan 29 11:26:40.762495 kernel: pci 0000:00:18.6: bridge window [mem 0xe6100000-0xe61fffff 64bit pref] Jan 29 11:26:40.762545 kernel: pci 0000:00:18.7: PCI bridge to [bus 22] Jan 29 11:26:40.762593 kernel: pci 0000:00:18.7: bridge window [mem 0xfb600000-0xfb6fffff] Jan 29 11:26:40.762642 kernel: pci 0000:00:18.7: bridge window [mem 0xe5d00000-0xe5dfffff 64bit pref] Jan 29 11:26:40.762650 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 9 Jan 29 11:26:40.762694 kernel: ACPI: PCI: Interrupt link 
LNKB configured for IRQ 0 Jan 29 11:26:40.762701 kernel: ACPI: PCI: Interrupt link LNKB disabled Jan 29 11:26:40.762708 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Jan 29 11:26:40.762713 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 10 Jan 29 11:26:40.762722 kernel: iommu: Default domain type: Translated Jan 29 11:26:40.762728 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Jan 29 11:26:40.762734 kernel: PCI: Using ACPI for IRQ routing Jan 29 11:26:40.762740 kernel: PCI: pci_cache_line_size set to 64 bytes Jan 29 11:26:40.762746 kernel: e820: reserve RAM buffer [mem 0x0009ec00-0x0009ffff] Jan 29 11:26:40.762751 kernel: e820: reserve RAM buffer [mem 0x7fee0000-0x7fffffff] Jan 29 11:26:40.762806 kernel: pci 0000:00:0f.0: vgaarb: setting as boot VGA device Jan 29 11:26:40.762856 kernel: pci 0000:00:0f.0: vgaarb: bridge control possible Jan 29 11:26:40.762903 kernel: pci 0000:00:0f.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Jan 29 11:26:40.762914 kernel: vgaarb: loaded Jan 29 11:26:40.762920 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0 Jan 29 11:26:40.762926 kernel: hpet0: 16 comparators, 64-bit 14.318180 MHz counter Jan 29 11:26:40.762932 kernel: clocksource: Switched to clocksource tsc-early Jan 29 11:26:40.762938 kernel: VFS: Disk quotas dquot_6.6.0 Jan 29 11:26:40.762944 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jan 29 11:26:40.762950 kernel: pnp: PnP ACPI init Jan 29 11:26:40.763003 kernel: system 00:00: [io 0x1000-0x103f] has been reserved Jan 29 11:26:40.763051 kernel: system 00:00: [io 0x1040-0x104f] has been reserved Jan 29 11:26:40.763095 kernel: system 00:00: [io 0x0cf0-0x0cf1] has been reserved Jan 29 11:26:40.763143 kernel: system 00:04: [mem 0xfed00000-0xfed003ff] has been reserved Jan 29 11:26:40.763191 kernel: pnp 00:06: [dma 2] Jan 29 11:26:40.763238 kernel: system 00:07: [io 0xfce0-0xfcff] has been reserved Jan 29 11:26:40.763302 kernel: system 00:07: [mem 0xf0000000-0xf7ffffff] has been reserved Jan 29 11:26:40.763364 kernel: system 00:07: [mem 0xfe800000-0xfe9fffff] has been reserved Jan 29 11:26:40.763372 kernel: pnp: PnP ACPI: found 8 devices Jan 29 11:26:40.763379 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Jan 29 11:26:40.763385 kernel: NET: Registered PF_INET protocol family Jan 29 11:26:40.763390 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear) Jan 29 11:26:40.763396 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear) Jan 29 11:26:40.763402 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jan 29 11:26:40.763408 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear) Jan 29 11:26:40.763413 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear) Jan 29 11:26:40.763421 kernel: TCP: Hash tables configured (established 16384 bind 16384) Jan 29 11:26:40.763427 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear) Jan 29 11:26:40.763433 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear) Jan 29 11:26:40.763439 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jan 29 11:26:40.763444 kernel: NET: Registered PF_XDP protocol family Jan 29 11:26:40.763493 kernel: pci 0000:00:15.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 03] add_size 200000 add_align 100000 Jan 29 11:26:40.763543 kernel: pci 
0000:00:15.3: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000 Jan 29 11:26:40.763592 kernel: pci 0000:00:15.4: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000 Jan 29 11:26:40.763644 kernel: pci 0000:00:15.5: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000 Jan 29 11:26:40.763706 kernel: pci 0000:00:15.6: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000 Jan 29 11:26:40.763758 kernel: pci 0000:00:15.7: bridge window [io 0x1000-0x0fff] to [bus 0a] add_size 1000 Jan 29 11:26:40.763819 kernel: pci 0000:00:16.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 0b] add_size 200000 add_align 100000 Jan 29 11:26:40.763873 kernel: pci 0000:00:16.3: bridge window [io 0x1000-0x0fff] to [bus 0e] add_size 1000 Jan 29 11:26:40.763922 kernel: pci 0000:00:16.4: bridge window [io 0x1000-0x0fff] to [bus 0f] add_size 1000 Jan 29 11:26:40.763975 kernel: pci 0000:00:16.5: bridge window [io 0x1000-0x0fff] to [bus 10] add_size 1000 Jan 29 11:26:40.764025 kernel: pci 0000:00:16.6: bridge window [io 0x1000-0x0fff] to [bus 11] add_size 1000 Jan 29 11:26:40.764074 kernel: pci 0000:00:16.7: bridge window [io 0x1000-0x0fff] to [bus 12] add_size 1000 Jan 29 11:26:40.764124 kernel: pci 0000:00:17.3: bridge window [io 0x1000-0x0fff] to [bus 16] add_size 1000 Jan 29 11:26:40.764173 kernel: pci 0000:00:17.4: bridge window [io 0x1000-0x0fff] to [bus 17] add_size 1000 Jan 29 11:26:40.764225 kernel: pci 0000:00:17.5: bridge window [io 0x1000-0x0fff] to [bus 18] add_size 1000 Jan 29 11:26:40.764279 kernel: pci 0000:00:17.6: bridge window [io 0x1000-0x0fff] to [bus 19] add_size 1000 Jan 29 11:26:40.764328 kernel: pci 0000:00:17.7: bridge window [io 0x1000-0x0fff] to [bus 1a] add_size 1000 Jan 29 11:26:40.764376 kernel: pci 0000:00:18.2: bridge window [io 0x1000-0x0fff] to [bus 1d] add_size 1000 Jan 29 11:26:40.764425 kernel: pci 0000:00:18.3: bridge window [io 0x1000-0x0fff] to [bus 1e] add_size 1000 Jan 29 11:26:40.764474 kernel: pci 0000:00:18.4: bridge window [io 0x1000-0x0fff] to [bus 1f] add_size 1000 Jan 29 11:26:40.764526 kernel: pci 0000:00:18.5: bridge window [io 0x1000-0x0fff] to [bus 20] add_size 1000 Jan 29 11:26:40.764575 kernel: pci 0000:00:18.6: bridge window [io 0x1000-0x0fff] to [bus 21] add_size 1000 Jan 29 11:26:40.764624 kernel: pci 0000:00:18.7: bridge window [io 0x1000-0x0fff] to [bus 22] add_size 1000 Jan 29 11:26:40.764706 kernel: pci 0000:00:15.0: BAR 15: assigned [mem 0xc0000000-0xc01fffff 64bit pref] Jan 29 11:26:40.764756 kernel: pci 0000:00:16.0: BAR 15: assigned [mem 0xc0200000-0xc03fffff 64bit pref] Jan 29 11:26:40.764805 kernel: pci 0000:00:15.3: BAR 13: no space for [io size 0x1000] Jan 29 11:26:40.764853 kernel: pci 0000:00:15.3: BAR 13: failed to assign [io size 0x1000] Jan 29 11:26:40.764905 kernel: pci 0000:00:15.4: BAR 13: no space for [io size 0x1000] Jan 29 11:26:40.764953 kernel: pci 0000:00:15.4: BAR 13: failed to assign [io size 0x1000] Jan 29 11:26:40.765002 kernel: pci 0000:00:15.5: BAR 13: no space for [io size 0x1000] Jan 29 11:26:40.765051 kernel: pci 0000:00:15.5: BAR 13: failed to assign [io size 0x1000] Jan 29 11:26:40.765123 kernel: pci 0000:00:15.6: BAR 13: no space for [io size 0x1000] Jan 29 11:26:40.765176 kernel: pci 0000:00:15.6: BAR 13: failed to assign [io size 0x1000] Jan 29 11:26:40.765224 kernel: pci 0000:00:15.7: BAR 13: no space for [io size 0x1000] Jan 29 11:26:40.765293 kernel: pci 0000:00:15.7: BAR 13: failed to assign [io size 0x1000] Jan 29 11:26:40.765362 kernel: pci 0000:00:16.3: BAR 13: no space for [io 
size 0x1000] Jan 29 11:26:40.765411 kernel: pci 0000:00:16.3: BAR 13: failed to assign [io size 0x1000] Jan 29 11:26:40.765460 kernel: pci 0000:00:16.4: BAR 13: no space for [io size 0x1000] Jan 29 11:26:40.765509 kernel: pci 0000:00:16.4: BAR 13: failed to assign [io size 0x1000] Jan 29 11:26:40.765557 kernel: pci 0000:00:16.5: BAR 13: no space for [io size 0x1000] Jan 29 11:26:40.765607 kernel: pci 0000:00:16.5: BAR 13: failed to assign [io size 0x1000] Jan 29 11:26:40.765711 kernel: pci 0000:00:16.6: BAR 13: no space for [io size 0x1000] Jan 29 11:26:40.765764 kernel: pci 0000:00:16.6: BAR 13: failed to assign [io size 0x1000] Jan 29 11:26:40.765816 kernel: pci 0000:00:16.7: BAR 13: no space for [io size 0x1000] Jan 29 11:26:40.765865 kernel: pci 0000:00:16.7: BAR 13: failed to assign [io size 0x1000] Jan 29 11:26:40.765913 kernel: pci 0000:00:17.3: BAR 13: no space for [io size 0x1000] Jan 29 11:26:40.765961 kernel: pci 0000:00:17.3: BAR 13: failed to assign [io size 0x1000] Jan 29 11:26:40.766008 kernel: pci 0000:00:17.4: BAR 13: no space for [io size 0x1000] Jan 29 11:26:40.766057 kernel: pci 0000:00:17.4: BAR 13: failed to assign [io size 0x1000] Jan 29 11:26:40.766105 kernel: pci 0000:00:17.5: BAR 13: no space for [io size 0x1000] Jan 29 11:26:40.766153 kernel: pci 0000:00:17.5: BAR 13: failed to assign [io size 0x1000] Jan 29 11:26:40.766204 kernel: pci 0000:00:17.6: BAR 13: no space for [io size 0x1000] Jan 29 11:26:40.766252 kernel: pci 0000:00:17.6: BAR 13: failed to assign [io size 0x1000] Jan 29 11:26:40.766338 kernel: pci 0000:00:17.7: BAR 13: no space for [io size 0x1000] Jan 29 11:26:40.766386 kernel: pci 0000:00:17.7: BAR 13: failed to assign [io size 0x1000] Jan 29 11:26:40.766435 kernel: pci 0000:00:18.2: BAR 13: no space for [io size 0x1000] Jan 29 11:26:40.766483 kernel: pci 0000:00:18.2: BAR 13: failed to assign [io size 0x1000] Jan 29 11:26:40.766531 kernel: pci 0000:00:18.3: BAR 13: no space for [io size 0x1000] Jan 29 11:26:40.766578 kernel: pci 0000:00:18.3: BAR 13: failed to assign [io size 0x1000] Jan 29 11:26:40.766629 kernel: pci 0000:00:18.4: BAR 13: no space for [io size 0x1000] Jan 29 11:26:40.766684 kernel: pci 0000:00:18.4: BAR 13: failed to assign [io size 0x1000] Jan 29 11:26:40.766733 kernel: pci 0000:00:18.5: BAR 13: no space for [io size 0x1000] Jan 29 11:26:40.766781 kernel: pci 0000:00:18.5: BAR 13: failed to assign [io size 0x1000] Jan 29 11:26:40.766829 kernel: pci 0000:00:18.6: BAR 13: no space for [io size 0x1000] Jan 29 11:26:40.766877 kernel: pci 0000:00:18.6: BAR 13: failed to assign [io size 0x1000] Jan 29 11:26:40.766924 kernel: pci 0000:00:18.7: BAR 13: no space for [io size 0x1000] Jan 29 11:26:40.766972 kernel: pci 0000:00:18.7: BAR 13: failed to assign [io size 0x1000] Jan 29 11:26:40.767023 kernel: pci 0000:00:18.7: BAR 13: no space for [io size 0x1000] Jan 29 11:26:40.767071 kernel: pci 0000:00:18.7: BAR 13: failed to assign [io size 0x1000] Jan 29 11:26:40.767119 kernel: pci 0000:00:18.6: BAR 13: no space for [io size 0x1000] Jan 29 11:26:40.767167 kernel: pci 0000:00:18.6: BAR 13: failed to assign [io size 0x1000] Jan 29 11:26:40.767216 kernel: pci 0000:00:18.5: BAR 13: no space for [io size 0x1000] Jan 29 11:26:40.767270 kernel: pci 0000:00:18.5: BAR 13: failed to assign [io size 0x1000] Jan 29 11:26:40.767318 kernel: pci 0000:00:18.4: BAR 13: no space for [io size 0x1000] Jan 29 11:26:40.767366 kernel: pci 0000:00:18.4: BAR 13: failed to assign [io size 0x1000] Jan 29 11:26:40.767428 kernel: pci 0000:00:18.3: BAR 13: no space 
for [io size 0x1000] Jan 29 11:26:40.767478 kernel: pci 0000:00:18.3: BAR 13: failed to assign [io size 0x1000] Jan 29 11:26:40.767529 kernel: pci 0000:00:18.2: BAR 13: no space for [io size 0x1000] Jan 29 11:26:40.767578 kernel: pci 0000:00:18.2: BAR 13: failed to assign [io size 0x1000] Jan 29 11:26:40.767626 kernel: pci 0000:00:17.7: BAR 13: no space for [io size 0x1000] Jan 29 11:26:40.767728 kernel: pci 0000:00:17.7: BAR 13: failed to assign [io size 0x1000] Jan 29 11:26:40.767779 kernel: pci 0000:00:17.6: BAR 13: no space for [io size 0x1000] Jan 29 11:26:40.767827 kernel: pci 0000:00:17.6: BAR 13: failed to assign [io size 0x1000] Jan 29 11:26:40.767898 kernel: pci 0000:00:17.5: BAR 13: no space for [io size 0x1000] Jan 29 11:26:40.767948 kernel: pci 0000:00:17.5: BAR 13: failed to assign [io size 0x1000] Jan 29 11:26:40.767996 kernel: pci 0000:00:17.4: BAR 13: no space for [io size 0x1000] Jan 29 11:26:40.768047 kernel: pci 0000:00:17.4: BAR 13: failed to assign [io size 0x1000] Jan 29 11:26:40.768095 kernel: pci 0000:00:17.3: BAR 13: no space for [io size 0x1000] Jan 29 11:26:40.768143 kernel: pci 0000:00:17.3: BAR 13: failed to assign [io size 0x1000] Jan 29 11:26:40.768192 kernel: pci 0000:00:16.7: BAR 13: no space for [io size 0x1000] Jan 29 11:26:40.768240 kernel: pci 0000:00:16.7: BAR 13: failed to assign [io size 0x1000] Jan 29 11:26:40.768289 kernel: pci 0000:00:16.6: BAR 13: no space for [io size 0x1000] Jan 29 11:26:40.768337 kernel: pci 0000:00:16.6: BAR 13: failed to assign [io size 0x1000] Jan 29 11:26:40.768385 kernel: pci 0000:00:16.5: BAR 13: no space for [io size 0x1000] Jan 29 11:26:40.768433 kernel: pci 0000:00:16.5: BAR 13: failed to assign [io size 0x1000] Jan 29 11:26:40.768481 kernel: pci 0000:00:16.4: BAR 13: no space for [io size 0x1000] Jan 29 11:26:40.768532 kernel: pci 0000:00:16.4: BAR 13: failed to assign [io size 0x1000] Jan 29 11:26:40.768580 kernel: pci 0000:00:16.3: BAR 13: no space for [io size 0x1000] Jan 29 11:26:40.768628 kernel: pci 0000:00:16.3: BAR 13: failed to assign [io size 0x1000] Jan 29 11:26:40.768696 kernel: pci 0000:00:15.7: BAR 13: no space for [io size 0x1000] Jan 29 11:26:40.768746 kernel: pci 0000:00:15.7: BAR 13: failed to assign [io size 0x1000] Jan 29 11:26:40.768794 kernel: pci 0000:00:15.6: BAR 13: no space for [io size 0x1000] Jan 29 11:26:40.768842 kernel: pci 0000:00:15.6: BAR 13: failed to assign [io size 0x1000] Jan 29 11:26:40.768890 kernel: pci 0000:00:15.5: BAR 13: no space for [io size 0x1000] Jan 29 11:26:40.768938 kernel: pci 0000:00:15.5: BAR 13: failed to assign [io size 0x1000] Jan 29 11:26:40.768990 kernel: pci 0000:00:15.4: BAR 13: no space for [io size 0x1000] Jan 29 11:26:40.769038 kernel: pci 0000:00:15.4: BAR 13: failed to assign [io size 0x1000] Jan 29 11:26:40.769086 kernel: pci 0000:00:15.3: BAR 13: no space for [io size 0x1000] Jan 29 11:26:40.769135 kernel: pci 0000:00:15.3: BAR 13: failed to assign [io size 0x1000] Jan 29 11:26:40.769183 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] Jan 29 11:26:40.769232 kernel: pci 0000:00:11.0: PCI bridge to [bus 02] Jan 29 11:26:40.769302 kernel: pci 0000:00:11.0: bridge window [io 0x2000-0x3fff] Jan 29 11:26:40.769367 kernel: pci 0000:00:11.0: bridge window [mem 0xfd600000-0xfdffffff] Jan 29 11:26:40.769415 kernel: pci 0000:00:11.0: bridge window [mem 0xe7b00000-0xe7ffffff 64bit pref] Jan 29 11:26:40.769471 kernel: pci 0000:03:00.0: BAR 6: assigned [mem 0xfd500000-0xfd50ffff pref] Jan 29 11:26:40.769520 kernel: pci 0000:00:15.0: PCI bridge to [bus 03] Jan 
29 11:26:40.769568 kernel: pci 0000:00:15.0: bridge window [io 0x4000-0x4fff] Jan 29 11:26:40.769617 kernel: pci 0000:00:15.0: bridge window [mem 0xfd500000-0xfd5fffff] Jan 29 11:26:40.769683 kernel: pci 0000:00:15.0: bridge window [mem 0xc0000000-0xc01fffff 64bit pref] Jan 29 11:26:40.769733 kernel: pci 0000:00:15.1: PCI bridge to [bus 04] Jan 29 11:26:40.769782 kernel: pci 0000:00:15.1: bridge window [io 0x8000-0x8fff] Jan 29 11:26:40.769831 kernel: pci 0000:00:15.1: bridge window [mem 0xfd100000-0xfd1fffff] Jan 29 11:26:40.769883 kernel: pci 0000:00:15.1: bridge window [mem 0xe7800000-0xe78fffff 64bit pref] Jan 29 11:26:40.769933 kernel: pci 0000:00:15.2: PCI bridge to [bus 05] Jan 29 11:26:40.769981 kernel: pci 0000:00:15.2: bridge window [io 0xc000-0xcfff] Jan 29 11:26:40.770031 kernel: pci 0000:00:15.2: bridge window [mem 0xfcd00000-0xfcdfffff] Jan 29 11:26:40.770080 kernel: pci 0000:00:15.2: bridge window [mem 0xe7400000-0xe74fffff 64bit pref] Jan 29 11:26:40.770129 kernel: pci 0000:00:15.3: PCI bridge to [bus 06] Jan 29 11:26:40.770177 kernel: pci 0000:00:15.3: bridge window [mem 0xfc900000-0xfc9fffff] Jan 29 11:26:40.770226 kernel: pci 0000:00:15.3: bridge window [mem 0xe7000000-0xe70fffff 64bit pref] Jan 29 11:26:40.770279 kernel: pci 0000:00:15.4: PCI bridge to [bus 07] Jan 29 11:26:40.770328 kernel: pci 0000:00:15.4: bridge window [mem 0xfc500000-0xfc5fffff] Jan 29 11:26:40.770378 kernel: pci 0000:00:15.4: bridge window [mem 0xe6c00000-0xe6cfffff 64bit pref] Jan 29 11:26:40.770429 kernel: pci 0000:00:15.5: PCI bridge to [bus 08] Jan 29 11:26:40.770480 kernel: pci 0000:00:15.5: bridge window [mem 0xfc100000-0xfc1fffff] Jan 29 11:26:40.770528 kernel: pci 0000:00:15.5: bridge window [mem 0xe6800000-0xe68fffff 64bit pref] Jan 29 11:26:40.770576 kernel: pci 0000:00:15.6: PCI bridge to [bus 09] Jan 29 11:26:40.770624 kernel: pci 0000:00:15.6: bridge window [mem 0xfbd00000-0xfbdfffff] Jan 29 11:26:40.770713 kernel: pci 0000:00:15.6: bridge window [mem 0xe6400000-0xe64fffff 64bit pref] Jan 29 11:26:40.770764 kernel: pci 0000:00:15.7: PCI bridge to [bus 0a] Jan 29 11:26:40.770812 kernel: pci 0000:00:15.7: bridge window [mem 0xfb900000-0xfb9fffff] Jan 29 11:26:40.770861 kernel: pci 0000:00:15.7: bridge window [mem 0xe6000000-0xe60fffff 64bit pref] Jan 29 11:26:40.770911 kernel: pci 0000:0b:00.0: BAR 6: assigned [mem 0xfd400000-0xfd40ffff pref] Jan 29 11:26:40.770960 kernel: pci 0000:00:16.0: PCI bridge to [bus 0b] Jan 29 11:26:40.771009 kernel: pci 0000:00:16.0: bridge window [io 0x5000-0x5fff] Jan 29 11:26:40.771057 kernel: pci 0000:00:16.0: bridge window [mem 0xfd400000-0xfd4fffff] Jan 29 11:26:40.771105 kernel: pci 0000:00:16.0: bridge window [mem 0xc0200000-0xc03fffff 64bit pref] Jan 29 11:26:40.771156 kernel: pci 0000:00:16.1: PCI bridge to [bus 0c] Jan 29 11:26:40.771205 kernel: pci 0000:00:16.1: bridge window [io 0x9000-0x9fff] Jan 29 11:26:40.771254 kernel: pci 0000:00:16.1: bridge window [mem 0xfd000000-0xfd0fffff] Jan 29 11:26:40.771302 kernel: pci 0000:00:16.1: bridge window [mem 0xe7700000-0xe77fffff 64bit pref] Jan 29 11:26:40.771351 kernel: pci 0000:00:16.2: PCI bridge to [bus 0d] Jan 29 11:26:40.771399 kernel: pci 0000:00:16.2: bridge window [io 0xd000-0xdfff] Jan 29 11:26:40.771446 kernel: pci 0000:00:16.2: bridge window [mem 0xfcc00000-0xfccfffff] Jan 29 11:26:40.771495 kernel: pci 0000:00:16.2: bridge window [mem 0xe7300000-0xe73fffff 64bit pref] Jan 29 11:26:40.771543 kernel: pci 0000:00:16.3: PCI bridge to [bus 0e] Jan 29 11:26:40.771594 kernel: pci 0000:00:16.3: 
bridge window [mem 0xfc800000-0xfc8fffff] Jan 29 11:26:40.771642 kernel: pci 0000:00:16.3: bridge window [mem 0xe6f00000-0xe6ffffff 64bit pref] Jan 29 11:26:40.771718 kernel: pci 0000:00:16.4: PCI bridge to [bus 0f] Jan 29 11:26:40.771767 kernel: pci 0000:00:16.4: bridge window [mem 0xfc400000-0xfc4fffff] Jan 29 11:26:40.771814 kernel: pci 0000:00:16.4: bridge window [mem 0xe6b00000-0xe6bfffff 64bit pref] Jan 29 11:26:40.771862 kernel: pci 0000:00:16.5: PCI bridge to [bus 10] Jan 29 11:26:40.771910 kernel: pci 0000:00:16.5: bridge window [mem 0xfc000000-0xfc0fffff] Jan 29 11:26:40.771958 kernel: pci 0000:00:16.5: bridge window [mem 0xe6700000-0xe67fffff 64bit pref] Jan 29 11:26:40.772006 kernel: pci 0000:00:16.6: PCI bridge to [bus 11] Jan 29 11:26:40.772054 kernel: pci 0000:00:16.6: bridge window [mem 0xfbc00000-0xfbcfffff] Jan 29 11:26:40.772105 kernel: pci 0000:00:16.6: bridge window [mem 0xe6300000-0xe63fffff 64bit pref] Jan 29 11:26:40.772153 kernel: pci 0000:00:16.7: PCI bridge to [bus 12] Jan 29 11:26:40.772202 kernel: pci 0000:00:16.7: bridge window [mem 0xfb800000-0xfb8fffff] Jan 29 11:26:40.772250 kernel: pci 0000:00:16.7: bridge window [mem 0xe5f00000-0xe5ffffff 64bit pref] Jan 29 11:26:40.772335 kernel: pci 0000:00:17.0: PCI bridge to [bus 13] Jan 29 11:26:40.772383 kernel: pci 0000:00:17.0: bridge window [io 0x6000-0x6fff] Jan 29 11:26:40.772431 kernel: pci 0000:00:17.0: bridge window [mem 0xfd300000-0xfd3fffff] Jan 29 11:26:40.772479 kernel: pci 0000:00:17.0: bridge window [mem 0xe7a00000-0xe7afffff 64bit pref] Jan 29 11:26:40.772527 kernel: pci 0000:00:17.1: PCI bridge to [bus 14] Jan 29 11:26:40.772579 kernel: pci 0000:00:17.1: bridge window [io 0xa000-0xafff] Jan 29 11:26:40.772627 kernel: pci 0000:00:17.1: bridge window [mem 0xfcf00000-0xfcffffff] Jan 29 11:26:40.772713 kernel: pci 0000:00:17.1: bridge window [mem 0xe7600000-0xe76fffff 64bit pref] Jan 29 11:26:40.772765 kernel: pci 0000:00:17.2: PCI bridge to [bus 15] Jan 29 11:26:40.772814 kernel: pci 0000:00:17.2: bridge window [io 0xe000-0xefff] Jan 29 11:26:40.772862 kernel: pci 0000:00:17.2: bridge window [mem 0xfcb00000-0xfcbfffff] Jan 29 11:26:40.772910 kernel: pci 0000:00:17.2: bridge window [mem 0xe7200000-0xe72fffff 64bit pref] Jan 29 11:26:40.772958 kernel: pci 0000:00:17.3: PCI bridge to [bus 16] Jan 29 11:26:40.773006 kernel: pci 0000:00:17.3: bridge window [mem 0xfc700000-0xfc7fffff] Jan 29 11:26:40.773055 kernel: pci 0000:00:17.3: bridge window [mem 0xe6e00000-0xe6efffff 64bit pref] Jan 29 11:26:40.773107 kernel: pci 0000:00:17.4: PCI bridge to [bus 17] Jan 29 11:26:40.773156 kernel: pci 0000:00:17.4: bridge window [mem 0xfc300000-0xfc3fffff] Jan 29 11:26:40.773205 kernel: pci 0000:00:17.4: bridge window [mem 0xe6a00000-0xe6afffff 64bit pref] Jan 29 11:26:40.773253 kernel: pci 0000:00:17.5: PCI bridge to [bus 18] Jan 29 11:26:40.773327 kernel: pci 0000:00:17.5: bridge window [mem 0xfbf00000-0xfbffffff] Jan 29 11:26:40.773377 kernel: pci 0000:00:17.5: bridge window [mem 0xe6600000-0xe66fffff 64bit pref] Jan 29 11:26:40.773443 kernel: pci 0000:00:17.6: PCI bridge to [bus 19] Jan 29 11:26:40.773491 kernel: pci 0000:00:17.6: bridge window [mem 0xfbb00000-0xfbbfffff] Jan 29 11:26:40.773540 kernel: pci 0000:00:17.6: bridge window [mem 0xe6200000-0xe62fffff 64bit pref] Jan 29 11:26:40.773590 kernel: pci 0000:00:17.7: PCI bridge to [bus 1a] Jan 29 11:26:40.773639 kernel: pci 0000:00:17.7: bridge window [mem 0xfb700000-0xfb7fffff] Jan 29 11:26:40.773711 kernel: pci 0000:00:17.7: bridge window [mem 
0xe5e00000-0xe5efffff 64bit pref] Jan 29 11:26:40.773761 kernel: pci 0000:00:18.0: PCI bridge to [bus 1b] Jan 29 11:26:40.773808 kernel: pci 0000:00:18.0: bridge window [io 0x7000-0x7fff] Jan 29 11:26:40.773856 kernel: pci 0000:00:18.0: bridge window [mem 0xfd200000-0xfd2fffff] Jan 29 11:26:40.773904 kernel: pci 0000:00:18.0: bridge window [mem 0xe7900000-0xe79fffff 64bit pref] Jan 29 11:26:40.773961 kernel: pci 0000:00:18.1: PCI bridge to [bus 1c] Jan 29 11:26:40.774010 kernel: pci 0000:00:18.1: bridge window [io 0xb000-0xbfff] Jan 29 11:26:40.774059 kernel: pci 0000:00:18.1: bridge window [mem 0xfce00000-0xfcefffff] Jan 29 11:26:40.774110 kernel: pci 0000:00:18.1: bridge window [mem 0xe7500000-0xe75fffff 64bit pref] Jan 29 11:26:40.774158 kernel: pci 0000:00:18.2: PCI bridge to [bus 1d] Jan 29 11:26:40.774206 kernel: pci 0000:00:18.2: bridge window [mem 0xfca00000-0xfcafffff] Jan 29 11:26:40.774255 kernel: pci 0000:00:18.2: bridge window [mem 0xe7100000-0xe71fffff 64bit pref] Jan 29 11:26:40.775732 kernel: pci 0000:00:18.3: PCI bridge to [bus 1e] Jan 29 11:26:40.775787 kernel: pci 0000:00:18.3: bridge window [mem 0xfc600000-0xfc6fffff] Jan 29 11:26:40.775839 kernel: pci 0000:00:18.3: bridge window [mem 0xe6d00000-0xe6dfffff 64bit pref] Jan 29 11:26:40.775889 kernel: pci 0000:00:18.4: PCI bridge to [bus 1f] Jan 29 11:26:40.775940 kernel: pci 0000:00:18.4: bridge window [mem 0xfc200000-0xfc2fffff] Jan 29 11:26:40.775993 kernel: pci 0000:00:18.4: bridge window [mem 0xe6900000-0xe69fffff 64bit pref] Jan 29 11:26:40.776042 kernel: pci 0000:00:18.5: PCI bridge to [bus 20] Jan 29 11:26:40.776091 kernel: pci 0000:00:18.5: bridge window [mem 0xfbe00000-0xfbefffff] Jan 29 11:26:40.776139 kernel: pci 0000:00:18.5: bridge window [mem 0xe6500000-0xe65fffff 64bit pref] Jan 29 11:26:40.776189 kernel: pci 0000:00:18.6: PCI bridge to [bus 21] Jan 29 11:26:40.776238 kernel: pci 0000:00:18.6: bridge window [mem 0xfba00000-0xfbafffff] Jan 29 11:26:40.776291 kernel: pci 0000:00:18.6: bridge window [mem 0xe6100000-0xe61fffff 64bit pref] Jan 29 11:26:40.776339 kernel: pci 0000:00:18.7: PCI bridge to [bus 22] Jan 29 11:26:40.776388 kernel: pci 0000:00:18.7: bridge window [mem 0xfb600000-0xfb6fffff] Jan 29 11:26:40.776436 kernel: pci 0000:00:18.7: bridge window [mem 0xe5d00000-0xe5dfffff 64bit pref] Jan 29 11:26:40.776488 kernel: pci_bus 0000:00: resource 4 [mem 0x000a0000-0x000bffff window] Jan 29 11:26:40.776533 kernel: pci_bus 0000:00: resource 5 [mem 0x000cc000-0x000dbfff window] Jan 29 11:26:40.776576 kernel: pci_bus 0000:00: resource 6 [mem 0xc0000000-0xfebfffff window] Jan 29 11:26:40.776619 kernel: pci_bus 0000:00: resource 7 [io 0x0000-0x0cf7 window] Jan 29 11:26:40.777272 kernel: pci_bus 0000:00: resource 8 [io 0x0d00-0xfeff window] Jan 29 11:26:40.777328 kernel: pci_bus 0000:02: resource 0 [io 0x2000-0x3fff] Jan 29 11:26:40.777376 kernel: pci_bus 0000:02: resource 1 [mem 0xfd600000-0xfdffffff] Jan 29 11:26:40.777425 kernel: pci_bus 0000:02: resource 2 [mem 0xe7b00000-0xe7ffffff 64bit pref] Jan 29 11:26:40.777469 kernel: pci_bus 0000:02: resource 4 [mem 0x000a0000-0x000bffff window] Jan 29 11:26:40.777514 kernel: pci_bus 0000:02: resource 5 [mem 0x000cc000-0x000dbfff window] Jan 29 11:26:40.777559 kernel: pci_bus 0000:02: resource 6 [mem 0xc0000000-0xfebfffff window] Jan 29 11:26:40.777602 kernel: pci_bus 0000:02: resource 7 [io 0x0000-0x0cf7 window] Jan 29 11:26:40.777646 kernel: pci_bus 0000:02: resource 8 [io 0x0d00-0xfeff window] Jan 29 11:26:40.777704 kernel: pci_bus 0000:03: resource 0 [io 
0x4000-0x4fff] Jan 29 11:26:40.777750 kernel: pci_bus 0000:03: resource 1 [mem 0xfd500000-0xfd5fffff] Jan 29 11:26:40.777797 kernel: pci_bus 0000:03: resource 2 [mem 0xc0000000-0xc01fffff 64bit pref] Jan 29 11:26:40.777846 kernel: pci_bus 0000:04: resource 0 [io 0x8000-0x8fff] Jan 29 11:26:40.778199 kernel: pci_bus 0000:04: resource 1 [mem 0xfd100000-0xfd1fffff] Jan 29 11:26:40.778249 kernel: pci_bus 0000:04: resource 2 [mem 0xe7800000-0xe78fffff 64bit pref] Jan 29 11:26:40.778301 kernel: pci_bus 0000:05: resource 0 [io 0xc000-0xcfff] Jan 29 11:26:40.778347 kernel: pci_bus 0000:05: resource 1 [mem 0xfcd00000-0xfcdfffff] Jan 29 11:26:40.778395 kernel: pci_bus 0000:05: resource 2 [mem 0xe7400000-0xe74fffff 64bit pref] Jan 29 11:26:40.778445 kernel: pci_bus 0000:06: resource 1 [mem 0xfc900000-0xfc9fffff] Jan 29 11:26:40.778491 kernel: pci_bus 0000:06: resource 2 [mem 0xe7000000-0xe70fffff 64bit pref] Jan 29 11:26:40.778540 kernel: pci_bus 0000:07: resource 1 [mem 0xfc500000-0xfc5fffff] Jan 29 11:26:40.778585 kernel: pci_bus 0000:07: resource 2 [mem 0xe6c00000-0xe6cfffff 64bit pref] Jan 29 11:26:40.778634 kernel: pci_bus 0000:08: resource 1 [mem 0xfc100000-0xfc1fffff] Jan 29 11:26:40.778948 kernel: pci_bus 0000:08: resource 2 [mem 0xe6800000-0xe68fffff 64bit pref] Jan 29 11:26:40.779004 kernel: pci_bus 0000:09: resource 1 [mem 0xfbd00000-0xfbdfffff] Jan 29 11:26:40.779051 kernel: pci_bus 0000:09: resource 2 [mem 0xe6400000-0xe64fffff 64bit pref] Jan 29 11:26:40.779102 kernel: pci_bus 0000:0a: resource 1 [mem 0xfb900000-0xfb9fffff] Jan 29 11:26:40.779156 kernel: pci_bus 0000:0a: resource 2 [mem 0xe6000000-0xe60fffff 64bit pref] Jan 29 11:26:40.779215 kernel: pci_bus 0000:0b: resource 0 [io 0x5000-0x5fff] Jan 29 11:26:40.779290 kernel: pci_bus 0000:0b: resource 1 [mem 0xfd400000-0xfd4fffff] Jan 29 11:26:40.779352 kernel: pci_bus 0000:0b: resource 2 [mem 0xc0200000-0xc03fffff 64bit pref] Jan 29 11:26:40.779401 kernel: pci_bus 0000:0c: resource 0 [io 0x9000-0x9fff] Jan 29 11:26:40.779447 kernel: pci_bus 0000:0c: resource 1 [mem 0xfd000000-0xfd0fffff] Jan 29 11:26:40.779493 kernel: pci_bus 0000:0c: resource 2 [mem 0xe7700000-0xe77fffff 64bit pref] Jan 29 11:26:40.779547 kernel: pci_bus 0000:0d: resource 0 [io 0xd000-0xdfff] Jan 29 11:26:40.779600 kernel: pci_bus 0000:0d: resource 1 [mem 0xfcc00000-0xfccfffff] Jan 29 11:26:40.780723 kernel: pci_bus 0000:0d: resource 2 [mem 0xe7300000-0xe73fffff 64bit pref] Jan 29 11:26:40.780785 kernel: pci_bus 0000:0e: resource 1 [mem 0xfc800000-0xfc8fffff] Jan 29 11:26:40.780833 kernel: pci_bus 0000:0e: resource 2 [mem 0xe6f00000-0xe6ffffff 64bit pref] Jan 29 11:26:40.780883 kernel: pci_bus 0000:0f: resource 1 [mem 0xfc400000-0xfc4fffff] Jan 29 11:26:40.780929 kernel: pci_bus 0000:0f: resource 2 [mem 0xe6b00000-0xe6bfffff 64bit pref] Jan 29 11:26:40.780978 kernel: pci_bus 0000:10: resource 1 [mem 0xfc000000-0xfc0fffff] Jan 29 11:26:40.781027 kernel: pci_bus 0000:10: resource 2 [mem 0xe6700000-0xe67fffff 64bit pref] Jan 29 11:26:40.781076 kernel: pci_bus 0000:11: resource 1 [mem 0xfbc00000-0xfbcfffff] Jan 29 11:26:40.781122 kernel: pci_bus 0000:11: resource 2 [mem 0xe6300000-0xe63fffff 64bit pref] Jan 29 11:26:40.781171 kernel: pci_bus 0000:12: resource 1 [mem 0xfb800000-0xfb8fffff] Jan 29 11:26:40.781217 kernel: pci_bus 0000:12: resource 2 [mem 0xe5f00000-0xe5ffffff 64bit pref] Jan 29 11:26:40.781288 kernel: pci_bus 0000:13: resource 0 [io 0x6000-0x6fff] Jan 29 11:26:40.781351 kernel: pci_bus 0000:13: resource 1 [mem 0xfd300000-0xfd3fffff] Jan 29 11:26:40.781399 
kernel: pci_bus 0000:13: resource 2 [mem 0xe7a00000-0xe7afffff 64bit pref] Jan 29 11:26:40.781728 kernel: pci_bus 0000:14: resource 0 [io 0xa000-0xafff] Jan 29 11:26:40.781779 kernel: pci_bus 0000:14: resource 1 [mem 0xfcf00000-0xfcffffff] Jan 29 11:26:40.781825 kernel: pci_bus 0000:14: resource 2 [mem 0xe7600000-0xe76fffff 64bit pref] Jan 29 11:26:40.781874 kernel: pci_bus 0000:15: resource 0 [io 0xe000-0xefff] Jan 29 11:26:40.781920 kernel: pci_bus 0000:15: resource 1 [mem 0xfcb00000-0xfcbfffff] Jan 29 11:26:40.781968 kernel: pci_bus 0000:15: resource 2 [mem 0xe7200000-0xe72fffff 64bit pref] Jan 29 11:26:40.782020 kernel: pci_bus 0000:16: resource 1 [mem 0xfc700000-0xfc7fffff] Jan 29 11:26:40.782066 kernel: pci_bus 0000:16: resource 2 [mem 0xe6e00000-0xe6efffff 64bit pref] Jan 29 11:26:40.782114 kernel: pci_bus 0000:17: resource 1 [mem 0xfc300000-0xfc3fffff] Jan 29 11:26:40.782160 kernel: pci_bus 0000:17: resource 2 [mem 0xe6a00000-0xe6afffff 64bit pref] Jan 29 11:26:40.782208 kernel: pci_bus 0000:18: resource 1 [mem 0xfbf00000-0xfbffffff] Jan 29 11:26:40.782254 kernel: pci_bus 0000:18: resource 2 [mem 0xe6600000-0xe66fffff 64bit pref] Jan 29 11:26:40.782307 kernel: pci_bus 0000:19: resource 1 [mem 0xfbb00000-0xfbbfffff] Jan 29 11:26:40.782353 kernel: pci_bus 0000:19: resource 2 [mem 0xe6200000-0xe62fffff 64bit pref] Jan 29 11:26:40.782402 kernel: pci_bus 0000:1a: resource 1 [mem 0xfb700000-0xfb7fffff] Jan 29 11:26:40.782448 kernel: pci_bus 0000:1a: resource 2 [mem 0xe5e00000-0xe5efffff 64bit pref] Jan 29 11:26:40.782500 kernel: pci_bus 0000:1b: resource 0 [io 0x7000-0x7fff] Jan 29 11:26:40.782549 kernel: pci_bus 0000:1b: resource 1 [mem 0xfd200000-0xfd2fffff] Jan 29 11:26:40.782595 kernel: pci_bus 0000:1b: resource 2 [mem 0xe7900000-0xe79fffff 64bit pref] Jan 29 11:26:40.782644 kernel: pci_bus 0000:1c: resource 0 [io 0xb000-0xbfff] Jan 29 11:26:40.783737 kernel: pci_bus 0000:1c: resource 1 [mem 0xfce00000-0xfcefffff] Jan 29 11:26:40.783789 kernel: pci_bus 0000:1c: resource 2 [mem 0xe7500000-0xe75fffff 64bit pref] Jan 29 11:26:40.783840 kernel: pci_bus 0000:1d: resource 1 [mem 0xfca00000-0xfcafffff] Jan 29 11:26:40.783886 kernel: pci_bus 0000:1d: resource 2 [mem 0xe7100000-0xe71fffff 64bit pref] Jan 29 11:26:40.783943 kernel: pci_bus 0000:1e: resource 1 [mem 0xfc600000-0xfc6fffff] Jan 29 11:26:40.783990 kernel: pci_bus 0000:1e: resource 2 [mem 0xe6d00000-0xe6dfffff 64bit pref] Jan 29 11:26:40.784040 kernel: pci_bus 0000:1f: resource 1 [mem 0xfc200000-0xfc2fffff] Jan 29 11:26:40.784086 kernel: pci_bus 0000:1f: resource 2 [mem 0xe6900000-0xe69fffff 64bit pref] Jan 29 11:26:40.784136 kernel: pci_bus 0000:20: resource 1 [mem 0xfbe00000-0xfbefffff] Jan 29 11:26:40.784182 kernel: pci_bus 0000:20: resource 2 [mem 0xe6500000-0xe65fffff 64bit pref] Jan 29 11:26:40.784233 kernel: pci_bus 0000:21: resource 1 [mem 0xfba00000-0xfbafffff] Jan 29 11:26:40.784299 kernel: pci_bus 0000:21: resource 2 [mem 0xe6100000-0xe61fffff 64bit pref] Jan 29 11:26:40.784349 kernel: pci_bus 0000:22: resource 1 [mem 0xfb600000-0xfb6fffff] Jan 29 11:26:40.784395 kernel: pci_bus 0000:22: resource 2 [mem 0xe5d00000-0xe5dfffff 64bit pref] Jan 29 11:26:40.784451 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers Jan 29 11:26:40.784461 kernel: PCI: CLS 32 bytes, default 64 Jan 29 11:26:40.784468 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Jan 29 11:26:40.784477 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x311fd3cd494, max_idle_ns: 440795223879 ns Jan 29 
11:26:40.784483 kernel: clocksource: Switched to clocksource tsc Jan 29 11:26:40.784489 kernel: Initialise system trusted keyrings Jan 29 11:26:40.784496 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Jan 29 11:26:40.784502 kernel: Key type asymmetric registered Jan 29 11:26:40.784509 kernel: Asymmetric key parser 'x509' registered Jan 29 11:26:40.784515 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Jan 29 11:26:40.784522 kernel: io scheduler mq-deadline registered Jan 29 11:26:40.784528 kernel: io scheduler kyber registered Jan 29 11:26:40.784537 kernel: io scheduler bfq registered Jan 29 11:26:40.784589 kernel: pcieport 0000:00:15.0: PME: Signaling with IRQ 24 Jan 29 11:26:40.784641 kernel: pcieport 0000:00:15.0: pciehp: Slot #160 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jan 29 11:26:40.784702 kernel: pcieport 0000:00:15.1: PME: Signaling with IRQ 25 Jan 29 11:26:40.784754 kernel: pcieport 0000:00:15.1: pciehp: Slot #161 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jan 29 11:26:40.784805 kernel: pcieport 0000:00:15.2: PME: Signaling with IRQ 26 Jan 29 11:26:40.784859 kernel: pcieport 0000:00:15.2: pciehp: Slot #162 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jan 29 11:26:40.785732 kernel: pcieport 0000:00:15.3: PME: Signaling with IRQ 27 Jan 29 11:26:40.785794 kernel: pcieport 0000:00:15.3: pciehp: Slot #163 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jan 29 11:26:40.785849 kernel: pcieport 0000:00:15.4: PME: Signaling with IRQ 28 Jan 29 11:26:40.785917 kernel: pcieport 0000:00:15.4: pciehp: Slot #164 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jan 29 11:26:40.785969 kernel: pcieport 0000:00:15.5: PME: Signaling with IRQ 29 Jan 29 11:26:40.786019 kernel: pcieport 0000:00:15.5: pciehp: Slot #165 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jan 29 11:26:40.786072 kernel: pcieport 0000:00:15.6: PME: Signaling with IRQ 30 Jan 29 11:26:40.786123 kernel: pcieport 0000:00:15.6: pciehp: Slot #166 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jan 29 11:26:40.786174 kernel: pcieport 0000:00:15.7: PME: Signaling with IRQ 31 Jan 29 11:26:40.786225 kernel: pcieport 0000:00:15.7: pciehp: Slot #167 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jan 29 11:26:40.786281 kernel: pcieport 0000:00:16.0: PME: Signaling with IRQ 32 Jan 29 11:26:40.786334 kernel: pcieport 0000:00:16.0: pciehp: Slot #192 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jan 29 11:26:40.786384 kernel: pcieport 0000:00:16.1: PME: Signaling with IRQ 33 Jan 29 11:26:40.786434 kernel: pcieport 0000:00:16.1: pciehp: Slot #193 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jan 29 11:26:40.786485 kernel: pcieport 0000:00:16.2: PME: Signaling with IRQ 34 Jan 29 11:26:40.786535 kernel: pcieport 0000:00:16.2: pciehp: Slot #194 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jan 29 11:26:40.786584 kernel: pcieport 0000:00:16.3: PME: Signaling with IRQ 35 Jan 29 11:26:40.786635 kernel: pcieport 
0000:00:16.3: pciehp: Slot #195 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jan 29 11:26:40.786740 kernel: pcieport 0000:00:16.4: PME: Signaling with IRQ 36 Jan 29 11:26:40.786792 kernel: pcieport 0000:00:16.4: pciehp: Slot #196 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jan 29 11:26:40.786843 kernel: pcieport 0000:00:16.5: PME: Signaling with IRQ 37 Jan 29 11:26:40.786893 kernel: pcieport 0000:00:16.5: pciehp: Slot #197 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jan 29 11:26:40.786952 kernel: pcieport 0000:00:16.6: PME: Signaling with IRQ 38 Jan 29 11:26:40.787006 kernel: pcieport 0000:00:16.6: pciehp: Slot #198 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jan 29 11:26:40.787058 kernel: pcieport 0000:00:16.7: PME: Signaling with IRQ 39 Jan 29 11:26:40.787109 kernel: pcieport 0000:00:16.7: pciehp: Slot #199 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jan 29 11:26:40.787160 kernel: pcieport 0000:00:17.0: PME: Signaling with IRQ 40 Jan 29 11:26:40.787211 kernel: pcieport 0000:00:17.0: pciehp: Slot #224 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jan 29 11:26:40.787286 kernel: pcieport 0000:00:17.1: PME: Signaling with IRQ 41 Jan 29 11:26:40.787355 kernel: pcieport 0000:00:17.1: pciehp: Slot #225 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jan 29 11:26:40.787408 kernel: pcieport 0000:00:17.2: PME: Signaling with IRQ 42 Jan 29 11:26:40.787458 kernel: pcieport 0000:00:17.2: pciehp: Slot #226 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jan 29 11:26:40.787509 kernel: pcieport 0000:00:17.3: PME: Signaling with IRQ 43 Jan 29 11:26:40.787559 kernel: pcieport 0000:00:17.3: pciehp: Slot #227 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jan 29 11:26:40.787610 kernel: pcieport 0000:00:17.4: PME: Signaling with IRQ 44 Jan 29 11:26:40.787691 kernel: pcieport 0000:00:17.4: pciehp: Slot #228 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jan 29 11:26:40.787750 kernel: pcieport 0000:00:17.5: PME: Signaling with IRQ 45 Jan 29 11:26:40.787800 kernel: pcieport 0000:00:17.5: pciehp: Slot #229 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jan 29 11:26:40.787851 kernel: pcieport 0000:00:17.6: PME: Signaling with IRQ 46 Jan 29 11:26:40.787901 kernel: pcieport 0000:00:17.6: pciehp: Slot #230 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jan 29 11:26:40.787951 kernel: pcieport 0000:00:17.7: PME: Signaling with IRQ 47 Jan 29 11:26:40.788003 kernel: pcieport 0000:00:17.7: pciehp: Slot #231 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jan 29 11:26:40.788054 kernel: pcieport 0000:00:18.0: PME: Signaling with IRQ 48 Jan 29 11:26:40.788104 kernel: pcieport 0000:00:18.0: pciehp: Slot #256 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jan 29 11:26:40.788155 kernel: pcieport 0000:00:18.1: PME: Signaling with IRQ 49 Jan 29 11:26:40.788204 kernel: pcieport 
0000:00:18.1: pciehp: Slot #257 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jan 29 11:26:40.788254 kernel: pcieport 0000:00:18.2: PME: Signaling with IRQ 50 Jan 29 11:26:40.788306 kernel: pcieport 0000:00:18.2: pciehp: Slot #258 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jan 29 11:26:40.788356 kernel: pcieport 0000:00:18.3: PME: Signaling with IRQ 51 Jan 29 11:26:40.788405 kernel: pcieport 0000:00:18.3: pciehp: Slot #259 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jan 29 11:26:40.788455 kernel: pcieport 0000:00:18.4: PME: Signaling with IRQ 52 Jan 29 11:26:40.788505 kernel: pcieport 0000:00:18.4: pciehp: Slot #260 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jan 29 11:26:40.788555 kernel: pcieport 0000:00:18.5: PME: Signaling with IRQ 53 Jan 29 11:26:40.788608 kernel: pcieport 0000:00:18.5: pciehp: Slot #261 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jan 29 11:26:40.788671 kernel: pcieport 0000:00:18.6: PME: Signaling with IRQ 54 Jan 29 11:26:40.788722 kernel: pcieport 0000:00:18.6: pciehp: Slot #262 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jan 29 11:26:40.788772 kernel: pcieport 0000:00:18.7: PME: Signaling with IRQ 55 Jan 29 11:26:40.788821 kernel: pcieport 0000:00:18.7: pciehp: Slot #263 AttnBtn+ PwrCtrl+ MRL- AttnInd- PwrInd- HotPlug+ Surprise- Interlock- NoCompl+ IbPresDis- LLActRep+ Jan 29 11:26:40.788833 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Jan 29 11:26:40.788840 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jan 29 11:26:40.788846 kernel: 00:05: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Jan 29 11:26:40.788852 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBC,PNP0f13:MOUS] at 0x60,0x64 irq 1,12 Jan 29 11:26:40.788859 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Jan 29 11:26:40.788865 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Jan 29 11:26:40.788915 kernel: rtc_cmos 00:01: registered as rtc0 Jan 29 11:26:40.788962 kernel: rtc_cmos 00:01: setting system clock to 2025-01-29T11:26:40 UTC (1738150000) Jan 29 11:26:40.789010 kernel: rtc_cmos 00:01: alarms up to one month, y3k, 114 bytes nvram Jan 29 11:26:40.789019 kernel: intel_pstate: CPU model not supported Jan 29 11:26:40.789025 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Jan 29 11:26:40.789032 kernel: NET: Registered PF_INET6 protocol family Jan 29 11:26:40.789038 kernel: Segment Routing with IPv6 Jan 29 11:26:40.789044 kernel: In-situ OAM (IOAM) with IPv6 Jan 29 11:26:40.789050 kernel: NET: Registered PF_PACKET protocol family Jan 29 11:26:40.789057 kernel: Key type dns_resolver registered Jan 29 11:26:40.789063 kernel: IPI shorthand broadcast: enabled Jan 29 11:26:40.789071 kernel: sched_clock: Marking stable (876003630, 221215751)->(1149471547, -52252166) Jan 29 11:26:40.789077 kernel: registered taskstats version 1 Jan 29 11:26:40.789083 kernel: Loading compiled-in X.509 certificates Jan 29 11:26:40.789089 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.74-flatcar: de92a621108c58f5771c86c5c3ccb1aa0728ed55' Jan 29 11:26:40.789095 kernel: Key type .fscrypt registered Jan 29 11:26:40.789101 kernel: Key type fscrypt-provisioning registered Jan 29 11:26:40.789107 
kernel: ima: No TPM chip found, activating TPM-bypass! Jan 29 11:26:40.789114 kernel: ima: Allocated hash algorithm: sha1 Jan 29 11:26:40.789121 kernel: ima: No architecture policies found Jan 29 11:26:40.789127 kernel: clk: Disabling unused clocks Jan 29 11:26:40.789133 kernel: Freeing unused kernel image (initmem) memory: 42972K Jan 29 11:26:40.789140 kernel: Write protecting the kernel read-only data: 36864k Jan 29 11:26:40.789146 kernel: Freeing unused kernel image (rodata/data gap) memory: 1840K Jan 29 11:26:40.789152 kernel: Run /init as init process Jan 29 11:26:40.789158 kernel: with arguments: Jan 29 11:26:40.789165 kernel: /init Jan 29 11:26:40.789171 kernel: with environment: Jan 29 11:26:40.789178 kernel: HOME=/ Jan 29 11:26:40.789184 kernel: TERM=linux Jan 29 11:26:40.789190 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jan 29 11:26:40.789197 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 29 11:26:40.789205 systemd[1]: Detected virtualization vmware. Jan 29 11:26:40.789212 systemd[1]: Detected architecture x86-64. Jan 29 11:26:40.789218 systemd[1]: Running in initrd. Jan 29 11:26:40.789224 systemd[1]: No hostname configured, using default hostname. Jan 29 11:26:40.789232 systemd[1]: Hostname set to . Jan 29 11:26:40.789239 systemd[1]: Initializing machine ID from random generator. Jan 29 11:26:40.789245 systemd[1]: Queued start job for default target initrd.target. Jan 29 11:26:40.789251 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 29 11:26:40.789257 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 29 11:26:40.789286 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jan 29 11:26:40.789294 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 29 11:26:40.789301 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jan 29 11:26:40.789324 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jan 29 11:26:40.789331 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jan 29 11:26:40.789338 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jan 29 11:26:40.789344 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 29 11:26:40.789350 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 29 11:26:40.789356 systemd[1]: Reached target paths.target - Path Units. Jan 29 11:26:40.789363 systemd[1]: Reached target slices.target - Slice Units. Jan 29 11:26:40.789370 systemd[1]: Reached target swap.target - Swaps. Jan 29 11:26:40.789377 systemd[1]: Reached target timers.target - Timer Units. Jan 29 11:26:40.789383 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jan 29 11:26:40.789390 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 29 11:26:40.789396 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). 
Jan 29 11:26:40.789402 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Jan 29 11:26:40.789408 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 29 11:26:40.789415 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 29 11:26:40.789422 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 29 11:26:40.789428 systemd[1]: Reached target sockets.target - Socket Units. Jan 29 11:26:40.789435 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jan 29 11:26:40.789441 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 29 11:26:40.789448 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jan 29 11:26:40.789454 systemd[1]: Starting systemd-fsck-usr.service... Jan 29 11:26:40.789460 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 29 11:26:40.789467 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 29 11:26:40.789484 systemd-journald[217]: Collecting audit messages is disabled. Jan 29 11:26:40.789502 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 29 11:26:40.789508 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jan 29 11:26:40.789515 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 29 11:26:40.789521 systemd[1]: Finished systemd-fsck-usr.service. Jan 29 11:26:40.789529 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jan 29 11:26:40.789536 kernel: Bridge firewalling registered Jan 29 11:26:40.789542 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 29 11:26:40.789549 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 29 11:26:40.789557 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 29 11:26:40.789563 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 29 11:26:40.789570 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 29 11:26:40.789576 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 29 11:26:40.789582 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 29 11:26:40.789589 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 29 11:26:40.789595 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 29 11:26:40.789602 systemd-journald[217]: Journal started Jan 29 11:26:40.789617 systemd-journald[217]: Runtime Journal (/run/log/journal/060a5e949f6e4fdc803f7906ee631692) is 4.8M, max 38.7M, 33.8M free. Jan 29 11:26:40.732929 systemd-modules-load[218]: Inserted module 'overlay' Jan 29 11:26:40.790798 systemd[1]: Started systemd-journald.service - Journal Service. Jan 29 11:26:40.755946 systemd-modules-load[218]: Inserted module 'br_netfilter' Jan 29 11:26:40.795770 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 29 11:26:40.795951 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 29 11:26:40.798735 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... 
Jan 29 11:26:40.800758 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 29 11:26:40.805720 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 29 11:26:40.808772 dracut-cmdline[249]: dracut-dracut-053 Jan 29 11:26:40.810277 dracut-cmdline[249]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=vmware flatcar.autologin verity.usrhash=519b8fded83181f8e61f734d5291f916d7548bfba9487c78bcb50d002d81719d Jan 29 11:26:40.823152 systemd-resolved[253]: Positive Trust Anchors: Jan 29 11:26:40.823159 systemd-resolved[253]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 29 11:26:40.823180 systemd-resolved[253]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 29 11:26:40.825820 systemd-resolved[253]: Defaulting to hostname 'linux'. Jan 29 11:26:40.826355 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 29 11:26:40.826501 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 29 11:26:40.853673 kernel: SCSI subsystem initialized Jan 29 11:26:40.860667 kernel: Loading iSCSI transport class v2.0-870. Jan 29 11:26:40.866668 kernel: iscsi: registered transport (tcp) Jan 29 11:26:40.879667 kernel: iscsi: registered transport (qla4xxx) Jan 29 11:26:40.879690 kernel: QLogic iSCSI HBA Driver Jan 29 11:26:40.899441 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jan 29 11:26:40.902764 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jan 29 11:26:40.917678 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jan 29 11:26:40.917708 kernel: device-mapper: uevent: version 1.0.3 Jan 29 11:26:40.917717 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Jan 29 11:26:40.951717 kernel: raid6: avx2x4 gen() 52145 MB/s Jan 29 11:26:40.966668 kernel: raid6: avx2x2 gen() 53834 MB/s Jan 29 11:26:40.983870 kernel: raid6: avx2x1 gen() 45208 MB/s Jan 29 11:26:40.983890 kernel: raid6: using algorithm avx2x2 gen() 53834 MB/s Jan 29 11:26:41.001860 kernel: raid6: .... xor() 31631 MB/s, rmw enabled Jan 29 11:26:41.001882 kernel: raid6: using avx2x2 recovery algorithm Jan 29 11:26:41.015666 kernel: xor: automatically using best checksumming function avx Jan 29 11:26:41.116681 kernel: Btrfs loaded, zoned=no, fsverity=no Jan 29 11:26:41.122367 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jan 29 11:26:41.126754 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 29 11:26:41.134292 systemd-udevd[435]: Using default interface naming scheme 'v255'. 
Jan 29 11:26:41.136841 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 29 11:26:41.141787 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jan 29 11:26:41.149156 dracut-pre-trigger[436]: rd.md=0: removing MD RAID activation Jan 29 11:26:41.164599 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jan 29 11:26:41.168910 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 29 11:26:41.240845 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 29 11:26:41.244759 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jan 29 11:26:41.253093 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jan 29 11:26:41.253549 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jan 29 11:26:41.253760 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 29 11:26:41.254702 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 29 11:26:41.259848 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jan 29 11:26:41.267274 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jan 29 11:26:41.310685 kernel: VMware vmxnet3 virtual NIC driver - version 1.7.0.0-k-NAPI Jan 29 11:26:41.315764 kernel: VMware PVSCSI driver - version 1.0.7.0-k Jan 29 11:26:41.315787 kernel: vmxnet3 0000:0b:00.0: # of Tx queues : 2, # of Rx queues : 2 Jan 29 11:26:41.336245 kernel: vmw_pvscsi: using 64bit dma Jan 29 11:26:41.336266 kernel: vmw_pvscsi: max_id: 16 Jan 29 11:26:41.336279 kernel: vmw_pvscsi: setting ring_pages to 8 Jan 29 11:26:41.336287 kernel: vmw_pvscsi: enabling reqCallThreshold Jan 29 11:26:41.336295 kernel: vmw_pvscsi: driver-based request coalescing enabled Jan 29 11:26:41.336302 kernel: vmw_pvscsi: using MSI-X Jan 29 11:26:41.336309 kernel: vmxnet3 0000:0b:00.0 eth0: NIC Link is Up 10000 Mbps Jan 29 11:26:41.336401 kernel: scsi host0: VMware PVSCSI storage adapter rev 2, req/cmp/msg rings: 8/8/1 pages, cmd_per_lun=254 Jan 29 11:26:41.336481 kernel: cryptd: max_cpu_qlen set to 1000 Jan 29 11:26:41.336489 kernel: vmw_pvscsi 0000:03:00.0: VMware PVSCSI rev 2 host #0 Jan 29 11:26:41.336563 kernel: scsi 0:0:0:0: Direct-Access VMware Virtual disk 2.0 PQ: 0 ANSI: 6 Jan 29 11:26:41.337136 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 29 11:26:41.337379 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 29 11:26:41.337732 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 29 11:26:41.338008 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 29 11:26:41.338217 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 29 11:26:41.338474 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jan 29 11:26:41.343682 kernel: AVX2 version of gcm_enc/dec engaged. Jan 29 11:26:41.343738 kernel: AES CTR mode by8 optimization enabled Jan 29 11:26:41.344673 kernel: libata version 3.00 loaded. Jan 29 11:26:41.345824 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
Jan 29 11:26:41.348690 kernel: ata_piix 0000:00:07.1: version 2.13 Jan 29 11:26:41.353860 kernel: scsi host1: ata_piix Jan 29 11:26:41.353940 kernel: vmxnet3 0000:0b:00.0 ens192: renamed from eth0 Jan 29 11:26:41.354013 kernel: scsi host2: ata_piix Jan 29 11:26:41.354082 kernel: ata1: PATA max UDMA/33 cmd 0x1f0 ctl 0x3f6 bmdma 0x1060 irq 14 Jan 29 11:26:41.354091 kernel: ata2: PATA max UDMA/33 cmd 0x170 ctl 0x376 bmdma 0x1068 irq 15 Jan 29 11:26:41.361283 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 29 11:26:41.365797 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 29 11:26:41.378287 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 29 11:26:41.524674 kernel: ata2.00: ATAPI: VMware Virtual IDE CDROM Drive, 00000001, max UDMA/33 Jan 29 11:26:41.530688 kernel: scsi 2:0:0:0: CD-ROM NECVMWar VMware IDE CDR10 1.00 PQ: 0 ANSI: 5 Jan 29 11:26:41.541767 kernel: sd 0:0:0:0: [sda] 17805312 512-byte logical blocks: (9.12 GB/8.49 GiB) Jan 29 11:26:41.547331 kernel: sd 0:0:0:0: [sda] Write Protect is off Jan 29 11:26:41.547420 kernel: sd 0:0:0:0: [sda] Mode Sense: 31 00 00 00 Jan 29 11:26:41.547488 kernel: sd 0:0:0:0: [sda] Cache data unavailable Jan 29 11:26:41.547550 kernel: sd 0:0:0:0: [sda] Assuming drive cache: write through Jan 29 11:26:41.547609 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 29 11:26:41.547619 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Jan 29 11:26:41.557695 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 1x/1x writer dvd-ram cd/rw xa/form2 cdda tray Jan 29 11:26:41.569604 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Jan 29 11:26:41.569628 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0 Jan 29 11:26:41.573669 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by (udev-worker) (490) Jan 29 11:26:41.575515 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_disk EFI-SYSTEM. Jan 29 11:26:41.579577 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_disk ROOT. Jan 29 11:26:41.579762 kernel: BTRFS: device fsid 5ba3c9ea-61f2-4fe6-a507-2966757f6d44 devid 1 transid 38 /dev/sda3 scanned by (udev-worker) (487) Jan 29 11:26:41.582562 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_disk OEM. Jan 29 11:26:41.584980 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_disk USR-A. Jan 29 11:26:41.585264 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_disk USR-A. Jan 29 11:26:41.592841 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jan 29 11:26:41.617676 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 29 11:26:42.625701 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 29 11:26:42.625750 disk-uuid[587]: The operation has completed successfully. Jan 29 11:26:42.664404 systemd[1]: disk-uuid.service: Deactivated successfully. Jan 29 11:26:42.664476 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jan 29 11:26:42.669761 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jan 29 11:26:42.674581 sh[605]: Success Jan 29 11:26:42.690670 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Jan 29 11:26:42.770860 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jan 29 11:26:42.780591 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... 
Jan 29 11:26:42.780855 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jan 29 11:26:42.818431 kernel: BTRFS info (device dm-0): first mount of filesystem 5ba3c9ea-61f2-4fe6-a507-2966757f6d44 Jan 29 11:26:42.818491 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jan 29 11:26:42.818502 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jan 29 11:26:42.819525 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 29 11:26:42.820332 kernel: BTRFS info (device dm-0): using free space tree Jan 29 11:26:42.827687 kernel: BTRFS info (device dm-0): enabling ssd optimizations Jan 29 11:26:42.830318 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jan 29 11:26:42.834800 systemd[1]: Starting afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments... Jan 29 11:26:42.835951 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jan 29 11:26:42.884846 kernel: BTRFS info (device sda6): first mount of filesystem 46e45d4d-e07d-4ebc-bafb-221646b0ed58 Jan 29 11:26:42.884889 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jan 29 11:26:42.886672 kernel: BTRFS info (device sda6): using free space tree Jan 29 11:26:42.894566 kernel: BTRFS info (device sda6): enabling ssd optimizations Jan 29 11:26:42.898877 systemd[1]: mnt-oem.mount: Deactivated successfully. Jan 29 11:26:42.900723 kernel: BTRFS info (device sda6): last unmount of filesystem 46e45d4d-e07d-4ebc-bafb-221646b0ed58 Jan 29 11:26:42.906565 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jan 29 11:26:42.915970 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jan 29 11:26:42.936052 systemd[1]: Finished afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments. Jan 29 11:26:42.940777 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jan 29 11:26:43.022211 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 29 11:26:43.022895 ignition[665]: Ignition 2.20.0 Jan 29 11:26:43.022899 ignition[665]: Stage: fetch-offline Jan 29 11:26:43.022918 ignition[665]: no configs at "/usr/lib/ignition/base.d" Jan 29 11:26:43.022923 ignition[665]: no config dir at "/usr/lib/ignition/base.platform.d/vmware" Jan 29 11:26:43.022973 ignition[665]: parsed url from cmdline: "" Jan 29 11:26:43.022975 ignition[665]: no config URL provided Jan 29 11:26:43.022978 ignition[665]: reading system config file "/usr/lib/ignition/user.ign" Jan 29 11:26:43.022982 ignition[665]: no config at "/usr/lib/ignition/user.ign" Jan 29 11:26:43.023355 ignition[665]: config successfully fetched Jan 29 11:26:43.023371 ignition[665]: parsing config with SHA512: 2097afec7ba2fdd3c665c36c830442456cfb5eb53856f569f4c0ebfe44702751146a6195cd979dcf7d0e5af4b72e3a8e518c588efc9f8ac5392578edc6ca29b4 Jan 29 11:26:43.025894 unknown[665]: fetched base config from "system" Jan 29 11:26:43.026112 ignition[665]: fetch-offline: fetch-offline passed Jan 29 11:26:43.025900 unknown[665]: fetched user config from "vmware" Jan 29 11:26:43.026152 ignition[665]: Ignition finished successfully Jan 29 11:26:43.026785 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 29 11:26:43.027196 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). 
Jan 29 11:26:43.041759 systemd-networkd[797]: lo: Link UP Jan 29 11:26:43.041766 systemd-networkd[797]: lo: Gained carrier Jan 29 11:26:43.042483 systemd-networkd[797]: Enumeration completed Jan 29 11:26:43.042556 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 29 11:26:43.042739 systemd[1]: Reached target network.target - Network. Jan 29 11:26:43.042810 systemd-networkd[797]: ens192: Configuring with /etc/systemd/network/10-dracut-cmdline-99.network. Jan 29 11:26:43.046489 kernel: vmxnet3 0000:0b:00.0 ens192: intr type 3, mode 0, 3 vectors allocated Jan 29 11:26:43.046611 kernel: vmxnet3 0000:0b:00.0 ens192: NIC Link is Up 10000 Mbps Jan 29 11:26:43.042837 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Jan 29 11:26:43.046275 systemd-networkd[797]: ens192: Link UP Jan 29 11:26:43.046277 systemd-networkd[797]: ens192: Gained carrier Jan 29 11:26:43.047803 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jan 29 11:26:43.057092 ignition[800]: Ignition 2.20.0 Jan 29 11:26:43.057103 ignition[800]: Stage: kargs Jan 29 11:26:43.057227 ignition[800]: no configs at "/usr/lib/ignition/base.d" Jan 29 11:26:43.057234 ignition[800]: no config dir at "/usr/lib/ignition/base.platform.d/vmware" Jan 29 11:26:43.057817 ignition[800]: kargs: kargs passed Jan 29 11:26:43.057843 ignition[800]: Ignition finished successfully Jan 29 11:26:43.059030 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jan 29 11:26:43.064832 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jan 29 11:26:43.071839 ignition[807]: Ignition 2.20.0 Jan 29 11:26:43.071846 ignition[807]: Stage: disks Jan 29 11:26:43.071950 ignition[807]: no configs at "/usr/lib/ignition/base.d" Jan 29 11:26:43.071956 ignition[807]: no config dir at "/usr/lib/ignition/base.platform.d/vmware" Jan 29 11:26:43.072452 ignition[807]: disks: disks passed Jan 29 11:26:43.072486 ignition[807]: Ignition finished successfully Jan 29 11:26:43.073208 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jan 29 11:26:43.073410 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jan 29 11:26:43.073508 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 29 11:26:43.073609 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 29 11:26:43.073704 systemd[1]: Reached target sysinit.target - System Initialization. Jan 29 11:26:43.073889 systemd[1]: Reached target basic.target - Basic System. Jan 29 11:26:43.079834 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jan 29 11:26:43.103235 systemd-fsck[815]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks Jan 29 11:26:43.104234 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jan 29 11:26:43.108798 systemd[1]: Mounting sysroot.mount - /sysroot... Jan 29 11:26:43.170573 systemd[1]: Mounted sysroot.mount - /sysroot. Jan 29 11:26:43.170831 kernel: EXT4-fs (sda9): mounted filesystem 2fbf9359-701e-4995-b3f7-74280bd2b1c9 r/w with ordered data mode. Quota mode: none. Jan 29 11:26:43.171046 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jan 29 11:26:43.175736 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 29 11:26:43.176715 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... 
Jan 29 11:26:43.177850 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Jan 29 11:26:43.177918 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jan 29 11:26:43.177934 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jan 29 11:26:43.182053 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jan 29 11:26:43.184771 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jan 29 11:26:43.186640 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by mount (823) Jan 29 11:26:43.186706 kernel: BTRFS info (device sda6): first mount of filesystem 46e45d4d-e07d-4ebc-bafb-221646b0ed58 Jan 29 11:26:43.186717 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jan 29 11:26:43.187291 kernel: BTRFS info (device sda6): using free space tree Jan 29 11:26:43.190680 kernel: BTRFS info (device sda6): enabling ssd optimizations Jan 29 11:26:43.191431 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 29 11:26:43.242186 initrd-setup-root[847]: cut: /sysroot/etc/passwd: No such file or directory Jan 29 11:26:43.245055 initrd-setup-root[854]: cut: /sysroot/etc/group: No such file or directory Jan 29 11:26:43.247640 initrd-setup-root[861]: cut: /sysroot/etc/shadow: No such file or directory Jan 29 11:26:43.250086 initrd-setup-root[868]: cut: /sysroot/etc/gshadow: No such file or directory Jan 29 11:26:43.332781 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 29 11:26:43.340781 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 29 11:26:43.343187 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jan 29 11:26:43.346667 kernel: BTRFS info (device sda6): last unmount of filesystem 46e45d4d-e07d-4ebc-bafb-221646b0ed58 Jan 29 11:26:43.362669 ignition[935]: INFO : Ignition 2.20.0 Jan 29 11:26:43.362669 ignition[935]: INFO : Stage: mount Jan 29 11:26:43.362669 ignition[935]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 29 11:26:43.362669 ignition[935]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/vmware" Jan 29 11:26:43.364390 ignition[935]: INFO : mount: mount passed Jan 29 11:26:43.364390 ignition[935]: INFO : Ignition finished successfully Jan 29 11:26:43.365125 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 29 11:26:43.369746 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 29 11:26:43.369980 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jan 29 11:26:43.816802 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jan 29 11:26:43.821761 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 29 11:26:43.829704 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/sda6 scanned by mount (948) Jan 29 11:26:43.832104 kernel: BTRFS info (device sda6): first mount of filesystem 46e45d4d-e07d-4ebc-bafb-221646b0ed58 Jan 29 11:26:43.832124 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jan 29 11:26:43.832132 kernel: BTRFS info (device sda6): using free space tree Jan 29 11:26:43.835669 kernel: BTRFS info (device sda6): enabling ssd optimizations Jan 29 11:26:43.836956 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jan 29 11:26:43.854525 ignition[965]: INFO : Ignition 2.20.0 Jan 29 11:26:43.854829 ignition[965]: INFO : Stage: files Jan 29 11:26:43.855220 ignition[965]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 29 11:26:43.855359 ignition[965]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/vmware" Jan 29 11:26:43.856283 ignition[965]: DEBUG : files: compiled without relabeling support, skipping Jan 29 11:26:43.857075 ignition[965]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 29 11:26:43.857075 ignition[965]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 29 11:26:43.859248 ignition[965]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 29 11:26:43.859384 ignition[965]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 29 11:26:43.859521 ignition[965]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 29 11:26:43.859465 unknown[965]: wrote ssh authorized keys file for user: core Jan 29 11:26:43.872798 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Jan 29 11:26:43.872964 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1 Jan 29 11:26:43.908189 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jan 29 11:26:44.000303 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Jan 29 11:26:44.000536 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Jan 29 11:26:44.000536 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Jan 29 11:26:44.000536 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Jan 29 11:26:44.000536 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Jan 29 11:26:44.000536 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 29 11:26:44.001259 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 29 11:26:44.001259 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 29 11:26:44.001259 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 29 11:26:44.003965 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 29 11:26:44.004113 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 29 11:26:44.004113 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw" Jan 29 11:26:44.004453 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link 
"/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw" Jan 29 11:26:44.004453 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw" Jan 29 11:26:44.004453 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.32.0-x86-64.raw: attempt #1 Jan 29 11:26:44.370352 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Jan 29 11:26:44.570136 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw" Jan 29 11:26:44.570457 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/etc/systemd/network/00-vmware.network" Jan 29 11:26:44.570457 ignition[965]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/etc/systemd/network/00-vmware.network" Jan 29 11:26:44.570457 ignition[965]: INFO : files: op(c): [started] processing unit "prepare-helm.service" Jan 29 11:26:44.571017 ignition[965]: INFO : files: op(c): op(d): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 29 11:26:44.571215 ignition[965]: INFO : files: op(c): op(d): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 29 11:26:44.571215 ignition[965]: INFO : files: op(c): [finished] processing unit "prepare-helm.service" Jan 29 11:26:44.571215 ignition[965]: INFO : files: op(e): [started] processing unit "coreos-metadata.service" Jan 29 11:26:44.571215 ignition[965]: INFO : files: op(e): op(f): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jan 29 11:26:44.571215 ignition[965]: INFO : files: op(e): op(f): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Jan 29 11:26:44.571215 ignition[965]: INFO : files: op(e): [finished] processing unit "coreos-metadata.service" Jan 29 11:26:44.571215 ignition[965]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service" Jan 29 11:26:44.616256 ignition[965]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service" Jan 29 11:26:44.619422 ignition[965]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service" Jan 29 11:26:44.619613 ignition[965]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service" Jan 29 11:26:44.619613 ignition[965]: INFO : files: op(12): [started] setting preset to enabled for "prepare-helm.service" Jan 29 11:26:44.619613 ignition[965]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-helm.service" Jan 29 11:26:44.620006 ignition[965]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 29 11:26:44.620006 ignition[965]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 29 11:26:44.620006 ignition[965]: INFO : files: files passed Jan 29 11:26:44.620006 ignition[965]: INFO : Ignition finished successfully Jan 29 11:26:44.620794 systemd[1]: Finished ignition-files.service - Ignition 
(files). Jan 29 11:26:44.624803 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 29 11:26:44.626453 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 29 11:26:44.627004 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 29 11:26:44.627177 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jan 29 11:26:44.633744 initrd-setup-root-after-ignition[996]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 29 11:26:44.633744 initrd-setup-root-after-ignition[996]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 29 11:26:44.634132 initrd-setup-root-after-ignition[1000]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 29 11:26:44.634719 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 29 11:26:44.635049 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 29 11:26:44.638810 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 29 11:26:44.650816 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 29 11:26:44.650879 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 29 11:26:44.651289 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 29 11:26:44.651410 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 29 11:26:44.651610 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 29 11:26:44.652059 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 29 11:26:44.661119 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 29 11:26:44.664814 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 29 11:26:44.670252 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 29 11:26:44.670431 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 29 11:26:44.670661 systemd[1]: Stopped target timers.target - Timer Units. Jan 29 11:26:44.670844 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 29 11:26:44.670918 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 29 11:26:44.671176 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 29 11:26:44.671415 systemd[1]: Stopped target basic.target - Basic System. Jan 29 11:26:44.671597 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 29 11:26:44.671797 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 29 11:26:44.672002 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 29 11:26:44.672215 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 29 11:26:44.672566 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 29 11:26:44.672810 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 29 11:26:44.673021 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 29 11:26:44.673215 systemd[1]: Stopped target swap.target - Swaps. Jan 29 11:26:44.673376 systemd[1]: dracut-pre-mount.service: Deactivated successfully. 
Jan 29 11:26:44.673436 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 29 11:26:44.673684 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 29 11:26:44.673834 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 29 11:26:44.674019 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 29 11:26:44.674065 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 29 11:26:44.674232 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 29 11:26:44.674289 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 29 11:26:44.674526 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 29 11:26:44.674592 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 29 11:26:44.674886 systemd[1]: Stopped target paths.target - Path Units. Jan 29 11:26:44.675025 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 29 11:26:44.678725 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 29 11:26:44.678890 systemd[1]: Stopped target slices.target - Slice Units. Jan 29 11:26:44.679085 systemd[1]: Stopped target sockets.target - Socket Units. Jan 29 11:26:44.679268 systemd[1]: iscsid.socket: Deactivated successfully. Jan 29 11:26:44.679336 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 29 11:26:44.679543 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 29 11:26:44.679588 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 29 11:26:44.679733 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 29 11:26:44.679793 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 29 11:26:44.680029 systemd[1]: ignition-files.service: Deactivated successfully. Jan 29 11:26:44.680084 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 29 11:26:44.687781 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 29 11:26:44.687880 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 29 11:26:44.687967 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 29 11:26:44.689775 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 29 11:26:44.689880 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 29 11:26:44.689968 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 29 11:26:44.690153 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 29 11:26:44.690230 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 29 11:26:44.692779 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 29 11:26:44.692846 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 29 11:26:44.696781 ignition[1020]: INFO : Ignition 2.20.0 Jan 29 11:26:44.702922 ignition[1020]: INFO : Stage: umount Jan 29 11:26:44.702922 ignition[1020]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 29 11:26:44.702922 ignition[1020]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/vmware" Jan 29 11:26:44.702922 ignition[1020]: INFO : umount: umount passed Jan 29 11:26:44.702922 ignition[1020]: INFO : Ignition finished successfully Jan 29 11:26:44.703816 systemd[1]: ignition-mount.service: Deactivated successfully. 
Jan 29 11:26:44.703909 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 29 11:26:44.704207 systemd[1]: Stopped target network.target - Network. Jan 29 11:26:44.704312 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 29 11:26:44.704348 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 29 11:26:44.704481 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 29 11:26:44.704509 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 29 11:26:44.704646 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 29 11:26:44.704691 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 29 11:26:44.704815 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 29 11:26:44.704836 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 29 11:26:44.705057 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 29 11:26:44.706701 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 29 11:26:44.711721 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 29 11:26:44.712828 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 29 11:26:44.712895 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 29 11:26:44.715189 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 29 11:26:44.715267 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 29 11:26:44.716622 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 29 11:26:44.716765 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 29 11:26:44.720760 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 29 11:26:44.720863 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 29 11:26:44.720898 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 29 11:26:44.721148 systemd[1]: afterburn-network-kargs.service: Deactivated successfully. Jan 29 11:26:44.721171 systemd[1]: Stopped afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments. Jan 29 11:26:44.721736 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 29 11:26:44.721762 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 29 11:26:44.722026 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 29 11:26:44.722049 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 29 11:26:44.722151 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 29 11:26:44.722173 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 29 11:26:44.722333 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 29 11:26:44.730508 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 29 11:26:44.730586 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 29 11:26:44.737260 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 29 11:26:44.737358 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 29 11:26:44.737743 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 29 11:26:44.737776 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 29 11:26:44.737993 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. 
Jan 29 11:26:44.738015 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 29 11:26:44.738176 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 29 11:26:44.738201 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 29 11:26:44.738515 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 29 11:26:44.738548 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 29 11:26:44.738866 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 29 11:26:44.738899 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 29 11:26:44.743795 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 29 11:26:44.743934 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 29 11:26:44.743967 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 29 11:26:44.744791 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 29 11:26:44.744831 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 29 11:26:44.747788 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 29 11:26:44.747869 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 29 11:26:44.781933 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 29 11:26:44.782019 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 29 11:26:44.782287 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 29 11:26:44.782414 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 29 11:26:44.782439 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 29 11:26:44.786795 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 29 11:26:44.798803 systemd[1]: Switching root. Jan 29 11:26:44.823290 systemd-journald[217]: Journal stopped Jan 29 11:26:46.237546 systemd-journald[217]: Received SIGTERM from PID 1 (systemd). Jan 29 11:26:46.237572 kernel: SELinux: policy capability network_peer_controls=1 Jan 29 11:26:46.237580 kernel: SELinux: policy capability open_perms=1 Jan 29 11:26:46.237586 kernel: SELinux: policy capability extended_socket_class=1 Jan 29 11:26:46.237591 kernel: SELinux: policy capability always_check_network=0 Jan 29 11:26:46.237597 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 29 11:26:46.237604 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 29 11:26:46.237610 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 29 11:26:46.237618 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 29 11:26:46.237624 kernel: audit: type=1403 audit(1738150005.589:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jan 29 11:26:46.237630 systemd[1]: Successfully loaded SELinux policy in 40.652ms. Jan 29 11:26:46.237637 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 7.013ms. Jan 29 11:26:46.237645 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 29 11:26:46.240176 systemd[1]: Detected virtualization vmware. Jan 29 11:26:46.240196 systemd[1]: Detected architecture x86-64. 
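The "Switching root" / "Journal stopped" pair marks the hand-off from the initramfs to the real root filesystem: the initrd journald receives SIGTERM from PID 1, and the kernel then reports the SELinux policy capabilities as the freshly loaded policy takes effect. These messages can be reviewed after the fact with plain journalctl; a small sketch, assuming the persistent journal from this boot is still available:

  # list boots known to the persistent journal
  journalctl --list-boots
  # kernel messages from the current boot, e.g. the SELinux policy capability lines
  journalctl -b -k | grep -i selinux
  # journald's own stop (in the initrd) and restart (on the real root)
  journalctl -b -u systemd-journald.service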
Jan 29 11:26:46.240204 systemd[1]: Detected first boot. Jan 29 11:26:46.240211 systemd[1]: Initializing machine ID from random generator. Jan 29 11:26:46.240223 zram_generator::config[1064]: No configuration found. Jan 29 11:26:46.240232 systemd[1]: Populated /etc with preset unit settings. Jan 29 11:26:46.240239 systemd[1]: /etc/systemd/system/coreos-metadata.service:11: Ignoring unknown escape sequences: "echo "COREOS_CUSTOM_PRIVATE_IPV4=$(ip addr show ens192 | grep "inet 10." | grep -Po "inet \K[\d.]+") Jan 29 11:26:46.240248 systemd[1]: COREOS_CUSTOM_PUBLIC_IPV4=$(ip addr show ens192 | grep -v "inet 10." | grep -Po "inet \K[\d.]+")" > ${OUTPUT}" Jan 29 11:26:46.240256 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jan 29 11:26:46.240262 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jan 29 11:26:46.240271 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jan 29 11:26:46.240279 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 29 11:26:46.240287 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 29 11:26:46.240293 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 29 11:26:46.240300 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 29 11:26:46.240307 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 29 11:26:46.240314 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 29 11:26:46.240320 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 29 11:26:46.240329 systemd[1]: Created slice user.slice - User and Session Slice. Jan 29 11:26:46.240335 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 29 11:26:46.240342 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 29 11:26:46.240352 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 29 11:26:46.240359 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 29 11:26:46.240366 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jan 29 11:26:46.240373 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 29 11:26:46.240379 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jan 29 11:26:46.240388 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 29 11:26:46.240395 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jan 29 11:26:46.240403 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jan 29 11:26:46.240410 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jan 29 11:26:46.240418 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 29 11:26:46.240424 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 29 11:26:46.240431 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 29 11:26:46.240439 systemd[1]: Reached target slices.target - Slice Units. Jan 29 11:26:46.240449 systemd[1]: Reached target swap.target - Swaps. Jan 29 11:26:46.240456 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. 
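The "Ignoring unknown escape sequences" warning is systemd objecting to backslash sequences (\K, \d) inside coreos-metadata.service's ExecStart line; the quoted pipeline itself still runs and scrapes the VM's addresses off ens192. Rewritten as a standalone script for readability, it amounts to the following; the interface name, the "inet 10." private-address convention, and the ${OUTPUT} environment file all come from the unit text in the log, and only the split into two assignments is an editorial choice:

  #!/bin/bash
  # first 10.x IPv4 on ens192 -> "private" address
  PRIVATE_IPV4=$(ip addr show ens192 | grep "inet 10." | grep -Po 'inet \K[\d.]+')
  # first remaining IPv4 on ens192 -> "public" address
  # (inet6 lines never match the PCRE, which requires a literal "inet " before digits)
  PUBLIC_IPV4=$(ip addr show ens192 | grep -v "inet 10." | grep -Po 'inet \K[\d.]+')
  {
    echo "COREOS_CUSTOM_PRIVATE_IPV4=${PRIVATE_IPV4}"
    echo "COREOS_CUSTOM_PUBLIC_IPV4=${PUBLIC_IPV4}"
  } > "${OUTPUT}"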
Jan 29 11:26:46.240464 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 29 11:26:46.240471 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 29 11:26:46.240479 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 29 11:26:46.240488 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 29 11:26:46.240496 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 29 11:26:46.240503 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 29 11:26:46.240511 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 29 11:26:46.240520 systemd[1]: Mounting media.mount - External Media Directory... Jan 29 11:26:46.240528 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 29 11:26:46.240535 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 29 11:26:46.240542 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 29 11:26:46.240550 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 29 11:26:46.240558 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 29 11:26:46.240565 systemd[1]: Reached target machines.target - Containers. Jan 29 11:26:46.240572 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 29 11:26:46.240579 systemd[1]: Starting ignition-delete-config.service - Ignition (delete config)... Jan 29 11:26:46.240586 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 29 11:26:46.240595 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 29 11:26:46.240602 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 29 11:26:46.240611 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 29 11:26:46.240619 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 29 11:26:46.240626 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 29 11:26:46.240633 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 29 11:26:46.240641 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 29 11:26:46.240647 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jan 29 11:26:46.243288 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jan 29 11:26:46.243308 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jan 29 11:26:46.243317 systemd[1]: Stopped systemd-fsck-usr.service. Jan 29 11:26:46.243327 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 29 11:26:46.243335 kernel: fuse: init (API version 7.39) Jan 29 11:26:46.243343 kernel: ACPI: bus type drm_connector registered Jan 29 11:26:46.243349 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 29 11:26:46.243358 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 29 11:26:46.243382 systemd-journald[1151]: Collecting audit messages is disabled. 
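The modprobe@configfs/dm_mod/drm/efi_pstore/fuse/loop jobs above are instances of systemd's modprobe@.service template, which loads the kernel module named by the instance suffix; the "fuse: init" and "bus type drm_connector registered" kernel lines are the direct result. A minimal sketch of driving the same template by hand, using only standard systemd and kmod tooling:

  # load the fuse module through the same template unit used at boot
  systemctl start modprobe@fuse.service
  # roughly equivalent direct call
  modprobe fuse
  # confirm the module is loaded
  lsmod | grep -w fuse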
Jan 29 11:26:46.243401 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 29 11:26:46.243410 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 29 11:26:46.243417 systemd[1]: verity-setup.service: Deactivated successfully. Jan 29 11:26:46.243424 kernel: loop: module loaded Jan 29 11:26:46.243430 systemd[1]: Stopped verity-setup.service. Jan 29 11:26:46.243438 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 29 11:26:46.243447 systemd-journald[1151]: Journal started Jan 29 11:26:46.243463 systemd-journald[1151]: Runtime Journal (/run/log/journal/6e4efc32d3bf4b5c848008a68735560e) is 4.8M, max 38.7M, 33.8M free. Jan 29 11:26:46.247207 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 29 11:26:46.042382 systemd[1]: Queued start job for default target multi-user.target. Jan 29 11:26:46.076216 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. Jan 29 11:26:46.076498 systemd[1]: systemd-journald.service: Deactivated successfully. Jan 29 11:26:46.247798 jq[1131]: true Jan 29 11:26:46.248668 systemd[1]: Started systemd-journald.service - Journal Service. Jan 29 11:26:46.249445 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 29 11:26:46.249610 systemd[1]: Mounted media.mount - External Media Directory. Jan 29 11:26:46.249762 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 29 11:26:46.249904 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 29 11:26:46.250049 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 29 11:26:46.250619 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 29 11:26:46.250883 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 29 11:26:46.250980 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 29 11:26:46.251207 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 29 11:26:46.251279 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 29 11:26:46.251495 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 29 11:26:46.251570 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 29 11:26:46.251803 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 29 11:26:46.251878 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 29 11:26:46.252104 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 29 11:26:46.252177 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 29 11:26:46.252396 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 29 11:26:46.252470 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 29 11:26:46.259210 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 29 11:26:46.266025 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 29 11:26:46.269207 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 29 11:26:46.273971 jq[1173]: true Jan 29 11:26:46.274820 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 29 11:26:46.279539 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... 
Jan 29 11:26:46.279672 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 29 11:26:46.279700 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 29 11:26:46.280465 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Jan 29 11:26:46.282769 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 29 11:26:46.288785 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 29 11:26:46.288986 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 29 11:26:46.293878 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 29 11:26:46.298784 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 29 11:26:46.298930 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 29 11:26:46.307032 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 29 11:26:46.307360 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 29 11:26:46.309776 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 29 11:26:46.311065 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 29 11:26:46.311884 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 29 11:26:46.312923 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 29 11:26:46.313076 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 29 11:26:46.313839 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 29 11:26:46.325316 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 29 11:26:46.327756 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 29 11:26:46.331790 systemd-journald[1151]: Time spent on flushing to /var/log/journal/6e4efc32d3bf4b5c848008a68735560e is 93.442ms for 1831 entries. Jan 29 11:26:46.331790 systemd-journald[1151]: System Journal (/var/log/journal/6e4efc32d3bf4b5c848008a68735560e) is 8.0M, max 584.8M, 576.8M free. Jan 29 11:26:46.454813 systemd-journald[1151]: Received client request to flush runtime journal. Jan 29 11:26:46.454864 kernel: loop0: detected capacity change from 0 to 138184 Jan 29 11:26:46.340381 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 29 11:26:46.340851 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 29 11:26:46.347884 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Jan 29 11:26:46.390322 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 29 11:26:46.401219 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 29 11:26:46.401601 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Jan 29 11:26:46.408724 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 29 11:26:46.414841 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... 
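systemd-journal-flush.service, started above, moves the volatile runtime journal under /run/log/journal into the persistent store under /var/log/journal; the size lines show the two budgets (runtime capped at 38.7M, persistent at 584.8M). A hedged sketch of the commands involved, the first of which is essentially what the flush service runs:

  # flush the runtime journal to /var/log/journal
  journalctl --flush
  # show how much disk the journal files now occupy
  journalctl --disk-usage
  # optionally trim archived journals to a size budget
  journalctl --vacuum-size=500M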
Jan 29 11:26:46.421555 udevadm[1215]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Jan 29 11:26:46.455906 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 29 11:26:46.475306 ignition[1198]: Ignition 2.20.0 Jan 29 11:26:46.475513 ignition[1198]: deleting config from guestinfo properties Jan 29 11:26:46.536913 ignition[1198]: Successfully deleted config Jan 29 11:26:46.538200 systemd[1]: Finished ignition-delete-config.service - Ignition (delete config). Jan 29 11:26:46.673569 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 29 11:26:46.677771 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 29 11:26:46.689669 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 29 11:26:46.733643 systemd-tmpfiles[1228]: ACLs are not supported, ignoring. Jan 29 11:26:46.733957 systemd-tmpfiles[1228]: ACLs are not supported, ignoring. Jan 29 11:26:46.737080 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 29 11:26:46.740830 kernel: loop1: detected capacity change from 0 to 140992 Jan 29 11:26:46.983673 kernel: loop2: detected capacity change from 0 to 218376 Jan 29 11:26:47.239739 kernel: loop3: detected capacity change from 0 to 2944 Jan 29 11:26:47.279740 kernel: loop4: detected capacity change from 0 to 138184 Jan 29 11:26:47.409719 kernel: loop5: detected capacity change from 0 to 140992 Jan 29 11:26:47.423064 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 29 11:26:47.430505 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 29 11:26:47.442934 systemd-udevd[1237]: Using default interface naming scheme 'v255'. Jan 29 11:26:47.453673 kernel: loop6: detected capacity change from 0 to 218376 Jan 29 11:26:47.497856 kernel: loop7: detected capacity change from 0 to 2944 Jan 29 11:26:47.505396 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 29 11:26:47.511768 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 29 11:26:47.533761 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 29 11:26:47.534191 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jan 29 11:26:47.566411 (sd-merge)[1235]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-vmware'. Jan 29 11:26:47.566739 (sd-merge)[1235]: Merged extensions into '/usr'. Jan 29 11:26:47.572421 systemd[1]: Reloading requested from client PID 1197 ('systemd-sysext') (unit systemd-sysext.service)... Jan 29 11:26:47.572431 systemd[1]: Reloading... Jan 29 11:26:47.635670 zram_generator::config[1289]: No configuration found. Jan 29 11:26:47.661369 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 38 scanned by (udev-worker) (1248) Jan 29 11:26:47.673464 systemd-networkd[1242]: lo: Link UP Jan 29 11:26:47.673905 systemd-networkd[1242]: lo: Gained carrier Jan 29 11:26:47.675430 systemd-networkd[1242]: Enumeration completed Jan 29 11:26:47.677093 systemd-networkd[1242]: ens192: Configuring with /etc/systemd/network/00-vmware.network. 
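The (sd-merge) lines are systemd-sysext overlaying the extension images staged earlier (containerd-flatcar, docker-flatcar, kubernetes, oem-vmware) onto the read-only /usr, which is how an immutable Flatcar /usr still gains container runtimes and a kubelet. A hedged sketch for inspecting and refreshing that merge on a running machine, using the standard systemd-sysext verbs:

  # show which hierarchies are extended and by which images
  systemd-sysext status
  # list extension images discovered under /etc/extensions and /var/lib/extensions
  systemd-sysext list
  # re-merge after adding or replacing a .raw image
  systemd-sysext refresh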
Jan 29 11:26:47.679998 kernel: vmxnet3 0000:0b:00.0 ens192: intr type 3, mode 0, 3 vectors allocated Jan 29 11:26:47.680132 kernel: vmxnet3 0000:0b:00.0 ens192: NIC Link is Up 10000 Mbps Jan 29 11:26:47.681125 systemd-networkd[1242]: ens192: Link UP Jan 29 11:26:47.681264 systemd-networkd[1242]: ens192: Gained carrier Jan 29 11:26:47.683666 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Jan 29 11:26:47.688665 kernel: ACPI: button: Power Button [PWRF] Jan 29 11:26:47.752750 systemd[1]: /etc/systemd/system/coreos-metadata.service:11: Ignoring unknown escape sequences: "echo "COREOS_CUSTOM_PRIVATE_IPV4=$(ip addr show ens192 | grep "inet 10." | grep -Po "inet \K[\d.]+") Jan 29 11:26:47.774149 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 29 11:26:47.789698 kernel: piix4_smbus 0000:00:07.3: SMBus Host Controller not enabled! Jan 29 11:26:47.827708 kernel: input: ImPS/2 Generic Wheel Mouse as /devices/platform/i8042/serio1/input/input3 Jan 29 11:26:47.833692 kernel: vmw_vmci 0000:00:07.7: Using capabilities 0xc Jan 29 11:26:47.841634 kernel: Guest personality initialized and is active Jan 29 11:26:47.835852 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_disk OEM. Jan 29 11:26:47.836065 systemd[1]: Reloading finished in 263 ms. Jan 29 11:26:47.838938 (udev-worker)[1252]: id: Truncating stdout of 'dmi_memory_id' up to 16384 byte. Jan 29 11:26:47.842781 kernel: VMCI host device registered (name=vmci, major=10, minor=125) Jan 29 11:26:47.842820 kernel: Initialized host personality Jan 29 11:26:47.845664 kernel: mousedev: PS/2 mouse device common for all mice Jan 29 11:26:47.857499 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 29 11:26:47.857784 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 29 11:26:47.858730 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 29 11:26:47.868422 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jan 29 11:26:47.873817 systemd[1]: Starting ensure-sysext.service... Jan 29 11:26:47.874639 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jan 29 11:26:47.876292 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 29 11:26:47.877956 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 29 11:26:47.878748 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 29 11:26:47.881261 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 29 11:26:47.885452 systemd[1]: Reloading requested from client PID 1357 ('systemctl') (unit ensure-sysext.service)... Jan 29 11:26:47.885460 systemd[1]: Reloading... Jan 29 11:26:47.904332 lvm[1359]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 29 11:26:47.915443 systemd-tmpfiles[1362]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 29 11:26:47.915720 systemd-tmpfiles[1362]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. 
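systemd-networkd matched the vmxnet3 interface ens192 against /etc/systemd/network/00-vmware.network (the file Ignition wrote earlier) and brought the link up with carrier. The file's contents never appear in the log; a minimal, hypothetical version consistent with a DHCP-addressed VMware guest would be:

  [Match]
  Name=ens192

  [Network]
  DHCP=yes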
Jan 29 11:26:47.916235 systemd-tmpfiles[1362]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jan 29 11:26:47.916432 systemd-tmpfiles[1362]: ACLs are not supported, ignoring. Jan 29 11:26:47.916466 systemd-tmpfiles[1362]: ACLs are not supported, ignoring. Jan 29 11:26:47.918488 systemd-tmpfiles[1362]: Detected autofs mount point /boot during canonicalization of boot. Jan 29 11:26:47.918494 systemd-tmpfiles[1362]: Skipping /boot Jan 29 11:26:47.924350 systemd-tmpfiles[1362]: Detected autofs mount point /boot during canonicalization of boot. Jan 29 11:26:47.924354 systemd-tmpfiles[1362]: Skipping /boot Jan 29 11:26:47.936740 ldconfig[1193]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 29 11:26:47.949669 zram_generator::config[1398]: No configuration found. Jan 29 11:26:48.011211 systemd[1]: /etc/systemd/system/coreos-metadata.service:11: Ignoring unknown escape sequences: "echo "COREOS_CUSTOM_PRIVATE_IPV4=$(ip addr show ens192 | grep "inet 10." | grep -Po "inet \K[\d.]+") Jan 29 11:26:48.029605 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 29 11:26:48.068725 systemd[1]: Reloading finished in 183 ms. Jan 29 11:26:48.085255 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 29 11:26:48.091274 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jan 29 11:26:48.091670 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 29 11:26:48.092010 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 29 11:26:48.092321 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 29 11:26:48.098634 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 29 11:26:48.103916 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 29 11:26:48.106887 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 29 11:26:48.109149 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jan 29 11:26:48.112199 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 29 11:26:48.117755 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 29 11:26:48.120881 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 29 11:26:48.122335 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 29 11:26:48.125751 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 29 11:26:48.130903 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 29 11:26:48.132903 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 29 11:26:48.133100 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 29 11:26:48.133172 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 29 11:26:48.136252 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. 
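The systemd-tmpfiles messages above are diagnostics from merging the tmpfiles.d fragments, not failures: a repeated entry for the same path is ignored in favor of the one parsed first, and /boot is skipped because it is still an autofs automount point at this stage. A short sketch for reviewing the merged configuration with standard options:

  # dump the merged tmpfiles.d configuration in the order it is applied
  systemd-tmpfiles --cat-config
  # apply only the entries below a given prefix, e.g. the journal directory
  systemd-tmpfiles --create --prefix=/var/log/journal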
Jan 29 11:26:48.136369 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 29 11:26:48.137795 lvm[1464]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 29 11:26:48.144555 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 29 11:26:48.147926 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 29 11:26:48.148108 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 29 11:26:48.148189 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 29 11:26:48.148689 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 29 11:26:48.152863 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 29 11:26:48.156891 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 29 11:26:48.157084 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 29 11:26:48.157170 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 29 11:26:48.157530 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 29 11:26:48.157740 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 29 11:26:48.158236 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 29 11:26:48.160838 systemd[1]: Finished ensure-sysext.service. Jan 29 11:26:48.167916 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jan 29 11:26:48.168353 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jan 29 11:26:48.170039 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 29 11:26:48.170194 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 29 11:26:48.174648 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 29 11:26:48.174791 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 29 11:26:48.175468 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 29 11:26:48.186409 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 29 11:26:48.186543 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 29 11:26:48.189274 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 29 11:26:48.198988 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 29 11:26:48.215734 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 29 11:26:48.216458 systemd-resolved[1468]: Positive Trust Anchors: Jan 29 11:26:48.216600 systemd-resolved[1468]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 29 11:26:48.216670 systemd-resolved[1468]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 29 11:26:48.223834 augenrules[1504]: No rules Jan 29 11:26:48.224792 systemd[1]: audit-rules.service: Deactivated successfully. Jan 29 11:26:48.224973 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 29 11:26:48.231540 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jan 29 11:26:48.231743 systemd[1]: Reached target time-set.target - System Time Set. Jan 29 11:26:48.235171 systemd-resolved[1468]: Defaulting to hostname 'linux'. Jan 29 11:26:48.237530 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 29 11:26:48.237681 systemd[1]: Reached target network.target - Network. Jan 29 11:26:48.237763 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 29 11:26:48.264914 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 29 11:26:48.265388 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 29 11:26:48.265479 systemd[1]: Reached target sysinit.target - System Initialization. Jan 29 11:26:48.265734 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 29 11:26:48.265970 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 29 11:26:48.266285 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 29 11:26:48.266544 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 29 11:26:48.266750 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 29 11:26:48.266933 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 29 11:26:48.266997 systemd[1]: Reached target paths.target - Path Units. Jan 29 11:26:48.267178 systemd[1]: Reached target timers.target - Timer Units. Jan 29 11:26:48.268014 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 29 11:26:48.269503 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 29 11:26:48.275232 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 29 11:26:48.276026 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 29 11:26:48.276286 systemd[1]: Reached target sockets.target - Socket Units. Jan 29 11:26:48.276462 systemd[1]: Reached target basic.target - Basic System. Jan 29 11:26:48.276668 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. 
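The resolver's "Positive Trust Anchors" entry is the standard DNSSEC root trust anchor (the DS record for the root zone's KSK-2017), and the long negative list covers private and special-use domains that are never expected to validate. Both lists can be extended with drop-in files read by systemd-resolved; a hedged sketch, where the DS record is the one quoted in the log and the file names and internal domain are hypothetical:

  /etc/dnssec-trust-anchors.d/root.positive:
    . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d

  /etc/dnssec-trust-anchors.d/lab.negative:
    lab.example.com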
Jan 29 11:26:48.276694 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 29 11:26:48.277948 systemd[1]: Starting containerd.service - containerd container runtime... Jan 29 11:26:48.279875 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 29 11:26:48.281845 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 29 11:26:48.285742 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 29 11:26:48.285872 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 29 11:26:48.289362 jq[1516]: false Jan 29 11:26:48.289786 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 29 11:26:48.292446 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 29 11:26:48.299883 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 29 11:26:48.302770 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 29 11:26:48.305778 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 29 11:26:48.307854 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 29 11:26:48.308338 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 29 11:26:48.313460 systemd[1]: Starting update-engine.service - Update Engine... Jan 29 11:26:48.315791 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 29 11:26:48.319597 systemd[1]: Starting vgauthd.service - VGAuth Service for open-vm-tools... Jan 29 11:26:48.332948 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 29 11:26:48.333192 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 29 11:26:48.342686 extend-filesystems[1517]: Found loop4 Jan 29 11:26:48.342686 extend-filesystems[1517]: Found loop5 Jan 29 11:26:48.342686 extend-filesystems[1517]: Found loop6 Jan 29 11:26:48.342686 extend-filesystems[1517]: Found loop7 Jan 29 11:26:48.342686 extend-filesystems[1517]: Found sda Jan 29 11:26:48.342686 extend-filesystems[1517]: Found sda1 Jan 29 11:26:48.342686 extend-filesystems[1517]: Found sda2 Jan 29 11:26:48.342686 extend-filesystems[1517]: Found sda3 Jan 29 11:26:48.342686 extend-filesystems[1517]: Found usr Jan 29 11:26:48.342686 extend-filesystems[1517]: Found sda4 Jan 29 11:26:48.342686 extend-filesystems[1517]: Found sda6 Jan 29 11:26:48.342686 extend-filesystems[1517]: Found sda7 Jan 29 11:26:48.342686 extend-filesystems[1517]: Found sda9 Jan 29 11:26:48.342686 extend-filesystems[1517]: Checking size of /dev/sda9 Jan 29 11:26:48.335437 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 29 11:26:48.351086 update_engine[1525]: I20250129 11:26:48.344758 1525 main.cc:92] Flatcar Update Engine starting Jan 29 11:26:48.335715 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. 
Jan 29 11:26:48.351266 jq[1528]: true Jan 29 11:26:48.356417 extend-filesystems[1517]: Old size kept for /dev/sda9 Jan 29 11:26:48.356417 extend-filesystems[1517]: Found sr0 Jan 29 11:26:48.359802 systemd[1]: Started vgauthd.service - VGAuth Service for open-vm-tools. Jan 29 11:26:48.360173 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 29 11:26:48.360311 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 29 11:26:48.364741 systemd[1]: Starting vmtoolsd.service - Service for virtual machines hosted on VMware... Jan 29 11:26:48.367422 (ntainerd)[1543]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 29 11:26:48.377802 systemd[1]: motdgen.service: Deactivated successfully. Jan 29 11:26:48.377935 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 29 11:26:48.382346 jq[1546]: true Jan 29 11:26:48.387645 systemd[1]: Started vmtoolsd.service - Service for virtual machines hosted on VMware. Jan 29 11:26:48.391984 unknown[1551]: Pref_Init: Using '/etc/vmware-tools/vgauth.conf' as preferences filepath Jan 29 11:26:48.393108 unknown[1551]: Core dump limit set to -1 Jan 29 11:26:48.405454 dbus-daemon[1515]: [system] SELinux support is enabled Jan 29 11:26:48.405727 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 29 11:26:48.407804 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 29 11:26:48.407972 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 29 11:26:48.408631 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 29 11:26:48.408645 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 29 11:26:48.414138 kernel: NET: Registered PF_VSOCK protocol family Jan 29 11:26:48.412858 systemd[1]: Started update-engine.service - Update Engine. Jan 29 11:26:48.414208 update_engine[1525]: I20250129 11:26:48.413053 1525 update_check_scheduler.cc:74] Next update check in 8m21s Jan 29 11:26:48.419761 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 29 11:26:48.421540 tar[1537]: linux-amd64/LICENSE Jan 29 11:26:48.421540 tar[1537]: linux-amd64/helm Jan 29 11:26:48.426223 systemd-logind[1523]: Watching system buttons on /dev/input/event1 (Power Button) Jan 29 11:26:48.426806 systemd-logind[1523]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jan 29 11:26:48.427296 systemd-logind[1523]: New seat seat0. Jan 29 11:26:48.428247 systemd[1]: Started systemd-logind.service - User Login Management. Jan 29 11:26:48.438585 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 38 scanned by (udev-worker) (1254) Jan 29 11:28:01.685738 systemd-timesyncd[1487]: Contacted time server 208.76.2.12:123 (0.flatcar.pool.ntp.org). Jan 29 11:28:01.685773 systemd-timesyncd[1487]: Initial clock synchronization to Wed 2025-01-29 11:28:01.685643 UTC. Jan 29 11:28:01.685806 systemd-resolved[1468]: Clock change detected. Flushing caches. 
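The jump in log timestamps from 11:26 to 11:28 is systemd-timesyncd stepping the clock on its first successful exchange with 0.flatcar.pool.ntp.org, after which systemd-resolved flushes its caches. The pool to use can be overridden via a timesyncd drop-in; a hedged sketch in which the drop-in path is hypothetical and the server is the one contacted in the log:

  # /etc/systemd/timesyncd.conf.d/10-ntp.conf (hypothetical drop-in)
  [Time]
  NTP=0.flatcar.pool.ntp.org

  # verify synchronization afterwards
  timedatectl timesync-status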
Jan 29 11:28:01.696861 sshd_keygen[1531]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 29 11:28:01.732662 locksmithd[1565]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 29 11:28:01.739632 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 29 11:28:01.744823 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 29 11:28:01.750180 systemd[1]: issuegen.service: Deactivated successfully. Jan 29 11:28:01.750296 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 29 11:28:01.755885 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 29 11:28:01.773143 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 29 11:28:01.781261 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 29 11:28:01.783041 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 29 11:28:01.783854 bash[1578]: Updated "/home/core/.ssh/authorized_keys" Jan 29 11:28:01.783216 systemd[1]: Reached target getty.target - Login Prompts. Jan 29 11:28:01.783974 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 29 11:28:01.785378 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. Jan 29 11:28:01.994779 systemd-networkd[1242]: ens192: Gained IPv6LL Jan 29 11:28:02.122757 tar[1537]: linux-amd64/README.md Jan 29 11:28:02.359236 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 29 11:28:02.360892 systemd[1]: Reached target network-online.target - Network is Online. Jan 29 11:28:02.369056 systemd[1]: Starting coreos-metadata.service - VMware metadata agent... Jan 29 11:28:02.371215 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 11:28:02.373803 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 29 11:28:02.391610 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 29 11:28:02.410369 containerd[1543]: time="2025-01-29T11:28:02.410332614Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 Jan 29 11:28:02.430795 containerd[1543]: time="2025-01-29T11:28:02.430765909Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jan 29 11:28:02.431652 containerd[1543]: time="2025-01-29T11:28:02.431634673Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.74-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jan 29 11:28:02.431697 containerd[1543]: time="2025-01-29T11:28:02.431689386Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jan 29 11:28:02.431732 containerd[1543]: time="2025-01-29T11:28:02.431724883Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jan 29 11:28:02.431843 containerd[1543]: time="2025-01-29T11:28:02.431833812Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jan 29 11:28:02.431884 containerd[1543]: time="2025-01-29T11:28:02.431876741Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." 
type=io.containerd.snapshotter.v1 Jan 29 11:28:02.431948 containerd[1543]: time="2025-01-29T11:28:02.431938968Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jan 29 11:28:02.431978 containerd[1543]: time="2025-01-29T11:28:02.431972183Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jan 29 11:28:02.432096 containerd[1543]: time="2025-01-29T11:28:02.432086455Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 29 11:28:02.432127 containerd[1543]: time="2025-01-29T11:28:02.432120835Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jan 29 11:28:02.432159 containerd[1543]: time="2025-01-29T11:28:02.432152032Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jan 29 11:28:02.432191 containerd[1543]: time="2025-01-29T11:28:02.432185185Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jan 29 11:28:02.432266 containerd[1543]: time="2025-01-29T11:28:02.432258449Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jan 29 11:28:02.432410 containerd[1543]: time="2025-01-29T11:28:02.432402301Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jan 29 11:28:02.432491 containerd[1543]: time="2025-01-29T11:28:02.432482149Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 29 11:28:02.432521 containerd[1543]: time="2025-01-29T11:28:02.432515742Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jan 29 11:28:02.432587 containerd[1543]: time="2025-01-29T11:28:02.432579619Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jan 29 11:28:02.432665 containerd[1543]: time="2025-01-29T11:28:02.432655766Z" level=info msg="metadata content store policy set" policy=shared Jan 29 11:28:02.436115 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 29 11:28:02.438344 systemd[1]: coreos-metadata.service: Deactivated successfully. Jan 29 11:28:02.438485 systemd[1]: Finished coreos-metadata.service - VMware metadata agent. Jan 29 11:28:02.439273 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 29 11:28:02.566637 containerd[1543]: time="2025-01-29T11:28:02.566457050Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jan 29 11:28:02.566637 containerd[1543]: time="2025-01-29T11:28:02.566516611Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jan 29 11:28:02.566637 containerd[1543]: time="2025-01-29T11:28:02.566531355Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." 
type=io.containerd.lease.v1 Jan 29 11:28:02.566637 containerd[1543]: time="2025-01-29T11:28:02.566543826Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jan 29 11:28:02.566637 containerd[1543]: time="2025-01-29T11:28:02.566554773Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jan 29 11:28:02.567347 containerd[1543]: time="2025-01-29T11:28:02.566851536Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jan 29 11:28:02.567347 containerd[1543]: time="2025-01-29T11:28:02.567016245Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jan 29 11:28:02.567347 containerd[1543]: time="2025-01-29T11:28:02.567090111Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jan 29 11:28:02.567347 containerd[1543]: time="2025-01-29T11:28:02.567103045Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jan 29 11:28:02.567347 containerd[1543]: time="2025-01-29T11:28:02.567113477Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jan 29 11:28:02.567347 containerd[1543]: time="2025-01-29T11:28:02.567122996Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jan 29 11:28:02.567347 containerd[1543]: time="2025-01-29T11:28:02.567132864Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jan 29 11:28:02.567347 containerd[1543]: time="2025-01-29T11:28:02.567142458Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jan 29 11:28:02.567347 containerd[1543]: time="2025-01-29T11:28:02.567152263Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jan 29 11:28:02.567347 containerd[1543]: time="2025-01-29T11:28:02.567162637Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jan 29 11:28:02.567347 containerd[1543]: time="2025-01-29T11:28:02.567173463Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jan 29 11:28:02.567347 containerd[1543]: time="2025-01-29T11:28:02.567182463Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jan 29 11:28:02.567347 containerd[1543]: time="2025-01-29T11:28:02.567190465Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jan 29 11:28:02.567347 containerd[1543]: time="2025-01-29T11:28:02.567204344Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jan 29 11:28:02.567651 containerd[1543]: time="2025-01-29T11:28:02.567214663Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jan 29 11:28:02.567651 containerd[1543]: time="2025-01-29T11:28:02.567229242Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jan 29 11:28:02.567651 containerd[1543]: time="2025-01-29T11:28:02.567239136Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." 
type=io.containerd.grpc.v1 Jan 29 11:28:02.567651 containerd[1543]: time="2025-01-29T11:28:02.567247785Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jan 29 11:28:02.567651 containerd[1543]: time="2025-01-29T11:28:02.567256762Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jan 29 11:28:02.567651 containerd[1543]: time="2025-01-29T11:28:02.567267501Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jan 29 11:28:02.567651 containerd[1543]: time="2025-01-29T11:28:02.567277632Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jan 29 11:28:02.567651 containerd[1543]: time="2025-01-29T11:28:02.567287081Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jan 29 11:28:02.567651 containerd[1543]: time="2025-01-29T11:28:02.567297527Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jan 29 11:28:02.567651 containerd[1543]: time="2025-01-29T11:28:02.567309010Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jan 29 11:28:02.567651 containerd[1543]: time="2025-01-29T11:28:02.567319861Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jan 29 11:28:02.567651 containerd[1543]: time="2025-01-29T11:28:02.567331821Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jan 29 11:28:02.567651 containerd[1543]: time="2025-01-29T11:28:02.567342293Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jan 29 11:28:02.567651 containerd[1543]: time="2025-01-29T11:28:02.567357688Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jan 29 11:28:02.567651 containerd[1543]: time="2025-01-29T11:28:02.567370645Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jan 29 11:28:02.567918 containerd[1543]: time="2025-01-29T11:28:02.567378782Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jan 29 11:28:02.567918 containerd[1543]: time="2025-01-29T11:28:02.567413290Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jan 29 11:28:02.567918 containerd[1543]: time="2025-01-29T11:28:02.567435159Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jan 29 11:28:02.567918 containerd[1543]: time="2025-01-29T11:28:02.567444308Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jan 29 11:28:02.567918 containerd[1543]: time="2025-01-29T11:28:02.567454445Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jan 29 11:28:02.567918 containerd[1543]: time="2025-01-29T11:28:02.567461978Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jan 29 11:28:02.567918 containerd[1543]: time="2025-01-29T11:28:02.567470597Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." 
type=io.containerd.nri.v1 Jan 29 11:28:02.567918 containerd[1543]: time="2025-01-29T11:28:02.567477934Z" level=info msg="NRI interface is disabled by configuration." Jan 29 11:28:02.567918 containerd[1543]: time="2025-01-29T11:28:02.567486610Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Jan 29 11:28:02.568080 containerd[1543]: time="2025-01-29T11:28:02.567774260Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jan 29 11:28:02.568080 containerd[1543]: time="2025-01-29T11:28:02.567811989Z" level=info msg="Connect containerd service" Jan 29 11:28:02.568080 containerd[1543]: time="2025-01-29T11:28:02.567852790Z" level=info msg="using legacy CRI server" Jan 29 11:28:02.568080 containerd[1543]: time="2025-01-29T11:28:02.567858354Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 29 11:28:02.568080 containerd[1543]: time="2025-01-29T11:28:02.567939883Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jan 29 11:28:02.568358 
containerd[1543]: time="2025-01-29T11:28:02.568320467Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 29 11:28:02.568474 containerd[1543]: time="2025-01-29T11:28:02.568439068Z" level=info msg="Start subscribing containerd event" Jan 29 11:28:02.568682 containerd[1543]: time="2025-01-29T11:28:02.568526973Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 29 11:28:02.568733 containerd[1543]: time="2025-01-29T11:28:02.568708133Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 29 11:28:02.568767 containerd[1543]: time="2025-01-29T11:28:02.568539922Z" level=info msg="Start recovering state" Jan 29 11:28:02.569448 containerd[1543]: time="2025-01-29T11:28:02.568795176Z" level=info msg="Start event monitor" Jan 29 11:28:02.569448 containerd[1543]: time="2025-01-29T11:28:02.568812083Z" level=info msg="Start snapshots syncer" Jan 29 11:28:02.569448 containerd[1543]: time="2025-01-29T11:28:02.568821921Z" level=info msg="Start cni network conf syncer for default" Jan 29 11:28:02.569448 containerd[1543]: time="2025-01-29T11:28:02.568828550Z" level=info msg="Start streaming server" Jan 29 11:28:02.569448 containerd[1543]: time="2025-01-29T11:28:02.568869087Z" level=info msg="containerd successfully booted in 0.159494s" Jan 29 11:28:02.568936 systemd[1]: Started containerd.service - containerd container runtime. Jan 29 11:28:04.577778 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 11:28:04.578176 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 29 11:28:04.580068 systemd[1]: Startup finished in 958ms (kernel) + 4.954s (initrd) + 5.816s (userspace) = 11.729s. Jan 29 11:28:04.590073 (kubelet)[1694]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 29 11:28:04.610374 login[1601]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Jan 29 11:28:04.611668 login[1602]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Jan 29 11:28:04.619014 systemd-logind[1523]: New session 2 of user core. Jan 29 11:28:04.620141 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 29 11:28:04.626782 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 29 11:28:04.630992 systemd-logind[1523]: New session 1 of user core. Jan 29 11:28:04.635029 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 29 11:28:04.638860 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 29 11:28:04.642667 (systemd)[1701]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 29 11:28:04.717392 systemd[1701]: Queued start job for default target default.target. Jan 29 11:28:04.723700 systemd[1701]: Created slice app.slice - User Application Slice. Jan 29 11:28:04.723721 systemd[1701]: Reached target paths.target - Paths. Jan 29 11:28:04.723730 systemd[1701]: Reached target timers.target - Timers. Jan 29 11:28:04.724478 systemd[1701]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 29 11:28:04.731813 systemd[1701]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 29 11:28:04.731848 systemd[1701]: Reached target sockets.target - Sockets. 
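[Not part of the journal itself] containerd above reports that it is serving on /run/containerd/containerd.sock, and its CRI plugin notes that no CNI configuration exists yet in /etc/cni/net.d (expected until a network add-on installs one). A minimal sketch of talking to that socket, assuming the stock github.com/containerd/containerd Go client and the k8s.io namespace that the CRI plugin uses:

package main

import (
	"context"
	"fmt"
	"log"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	// Dial the same socket the daemon logs as "serving...".
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatalf("connect: %v", err)
	}
	defer client.Close()

	// CRI-managed images and containers live in the "k8s.io" namespace.
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	ver, err := client.Version(ctx)
	if err != nil {
		log.Fatalf("version: %v", err)
	}
	fmt.Printf("containerd %s (revision %s)\n", ver.Version, ver.Revision)
}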
Jan 29 11:28:04.731858 systemd[1701]: Reached target basic.target - Basic System. Jan 29 11:28:04.731881 systemd[1701]: Reached target default.target - Main User Target. Jan 29 11:28:04.731898 systemd[1701]: Startup finished in 85ms. Jan 29 11:28:04.731992 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 29 11:28:04.733167 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 29 11:28:04.734264 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 29 11:28:05.317970 kubelet[1694]: E0129 11:28:05.317933 1694 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 29 11:28:05.319081 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 29 11:28:05.319233 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 29 11:28:15.569518 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 29 11:28:15.578812 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 11:28:15.642870 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 11:28:15.645272 (kubelet)[1745]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 29 11:28:15.681812 kubelet[1745]: E0129 11:28:15.681769 1745 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 29 11:28:15.683825 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 29 11:28:15.683900 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 29 11:28:25.798770 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jan 29 11:28:25.806966 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 11:28:25.866878 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 11:28:25.869374 (kubelet)[1761]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 29 11:28:25.904460 kubelet[1761]: E0129 11:28:25.904421 1761 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 29 11:28:25.906162 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 29 11:28:25.906328 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 29 11:28:36.048704 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Jan 29 11:28:36.056925 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 11:28:36.402196 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 29 11:28:36.405670 (kubelet)[1776]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 29 11:28:36.434823 kubelet[1776]: E0129 11:28:36.434784 1776 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 29 11:28:36.435846 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 29 11:28:36.435924 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 29 11:28:41.734671 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 29 11:28:41.744885 systemd[1]: Started sshd@0-139.178.70.104:22-139.178.89.65:50996.service - OpenSSH per-connection server daemon (139.178.89.65:50996). Jan 29 11:28:41.784651 sshd[1783]: Accepted publickey for core from 139.178.89.65 port 50996 ssh2: RSA SHA256:pqivdTXUTfPw/L8mgWhZzkKRG+7xJKY2sjPKJpqIlJ0 Jan 29 11:28:41.785913 sshd-session[1783]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:28:41.789226 systemd-logind[1523]: New session 3 of user core. Jan 29 11:28:41.799737 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 29 11:28:41.852802 systemd[1]: Started sshd@1-139.178.70.104:22-139.178.89.65:50998.service - OpenSSH per-connection server daemon (139.178.89.65:50998). Jan 29 11:28:41.890856 sshd[1788]: Accepted publickey for core from 139.178.89.65 port 50998 ssh2: RSA SHA256:pqivdTXUTfPw/L8mgWhZzkKRG+7xJKY2sjPKJpqIlJ0 Jan 29 11:28:41.892254 sshd-session[1788]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:28:41.894641 systemd-logind[1523]: New session 4 of user core. Jan 29 11:28:41.900707 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 29 11:28:41.949586 sshd[1790]: Connection closed by 139.178.89.65 port 50998 Jan 29 11:28:41.950489 sshd-session[1788]: pam_unix(sshd:session): session closed for user core Jan 29 11:28:41.954290 systemd[1]: sshd@1-139.178.70.104:22-139.178.89.65:50998.service: Deactivated successfully. Jan 29 11:28:41.955325 systemd[1]: session-4.scope: Deactivated successfully. Jan 29 11:28:41.956382 systemd-logind[1523]: Session 4 logged out. Waiting for processes to exit. Jan 29 11:28:41.959839 systemd[1]: Started sshd@2-139.178.70.104:22-139.178.89.65:51000.service - OpenSSH per-connection server daemon (139.178.89.65:51000). Jan 29 11:28:41.961246 systemd-logind[1523]: Removed session 4. Jan 29 11:28:41.994970 sshd[1795]: Accepted publickey for core from 139.178.89.65 port 51000 ssh2: RSA SHA256:pqivdTXUTfPw/L8mgWhZzkKRG+7xJKY2sjPKJpqIlJ0 Jan 29 11:28:41.995805 sshd-session[1795]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:28:41.998645 systemd-logind[1523]: New session 5 of user core. Jan 29 11:28:42.008795 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 29 11:28:42.055324 sshd[1797]: Connection closed by 139.178.89.65 port 51000 Jan 29 11:28:42.055778 sshd-session[1795]: pam_unix(sshd:session): session closed for user core Jan 29 11:28:42.065244 systemd[1]: sshd@2-139.178.70.104:22-139.178.89.65:51000.service: Deactivated successfully. Jan 29 11:28:42.066192 systemd[1]: session-5.scope: Deactivated successfully. 
Jan 29 11:28:42.066645 systemd-logind[1523]: Session 5 logged out. Waiting for processes to exit. Jan 29 11:28:42.067907 systemd[1]: Started sshd@3-139.178.70.104:22-139.178.89.65:51008.service - OpenSSH per-connection server daemon (139.178.89.65:51008). Jan 29 11:28:42.069734 systemd-logind[1523]: Removed session 5. Jan 29 11:28:42.106730 sshd[1802]: Accepted publickey for core from 139.178.89.65 port 51008 ssh2: RSA SHA256:pqivdTXUTfPw/L8mgWhZzkKRG+7xJKY2sjPKJpqIlJ0 Jan 29 11:28:42.107587 sshd-session[1802]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:28:42.110944 systemd-logind[1523]: New session 6 of user core. Jan 29 11:28:42.119713 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 29 11:28:42.168809 sshd[1804]: Connection closed by 139.178.89.65 port 51008 Jan 29 11:28:42.169763 sshd-session[1802]: pam_unix(sshd:session): session closed for user core Jan 29 11:28:42.177359 systemd[1]: sshd@3-139.178.70.104:22-139.178.89.65:51008.service: Deactivated successfully. Jan 29 11:28:42.178763 systemd[1]: session-6.scope: Deactivated successfully. Jan 29 11:28:42.179185 systemd-logind[1523]: Session 6 logged out. Waiting for processes to exit. Jan 29 11:28:42.184864 systemd[1]: Started sshd@4-139.178.70.104:22-139.178.89.65:51022.service - OpenSSH per-connection server daemon (139.178.89.65:51022). Jan 29 11:28:42.185845 systemd-logind[1523]: Removed session 6. Jan 29 11:28:42.220710 sshd[1809]: Accepted publickey for core from 139.178.89.65 port 51022 ssh2: RSA SHA256:pqivdTXUTfPw/L8mgWhZzkKRG+7xJKY2sjPKJpqIlJ0 Jan 29 11:28:42.221507 sshd-session[1809]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:28:42.224394 systemd-logind[1523]: New session 7 of user core. Jan 29 11:28:42.235754 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 29 11:28:42.293347 sudo[1812]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 29 11:28:42.293561 sudo[1812]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 29 11:28:42.303274 sudo[1812]: pam_unix(sudo:session): session closed for user root Jan 29 11:28:42.304905 sshd[1811]: Connection closed by 139.178.89.65 port 51022 Jan 29 11:28:42.304437 sshd-session[1809]: pam_unix(sshd:session): session closed for user core Jan 29 11:28:42.313558 systemd[1]: sshd@4-139.178.70.104:22-139.178.89.65:51022.service: Deactivated successfully. Jan 29 11:28:42.314530 systemd[1]: session-7.scope: Deactivated successfully. Jan 29 11:28:42.315489 systemd-logind[1523]: Session 7 logged out. Waiting for processes to exit. Jan 29 11:28:42.316454 systemd[1]: Started sshd@5-139.178.70.104:22-139.178.89.65:51026.service - OpenSSH per-connection server daemon (139.178.89.65:51026). Jan 29 11:28:42.317002 systemd-logind[1523]: Removed session 7. Jan 29 11:28:42.361535 sshd[1817]: Accepted publickey for core from 139.178.89.65 port 51026 ssh2: RSA SHA256:pqivdTXUTfPw/L8mgWhZzkKRG+7xJKY2sjPKJpqIlJ0 Jan 29 11:28:42.362450 sshd-session[1817]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:28:42.365257 systemd-logind[1523]: New session 8 of user core. Jan 29 11:28:42.372711 systemd[1]: Started session-8.scope - Session 8 of User core. 
Jan 29 11:28:42.421579 sudo[1821]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 29 11:28:42.421803 sudo[1821]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 29 11:28:42.423987 sudo[1821]: pam_unix(sudo:session): session closed for user root Jan 29 11:28:42.427554 sudo[1820]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Jan 29 11:28:42.427759 sudo[1820]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 29 11:28:42.437869 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 29 11:28:42.456871 augenrules[1843]: No rules Jan 29 11:28:42.457538 systemd[1]: audit-rules.service: Deactivated successfully. Jan 29 11:28:42.457703 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 29 11:28:42.458434 sudo[1820]: pam_unix(sudo:session): session closed for user root Jan 29 11:28:42.459795 sshd[1819]: Connection closed by 139.178.89.65 port 51026 Jan 29 11:28:42.459379 sshd-session[1817]: pam_unix(sshd:session): session closed for user core Jan 29 11:28:42.465133 systemd[1]: sshd@5-139.178.70.104:22-139.178.89.65:51026.service: Deactivated successfully. Jan 29 11:28:42.465847 systemd[1]: session-8.scope: Deactivated successfully. Jan 29 11:28:42.466200 systemd-logind[1523]: Session 8 logged out. Waiting for processes to exit. Jan 29 11:28:42.467153 systemd[1]: Started sshd@6-139.178.70.104:22-139.178.89.65:51040.service - OpenSSH per-connection server daemon (139.178.89.65:51040). Jan 29 11:28:42.468776 systemd-logind[1523]: Removed session 8. Jan 29 11:28:42.511717 sshd[1851]: Accepted publickey for core from 139.178.89.65 port 51040 ssh2: RSA SHA256:pqivdTXUTfPw/L8mgWhZzkKRG+7xJKY2sjPKJpqIlJ0 Jan 29 11:28:42.512506 sshd-session[1851]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:28:42.516855 systemd-logind[1523]: New session 9 of user core. Jan 29 11:28:42.522764 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 29 11:28:42.573590 sudo[1854]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 29 11:28:42.573813 sudo[1854]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 29 11:28:42.841950 (dockerd)[1872]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 29 11:28:42.842108 systemd[1]: Starting docker.service - Docker Application Container Engine... Jan 29 11:28:43.088475 dockerd[1872]: time="2025-01-29T11:28:43.088441083Z" level=info msg="Starting up" Jan 29 11:28:43.142242 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport538196943-merged.mount: Deactivated successfully. Jan 29 11:28:43.159038 dockerd[1872]: time="2025-01-29T11:28:43.158933549Z" level=info msg="Loading containers: start." Jan 29 11:28:43.258643 kernel: Initializing XFRM netlink socket Jan 29 11:28:43.308218 systemd-networkd[1242]: docker0: Link UP Jan 29 11:28:43.324345 dockerd[1872]: time="2025-01-29T11:28:43.324323833Z" level=info msg="Loading containers: done." 
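[Not part of the journal itself] The Docker Engine being started here finishes initializing just below (version 27.2.1, overlay2 storage driver) and listens on the default /run/docker.sock. A hedged sketch of probing it with the official Go SDK (github.com/docker/docker/client); FromEnv falls back to that local socket when DOCKER_HOST is unset:

package main

import (
	"context"
	"fmt"
	"log"

	"github.com/docker/docker/client"
)

func main() {
	// Negotiate the API version so the sketch works against older daemons too.
	cli, err := client.NewClientWithOpts(client.FromEnv, client.WithAPIVersionNegotiation())
	if err != nil {
		log.Fatalf("new client: %v", err)
	}
	defer cli.Close()

	info, err := cli.ServerVersion(context.Background())
	if err != nil {
		log.Fatalf("server version: %v", err)
	}
	fmt.Printf("docker %s (API %s)\n", info.Version, info.APIVersion)
}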
Jan 29 11:28:43.335093 dockerd[1872]: time="2025-01-29T11:28:43.334883312Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 29 11:28:43.335093 dockerd[1872]: time="2025-01-29T11:28:43.334930801Z" level=info msg="Docker daemon" commit=8b539b8df24032dabeaaa099cf1d0535ef0286a3 containerd-snapshotter=false storage-driver=overlay2 version=27.2.1 Jan 29 11:28:43.335093 dockerd[1872]: time="2025-01-29T11:28:43.334979598Z" level=info msg="Daemon has completed initialization" Jan 29 11:28:43.348989 dockerd[1872]: time="2025-01-29T11:28:43.348967524Z" level=info msg="API listen on /run/docker.sock" Jan 29 11:28:43.349184 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 29 11:28:43.837160 containerd[1543]: time="2025-01-29T11:28:43.837131073Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.1\"" Jan 29 11:28:44.379073 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1328689908.mount: Deactivated successfully. Jan 29 11:28:45.279482 containerd[1543]: time="2025-01-29T11:28:45.279456255Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:28:45.280544 containerd[1543]: time="2025-01-29T11:28:45.280488270Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.1: active requests=0, bytes read=28674824" Jan 29 11:28:45.281653 containerd[1543]: time="2025-01-29T11:28:45.280896689Z" level=info msg="ImageCreate event name:\"sha256:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:28:45.282413 containerd[1543]: time="2025-01-29T11:28:45.282398260Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:b88ede8e7c3ce354ca0c45c448c48c094781ce692883ee56f181fa569338c0ac\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:28:45.283201 containerd[1543]: time="2025-01-29T11:28:45.283184901Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.1\" with image id \"sha256:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.1\", repo digest \"registry.k8s.io/kube-apiserver@sha256:b88ede8e7c3ce354ca0c45c448c48c094781ce692883ee56f181fa569338c0ac\", size \"28671624\" in 1.446029112s" Jan 29 11:28:45.283229 containerd[1543]: time="2025-01-29T11:28:45.283204907Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.1\" returns image reference \"sha256:95c0bda56fc4dd44cf1876f15c04427feabe5556394553874934ffd2514eeb0a\"" Jan 29 11:28:45.283888 containerd[1543]: time="2025-01-29T11:28:45.283762240Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.1\"" Jan 29 11:28:46.499353 containerd[1543]: time="2025-01-29T11:28:46.499172456Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:28:46.504498 containerd[1543]: time="2025-01-29T11:28:46.504468760Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.1: active requests=0, bytes read=24770711" Jan 29 11:28:46.511455 containerd[1543]: time="2025-01-29T11:28:46.511432858Z" level=info msg="ImageCreate event name:\"sha256:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:28:46.516444 containerd[1543]: time="2025-01-29T11:28:46.516420982Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:7e86b2b274365bbc5f5d1e08f0d32d8bb04b8484ac6a92484c298dc695025954\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:28:46.517273 containerd[1543]: time="2025-01-29T11:28:46.516908032Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.1\" with image id \"sha256:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.1\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:7e86b2b274365bbc5f5d1e08f0d32d8bb04b8484ac6a92484c298dc695025954\", size \"26258470\" in 1.233129682s" Jan 29 11:28:46.517273 containerd[1543]: time="2025-01-29T11:28:46.516928175Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.1\" returns image reference \"sha256:019ee182b58e20da055b173dc0b598fbde321d4bf959e1c2a832908ed7642d35\"" Jan 29 11:28:46.517273 containerd[1543]: time="2025-01-29T11:28:46.517168960Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.1\"" Jan 29 11:28:46.524590 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Jan 29 11:28:46.529767 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 11:28:47.246894 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 38 scanned by (udev-worker) (2129) Jan 29 11:28:47.246983 update_engine[1525]: I20250129 11:28:46.855198 1525 update_attempter.cc:509] Updating boot flags... Jan 29 11:28:47.260563 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 11:28:47.263389 (kubelet)[2140]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 29 11:28:47.291652 kubelet[2140]: E0129 11:28:47.291611 2140 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 29 11:28:47.292882 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 29 11:28:47.292967 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
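[Not part of the journal itself] The kubelet restart loop above keeps exiting because /var/lib/kubelet/config.yaml does not exist yet; on a kubeadm-managed node that file is only written by kubeadm init or kubeadm join, so the failures simply mean the node has not been joined at this point (the loop ends once the file exists, and the kubelet start at 11:28:56 below does get past config loading). A small sketch of the same pre-flight check; the path comes from the error above, and the apiVersion/kind header shown is the standard KubeletConfiguration one, included only for orientation:

package main

import (
	"errors"
	"fmt"
	"io/fs"
	"os"
)

const kubeletConfigPath = "/var/lib/kubelet/config.yaml"

// expectedHeader is how a kubeadm-generated config file begins; shown for
// orientation, not as a file to hand-write.
const expectedHeader = `apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration`

func main() {
	_, err := os.Stat(kubeletConfigPath)
	switch {
	case err == nil:
		fmt.Printf("%s exists; the kubelet should get past config loading\n", kubeletConfigPath)
	case errors.Is(err, fs.ErrNotExist):
		fmt.Printf("%s is missing, matching the restart loop in the log;\n", kubeletConfigPath)
		fmt.Printf("a kubeadm-managed node gets a file starting with:\n%s\n", expectedHeader)
	default:
		fmt.Printf("could not stat %s: %v\n", kubeletConfigPath, err)
	}
}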
Jan 29 11:28:48.501603 containerd[1543]: time="2025-01-29T11:28:48.501559478Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:28:48.508472 containerd[1543]: time="2025-01-29T11:28:48.508432516Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.1: active requests=0, bytes read=19169759" Jan 29 11:28:48.515252 containerd[1543]: time="2025-01-29T11:28:48.515222926Z" level=info msg="ImageCreate event name:\"sha256:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:28:48.520039 containerd[1543]: time="2025-01-29T11:28:48.520009437Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:b8fcbcd2afe44acf368b24b61813686f64be4d7fff224d305d78a05bac38f72e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:28:48.520772 containerd[1543]: time="2025-01-29T11:28:48.520699336Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.1\" with image id \"sha256:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.1\", repo digest \"registry.k8s.io/kube-scheduler@sha256:b8fcbcd2afe44acf368b24b61813686f64be4d7fff224d305d78a05bac38f72e\", size \"20657536\" in 2.00351544s" Jan 29 11:28:48.520772 containerd[1543]: time="2025-01-29T11:28:48.520719082Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.1\" returns image reference \"sha256:2b0d6572d062c0f590b08c3113e5d9a61e381b3da7845a0289bdbf1faa1b23d1\"" Jan 29 11:28:48.521170 containerd[1543]: time="2025-01-29T11:28:48.521103504Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.1\"" Jan 29 11:28:49.583973 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1264973651.mount: Deactivated successfully. 
Jan 29 11:28:50.253094 containerd[1543]: time="2025-01-29T11:28:50.253065913Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:28:50.254276 containerd[1543]: time="2025-01-29T11:28:50.254202846Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.1: active requests=0, bytes read=30909466" Jan 29 11:28:50.255606 containerd[1543]: time="2025-01-29T11:28:50.254699461Z" level=info msg="ImageCreate event name:\"sha256:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:28:50.255995 containerd[1543]: time="2025-01-29T11:28:50.255976923Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:0244651801747edf2368222f93a7d17cba6e668a890db72532d6b67a7e06dca5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:28:50.256627 containerd[1543]: time="2025-01-29T11:28:50.256600984Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.1\" with image id \"sha256:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a\", repo tag \"registry.k8s.io/kube-proxy:v1.32.1\", repo digest \"registry.k8s.io/kube-proxy@sha256:0244651801747edf2368222f93a7d17cba6e668a890db72532d6b67a7e06dca5\", size \"30908485\" in 1.735372816s" Jan 29 11:28:50.256707 containerd[1543]: time="2025-01-29T11:28:50.256695252Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.1\" returns image reference \"sha256:e29f9c7391fd92d96bc72026fc755b0f9589536e36ecd7102161f1ded087897a\"" Jan 29 11:28:50.257080 containerd[1543]: time="2025-01-29T11:28:50.257062055Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Jan 29 11:28:50.980034 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2256231941.mount: Deactivated successfully. 
Jan 29 11:28:51.701708 containerd[1543]: time="2025-01-29T11:28:51.701670649Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:28:51.703713 containerd[1543]: time="2025-01-29T11:28:51.703663536Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241" Jan 29 11:28:51.707793 containerd[1543]: time="2025-01-29T11:28:51.707751157Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:28:51.719071 containerd[1543]: time="2025-01-29T11:28:51.719014403Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:28:51.719596 containerd[1543]: time="2025-01-29T11:28:51.719573410Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.462491109s" Jan 29 11:28:51.719664 containerd[1543]: time="2025-01-29T11:28:51.719596545Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Jan 29 11:28:51.719944 containerd[1543]: time="2025-01-29T11:28:51.719928513Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jan 29 11:28:52.279741 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount106478937.mount: Deactivated successfully. 
Jan 29 11:28:52.341823 containerd[1543]: time="2025-01-29T11:28:52.341784565Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:28:52.349080 containerd[1543]: time="2025-01-29T11:28:52.348953332Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Jan 29 11:28:52.351745 containerd[1543]: time="2025-01-29T11:28:52.351698173Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:28:52.353302 containerd[1543]: time="2025-01-29T11:28:52.353270117Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:28:52.354046 containerd[1543]: time="2025-01-29T11:28:52.353893652Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 633.424478ms" Jan 29 11:28:52.354046 containerd[1543]: time="2025-01-29T11:28:52.353936274Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Jan 29 11:28:52.354510 containerd[1543]: time="2025-01-29T11:28:52.354465789Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Jan 29 11:28:53.128091 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3883500478.mount: Deactivated successfully. Jan 29 11:28:54.549041 containerd[1543]: time="2025-01-29T11:28:54.548998118Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:28:54.556035 containerd[1543]: time="2025-01-29T11:28:54.555990505Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=57551320" Jan 29 11:28:54.566371 containerd[1543]: time="2025-01-29T11:28:54.566339973Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:28:54.577157 containerd[1543]: time="2025-01-29T11:28:54.577127405Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:28:54.577918 containerd[1543]: time="2025-01-29T11:28:54.577789234Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 2.223280073s" Jan 29 11:28:54.577918 containerd[1543]: time="2025-01-29T11:28:54.577817485Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\"" Jan 29 11:28:55.771223 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. 
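[Not part of the journal itself] The PullImage/ImageCreate entries above are the control-plane images being pre-pulled through containerd's CRI plugin. Purely as an illustration, pulling one of the same references by hand with the containerd Go client might look like this (assumptions: local socket, k8s.io namespace where CRI-managed images live):

package main

import (
	"context"
	"fmt"
	"log"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatalf("connect: %v", err)
	}
	defer client.Close()

	// The CRI plugin stores kubelet-pulled images under the "k8s.io" namespace.
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	// Same reference the log shows being pulled as the sandbox image.
	img, err := client.Pull(ctx, "registry.k8s.io/pause:3.10", containerd.WithPullUnpack)
	if err != nil {
		log.Fatalf("pull: %v", err)
	}
	fmt.Printf("pulled %s (%s)\n", img.Name(), img.Target().Digest)
}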
Jan 29 11:28:55.780886 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 11:28:55.795352 systemd[1]: Reloading requested from client PID 2297 ('systemctl') (unit session-9.scope)... Jan 29 11:28:55.795360 systemd[1]: Reloading... Jan 29 11:28:55.852690 zram_generator::config[2334]: No configuration found. Jan 29 11:28:55.917806 systemd[1]: /etc/systemd/system/coreos-metadata.service:11: Ignoring unknown escape sequences: "echo "COREOS_CUSTOM_PRIVATE_IPV4=$(ip addr show ens192 | grep "inet 10." | grep -Po "inet \K[\d.]+") Jan 29 11:28:55.932895 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 29 11:28:55.976066 systemd[1]: Reloading finished in 180 ms. Jan 29 11:28:56.006421 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 29 11:28:56.006464 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 29 11:28:56.006625 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 11:28:56.011776 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 11:28:56.341350 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 11:28:56.344355 (kubelet)[2402]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 29 11:28:56.379464 kubelet[2402]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 29 11:28:56.379464 kubelet[2402]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 29 11:28:56.379464 kubelet[2402]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jan 29 11:28:56.379723 kubelet[2402]: I0129 11:28:56.379500 2402 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 29 11:28:56.648708 kubelet[2402]: I0129 11:28:56.648003 2402 server.go:520] "Kubelet version" kubeletVersion="v1.32.0" Jan 29 11:28:56.648708 kubelet[2402]: I0129 11:28:56.648019 2402 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 29 11:28:56.648708 kubelet[2402]: I0129 11:28:56.648170 2402 server.go:954] "Client rotation is on, will bootstrap in background" Jan 29 11:28:56.674739 kubelet[2402]: E0129 11:28:56.674716 2402 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://139.178.70.104:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 139.178.70.104:6443: connect: connection refused" logger="UnhandledError" Jan 29 11:28:56.675820 kubelet[2402]: I0129 11:28:56.675810 2402 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 29 11:28:56.687566 kubelet[2402]: E0129 11:28:56.687536 2402 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jan 29 11:28:56.687566 kubelet[2402]: I0129 11:28:56.687563 2402 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jan 29 11:28:56.692910 kubelet[2402]: I0129 11:28:56.692858 2402 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 29 11:28:56.696281 kubelet[2402]: I0129 11:28:56.696258 2402 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 29 11:28:56.696389 kubelet[2402]: I0129 11:28:56.696281 2402 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 29 11:28:56.697980 kubelet[2402]: I0129 11:28:56.697967 2402 topology_manager.go:138] "Creating topology manager with none policy" Jan 29 11:28:56.697980 kubelet[2402]: I0129 11:28:56.697979 2402 container_manager_linux.go:304] "Creating device plugin manager" Jan 29 11:28:56.698066 kubelet[2402]: I0129 11:28:56.698055 2402 state_mem.go:36] "Initialized new in-memory state store" Jan 29 11:28:56.700968 kubelet[2402]: I0129 11:28:56.700957 2402 kubelet.go:446] "Attempting to sync node with API server" Jan 29 11:28:56.700997 kubelet[2402]: I0129 11:28:56.700970 2402 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 29 11:28:56.700997 kubelet[2402]: I0129 11:28:56.700987 2402 kubelet.go:352] "Adding apiserver pod source" Jan 29 11:28:56.700997 kubelet[2402]: I0129 11:28:56.700994 2402 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 29 11:28:56.706467 kubelet[2402]: W0129 11:28:56.706380 2402 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://139.178.70.104:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 139.178.70.104:6443: connect: connection refused Jan 29 11:28:56.706467 kubelet[2402]: E0129 11:28:56.706412 2402 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://139.178.70.104:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 139.178.70.104:6443: connect: connection refused" logger="UnhandledError" Jan 29 11:28:56.706597 kubelet[2402]: W0129 
11:28:56.706573 2402 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://139.178.70.104:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 139.178.70.104:6443: connect: connection refused Jan 29 11:28:56.706647 kubelet[2402]: E0129 11:28:56.706600 2402 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://139.178.70.104:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 139.178.70.104:6443: connect: connection refused" logger="UnhandledError" Jan 29 11:28:56.707778 kubelet[2402]: I0129 11:28:56.707690 2402 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Jan 29 11:28:56.709970 kubelet[2402]: I0129 11:28:56.709923 2402 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 29 11:28:56.711801 kubelet[2402]: W0129 11:28:56.711793 2402 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jan 29 11:28:56.713078 kubelet[2402]: I0129 11:28:56.712973 2402 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 29 11:28:56.713078 kubelet[2402]: I0129 11:28:56.712995 2402 server.go:1287] "Started kubelet" Jan 29 11:28:56.715408 kubelet[2402]: I0129 11:28:56.715017 2402 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jan 29 11:28:56.715408 kubelet[2402]: I0129 11:28:56.715372 2402 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 29 11:28:56.715562 kubelet[2402]: I0129 11:28:56.715549 2402 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 29 11:28:56.716744 kubelet[2402]: I0129 11:28:56.716610 2402 server.go:490] "Adding debug handlers to kubelet server" Jan 29 11:28:56.717804 kubelet[2402]: I0129 11:28:56.717791 2402 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 29 11:28:56.723488 kubelet[2402]: I0129 11:28:56.722973 2402 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 29 11:28:56.723488 kubelet[2402]: E0129 11:28:56.719556 2402 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://139.178.70.104:6443/api/v1/namespaces/default/events\": dial tcp 139.178.70.104:6443: connect: connection refused" event="&Event{ObjectMeta:{localhost.181f26571912ea92 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:localhost,UID:localhost,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:localhost,},FirstTimestamp:2025-01-29 11:28:56.712981138 +0000 UTC m=+0.366465032,LastTimestamp:2025-01-29 11:28:56.712981138 +0000 UTC m=+0.366465032,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:localhost,}" Jan 29 11:28:56.724460 kubelet[2402]: E0129 11:28:56.723645 2402 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"localhost\" not found" Jan 29 11:28:56.724460 kubelet[2402]: I0129 11:28:56.723668 2402 volume_manager.go:297] "Starting Kubelet 
Volume Manager" Jan 29 11:28:56.724460 kubelet[2402]: I0129 11:28:56.723762 2402 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Jan 29 11:28:56.724460 kubelet[2402]: I0129 11:28:56.723783 2402 reconciler.go:26] "Reconciler: start to sync state" Jan 29 11:28:56.724460 kubelet[2402]: W0129 11:28:56.723958 2402 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://139.178.70.104:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 139.178.70.104:6443: connect: connection refused Jan 29 11:28:56.724460 kubelet[2402]: E0129 11:28:56.723981 2402 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://139.178.70.104:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 139.178.70.104:6443: connect: connection refused" logger="UnhandledError" Jan 29 11:28:56.724460 kubelet[2402]: E0129 11:28:56.724107 2402 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.70.104:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 139.178.70.104:6443: connect: connection refused" interval="200ms" Jan 29 11:28:56.732594 kubelet[2402]: I0129 11:28:56.732579 2402 factory.go:221] Registration of the containerd container factory successfully Jan 29 11:28:56.732671 kubelet[2402]: I0129 11:28:56.732664 2402 factory.go:221] Registration of the systemd container factory successfully Jan 29 11:28:56.732769 kubelet[2402]: I0129 11:28:56.732760 2402 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 29 11:28:56.733988 kubelet[2402]: I0129 11:28:56.733971 2402 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 29 11:28:56.735650 kubelet[2402]: I0129 11:28:56.735641 2402 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 29 11:28:56.735702 kubelet[2402]: I0129 11:28:56.735697 2402 status_manager.go:227] "Starting to sync pod status with apiserver" Jan 29 11:28:56.735761 kubelet[2402]: I0129 11:28:56.735755 2402 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
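[Not part of the journal itself] Every reflector, lease, and event call above fails with "connection refused" against https://139.178.70.104:6443 because the kube-apiserver static pod has not started yet; the kubelet is expected to keep retrying until it does. A trivial sketch of the same reachability check, with the endpoint copied from the errors above:

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Endpoint taken from the kubelet errors; it refuses connections
	// until the kube-apiserver static pod is up.
	const apiserver = "139.178.70.104:6443"

	conn, err := net.DialTimeout("tcp", apiserver, 2*time.Second)
	if err != nil {
		fmt.Printf("%s not reachable yet: %v\n", apiserver, err)
		return
	}
	conn.Close()
	fmt.Printf("%s is accepting TCP connections\n", apiserver)
}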
Jan 29 11:28:56.735792 kubelet[2402]: I0129 11:28:56.735788 2402 kubelet.go:2388] "Starting kubelet main sync loop" Jan 29 11:28:56.735855 kubelet[2402]: E0129 11:28:56.735845 2402 kubelet.go:2412] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 29 11:28:56.743455 kubelet[2402]: W0129 11:28:56.743430 2402 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://139.178.70.104:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 139.178.70.104:6443: connect: connection refused Jan 29 11:28:56.743541 kubelet[2402]: E0129 11:28:56.743531 2402 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://139.178.70.104:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 139.178.70.104:6443: connect: connection refused" logger="UnhandledError" Jan 29 11:28:56.744417 kubelet[2402]: I0129 11:28:56.744406 2402 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 29 11:28:56.744474 kubelet[2402]: I0129 11:28:56.744467 2402 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 29 11:28:56.744511 kubelet[2402]: I0129 11:28:56.744506 2402 state_mem.go:36] "Initialized new in-memory state store" Jan 29 11:28:56.748414 kubelet[2402]: I0129 11:28:56.748407 2402 policy_none.go:49] "None policy: Start" Jan 29 11:28:56.748459 kubelet[2402]: I0129 11:28:56.748454 2402 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 29 11:28:56.748493 kubelet[2402]: I0129 11:28:56.748489 2402 state_mem.go:35] "Initializing new in-memory state store" Jan 29 11:28:56.754987 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 29 11:28:56.766757 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 29 11:28:56.769682 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jan 29 11:28:56.777373 kubelet[2402]: I0129 11:28:56.777245 2402 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 29 11:28:56.777432 kubelet[2402]: I0129 11:28:56.777394 2402 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 29 11:28:56.777432 kubelet[2402]: I0129 11:28:56.777409 2402 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 29 11:28:56.777631 kubelet[2402]: I0129 11:28:56.777604 2402 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 29 11:28:56.778577 kubelet[2402]: E0129 11:28:56.778565 2402 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jan 29 11:28:56.778744 kubelet[2402]: E0129 11:28:56.778729 2402 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"localhost\" not found" Jan 29 11:28:56.842759 systemd[1]: Created slice kubepods-burstable-pode9ba8773e418c2bbf5a955ad3b2b2e16.slice - libcontainer container kubepods-burstable-pode9ba8773e418c2bbf5a955ad3b2b2e16.slice. 
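[Not part of the journal itself] The kubepods-burstable-pod* slices created here back the control-plane static pods whose manifests the kubelet reads from the static pod path it logged earlier, /etc/kubernetes/manifests (typically populated by kubeadm). A sketch that simply lists that directory:

package main

import (
	"fmt"
	"log"
	"os"
	"path/filepath"
)

func main() {
	// The kubelet watches this directory for static pod manifests.
	const manifestDir = "/etc/kubernetes/manifests"

	entries, err := os.ReadDir(manifestDir)
	if err != nil {
		log.Fatalf("read %s: %v", manifestDir, err)
	}
	for _, e := range entries {
		fmt.Println(filepath.Join(manifestDir, e.Name()))
	}
}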
Jan 29 11:28:56.852249 kubelet[2402]: E0129 11:28:56.852221 2402 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 29 11:28:56.853850 systemd[1]: Created slice kubepods-burstable-pod713793a66d380f8ce14333da174c287f.slice - libcontainer container kubepods-burstable-pod713793a66d380f8ce14333da174c287f.slice. Jan 29 11:28:56.861588 kubelet[2402]: E0129 11:28:56.861484 2402 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 29 11:28:56.864107 systemd[1]: Created slice kubepods-burstable-podeb981ecac1bbdbbdd50082f31745642c.slice - libcontainer container kubepods-burstable-podeb981ecac1bbdbbdd50082f31745642c.slice. Jan 29 11:28:56.865500 kubelet[2402]: E0129 11:28:56.865418 2402 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 29 11:28:56.878338 kubelet[2402]: I0129 11:28:56.878133 2402 kubelet_node_status.go:76] "Attempting to register node" node="localhost" Jan 29 11:28:56.878397 kubelet[2402]: E0129 11:28:56.878381 2402 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://139.178.70.104:6443/api/v1/nodes\": dial tcp 139.178.70.104:6443: connect: connection refused" node="localhost" Jan 29 11:28:56.924858 kubelet[2402]: E0129 11:28:56.924776 2402 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.70.104:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 139.178.70.104:6443: connect: connection refused" interval="400ms" Jan 29 11:28:57.025585 kubelet[2402]: I0129 11:28:57.025536 2402 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/eb981ecac1bbdbbdd50082f31745642c-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"eb981ecac1bbdbbdd50082f31745642c\") " pod="kube-system/kube-scheduler-localhost" Jan 29 11:28:57.025585 kubelet[2402]: I0129 11:28:57.025573 2402 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/713793a66d380f8ce14333da174c287f-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"713793a66d380f8ce14333da174c287f\") " pod="kube-system/kube-apiserver-localhost" Jan 29 11:28:57.025717 kubelet[2402]: I0129 11:28:57.025600 2402 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/713793a66d380f8ce14333da174c287f-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"713793a66d380f8ce14333da174c287f\") " pod="kube-system/kube-apiserver-localhost" Jan 29 11:28:57.025717 kubelet[2402]: I0129 11:28:57.025630 2402 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/e9ba8773e418c2bbf5a955ad3b2b2e16-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"e9ba8773e418c2bbf5a955ad3b2b2e16\") " pod="kube-system/kube-controller-manager-localhost" Jan 29 11:28:57.025717 kubelet[2402]: I0129 11:28:57.025642 2402 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: 
\"kubernetes.io/host-path/e9ba8773e418c2bbf5a955ad3b2b2e16-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"e9ba8773e418c2bbf5a955ad3b2b2e16\") " pod="kube-system/kube-controller-manager-localhost" Jan 29 11:28:57.025717 kubelet[2402]: I0129 11:28:57.025652 2402 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e9ba8773e418c2bbf5a955ad3b2b2e16-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"e9ba8773e418c2bbf5a955ad3b2b2e16\") " pod="kube-system/kube-controller-manager-localhost" Jan 29 11:28:57.025717 kubelet[2402]: I0129 11:28:57.025660 2402 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/713793a66d380f8ce14333da174c287f-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"713793a66d380f8ce14333da174c287f\") " pod="kube-system/kube-apiserver-localhost" Jan 29 11:28:57.025810 kubelet[2402]: I0129 11:28:57.025669 2402 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e9ba8773e418c2bbf5a955ad3b2b2e16-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"e9ba8773e418c2bbf5a955ad3b2b2e16\") " pod="kube-system/kube-controller-manager-localhost" Jan 29 11:28:57.025810 kubelet[2402]: I0129 11:28:57.025676 2402 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/e9ba8773e418c2bbf5a955ad3b2b2e16-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"e9ba8773e418c2bbf5a955ad3b2b2e16\") " pod="kube-system/kube-controller-manager-localhost" Jan 29 11:28:57.079500 kubelet[2402]: I0129 11:28:57.079447 2402 kubelet_node_status.go:76] "Attempting to register node" node="localhost" Jan 29 11:28:57.079677 kubelet[2402]: E0129 11:28:57.079658 2402 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://139.178.70.104:6443/api/v1/nodes\": dial tcp 139.178.70.104:6443: connect: connection refused" node="localhost" Jan 29 11:28:57.153634 containerd[1543]: time="2025-01-29T11:28:57.153591712Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:e9ba8773e418c2bbf5a955ad3b2b2e16,Namespace:kube-system,Attempt:0,}" Jan 29 11:28:57.164575 containerd[1543]: time="2025-01-29T11:28:57.164505580Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:713793a66d380f8ce14333da174c287f,Namespace:kube-system,Attempt:0,}" Jan 29 11:28:57.166501 containerd[1543]: time="2025-01-29T11:28:57.166421727Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:eb981ecac1bbdbbdd50082f31745642c,Namespace:kube-system,Attempt:0,}" Jan 29 11:28:57.325882 kubelet[2402]: E0129 11:28:57.325851 2402 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.70.104:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 139.178.70.104:6443: connect: connection refused" interval="800ms" Jan 29 11:28:57.481493 kubelet[2402]: I0129 11:28:57.481461 2402 kubelet_node_status.go:76] "Attempting to register node" node="localhost" Jan 29 11:28:57.481793 kubelet[2402]: E0129 11:28:57.481695 2402 kubelet_node_status.go:108] "Unable to register node with API server" 
err="Post \"https://139.178.70.104:6443/api/v1/nodes\": dial tcp 139.178.70.104:6443: connect: connection refused" node="localhost" Jan 29 11:28:57.649925 kubelet[2402]: W0129 11:28:57.649867 2402 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://139.178.70.104:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 139.178.70.104:6443: connect: connection refused Jan 29 11:28:57.649925 kubelet[2402]: E0129 11:28:57.649897 2402 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://139.178.70.104:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 139.178.70.104:6443: connect: connection refused" logger="UnhandledError" Jan 29 11:28:57.727151 kubelet[2402]: W0129 11:28:57.727112 2402 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://139.178.70.104:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0": dial tcp 139.178.70.104:6443: connect: connection refused Jan 29 11:28:57.727151 kubelet[2402]: E0129 11:28:57.727153 2402 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://139.178.70.104:6443/api/v1/nodes?fieldSelector=metadata.name%3Dlocalhost&limit=500&resourceVersion=0\": dial tcp 139.178.70.104:6443: connect: connection refused" logger="UnhandledError" Jan 29 11:28:57.742664 kubelet[2402]: W0129 11:28:57.742633 2402 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://139.178.70.104:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 139.178.70.104:6443: connect: connection refused Jan 29 11:28:57.742713 kubelet[2402]: E0129 11:28:57.742667 2402 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://139.178.70.104:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 139.178.70.104:6443: connect: connection refused" logger="UnhandledError" Jan 29 11:28:57.798973 kubelet[2402]: W0129 11:28:57.798900 2402 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://139.178.70.104:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 139.178.70.104:6443: connect: connection refused Jan 29 11:28:57.798973 kubelet[2402]: E0129 11:28:57.798944 2402 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://139.178.70.104:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 139.178.70.104:6443: connect: connection refused" logger="UnhandledError" Jan 29 11:28:57.806320 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount195772361.mount: Deactivated successfully. 
Jan 29 11:28:57.810402 containerd[1543]: time="2025-01-29T11:28:57.810320836Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 11:28:57.811183 containerd[1543]: time="2025-01-29T11:28:57.810865397Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 11:28:57.811626 containerd[1543]: time="2025-01-29T11:28:57.811580009Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 11:28:57.812270 containerd[1543]: time="2025-01-29T11:28:57.812189197Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Jan 29 11:28:57.812366 containerd[1543]: time="2025-01-29T11:28:57.812349239Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 29 11:28:57.812391 containerd[1543]: time="2025-01-29T11:28:57.812369285Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 29 11:28:57.813012 containerd[1543]: time="2025-01-29T11:28:57.812554319Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 11:28:57.814920 containerd[1543]: time="2025-01-29T11:28:57.814902054Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 11:28:57.815584 containerd[1543]: time="2025-01-29T11:28:57.815437551Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 648.972442ms" Jan 29 11:28:57.816361 containerd[1543]: time="2025-01-29T11:28:57.816342532Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 662.662427ms" Jan 29 11:28:57.818701 containerd[1543]: time="2025-01-29T11:28:57.818629669Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 654.067814ms" Jan 29 11:28:57.912988 containerd[1543]: time="2025-01-29T11:28:57.912324600Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 11:28:57.912988 containerd[1543]: time="2025-01-29T11:28:57.912558622Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 11:28:57.912988 containerd[1543]: time="2025-01-29T11:28:57.912571103Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:28:57.913331 containerd[1543]: time="2025-01-29T11:28:57.910835257Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 11:28:57.913331 containerd[1543]: time="2025-01-29T11:28:57.913160314Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 11:28:57.913331 containerd[1543]: time="2025-01-29T11:28:57.913173118Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:28:57.913331 containerd[1543]: time="2025-01-29T11:28:57.913238414Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:28:57.914182 containerd[1543]: time="2025-01-29T11:28:57.913852740Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:28:57.923164 containerd[1543]: time="2025-01-29T11:28:57.920542199Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 11:28:57.923164 containerd[1543]: time="2025-01-29T11:28:57.923077739Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 11:28:57.923164 containerd[1543]: time="2025-01-29T11:28:57.923096700Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:28:57.923924 containerd[1543]: time="2025-01-29T11:28:57.923880473Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:28:57.931763 systemd[1]: Started cri-containerd-b0bcd01e4dc036cac989a9016510f015a0cce1eb21bf18507bd421596935d2c0.scope - libcontainer container b0bcd01e4dc036cac989a9016510f015a0cce1eb21bf18507bd421596935d2c0. Jan 29 11:28:57.950738 systemd[1]: Started cri-containerd-4d171f403f0f7ebbdc0993292aad52a295a02b5f4ef68ccd78267bfb3ac9fdcc.scope - libcontainer container 4d171f403f0f7ebbdc0993292aad52a295a02b5f4ef68ccd78267bfb3ac9fdcc. Jan 29 11:28:57.951706 systemd[1]: Started cri-containerd-8a83265802a58025c8d60cb47aa9bbc2b89b45afd04cd68ee124e955924451ea.scope - libcontainer container 8a83265802a58025c8d60cb47aa9bbc2b89b45afd04cd68ee124e955924451ea. 
Jan 29 11:28:57.983342 containerd[1543]: time="2025-01-29T11:28:57.983318236Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-localhost,Uid:713793a66d380f8ce14333da174c287f,Namespace:kube-system,Attempt:0,} returns sandbox id \"b0bcd01e4dc036cac989a9016510f015a0cce1eb21bf18507bd421596935d2c0\"" Jan 29 11:28:57.987812 containerd[1543]: time="2025-01-29T11:28:57.987776529Z" level=info msg="CreateContainer within sandbox \"b0bcd01e4dc036cac989a9016510f015a0cce1eb21bf18507bd421596935d2c0\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 29 11:28:58.002590 containerd[1543]: time="2025-01-29T11:28:58.002421831Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-localhost,Uid:eb981ecac1bbdbbdd50082f31745642c,Namespace:kube-system,Attempt:0,} returns sandbox id \"4d171f403f0f7ebbdc0993292aad52a295a02b5f4ef68ccd78267bfb3ac9fdcc\"" Jan 29 11:28:58.006063 containerd[1543]: time="2025-01-29T11:28:58.005696975Z" level=info msg="CreateContainer within sandbox \"4d171f403f0f7ebbdc0993292aad52a295a02b5f4ef68ccd78267bfb3ac9fdcc\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 29 11:28:58.047652 containerd[1543]: time="2025-01-29T11:28:58.039986395Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-localhost,Uid:e9ba8773e418c2bbf5a955ad3b2b2e16,Namespace:kube-system,Attempt:0,} returns sandbox id \"8a83265802a58025c8d60cb47aa9bbc2b89b45afd04cd68ee124e955924451ea\"" Jan 29 11:28:58.060050 containerd[1543]: time="2025-01-29T11:28:58.048689935Z" level=info msg="CreateContainer within sandbox \"8a83265802a58025c8d60cb47aa9bbc2b89b45afd04cd68ee124e955924451ea\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 29 11:28:58.126334 kubelet[2402]: E0129 11:28:58.126305 2402 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://139.178.70.104:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/localhost?timeout=10s\": dial tcp 139.178.70.104:6443: connect: connection refused" interval="1.6s" Jan 29 11:28:58.291865 kubelet[2402]: I0129 11:28:58.283578 2402 kubelet_node_status.go:76] "Attempting to register node" node="localhost" Jan 29 11:28:58.291865 kubelet[2402]: E0129 11:28:58.283862 2402 kubelet_node_status.go:108] "Unable to register node with API server" err="Post \"https://139.178.70.104:6443/api/v1/nodes\": dial tcp 139.178.70.104:6443: connect: connection refused" node="localhost" Jan 29 11:28:58.787008 containerd[1543]: time="2025-01-29T11:28:58.786889489Z" level=info msg="CreateContainer within sandbox \"b0bcd01e4dc036cac989a9016510f015a0cce1eb21bf18507bd421596935d2c0\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"5b79d4de96e929793a7b8b3934f7988b917bf93b8d3c9bcde3da0dc8875ae987\"" Jan 29 11:28:58.787606 containerd[1543]: time="2025-01-29T11:28:58.787584477Z" level=info msg="StartContainer for \"5b79d4de96e929793a7b8b3934f7988b917bf93b8d3c9bcde3da0dc8875ae987\"" Jan 29 11:28:58.803014 kubelet[2402]: E0129 11:28:58.802831 2402 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://139.178.70.104:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 139.178.70.104:6443: connect: connection refused" logger="UnhandledError" Jan 29 11:28:58.806142 systemd[1]: 
var-lib-containerd-tmpmounts-containerd\x2dmount1829942681.mount: Deactivated successfully. Jan 29 11:28:58.812542 containerd[1543]: time="2025-01-29T11:28:58.812440726Z" level=info msg="CreateContainer within sandbox \"8a83265802a58025c8d60cb47aa9bbc2b89b45afd04cd68ee124e955924451ea\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"3e335de6dbeebd2f59b69e495481290f32584c9494fe61a9fff001fc00e2701e\"" Jan 29 11:28:58.813668 containerd[1543]: time="2025-01-29T11:28:58.812845173Z" level=info msg="StartContainer for \"3e335de6dbeebd2f59b69e495481290f32584c9494fe61a9fff001fc00e2701e\"" Jan 29 11:28:58.822217 systemd[1]: Started cri-containerd-5b79d4de96e929793a7b8b3934f7988b917bf93b8d3c9bcde3da0dc8875ae987.scope - libcontainer container 5b79d4de96e929793a7b8b3934f7988b917bf93b8d3c9bcde3da0dc8875ae987. Jan 29 11:28:58.834498 systemd[1]: Started cri-containerd-3e335de6dbeebd2f59b69e495481290f32584c9494fe61a9fff001fc00e2701e.scope - libcontainer container 3e335de6dbeebd2f59b69e495481290f32584c9494fe61a9fff001fc00e2701e. Jan 29 11:28:58.836341 containerd[1543]: time="2025-01-29T11:28:58.836253468Z" level=info msg="CreateContainer within sandbox \"4d171f403f0f7ebbdc0993292aad52a295a02b5f4ef68ccd78267bfb3ac9fdcc\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"3406f8995c23f2387155903fb2d5eaa508b4b1650d8f8ca38ea7dcddfdeaca72\"" Jan 29 11:28:58.837843 containerd[1543]: time="2025-01-29T11:28:58.837117622Z" level=info msg="StartContainer for \"3406f8995c23f2387155903fb2d5eaa508b4b1650d8f8ca38ea7dcddfdeaca72\"" Jan 29 11:28:58.858716 systemd[1]: Started cri-containerd-3406f8995c23f2387155903fb2d5eaa508b4b1650d8f8ca38ea7dcddfdeaca72.scope - libcontainer container 3406f8995c23f2387155903fb2d5eaa508b4b1650d8f8ca38ea7dcddfdeaca72. 
Jan 29 11:28:58.873900 containerd[1543]: time="2025-01-29T11:28:58.873872572Z" level=info msg="StartContainer for \"5b79d4de96e929793a7b8b3934f7988b917bf93b8d3c9bcde3da0dc8875ae987\" returns successfully" Jan 29 11:28:58.889799 containerd[1543]: time="2025-01-29T11:28:58.889771108Z" level=info msg="StartContainer for \"3e335de6dbeebd2f59b69e495481290f32584c9494fe61a9fff001fc00e2701e\" returns successfully" Jan 29 11:28:58.900545 containerd[1543]: time="2025-01-29T11:28:58.900489980Z" level=info msg="StartContainer for \"3406f8995c23f2387155903fb2d5eaa508b4b1650d8f8ca38ea7dcddfdeaca72\" returns successfully" Jan 29 11:28:59.752372 kubelet[2402]: E0129 11:28:59.752350 2402 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 29 11:28:59.753550 kubelet[2402]: E0129 11:28:59.753537 2402 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 29 11:28:59.758729 kubelet[2402]: E0129 11:28:59.758579 2402 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 29 11:28:59.885140 kubelet[2402]: I0129 11:28:59.884770 2402 kubelet_node_status.go:76] "Attempting to register node" node="localhost" Jan 29 11:29:00.759768 kubelet[2402]: E0129 11:29:00.759375 2402 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 29 11:29:00.759768 kubelet[2402]: E0129 11:29:00.759543 2402 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 29 11:29:00.759768 kubelet[2402]: E0129 11:29:00.759680 2402 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"localhost\" not found" node="localhost" Jan 29 11:29:00.808740 kubelet[2402]: E0129 11:29:00.808717 2402 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"localhost\" not found" node="localhost" Jan 29 11:29:00.892495 kubelet[2402]: I0129 11:29:00.892462 2402 kubelet_node_status.go:79] "Successfully registered node" node="localhost" Jan 29 11:29:00.892495 kubelet[2402]: E0129 11:29:00.892484 2402 kubelet_node_status.go:549] "Error updating node status, will retry" err="error getting node \"localhost\": node \"localhost\" not found" Jan 29 11:29:00.924605 kubelet[2402]: I0129 11:29:00.924577 2402 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jan 29 11:29:00.943194 kubelet[2402]: E0129 11:29:00.943169 2402 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-localhost" Jan 29 11:29:00.943194 kubelet[2402]: I0129 11:29:00.943188 2402 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jan 29 11:29:00.947747 kubelet[2402]: E0129 11:29:00.947606 2402 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-localhost" Jan 29 11:29:00.947747 kubelet[2402]: I0129 11:29:00.947630 2402 kubelet.go:3200] "Creating a mirror 
pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jan 29 11:29:00.950663 kubelet[2402]: E0129 11:29:00.950642 2402 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-controller-manager-localhost\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-localhost" Jan 29 11:29:01.706657 kubelet[2402]: I0129 11:29:01.706286 2402 apiserver.go:52] "Watching apiserver" Jan 29 11:29:01.723966 kubelet[2402]: I0129 11:29:01.723944 2402 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Jan 29 11:29:01.760351 kubelet[2402]: I0129 11:29:01.760235 2402 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jan 29 11:29:01.760351 kubelet[2402]: I0129 11:29:01.760298 2402 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jan 29 11:29:02.383029 systemd[1]: Reloading requested from client PID 2675 ('systemctl') (unit session-9.scope)... Jan 29 11:29:02.383038 systemd[1]: Reloading... Jan 29 11:29:02.442633 zram_generator::config[2716]: No configuration found. Jan 29 11:29:02.506348 systemd[1]: /etc/systemd/system/coreos-metadata.service:11: Ignoring unknown escape sequences: "echo "COREOS_CUSTOM_PRIVATE_IPV4=$(ip addr show ens192 | grep "inet 10." | grep -Po "inet \K[\d.]+") Jan 29 11:29:02.521307 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 29 11:29:02.572631 systemd[1]: Reloading finished in 189 ms. Jan 29 11:29:02.593187 kubelet[2402]: I0129 11:29:02.593151 2402 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 29 11:29:02.593298 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 11:29:02.600160 systemd[1]: kubelet.service: Deactivated successfully. Jan 29 11:29:02.600300 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 11:29:02.603813 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 11:29:03.200237 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 11:29:03.209301 (kubelet)[2780]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 29 11:29:03.248410 kubelet[2780]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 29 11:29:03.248410 kubelet[2780]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 29 11:29:03.248410 kubelet[2780]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jan 29 11:29:03.248647 kubelet[2780]: I0129 11:29:03.248438 2780 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 29 11:29:03.252591 kubelet[2780]: I0129 11:29:03.252377 2780 server.go:520] "Kubelet version" kubeletVersion="v1.32.0" Jan 29 11:29:03.252591 kubelet[2780]: I0129 11:29:03.252389 2780 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 29 11:29:03.252675 kubelet[2780]: I0129 11:29:03.252636 2780 server.go:954] "Client rotation is on, will bootstrap in background" Jan 29 11:29:03.254283 kubelet[2780]: I0129 11:29:03.254203 2780 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jan 29 11:29:03.258248 kubelet[2780]: I0129 11:29:03.258237 2780 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 29 11:29:03.260719 kubelet[2780]: E0129 11:29:03.260659 2780 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jan 29 11:29:03.260719 kubelet[2780]: I0129 11:29:03.260679 2780 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jan 29 11:29:03.262400 kubelet[2780]: I0129 11:29:03.262392 2780 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jan 29 11:29:03.262584 kubelet[2780]: I0129 11:29:03.262569 2780 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 29 11:29:03.262794 kubelet[2780]: I0129 11:29:03.262645 2780 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"localhost","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 29 11:29:03.262794 kubelet[2780]: I0129 11:29:03.262748 2780 topology_manager.go:138] "Creating 
topology manager with none policy" Jan 29 11:29:03.262794 kubelet[2780]: I0129 11:29:03.262754 2780 container_manager_linux.go:304] "Creating device plugin manager" Jan 29 11:29:03.262794 kubelet[2780]: I0129 11:29:03.262777 2780 state_mem.go:36] "Initialized new in-memory state store" Jan 29 11:29:03.263071 kubelet[2780]: I0129 11:29:03.263019 2780 kubelet.go:446] "Attempting to sync node with API server" Jan 29 11:29:03.263071 kubelet[2780]: I0129 11:29:03.263030 2780 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 29 11:29:03.263071 kubelet[2780]: I0129 11:29:03.263040 2780 kubelet.go:352] "Adding apiserver pod source" Jan 29 11:29:03.263071 kubelet[2780]: I0129 11:29:03.263046 2780 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 29 11:29:03.265651 kubelet[2780]: I0129 11:29:03.264760 2780 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Jan 29 11:29:03.265651 kubelet[2780]: I0129 11:29:03.264994 2780 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 29 11:29:03.282800 kubelet[2780]: I0129 11:29:03.282780 2780 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 29 11:29:03.283622 kubelet[2780]: I0129 11:29:03.282901 2780 server.go:1287] "Started kubelet" Jan 29 11:29:03.283622 kubelet[2780]: I0129 11:29:03.282962 2780 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jan 29 11:29:03.283622 kubelet[2780]: I0129 11:29:03.283544 2780 server.go:490] "Adding debug handlers to kubelet server" Jan 29 11:29:03.283774 kubelet[2780]: I0129 11:29:03.283745 2780 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 29 11:29:03.283924 kubelet[2780]: I0129 11:29:03.283912 2780 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 29 11:29:03.284908 kubelet[2780]: I0129 11:29:03.284662 2780 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 29 11:29:03.285137 kubelet[2780]: I0129 11:29:03.285113 2780 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 29 11:29:03.287506 kubelet[2780]: I0129 11:29:03.287494 2780 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 29 11:29:03.287561 kubelet[2780]: I0129 11:29:03.287549 2780 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Jan 29 11:29:03.287634 kubelet[2780]: I0129 11:29:03.287610 2780 reconciler.go:26] "Reconciler: start to sync state" Jan 29 11:29:03.288452 kubelet[2780]: I0129 11:29:03.288440 2780 factory.go:221] Registration of the systemd container factory successfully Jan 29 11:29:03.288508 kubelet[2780]: I0129 11:29:03.288496 2780 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 29 11:29:03.289790 kubelet[2780]: I0129 11:29:03.289739 2780 factory.go:221] Registration of the containerd container factory successfully Jan 29 11:29:03.303402 kubelet[2780]: E0129 11:29:03.302931 2780 kubelet.go:1561] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 29 11:29:03.334802 kubelet[2780]: I0129 11:29:03.334707 2780 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 29 11:29:03.336142 kubelet[2780]: I0129 11:29:03.335781 2780 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 29 11:29:03.336142 kubelet[2780]: I0129 11:29:03.335802 2780 status_manager.go:227] "Starting to sync pod status with apiserver" Jan 29 11:29:03.336142 kubelet[2780]: I0129 11:29:03.336006 2780 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jan 29 11:29:03.336142 kubelet[2780]: I0129 11:29:03.336015 2780 kubelet.go:2388] "Starting kubelet main sync loop" Jan 29 11:29:03.343640 kubelet[2780]: E0129 11:29:03.343603 2780 kubelet.go:2412] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 29 11:29:03.347057 kubelet[2780]: I0129 11:29:03.347038 2780 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 29 11:29:03.347057 kubelet[2780]: I0129 11:29:03.347049 2780 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 29 11:29:03.347057 kubelet[2780]: I0129 11:29:03.347061 2780 state_mem.go:36] "Initialized new in-memory state store" Jan 29 11:29:03.347211 kubelet[2780]: I0129 11:29:03.347198 2780 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 29 11:29:03.347239 kubelet[2780]: I0129 11:29:03.347209 2780 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 29 11:29:03.347239 kubelet[2780]: I0129 11:29:03.347223 2780 policy_none.go:49] "None policy: Start" Jan 29 11:29:03.347239 kubelet[2780]: I0129 11:29:03.347232 2780 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 29 11:29:03.347291 kubelet[2780]: I0129 11:29:03.347239 2780 state_mem.go:35] "Initializing new in-memory state store" Jan 29 11:29:03.347708 kubelet[2780]: I0129 11:29:03.347314 2780 state_mem.go:75] "Updated machine memory state" Jan 29 11:29:03.353124 kubelet[2780]: I0129 11:29:03.353110 2780 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 29 11:29:03.353568 kubelet[2780]: I0129 11:29:03.353560 2780 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 29 11:29:03.353638 kubelet[2780]: I0129 11:29:03.353610 2780 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 29 11:29:03.354817 kubelet[2780]: I0129 11:29:03.354805 2780 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 29 11:29:03.355648 kubelet[2780]: E0129 11:29:03.355608 2780 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jan 29 11:29:03.445650 kubelet[2780]: I0129 11:29:03.445112 2780 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jan 29 11:29:03.449805 kubelet[2780]: I0129 11:29:03.449727 2780 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jan 29 11:29:03.449805 kubelet[2780]: I0129 11:29:03.449749 2780 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-localhost" Jan 29 11:29:03.458141 kubelet[2780]: I0129 11:29:03.457602 2780 kubelet_node_status.go:76] "Attempting to register node" node="localhost" Jan 29 11:29:03.472897 kubelet[2780]: E0129 11:29:03.472865 2780 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Jan 29 11:29:03.473099 kubelet[2780]: E0129 11:29:03.473076 2780 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Jan 29 11:29:03.484017 kubelet[2780]: I0129 11:29:03.483805 2780 kubelet_node_status.go:125] "Node was previously registered" node="localhost" Jan 29 11:29:03.484017 kubelet[2780]: I0129 11:29:03.483863 2780 kubelet_node_status.go:79] "Successfully registered node" node="localhost" Jan 29 11:29:03.589371 kubelet[2780]: I0129 11:29:03.589278 2780 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/eb981ecac1bbdbbdd50082f31745642c-kubeconfig\") pod \"kube-scheduler-localhost\" (UID: \"eb981ecac1bbdbbdd50082f31745642c\") " pod="kube-system/kube-scheduler-localhost" Jan 29 11:29:03.589371 kubelet[2780]: I0129 11:29:03.589315 2780 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/713793a66d380f8ce14333da174c287f-ca-certs\") pod \"kube-apiserver-localhost\" (UID: \"713793a66d380f8ce14333da174c287f\") " pod="kube-system/kube-apiserver-localhost" Jan 29 11:29:03.589371 kubelet[2780]: I0129 11:29:03.589329 2780 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/713793a66d380f8ce14333da174c287f-usr-share-ca-certificates\") pod \"kube-apiserver-localhost\" (UID: \"713793a66d380f8ce14333da174c287f\") " pod="kube-system/kube-apiserver-localhost" Jan 29 11:29:03.589371 kubelet[2780]: I0129 11:29:03.589341 2780 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/e9ba8773e418c2bbf5a955ad3b2b2e16-ca-certs\") pod \"kube-controller-manager-localhost\" (UID: \"e9ba8773e418c2bbf5a955ad3b2b2e16\") " pod="kube-system/kube-controller-manager-localhost" Jan 29 11:29:03.589371 kubelet[2780]: I0129 11:29:03.589350 2780 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/e9ba8773e418c2bbf5a955ad3b2b2e16-flexvolume-dir\") pod \"kube-controller-manager-localhost\" (UID: \"e9ba8773e418c2bbf5a955ad3b2b2e16\") " pod="kube-system/kube-controller-manager-localhost" Jan 29 11:29:03.589695 kubelet[2780]: I0129 11:29:03.589634 2780 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" 
(UniqueName: \"kubernetes.io/host-path/e9ba8773e418c2bbf5a955ad3b2b2e16-k8s-certs\") pod \"kube-controller-manager-localhost\" (UID: \"e9ba8773e418c2bbf5a955ad3b2b2e16\") " pod="kube-system/kube-controller-manager-localhost" Jan 29 11:29:03.589695 kubelet[2780]: I0129 11:29:03.589651 2780 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/713793a66d380f8ce14333da174c287f-k8s-certs\") pod \"kube-apiserver-localhost\" (UID: \"713793a66d380f8ce14333da174c287f\") " pod="kube-system/kube-apiserver-localhost" Jan 29 11:29:03.589695 kubelet[2780]: I0129 11:29:03.589662 2780 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/e9ba8773e418c2bbf5a955ad3b2b2e16-kubeconfig\") pod \"kube-controller-manager-localhost\" (UID: \"e9ba8773e418c2bbf5a955ad3b2b2e16\") " pod="kube-system/kube-controller-manager-localhost" Jan 29 11:29:03.589695 kubelet[2780]: I0129 11:29:03.589680 2780 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/e9ba8773e418c2bbf5a955ad3b2b2e16-usr-share-ca-certificates\") pod \"kube-controller-manager-localhost\" (UID: \"e9ba8773e418c2bbf5a955ad3b2b2e16\") " pod="kube-system/kube-controller-manager-localhost" Jan 29 11:29:04.263443 kubelet[2780]: I0129 11:29:04.263411 2780 apiserver.go:52] "Watching apiserver" Jan 29 11:29:04.288086 kubelet[2780]: I0129 11:29:04.288041 2780 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Jan 29 11:29:04.355687 kubelet[2780]: I0129 11:29:04.355668 2780 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-localhost" Jan 29 11:29:04.355824 kubelet[2780]: I0129 11:29:04.355814 2780 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-localhost" Jan 29 11:29:04.380241 kubelet[2780]: E0129 11:29:04.380207 2780 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-localhost\" already exists" pod="kube-system/kube-apiserver-localhost" Jan 29 11:29:04.380386 kubelet[2780]: E0129 11:29:04.380374 2780 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-localhost\" already exists" pod="kube-system/kube-scheduler-localhost" Jan 29 11:29:04.433294 kubelet[2780]: I0129 11:29:04.433254 2780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-localhost" podStartSLOduration=3.43323332 podStartE2EDuration="3.43323332s" podCreationTimestamp="2025-01-29 11:29:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 11:29:04.427042006 +0000 UTC m=+1.213031296" watchObservedRunningTime="2025-01-29 11:29:04.43323332 +0000 UTC m=+1.219222610" Jan 29 11:29:04.438154 kubelet[2780]: I0129 11:29:04.438112 2780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-localhost" podStartSLOduration=1.438097404 podStartE2EDuration="1.438097404s" podCreationTimestamp="2025-01-29 11:29:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 11:29:04.433744381 +0000 UTC m=+1.219733672" watchObservedRunningTime="2025-01-29 11:29:04.438097404 +0000 
UTC m=+1.224086700" Jan 29 11:29:04.444374 kubelet[2780]: I0129 11:29:04.444297 2780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-localhost" podStartSLOduration=3.444285132 podStartE2EDuration="3.444285132s" podCreationTimestamp="2025-01-29 11:29:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 11:29:04.438716333 +0000 UTC m=+1.224705624" watchObservedRunningTime="2025-01-29 11:29:04.444285132 +0000 UTC m=+1.230274422" Jan 29 11:29:06.879059 kubelet[2780]: I0129 11:29:06.879034 2780 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 29 11:29:06.879791 kubelet[2780]: I0129 11:29:06.879345 2780 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 29 11:29:06.879818 containerd[1543]: time="2025-01-29T11:29:06.879236479Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 29 11:29:07.388290 systemd[1]: Created slice kubepods-besteffort-pod7d9412c4_198c_4401_b1bc_609d572d88db.slice - libcontainer container kubepods-besteffort-pod7d9412c4_198c_4401_b1bc_609d572d88db.slice. Jan 29 11:29:07.416282 kubelet[2780]: I0129 11:29:07.416248 2780 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7d9412c4-198c-4401-b1bc-609d572d88db-lib-modules\") pod \"kube-proxy-x4qfk\" (UID: \"7d9412c4-198c-4401-b1bc-609d572d88db\") " pod="kube-system/kube-proxy-x4qfk" Jan 29 11:29:07.416282 kubelet[2780]: I0129 11:29:07.416277 2780 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2zgtp\" (UniqueName: \"kubernetes.io/projected/7d9412c4-198c-4401-b1bc-609d572d88db-kube-api-access-2zgtp\") pod \"kube-proxy-x4qfk\" (UID: \"7d9412c4-198c-4401-b1bc-609d572d88db\") " pod="kube-system/kube-proxy-x4qfk" Jan 29 11:29:07.416443 kubelet[2780]: I0129 11:29:07.416299 2780 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/7d9412c4-198c-4401-b1bc-609d572d88db-kube-proxy\") pod \"kube-proxy-x4qfk\" (UID: \"7d9412c4-198c-4401-b1bc-609d572d88db\") " pod="kube-system/kube-proxy-x4qfk" Jan 29 11:29:07.416443 kubelet[2780]: I0129 11:29:07.416307 2780 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7d9412c4-198c-4401-b1bc-609d572d88db-xtables-lock\") pod \"kube-proxy-x4qfk\" (UID: \"7d9412c4-198c-4401-b1bc-609d572d88db\") " pod="kube-system/kube-proxy-x4qfk" Jan 29 11:29:07.447257 sudo[1854]: pam_unix(sudo:session): session closed for user root Jan 29 11:29:07.448451 sshd[1853]: Connection closed by 139.178.89.65 port 51040 Jan 29 11:29:07.449942 sshd-session[1851]: pam_unix(sshd:session): session closed for user core Jan 29 11:29:07.451692 systemd-logind[1523]: Session 9 logged out. Waiting for processes to exit. Jan 29 11:29:07.453039 systemd[1]: sshd@6-139.178.70.104:22-139.178.89.65:51040.service: Deactivated successfully. Jan 29 11:29:07.454030 systemd[1]: session-9.scope: Deactivated successfully. Jan 29 11:29:07.454196 systemd[1]: session-9.scope: Consumed 2.267s CPU time, 134.2M memory peak, 0B memory swap peak. 
Jan 29 11:29:07.454917 systemd-logind[1523]: Removed session 9. Jan 29 11:29:07.523415 kubelet[2780]: E0129 11:29:07.523382 2780 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Jan 29 11:29:07.523415 kubelet[2780]: E0129 11:29:07.523412 2780 projected.go:194] Error preparing data for projected volume kube-api-access-2zgtp for pod kube-system/kube-proxy-x4qfk: configmap "kube-root-ca.crt" not found Jan 29 11:29:07.523527 kubelet[2780]: E0129 11:29:07.523459 2780 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/7d9412c4-198c-4401-b1bc-609d572d88db-kube-api-access-2zgtp podName:7d9412c4-198c-4401-b1bc-609d572d88db nodeName:}" failed. No retries permitted until 2025-01-29 11:29:08.023445143 +0000 UTC m=+4.809434431 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-2zgtp" (UniqueName: "kubernetes.io/projected/7d9412c4-198c-4401-b1bc-609d572d88db-kube-api-access-2zgtp") pod "kube-proxy-x4qfk" (UID: "7d9412c4-198c-4401-b1bc-609d572d88db") : configmap "kube-root-ca.crt" not found Jan 29 11:29:07.794767 systemd[1]: Created slice kubepods-besteffort-podbccb438c_fab9_48f7_a9fa_b9fe1e334ac8.slice - libcontainer container kubepods-besteffort-podbccb438c_fab9_48f7_a9fa_b9fe1e334ac8.slice. Jan 29 11:29:07.818783 kubelet[2780]: I0129 11:29:07.818755 2780 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fj2j6\" (UniqueName: \"kubernetes.io/projected/bccb438c-fab9-48f7-a9fa-b9fe1e334ac8-kube-api-access-fj2j6\") pod \"tigera-operator-7d68577dc5-x5f5b\" (UID: \"bccb438c-fab9-48f7-a9fa-b9fe1e334ac8\") " pod="tigera-operator/tigera-operator-7d68577dc5-x5f5b" Jan 29 11:29:07.818924 kubelet[2780]: I0129 11:29:07.818916 2780 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/bccb438c-fab9-48f7-a9fa-b9fe1e334ac8-var-lib-calico\") pod \"tigera-operator-7d68577dc5-x5f5b\" (UID: \"bccb438c-fab9-48f7-a9fa-b9fe1e334ac8\") " pod="tigera-operator/tigera-operator-7d68577dc5-x5f5b" Jan 29 11:29:08.101133 containerd[1543]: time="2025-01-29T11:29:08.101086321Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7d68577dc5-x5f5b,Uid:bccb438c-fab9-48f7-a9fa-b9fe1e334ac8,Namespace:tigera-operator,Attempt:0,}" Jan 29 11:29:08.117402 containerd[1543]: time="2025-01-29T11:29:08.117260130Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 11:29:08.117402 containerd[1543]: time="2025-01-29T11:29:08.117299644Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 11:29:08.117402 containerd[1543]: time="2025-01-29T11:29:08.117313412Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:29:08.117402 containerd[1543]: time="2025-01-29T11:29:08.117365536Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:29:08.142738 systemd[1]: Started cri-containerd-4c7e03f15c401e8518e76df6bd84397ea3449d1db3d4c6ab3a623e81d4312deb.scope - libcontainer container 4c7e03f15c401e8518e76df6bd84397ea3449d1db3d4c6ab3a623e81d4312deb. 
Jan 29 11:29:08.168510 containerd[1543]: time="2025-01-29T11:29:08.168441039Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7d68577dc5-x5f5b,Uid:bccb438c-fab9-48f7-a9fa-b9fe1e334ac8,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"4c7e03f15c401e8518e76df6bd84397ea3449d1db3d4c6ab3a623e81d4312deb\"" Jan 29 11:29:08.170226 containerd[1543]: time="2025-01-29T11:29:08.170144151Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\"" Jan 29 11:29:08.294868 containerd[1543]: time="2025-01-29T11:29:08.294798414Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-x4qfk,Uid:7d9412c4-198c-4401-b1bc-609d572d88db,Namespace:kube-system,Attempt:0,}" Jan 29 11:29:08.324799 containerd[1543]: time="2025-01-29T11:29:08.324701394Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 11:29:08.324799 containerd[1543]: time="2025-01-29T11:29:08.324750389Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 11:29:08.324799 containerd[1543]: time="2025-01-29T11:29:08.324760521Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:29:08.325145 containerd[1543]: time="2025-01-29T11:29:08.324833273Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:29:08.337852 systemd[1]: Started cri-containerd-fc3d4f4b98ac711988397fc65b4c477a678ce107e7889d207f5c84ab781a6e5d.scope - libcontainer container fc3d4f4b98ac711988397fc65b4c477a678ce107e7889d207f5c84ab781a6e5d. Jan 29 11:29:08.357548 containerd[1543]: time="2025-01-29T11:29:08.356978453Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-x4qfk,Uid:7d9412c4-198c-4401-b1bc-609d572d88db,Namespace:kube-system,Attempt:0,} returns sandbox id \"fc3d4f4b98ac711988397fc65b4c477a678ce107e7889d207f5c84ab781a6e5d\"" Jan 29 11:29:08.360101 containerd[1543]: time="2025-01-29T11:29:08.360070097Z" level=info msg="CreateContainer within sandbox \"fc3d4f4b98ac711988397fc65b4c477a678ce107e7889d207f5c84ab781a6e5d\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 29 11:29:08.368350 containerd[1543]: time="2025-01-29T11:29:08.368319704Z" level=info msg="CreateContainer within sandbox \"fc3d4f4b98ac711988397fc65b4c477a678ce107e7889d207f5c84ab781a6e5d\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"fffb8b63f62a0a230bebe3013f0f14fc43c0ad7f5074383dfe22b63b08c92eb3\"" Jan 29 11:29:08.369171 containerd[1543]: time="2025-01-29T11:29:08.368943170Z" level=info msg="StartContainer for \"fffb8b63f62a0a230bebe3013f0f14fc43c0ad7f5074383dfe22b63b08c92eb3\"" Jan 29 11:29:08.391833 systemd[1]: Started cri-containerd-fffb8b63f62a0a230bebe3013f0f14fc43c0ad7f5074383dfe22b63b08c92eb3.scope - libcontainer container fffb8b63f62a0a230bebe3013f0f14fc43c0ad7f5074383dfe22b63b08c92eb3. Jan 29 11:29:08.419876 containerd[1543]: time="2025-01-29T11:29:08.419843940Z" level=info msg="StartContainer for \"fffb8b63f62a0a230bebe3013f0f14fc43c0ad7f5074383dfe22b63b08c92eb3\" returns successfully" Jan 29 11:29:09.343951 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2031143516.mount: Deactivated successfully. 
Jan 29 11:29:09.442060 kubelet[2780]: I0129 11:29:09.442007 2780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-x4qfk" podStartSLOduration=2.435102507 podStartE2EDuration="2.435102507s" podCreationTimestamp="2025-01-29 11:29:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 11:29:09.422872069 +0000 UTC m=+6.208861358" watchObservedRunningTime="2025-01-29 11:29:09.435102507 +0000 UTC m=+6.221091798" Jan 29 11:29:09.723743 containerd[1543]: time="2025-01-29T11:29:09.723666858Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.36.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:29:09.724350 containerd[1543]: time="2025-01-29T11:29:09.724320805Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.36.2: active requests=0, bytes read=21762497" Jan 29 11:29:09.725035 containerd[1543]: time="2025-01-29T11:29:09.724657915Z" level=info msg="ImageCreate event name:\"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:29:09.726129 containerd[1543]: time="2025-01-29T11:29:09.726102194Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:29:09.726754 containerd[1543]: time="2025-01-29T11:29:09.726726741Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.36.2\" with image id \"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\", repo tag \"quay.io/tigera/operator:v1.36.2\", repo digest \"quay.io/tigera/operator@sha256:fc9ea45f2475fd99db1b36d2ff180a50017b1a5ea0e82a171c6b439b3a620764\", size \"21758492\" in 1.556563736s" Jan 29 11:29:09.726799 containerd[1543]: time="2025-01-29T11:29:09.726757347Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.36.2\" returns image reference \"sha256:3045aa4a360d468ed15090f280e94c54bf4678269a6e863a9ebcf5b31534a346\"" Jan 29 11:29:09.744052 containerd[1543]: time="2025-01-29T11:29:09.744030208Z" level=info msg="CreateContainer within sandbox \"4c7e03f15c401e8518e76df6bd84397ea3449d1db3d4c6ab3a623e81d4312deb\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jan 29 11:29:09.756198 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3301682145.mount: Deactivated successfully. Jan 29 11:29:09.757590 containerd[1543]: time="2025-01-29T11:29:09.757566586Z" level=info msg="CreateContainer within sandbox \"4c7e03f15c401e8518e76df6bd84397ea3449d1db3d4c6ab3a623e81d4312deb\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"1de54f8f762e57dc77bbc6308cec9bac02385aef9cdeb9b28abc22f35b88864e\"" Jan 29 11:29:09.758378 containerd[1543]: time="2025-01-29T11:29:09.758249543Z" level=info msg="StartContainer for \"1de54f8f762e57dc77bbc6308cec9bac02385aef9cdeb9b28abc22f35b88864e\"" Jan 29 11:29:09.781847 systemd[1]: Started cri-containerd-1de54f8f762e57dc77bbc6308cec9bac02385aef9cdeb9b28abc22f35b88864e.scope - libcontainer container 1de54f8f762e57dc77bbc6308cec9bac02385aef9cdeb9b28abc22f35b88864e. 
Jan 29 11:29:09.806241 containerd[1543]: time="2025-01-29T11:29:09.806132558Z" level=info msg="StartContainer for \"1de54f8f762e57dc77bbc6308cec9bac02385aef9cdeb9b28abc22f35b88864e\" returns successfully" Jan 29 11:29:10.858252 kubelet[2780]: I0129 11:29:10.858202 2780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7d68577dc5-x5f5b" podStartSLOduration=2.289647294 podStartE2EDuration="3.851449846s" podCreationTimestamp="2025-01-29 11:29:07 +0000 UTC" firstStartedPulling="2025-01-29 11:29:08.169494763 +0000 UTC m=+4.955484050" lastFinishedPulling="2025-01-29 11:29:09.731297314 +0000 UTC m=+6.517286602" observedRunningTime="2025-01-29 11:29:10.529211313 +0000 UTC m=+7.315200613" watchObservedRunningTime="2025-01-29 11:29:10.851449846 +0000 UTC m=+7.637439185" Jan 29 11:29:12.963104 systemd[1]: Created slice kubepods-besteffort-podf3f914c6_1b09_4846_a62a_2fdf28eff449.slice - libcontainer container kubepods-besteffort-podf3f914c6_1b09_4846_a62a_2fdf28eff449.slice. Jan 29 11:29:12.968388 kubelet[2780]: I0129 11:29:12.965287 2780 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/f3f914c6-1b09-4846-a62a-2fdf28eff449-typha-certs\") pod \"calico-typha-5d79fdb887-mvlfp\" (UID: \"f3f914c6-1b09-4846-a62a-2fdf28eff449\") " pod="calico-system/calico-typha-5d79fdb887-mvlfp" Jan 29 11:29:12.968388 kubelet[2780]: I0129 11:29:12.965305 2780 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f3f914c6-1b09-4846-a62a-2fdf28eff449-tigera-ca-bundle\") pod \"calico-typha-5d79fdb887-mvlfp\" (UID: \"f3f914c6-1b09-4846-a62a-2fdf28eff449\") " pod="calico-system/calico-typha-5d79fdb887-mvlfp" Jan 29 11:29:12.968388 kubelet[2780]: I0129 11:29:12.965317 2780 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qzlkf\" (UniqueName: \"kubernetes.io/projected/f3f914c6-1b09-4846-a62a-2fdf28eff449-kube-api-access-qzlkf\") pod \"calico-typha-5d79fdb887-mvlfp\" (UID: \"f3f914c6-1b09-4846-a62a-2fdf28eff449\") " pod="calico-system/calico-typha-5d79fdb887-mvlfp" Jan 29 11:29:12.989557 systemd[1]: Created slice kubepods-besteffort-pod63afb8e6_1306_4a81_91ac_154d437f3dc5.slice - libcontainer container kubepods-besteffort-pod63afb8e6_1306_4a81_91ac_154d437f3dc5.slice. 
Jan 29 11:29:13.166932 kubelet[2780]: I0129 11:29:13.166899 2780 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/63afb8e6-1306-4a81-91ac-154d437f3dc5-policysync\") pod \"calico-node-qkt58\" (UID: \"63afb8e6-1306-4a81-91ac-154d437f3dc5\") " pod="calico-system/calico-node-qkt58" Jan 29 11:29:13.187799 kubelet[2780]: I0129 11:29:13.166939 2780 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/63afb8e6-1306-4a81-91ac-154d437f3dc5-xtables-lock\") pod \"calico-node-qkt58\" (UID: \"63afb8e6-1306-4a81-91ac-154d437f3dc5\") " pod="calico-system/calico-node-qkt58" Jan 29 11:29:13.187799 kubelet[2780]: I0129 11:29:13.166956 2780 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/63afb8e6-1306-4a81-91ac-154d437f3dc5-flexvol-driver-host\") pod \"calico-node-qkt58\" (UID: \"63afb8e6-1306-4a81-91ac-154d437f3dc5\") " pod="calico-system/calico-node-qkt58" Jan 29 11:29:13.187799 kubelet[2780]: I0129 11:29:13.166977 2780 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/63afb8e6-1306-4a81-91ac-154d437f3dc5-node-certs\") pod \"calico-node-qkt58\" (UID: \"63afb8e6-1306-4a81-91ac-154d437f3dc5\") " pod="calico-system/calico-node-qkt58" Jan 29 11:29:13.187799 kubelet[2780]: I0129 11:29:13.166995 2780 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8vwnf\" (UniqueName: \"kubernetes.io/projected/63afb8e6-1306-4a81-91ac-154d437f3dc5-kube-api-access-8vwnf\") pod \"calico-node-qkt58\" (UID: \"63afb8e6-1306-4a81-91ac-154d437f3dc5\") " pod="calico-system/calico-node-qkt58" Jan 29 11:29:13.187799 kubelet[2780]: I0129 11:29:13.167016 2780 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/63afb8e6-1306-4a81-91ac-154d437f3dc5-var-lib-calico\") pod \"calico-node-qkt58\" (UID: \"63afb8e6-1306-4a81-91ac-154d437f3dc5\") " pod="calico-system/calico-node-qkt58" Jan 29 11:29:13.187930 kubelet[2780]: I0129 11:29:13.167031 2780 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/63afb8e6-1306-4a81-91ac-154d437f3dc5-cni-net-dir\") pod \"calico-node-qkt58\" (UID: \"63afb8e6-1306-4a81-91ac-154d437f3dc5\") " pod="calico-system/calico-node-qkt58" Jan 29 11:29:13.187930 kubelet[2780]: I0129 11:29:13.167047 2780 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/63afb8e6-1306-4a81-91ac-154d437f3dc5-cni-log-dir\") pod \"calico-node-qkt58\" (UID: \"63afb8e6-1306-4a81-91ac-154d437f3dc5\") " pod="calico-system/calico-node-qkt58" Jan 29 11:29:13.187930 kubelet[2780]: I0129 11:29:13.167064 2780 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/63afb8e6-1306-4a81-91ac-154d437f3dc5-tigera-ca-bundle\") pod \"calico-node-qkt58\" (UID: \"63afb8e6-1306-4a81-91ac-154d437f3dc5\") " pod="calico-system/calico-node-qkt58" Jan 29 11:29:13.187930 kubelet[2780]: I0129 11:29:13.167081 2780 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/63afb8e6-1306-4a81-91ac-154d437f3dc5-var-run-calico\") pod \"calico-node-qkt58\" (UID: \"63afb8e6-1306-4a81-91ac-154d437f3dc5\") " pod="calico-system/calico-node-qkt58" Jan 29 11:29:13.187930 kubelet[2780]: I0129 11:29:13.167101 2780 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/63afb8e6-1306-4a81-91ac-154d437f3dc5-lib-modules\") pod \"calico-node-qkt58\" (UID: \"63afb8e6-1306-4a81-91ac-154d437f3dc5\") " pod="calico-system/calico-node-qkt58" Jan 29 11:29:13.194002 kubelet[2780]: I0129 11:29:13.167117 2780 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/63afb8e6-1306-4a81-91ac-154d437f3dc5-cni-bin-dir\") pod \"calico-node-qkt58\" (UID: \"63afb8e6-1306-4a81-91ac-154d437f3dc5\") " pod="calico-system/calico-node-qkt58" Jan 29 11:29:13.216732 kubelet[2780]: E0129 11:29:13.216563 2780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-v9zp5" podUID="ad861abe-5500-44ec-ad63-2a1f0ef1f899" Jan 29 11:29:13.267737 kubelet[2780]: I0129 11:29:13.267707 2780 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/ad861abe-5500-44ec-ad63-2a1f0ef1f899-registration-dir\") pod \"csi-node-driver-v9zp5\" (UID: \"ad861abe-5500-44ec-ad63-2a1f0ef1f899\") " pod="calico-system/csi-node-driver-v9zp5" Jan 29 11:29:13.267737 kubelet[2780]: I0129 11:29:13.267744 2780 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/ad861abe-5500-44ec-ad63-2a1f0ef1f899-socket-dir\") pod \"csi-node-driver-v9zp5\" (UID: \"ad861abe-5500-44ec-ad63-2a1f0ef1f899\") " pod="calico-system/csi-node-driver-v9zp5" Jan 29 11:29:13.279997 kubelet[2780]: I0129 11:29:13.267795 2780 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/ad861abe-5500-44ec-ad63-2a1f0ef1f899-varrun\") pod \"csi-node-driver-v9zp5\" (UID: \"ad861abe-5500-44ec-ad63-2a1f0ef1f899\") " pod="calico-system/csi-node-driver-v9zp5" Jan 29 11:29:13.279997 kubelet[2780]: I0129 11:29:13.267832 2780 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ad861abe-5500-44ec-ad63-2a1f0ef1f899-kubelet-dir\") pod \"csi-node-driver-v9zp5\" (UID: \"ad861abe-5500-44ec-ad63-2a1f0ef1f899\") " pod="calico-system/csi-node-driver-v9zp5" Jan 29 11:29:13.279997 kubelet[2780]: I0129 11:29:13.267860 2780 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zdwdt\" (UniqueName: \"kubernetes.io/projected/ad861abe-5500-44ec-ad63-2a1f0ef1f899-kube-api-access-zdwdt\") pod \"csi-node-driver-v9zp5\" (UID: \"ad861abe-5500-44ec-ad63-2a1f0ef1f899\") " pod="calico-system/csi-node-driver-v9zp5" Jan 29 11:29:13.283865 containerd[1543]: time="2025-01-29T11:29:13.283840837Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-typha-5d79fdb887-mvlfp,Uid:f3f914c6-1b09-4846-a62a-2fdf28eff449,Namespace:calico-system,Attempt:0,}" Jan 29 11:29:13.292397 containerd[1543]: time="2025-01-29T11:29:13.292368584Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-qkt58,Uid:63afb8e6-1306-4a81-91ac-154d437f3dc5,Namespace:calico-system,Attempt:0,}" Jan 29 11:29:13.368943 kubelet[2780]: E0129 11:29:13.368851 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:29:13.368943 kubelet[2780]: W0129 11:29:13.368876 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:29:13.368943 kubelet[2780]: E0129 11:29:13.368903 2780 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:29:13.369308 kubelet[2780]: E0129 11:29:13.369083 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:29:13.369308 kubelet[2780]: W0129 11:29:13.369089 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:29:13.369308 kubelet[2780]: E0129 11:29:13.369100 2780 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:29:13.369514 kubelet[2780]: E0129 11:29:13.369428 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:29:13.369514 kubelet[2780]: W0129 11:29:13.369440 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:29:13.369514 kubelet[2780]: E0129 11:29:13.369456 2780 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:29:13.369692 kubelet[2780]: E0129 11:29:13.369582 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:29:13.369692 kubelet[2780]: W0129 11:29:13.369589 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:29:13.369841 kubelet[2780]: E0129 11:29:13.369765 2780 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 11:29:13.370011 kubelet[2780]: E0129 11:29:13.369909 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:29:13.370011 kubelet[2780]: W0129 11:29:13.369916 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:29:13.370011 kubelet[2780]: E0129 11:29:13.369928 2780 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:29:13.370113 kubelet[2780]: E0129 11:29:13.370053 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:29:13.370113 kubelet[2780]: W0129 11:29:13.370060 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:29:13.370113 kubelet[2780]: E0129 11:29:13.370069 2780 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:29:13.370197 kubelet[2780]: E0129 11:29:13.370188 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:29:13.370197 kubelet[2780]: W0129 11:29:13.370193 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:29:13.370297 kubelet[2780]: E0129 11:29:13.370198 2780 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:29:13.370394 kubelet[2780]: E0129 11:29:13.370378 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:29:13.370394 kubelet[2780]: W0129 11:29:13.370391 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:29:13.370454 kubelet[2780]: E0129 11:29:13.370401 2780 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:29:13.370529 kubelet[2780]: E0129 11:29:13.370515 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:29:13.370529 kubelet[2780]: W0129 11:29:13.370526 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:29:13.370590 kubelet[2780]: E0129 11:29:13.370534 2780 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 11:29:13.370650 kubelet[2780]: E0129 11:29:13.370638 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:29:13.370650 kubelet[2780]: W0129 11:29:13.370646 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:29:13.370712 kubelet[2780]: E0129 11:29:13.370651 2780 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:29:13.370762 kubelet[2780]: E0129 11:29:13.370747 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:29:13.370762 kubelet[2780]: W0129 11:29:13.370758 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:29:13.370823 kubelet[2780]: E0129 11:29:13.370765 2780 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:29:13.370941 kubelet[2780]: E0129 11:29:13.370908 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:29:13.370941 kubelet[2780]: W0129 11:29:13.370913 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:29:13.370941 kubelet[2780]: E0129 11:29:13.370919 2780 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:29:13.371025 kubelet[2780]: E0129 11:29:13.371017 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:29:13.371025 kubelet[2780]: W0129 11:29:13.371023 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:29:13.371080 kubelet[2780]: E0129 11:29:13.371031 2780 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:29:13.371139 kubelet[2780]: E0129 11:29:13.371118 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:29:13.371139 kubelet[2780]: W0129 11:29:13.371122 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:29:13.371139 kubelet[2780]: E0129 11:29:13.371129 2780 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 11:29:13.371223 kubelet[2780]: E0129 11:29:13.371212 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:29:13.371223 kubelet[2780]: W0129 11:29:13.371216 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:29:13.371223 kubelet[2780]: E0129 11:29:13.371222 2780 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:29:13.371323 kubelet[2780]: E0129 11:29:13.371311 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:29:13.371323 kubelet[2780]: W0129 11:29:13.371317 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:29:13.371323 kubelet[2780]: E0129 11:29:13.371322 2780 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:29:13.371428 kubelet[2780]: E0129 11:29:13.371417 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:29:13.371428 kubelet[2780]: W0129 11:29:13.371423 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:29:13.371487 kubelet[2780]: E0129 11:29:13.371430 2780 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:29:13.371520 kubelet[2780]: E0129 11:29:13.371505 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:29:13.371520 kubelet[2780]: W0129 11:29:13.371509 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:29:13.371520 kubelet[2780]: E0129 11:29:13.371513 2780 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:29:13.371597 kubelet[2780]: E0129 11:29:13.371586 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:29:13.371597 kubelet[2780]: W0129 11:29:13.371591 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:29:13.371671 kubelet[2780]: E0129 11:29:13.371598 2780 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 11:29:13.371726 kubelet[2780]: E0129 11:29:13.371713 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:29:13.371726 kubelet[2780]: W0129 11:29:13.371722 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:29:13.371786 kubelet[2780]: E0129 11:29:13.371729 2780 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:29:13.371826 kubelet[2780]: E0129 11:29:13.371816 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:29:13.371826 kubelet[2780]: W0129 11:29:13.371823 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:29:13.371891 kubelet[2780]: E0129 11:29:13.371832 2780 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:29:13.371929 kubelet[2780]: E0129 11:29:13.371922 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:29:13.371929 kubelet[2780]: W0129 11:29:13.371926 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:29:13.371996 kubelet[2780]: E0129 11:29:13.371931 2780 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:29:13.372029 kubelet[2780]: E0129 11:29:13.372005 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:29:13.372029 kubelet[2780]: W0129 11:29:13.372009 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:29:13.372029 kubelet[2780]: E0129 11:29:13.372015 2780 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:29:13.372156 kubelet[2780]: E0129 11:29:13.372135 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:29:13.372156 kubelet[2780]: W0129 11:29:13.372142 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:29:13.372156 kubelet[2780]: E0129 11:29:13.372147 2780 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 11:29:13.376330 kubelet[2780]: E0129 11:29:13.376243 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:29:13.376330 kubelet[2780]: W0129 11:29:13.376274 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:29:13.376330 kubelet[2780]: E0129 11:29:13.376294 2780 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:29:13.401840 kubelet[2780]: E0129 11:29:13.401140 2780 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:29:13.401840 kubelet[2780]: W0129 11:29:13.401161 2780 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:29:13.401840 kubelet[2780]: E0129 11:29:13.401179 2780 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:29:13.681759 containerd[1543]: time="2025-01-29T11:29:13.681587762Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 11:29:13.681759 containerd[1543]: time="2025-01-29T11:29:13.681666603Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 11:29:13.681953 containerd[1543]: time="2025-01-29T11:29:13.681764345Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:29:13.681953 containerd[1543]: time="2025-01-29T11:29:13.681843375Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:29:13.695742 systemd[1]: Started cri-containerd-eb63e7b9a802919a39ce95a96f23a149993cb34bd985f78cea1452ef2a283ab3.scope - libcontainer container eb63e7b9a802919a39ce95a96f23a149993cb34bd985f78cea1452ef2a283ab3. Jan 29 11:29:13.719269 containerd[1543]: time="2025-01-29T11:29:13.718718351Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-qkt58,Uid:63afb8e6-1306-4a81-91ac-154d437f3dc5,Namespace:calico-system,Attempt:0,} returns sandbox id \"eb63e7b9a802919a39ce95a96f23a149993cb34bd985f78cea1452ef2a283ab3\"" Jan 29 11:29:13.720585 containerd[1543]: time="2025-01-29T11:29:13.719955526Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\"" Jan 29 11:29:13.740492 containerd[1543]: time="2025-01-29T11:29:13.740430527Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 11:29:13.740632 containerd[1543]: time="2025-01-29T11:29:13.740482922Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 11:29:13.740632 containerd[1543]: time="2025-01-29T11:29:13.740491715Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:29:13.740632 containerd[1543]: time="2025-01-29T11:29:13.740554441Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:29:13.760794 systemd[1]: Started cri-containerd-d8ac7f54e5a1c8525de9f7857fbdd42540bcccc2a4abb53101983353f16e4eec.scope - libcontainer container d8ac7f54e5a1c8525de9f7857fbdd42540bcccc2a4abb53101983353f16e4eec. Jan 29 11:29:13.789981 containerd[1543]: time="2025-01-29T11:29:13.789955865Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5d79fdb887-mvlfp,Uid:f3f914c6-1b09-4846-a62a-2fdf28eff449,Namespace:calico-system,Attempt:0,} returns sandbox id \"d8ac7f54e5a1c8525de9f7857fbdd42540bcccc2a4abb53101983353f16e4eec\"" Jan 29 11:29:15.219513 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3494802268.mount: Deactivated successfully. Jan 29 11:29:15.336929 kubelet[2780]: E0129 11:29:15.336479 2780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-v9zp5" podUID="ad861abe-5500-44ec-ad63-2a1f0ef1f899" Jan 29 11:29:15.349710 containerd[1543]: time="2025-01-29T11:29:15.349603381Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:29:15.350259 containerd[1543]: time="2025-01-29T11:29:15.350206719Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1: active requests=0, bytes read=6855343" Jan 29 11:29:15.350581 containerd[1543]: time="2025-01-29T11:29:15.350497604Z" level=info msg="ImageCreate event name:\"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:29:15.351865 containerd[1543]: time="2025-01-29T11:29:15.351844342Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:29:15.352579 containerd[1543]: time="2025-01-29T11:29:15.352479369Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" with image id \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\", size \"6855165\" in 1.632504014s" Jan 29 11:29:15.352579 containerd[1543]: time="2025-01-29T11:29:15.352498707Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" returns image reference \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\"" Jan 29 11:29:15.353498 containerd[1543]: time="2025-01-29T11:29:15.353285876Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\"" Jan 29 11:29:15.354217 containerd[1543]: time="2025-01-29T11:29:15.354086965Z" level=info msg="CreateContainer within sandbox \"eb63e7b9a802919a39ce95a96f23a149993cb34bd985f78cea1452ef2a283ab3\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jan 29 11:29:15.370740 containerd[1543]: time="2025-01-29T11:29:15.370696799Z" level=info 
msg="CreateContainer within sandbox \"eb63e7b9a802919a39ce95a96f23a149993cb34bd985f78cea1452ef2a283ab3\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"26df9bca21955e2f6977d8037d6f0df013fbb5f1d3a138878cff3fa866971bc2\"" Jan 29 11:29:15.377836 containerd[1543]: time="2025-01-29T11:29:15.377170251Z" level=info msg="StartContainer for \"26df9bca21955e2f6977d8037d6f0df013fbb5f1d3a138878cff3fa866971bc2\"" Jan 29 11:29:15.402766 systemd[1]: Started cri-containerd-26df9bca21955e2f6977d8037d6f0df013fbb5f1d3a138878cff3fa866971bc2.scope - libcontainer container 26df9bca21955e2f6977d8037d6f0df013fbb5f1d3a138878cff3fa866971bc2. Jan 29 11:29:15.439489 containerd[1543]: time="2025-01-29T11:29:15.439443401Z" level=info msg="StartContainer for \"26df9bca21955e2f6977d8037d6f0df013fbb5f1d3a138878cff3fa866971bc2\" returns successfully" Jan 29 11:29:15.443010 systemd[1]: cri-containerd-26df9bca21955e2f6977d8037d6f0df013fbb5f1d3a138878cff3fa866971bc2.scope: Deactivated successfully. Jan 29 11:29:15.747165 containerd[1543]: time="2025-01-29T11:29:15.738067671Z" level=info msg="shim disconnected" id=26df9bca21955e2f6977d8037d6f0df013fbb5f1d3a138878cff3fa866971bc2 namespace=k8s.io Jan 29 11:29:15.747165 containerd[1543]: time="2025-01-29T11:29:15.747094196Z" level=warning msg="cleaning up after shim disconnected" id=26df9bca21955e2f6977d8037d6f0df013fbb5f1d3a138878cff3fa866971bc2 namespace=k8s.io Jan 29 11:29:15.747165 containerd[1543]: time="2025-01-29T11:29:15.747101837Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 11:29:16.192822 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-26df9bca21955e2f6977d8037d6f0df013fbb5f1d3a138878cff3fa866971bc2-rootfs.mount: Deactivated successfully. Jan 29 11:29:17.337444 kubelet[2780]: E0129 11:29:17.336572 2780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-v9zp5" podUID="ad861abe-5500-44ec-ad63-2a1f0ef1f899" Jan 29 11:29:17.656169 containerd[1543]: time="2025-01-29T11:29:17.655422955Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:29:17.656169 containerd[1543]: time="2025-01-29T11:29:17.655877007Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.1: active requests=0, bytes read=29850141" Jan 29 11:29:17.658362 containerd[1543]: time="2025-01-29T11:29:17.657845325Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.1\" with image id \"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\", size \"31343217\" in 2.304538465s" Jan 29 11:29:17.658362 containerd[1543]: time="2025-01-29T11:29:17.657887682Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\" returns image reference \"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\"" Jan 29 11:29:17.660682 containerd[1543]: time="2025-01-29T11:29:17.660240049Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\"" Jan 29 11:29:17.660836 containerd[1543]: time="2025-01-29T11:29:17.660818434Z" level=info msg="ImageCreate event 
name:\"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:29:17.670905 containerd[1543]: time="2025-01-29T11:29:17.670880699Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:29:17.690737 containerd[1543]: time="2025-01-29T11:29:17.690715066Z" level=info msg="CreateContainer within sandbox \"d8ac7f54e5a1c8525de9f7857fbdd42540bcccc2a4abb53101983353f16e4eec\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jan 29 11:29:17.697660 containerd[1543]: time="2025-01-29T11:29:17.697600002Z" level=info msg="CreateContainer within sandbox \"d8ac7f54e5a1c8525de9f7857fbdd42540bcccc2a4abb53101983353f16e4eec\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"3e75057cc6065f2fcdd12132642061333b62071223ca64d40f90ef8e73fb5309\"" Jan 29 11:29:17.698657 containerd[1543]: time="2025-01-29T11:29:17.698611353Z" level=info msg="StartContainer for \"3e75057cc6065f2fcdd12132642061333b62071223ca64d40f90ef8e73fb5309\"" Jan 29 11:29:17.754823 systemd[1]: Started cri-containerd-3e75057cc6065f2fcdd12132642061333b62071223ca64d40f90ef8e73fb5309.scope - libcontainer container 3e75057cc6065f2fcdd12132642061333b62071223ca64d40f90ef8e73fb5309. Jan 29 11:29:17.800753 containerd[1543]: time="2025-01-29T11:29:17.800574537Z" level=info msg="StartContainer for \"3e75057cc6065f2fcdd12132642061333b62071223ca64d40f90ef8e73fb5309\" returns successfully" Jan 29 11:29:19.341471 kubelet[2780]: E0129 11:29:19.341437 2780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-v9zp5" podUID="ad861abe-5500-44ec-ad63-2a1f0ef1f899" Jan 29 11:29:19.633390 kubelet[2780]: I0129 11:29:19.633315 2780 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 29 11:29:21.336673 kubelet[2780]: E0129 11:29:21.336278 2780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-v9zp5" podUID="ad861abe-5500-44ec-ad63-2a1f0ef1f899" Jan 29 11:29:22.784641 containerd[1543]: time="2025-01-29T11:29:22.784366537Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:29:22.789090 containerd[1543]: time="2025-01-29T11:29:22.785187085Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.1: active requests=0, bytes read=96154154" Jan 29 11:29:22.789090 containerd[1543]: time="2025-01-29T11:29:22.787086085Z" level=info msg="ImageCreate event name:\"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:29:22.789601 containerd[1543]: time="2025-01-29T11:29:22.789041791Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.1\" with image id \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.1\", repo digest 
\"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\", size \"97647238\" in 5.128771521s" Jan 29 11:29:22.789601 containerd[1543]: time="2025-01-29T11:29:22.789330956Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\" returns image reference \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\"" Jan 29 11:29:22.790217 containerd[1543]: time="2025-01-29T11:29:22.789660610Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:29:22.792434 containerd[1543]: time="2025-01-29T11:29:22.792396166Z" level=info msg="CreateContainer within sandbox \"eb63e7b9a802919a39ce95a96f23a149993cb34bd985f78cea1452ef2a283ab3\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jan 29 11:29:22.800294 containerd[1543]: time="2025-01-29T11:29:22.800258720Z" level=info msg="CreateContainer within sandbox \"eb63e7b9a802919a39ce95a96f23a149993cb34bd985f78cea1452ef2a283ab3\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"45720cf972f97819ed4e119ed75407597862f481d2f3c8882dffcb6bfe8cd2b5\"" Jan 29 11:29:22.801825 containerd[1543]: time="2025-01-29T11:29:22.800768739Z" level=info msg="StartContainer for \"45720cf972f97819ed4e119ed75407597862f481d2f3c8882dffcb6bfe8cd2b5\"" Jan 29 11:29:22.858810 systemd[1]: Started cri-containerd-45720cf972f97819ed4e119ed75407597862f481d2f3c8882dffcb6bfe8cd2b5.scope - libcontainer container 45720cf972f97819ed4e119ed75407597862f481d2f3c8882dffcb6bfe8cd2b5. Jan 29 11:29:22.889148 kubelet[2780]: I0129 11:29:22.889124 2780 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 29 11:29:22.897114 containerd[1543]: time="2025-01-29T11:29:22.897058667Z" level=info msg="StartContainer for \"45720cf972f97819ed4e119ed75407597862f481d2f3c8882dffcb6bfe8cd2b5\" returns successfully" Jan 29 11:29:22.914953 kubelet[2780]: I0129 11:29:22.914912 2780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-5d79fdb887-mvlfp" podStartSLOduration=7.045563655 podStartE2EDuration="10.914900265s" podCreationTimestamp="2025-01-29 11:29:12 +0000 UTC" firstStartedPulling="2025-01-29 11:29:13.790548736 +0000 UTC m=+10.576538022" lastFinishedPulling="2025-01-29 11:29:17.659885345 +0000 UTC m=+14.445874632" observedRunningTime="2025-01-29 11:29:18.659228907 +0000 UTC m=+15.445218210" watchObservedRunningTime="2025-01-29 11:29:22.914900265 +0000 UTC m=+19.700889556" Jan 29 11:29:23.337402 kubelet[2780]: E0129 11:29:23.337185 2780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-v9zp5" podUID="ad861abe-5500-44ec-ad63-2a1f0ef1f899" Jan 29 11:29:25.336771 kubelet[2780]: E0129 11:29:25.336514 2780 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-v9zp5" podUID="ad861abe-5500-44ec-ad63-2a1f0ef1f899" Jan 29 11:29:26.503533 systemd[1]: cri-containerd-45720cf972f97819ed4e119ed75407597862f481d2f3c8882dffcb6bfe8cd2b5.scope: Deactivated successfully. 
Jan 29 11:29:26.524865 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-45720cf972f97819ed4e119ed75407597862f481d2f3c8882dffcb6bfe8cd2b5-rootfs.mount: Deactivated successfully. Jan 29 11:29:26.624085 kubelet[2780]: I0129 11:29:26.624047 2780 kubelet_node_status.go:502] "Fast updating node status as it just became ready" Jan 29 11:29:26.841194 systemd[1]: Created slice kubepods-besteffort-pod5c59bfc6_a9c3_4ac8_8013_4f73cd38040c.slice - libcontainer container kubepods-besteffort-pod5c59bfc6_a9c3_4ac8_8013_4f73cd38040c.slice. Jan 29 11:29:26.854761 kubelet[2780]: I0129 11:29:26.854704 2780 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-867gz\" (UniqueName: \"kubernetes.io/projected/5c59bfc6-a9c3-4ac8-8013-4f73cd38040c-kube-api-access-867gz\") pod \"calico-apiserver-f8fc55c6c-8vp99\" (UID: \"5c59bfc6-a9c3-4ac8-8013-4f73cd38040c\") " pod="calico-apiserver/calico-apiserver-f8fc55c6c-8vp99" Jan 29 11:29:26.854761 kubelet[2780]: I0129 11:29:26.854731 2780 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/5c59bfc6-a9c3-4ac8-8013-4f73cd38040c-calico-apiserver-certs\") pod \"calico-apiserver-f8fc55c6c-8vp99\" (UID: \"5c59bfc6-a9c3-4ac8-8013-4f73cd38040c\") " pod="calico-apiserver/calico-apiserver-f8fc55c6c-8vp99" Jan 29 11:29:26.911769 kubelet[2780]: I0129 11:29:26.873115 2780 status_manager.go:890] "Failed to get status for pod" podUID="5c59bfc6-a9c3-4ac8-8013-4f73cd38040c" pod="calico-apiserver/calico-apiserver-f8fc55c6c-8vp99" err="pods \"calico-apiserver-f8fc55c6c-8vp99\" is forbidden: User \"system:node:localhost\" cannot get resource \"pods\" in API group \"\" in the namespace \"calico-apiserver\": no relationship found between node 'localhost' and this object" Jan 29 11:29:26.865244 systemd[1]: Created slice kubepods-besteffort-poda3a7fc11_29e2_44d1_b25b_a10745cff7e7.slice - libcontainer container kubepods-besteffort-poda3a7fc11_29e2_44d1_b25b_a10745cff7e7.slice. Jan 29 11:29:26.867677 systemd[1]: Created slice kubepods-burstable-podd903c164_6164_4123_875b_c2120fb387c7.slice - libcontainer container kubepods-burstable-podd903c164_6164_4123_875b_c2120fb387c7.slice. Jan 29 11:29:26.871520 systemd[1]: Created slice kubepods-besteffort-podd14ebb17_ee65_46b9_8ef9_c5441011f365.slice - libcontainer container kubepods-besteffort-podd14ebb17_ee65_46b9_8ef9_c5441011f365.slice. Jan 29 11:29:26.875263 systemd[1]: Created slice kubepods-burstable-podc5612029_5997_4071_9364_2b02408150e5.slice - libcontainer container kubepods-burstable-podc5612029_5997_4071_9364_2b02408150e5.slice. 
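The Created slice entries above show how the kubelet's systemd cgroup driver names the per-pod slice for the best-effort and burstable pods seen here: the pod UID's dashes are replaced with underscores and the result is prefixed with kubepods-<qos-class>-pod. A small illustrative sketch that reproduces the names from this log:

    def pod_slice_name(qos_class: str, pod_uid: str) -> str:
        # Covers only the two QoS classes that appear in this log.
        return f"kubepods-{qos_class}-pod{pod_uid.replace('-', '_')}.slice"

    assert pod_slice_name("besteffort", "5c59bfc6-a9c3-4ac8-8013-4f73cd38040c") == \
        "kubepods-besteffort-pod5c59bfc6_a9c3_4ac8_8013_4f73cd38040c.slice"
    assert pod_slice_name("burstable", "d903c164-6164-4123-875b-c2120fb387c7") == \
        "kubepods-burstable-podd903c164_6164_4123_875b_c2120fb387c7.slice"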
Jan 29 11:29:26.954910 kubelet[2780]: I0129 11:29:26.954812 2780 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fjc6w\" (UniqueName: \"kubernetes.io/projected/c5612029-5997-4071-9364-2b02408150e5-kube-api-access-fjc6w\") pod \"coredns-668d6bf9bc-z5wzg\" (UID: \"c5612029-5997-4071-9364-2b02408150e5\") " pod="kube-system/coredns-668d6bf9bc-z5wzg" Jan 29 11:29:26.954910 kubelet[2780]: I0129 11:29:26.954845 2780 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wl2sr\" (UniqueName: \"kubernetes.io/projected/a3a7fc11-29e2-44d1-b25b-a10745cff7e7-kube-api-access-wl2sr\") pod \"calico-kube-controllers-6bccbcd89f-p65ch\" (UID: \"a3a7fc11-29e2-44d1-b25b-a10745cff7e7\") " pod="calico-system/calico-kube-controllers-6bccbcd89f-p65ch" Jan 29 11:29:26.954910 kubelet[2780]: I0129 11:29:26.954857 2780 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-drzl4\" (UniqueName: \"kubernetes.io/projected/d14ebb17-ee65-46b9-8ef9-c5441011f365-kube-api-access-drzl4\") pod \"calico-apiserver-f8fc55c6c-cvj7f\" (UID: \"d14ebb17-ee65-46b9-8ef9-c5441011f365\") " pod="calico-apiserver/calico-apiserver-f8fc55c6c-cvj7f" Jan 29 11:29:26.954910 kubelet[2780]: I0129 11:29:26.954870 2780 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c5612029-5997-4071-9364-2b02408150e5-config-volume\") pod \"coredns-668d6bf9bc-z5wzg\" (UID: \"c5612029-5997-4071-9364-2b02408150e5\") " pod="kube-system/coredns-668d6bf9bc-z5wzg" Jan 29 11:29:26.954910 kubelet[2780]: I0129 11:29:26.954891 2780 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d903c164-6164-4123-875b-c2120fb387c7-config-volume\") pod \"coredns-668d6bf9bc-v7xft\" (UID: \"d903c164-6164-4123-875b-c2120fb387c7\") " pod="kube-system/coredns-668d6bf9bc-v7xft" Jan 29 11:29:26.955746 kubelet[2780]: I0129 11:29:26.954919 2780 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a3a7fc11-29e2-44d1-b25b-a10745cff7e7-tigera-ca-bundle\") pod \"calico-kube-controllers-6bccbcd89f-p65ch\" (UID: \"a3a7fc11-29e2-44d1-b25b-a10745cff7e7\") " pod="calico-system/calico-kube-controllers-6bccbcd89f-p65ch" Jan 29 11:29:26.955746 kubelet[2780]: I0129 11:29:26.954930 2780 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/d14ebb17-ee65-46b9-8ef9-c5441011f365-calico-apiserver-certs\") pod \"calico-apiserver-f8fc55c6c-cvj7f\" (UID: \"d14ebb17-ee65-46b9-8ef9-c5441011f365\") " pod="calico-apiserver/calico-apiserver-f8fc55c6c-cvj7f" Jan 29 11:29:26.955746 kubelet[2780]: I0129 11:29:26.954942 2780 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pclzm\" (UniqueName: \"kubernetes.io/projected/d903c164-6164-4123-875b-c2120fb387c7-kube-api-access-pclzm\") pod \"coredns-668d6bf9bc-v7xft\" (UID: \"d903c164-6164-4123-875b-c2120fb387c7\") " pod="kube-system/coredns-668d6bf9bc-v7xft" Jan 29 11:29:27.144465 containerd[1543]: time="2025-01-29T11:29:27.144333178Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-f8fc55c6c-8vp99,Uid:5c59bfc6-a9c3-4ac8-8013-4f73cd38040c,Namespace:calico-apiserver,Attempt:0,}" Jan 29 11:29:27.169542 containerd[1543]: time="2025-01-29T11:29:27.169361152Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6bccbcd89f-p65ch,Uid:a3a7fc11-29e2-44d1-b25b-a10745cff7e7,Namespace:calico-system,Attempt:0,}" Jan 29 11:29:27.169542 containerd[1543]: time="2025-01-29T11:29:27.169440466Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-v7xft,Uid:d903c164-6164-4123-875b-c2120fb387c7,Namespace:kube-system,Attempt:0,}" Jan 29 11:29:27.174186 containerd[1543]: time="2025-01-29T11:29:27.174148453Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-f8fc55c6c-cvj7f,Uid:d14ebb17-ee65-46b9-8ef9-c5441011f365,Namespace:calico-apiserver,Attempt:0,}" Jan 29 11:29:27.179080 containerd[1543]: time="2025-01-29T11:29:27.176796492Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-z5wzg,Uid:c5612029-5997-4071-9364-2b02408150e5,Namespace:kube-system,Attempt:0,}" Jan 29 11:29:27.342221 systemd[1]: Created slice kubepods-besteffort-podad861abe_5500_44ec_ad63_2a1f0ef1f899.slice - libcontainer container kubepods-besteffort-podad861abe_5500_44ec_ad63_2a1f0ef1f899.slice. Jan 29 11:29:27.373382 containerd[1543]: time="2025-01-29T11:29:27.343995144Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-v9zp5,Uid:ad861abe-5500-44ec-ad63-2a1f0ef1f899,Namespace:calico-system,Attempt:0,}" Jan 29 11:29:27.493535 containerd[1543]: time="2025-01-29T11:29:27.493446707Z" level=info msg="shim disconnected" id=45720cf972f97819ed4e119ed75407597862f481d2f3c8882dffcb6bfe8cd2b5 namespace=k8s.io Jan 29 11:29:27.493535 containerd[1543]: time="2025-01-29T11:29:27.493483446Z" level=warning msg="cleaning up after shim disconnected" id=45720cf972f97819ed4e119ed75407597862f481d2f3c8882dffcb6bfe8cd2b5 namespace=k8s.io Jan 29 11:29:27.493535 containerd[1543]: time="2025-01-29T11:29:27.493489246Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 11:29:27.727734 containerd[1543]: time="2025-01-29T11:29:27.727690632Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\"" Jan 29 11:29:29.037788 containerd[1543]: time="2025-01-29T11:29:29.037673840Z" level=error msg="Failed to destroy network for sandbox \"7879e8e66bca99f05db383134785ae8a457c0cf15c7c4919edf27c4e58b012cd\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:29:29.042629 containerd[1543]: time="2025-01-29T11:29:29.040529801Z" level=error msg="Failed to destroy network for sandbox \"eb5b8233f88ec26fc4ce589382094db86b20d454323bc6d4defe25ea1e363500\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:29:29.041068 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-7879e8e66bca99f05db383134785ae8a457c0cf15c7c4919edf27c4e58b012cd-shm.mount: Deactivated successfully. Jan 29 11:29:29.046298 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-eb5b8233f88ec26fc4ce589382094db86b20d454323bc6d4defe25ea1e363500-shm.mount: Deactivated successfully. 
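The Failed to destroy network and failed to setup network errors beginning at 11:29:29 all carry the same root cause spelled out in the message: the Calico CNI plugin stats /var/lib/calico/nodename, a file the calico/node container writes once it is running, and at this point the node image is still being pulled (PullImage ghcr.io/flatcar/calico/node:v3.29.1 at 11:29:27 above), so the file does not exist and every sandbox operation for the pending pods fails. A minimal sketch of that readiness gate (illustrative only, not the plugin's actual code):

    import os

    NODENAME_FILE = "/var/lib/calico/nodename"  # written by calico/node after it starts

    def calico_node_ready() -> bool:
        # Mirrors the check implied by the log message: stat /var/lib/calico/nodename.
        try:
            os.stat(NODENAME_FILE)
            return True
        except FileNotFoundError:
            return False

    if not calico_node_ready():
        raise RuntimeError(
            "stat /var/lib/calico/nodename: no such file or directory: "
            "check that the calico/node container is running and has mounted /var/lib/calico/"
        )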
Jan 29 11:29:29.050144 containerd[1543]: time="2025-01-29T11:29:29.046037366Z" level=error msg="encountered an error cleaning up failed sandbox \"eb5b8233f88ec26fc4ce589382094db86b20d454323bc6d4defe25ea1e363500\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:29:29.050144 containerd[1543]: time="2025-01-29T11:29:29.046103108Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-z5wzg,Uid:c5612029-5997-4071-9364-2b02408150e5,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"eb5b8233f88ec26fc4ce589382094db86b20d454323bc6d4defe25ea1e363500\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:29:29.050144 containerd[1543]: time="2025-01-29T11:29:29.047702656Z" level=error msg="encountered an error cleaning up failed sandbox \"7879e8e66bca99f05db383134785ae8a457c0cf15c7c4919edf27c4e58b012cd\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:29:29.050144 containerd[1543]: time="2025-01-29T11:29:29.047755643Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-f8fc55c6c-8vp99,Uid:5c59bfc6-a9c3-4ac8-8013-4f73cd38040c,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"7879e8e66bca99f05db383134785ae8a457c0cf15c7c4919edf27c4e58b012cd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:29:29.055363 containerd[1543]: time="2025-01-29T11:29:29.055332807Z" level=error msg="Failed to destroy network for sandbox \"b4f0e8faf50ed15bee0abed9f91fb6b2e15a4c81e795cac4a007218f8bde60f3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:29:29.055843 containerd[1543]: time="2025-01-29T11:29:29.055790520Z" level=error msg="encountered an error cleaning up failed sandbox \"b4f0e8faf50ed15bee0abed9f91fb6b2e15a4c81e795cac4a007218f8bde60f3\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:29:29.055843 containerd[1543]: time="2025-01-29T11:29:29.055835504Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-f8fc55c6c-cvj7f,Uid:d14ebb17-ee65-46b9-8ef9-c5441011f365,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"b4f0e8faf50ed15bee0abed9f91fb6b2e15a4c81e795cac4a007218f8bde60f3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:29:29.058155 kubelet[2780]: E0129 11:29:29.057257 2780 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup 
network for sandbox \"b4f0e8faf50ed15bee0abed9f91fb6b2e15a4c81e795cac4a007218f8bde60f3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:29:29.058155 kubelet[2780]: E0129 11:29:29.057319 2780 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b4f0e8faf50ed15bee0abed9f91fb6b2e15a4c81e795cac4a007218f8bde60f3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-f8fc55c6c-cvj7f" Jan 29 11:29:29.058155 kubelet[2780]: E0129 11:29:29.057340 2780 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b4f0e8faf50ed15bee0abed9f91fb6b2e15a4c81e795cac4a007218f8bde60f3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-f8fc55c6c-cvj7f" Jan 29 11:29:29.058495 kubelet[2780]: E0129 11:29:29.057378 2780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-f8fc55c6c-cvj7f_calico-apiserver(d14ebb17-ee65-46b9-8ef9-c5441011f365)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-f8fc55c6c-cvj7f_calico-apiserver(d14ebb17-ee65-46b9-8ef9-c5441011f365)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b4f0e8faf50ed15bee0abed9f91fb6b2e15a4c81e795cac4a007218f8bde60f3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-f8fc55c6c-cvj7f" podUID="d14ebb17-ee65-46b9-8ef9-c5441011f365" Jan 29 11:29:29.058495 kubelet[2780]: E0129 11:29:29.057582 2780 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"eb5b8233f88ec26fc4ce589382094db86b20d454323bc6d4defe25ea1e363500\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:29:29.058495 kubelet[2780]: E0129 11:29:29.057603 2780 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"eb5b8233f88ec26fc4ce589382094db86b20d454323bc6d4defe25ea1e363500\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-z5wzg" Jan 29 11:29:29.059366 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-b4f0e8faf50ed15bee0abed9f91fb6b2e15a4c81e795cac4a007218f8bde60f3-shm.mount: Deactivated successfully. 
Jan 29 11:29:29.061018 kubelet[2780]: E0129 11:29:29.059423 2780 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7879e8e66bca99f05db383134785ae8a457c0cf15c7c4919edf27c4e58b012cd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:29:29.061018 kubelet[2780]: E0129 11:29:29.059478 2780 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7879e8e66bca99f05db383134785ae8a457c0cf15c7c4919edf27c4e58b012cd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-f8fc55c6c-8vp99" Jan 29 11:29:29.061018 kubelet[2780]: E0129 11:29:29.059498 2780 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7879e8e66bca99f05db383134785ae8a457c0cf15c7c4919edf27c4e58b012cd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-f8fc55c6c-8vp99" Jan 29 11:29:29.061525 kubelet[2780]: E0129 11:29:29.059535 2780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-f8fc55c6c-8vp99_calico-apiserver(5c59bfc6-a9c3-4ac8-8013-4f73cd38040c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-f8fc55c6c-8vp99_calico-apiserver(5c59bfc6-a9c3-4ac8-8013-4f73cd38040c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7879e8e66bca99f05db383134785ae8a457c0cf15c7c4919edf27c4e58b012cd\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-f8fc55c6c-8vp99" podUID="5c59bfc6-a9c3-4ac8-8013-4f73cd38040c" Jan 29 11:29:29.061525 kubelet[2780]: E0129 11:29:29.060665 2780 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"eb5b8233f88ec26fc4ce589382094db86b20d454323bc6d4defe25ea1e363500\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-z5wzg" Jan 29 11:29:29.061525 kubelet[2780]: E0129 11:29:29.060738 2780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-z5wzg_kube-system(c5612029-5997-4071-9364-2b02408150e5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-z5wzg_kube-system(c5612029-5997-4071-9364-2b02408150e5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"eb5b8233f88ec26fc4ce589382094db86b20d454323bc6d4defe25ea1e363500\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-z5wzg" 
podUID="c5612029-5997-4071-9364-2b02408150e5" Jan 29 11:29:29.063031 containerd[1543]: time="2025-01-29T11:29:29.062988644Z" level=error msg="Failed to destroy network for sandbox \"1335c4c1be4c603e540ffc9341ca82a3b73d4dcb2990fb3f252db9c214b08658\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:29:29.066094 containerd[1543]: time="2025-01-29T11:29:29.065643681Z" level=error msg="encountered an error cleaning up failed sandbox \"1335c4c1be4c603e540ffc9341ca82a3b73d4dcb2990fb3f252db9c214b08658\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:29:29.066094 containerd[1543]: time="2025-01-29T11:29:29.065714311Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-v7xft,Uid:d903c164-6164-4123-875b-c2120fb387c7,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"1335c4c1be4c603e540ffc9341ca82a3b73d4dcb2990fb3f252db9c214b08658\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:29:29.066229 kubelet[2780]: E0129 11:29:29.065875 2780 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1335c4c1be4c603e540ffc9341ca82a3b73d4dcb2990fb3f252db9c214b08658\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:29:29.066229 kubelet[2780]: E0129 11:29:29.065919 2780 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1335c4c1be4c603e540ffc9341ca82a3b73d4dcb2990fb3f252db9c214b08658\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-v7xft" Jan 29 11:29:29.066229 kubelet[2780]: E0129 11:29:29.065938 2780 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1335c4c1be4c603e540ffc9341ca82a3b73d4dcb2990fb3f252db9c214b08658\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-v7xft" Jan 29 11:29:29.067289 kubelet[2780]: E0129 11:29:29.065976 2780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-v7xft_kube-system(d903c164-6164-4123-875b-c2120fb387c7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-v7xft_kube-system(d903c164-6164-4123-875b-c2120fb387c7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1335c4c1be4c603e540ffc9341ca82a3b73d4dcb2990fb3f252db9c214b08658\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-v7xft" podUID="d903c164-6164-4123-875b-c2120fb387c7" Jan 29 11:29:29.068135 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-1335c4c1be4c603e540ffc9341ca82a3b73d4dcb2990fb3f252db9c214b08658-shm.mount: Deactivated successfully. Jan 29 11:29:29.076079 containerd[1543]: time="2025-01-29T11:29:29.076033992Z" level=error msg="Failed to destroy network for sandbox \"0e21b3d8f8afd4acb8853eaf7055630093d9d5001d30faf6e9f91a7d151d6862\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:29:29.076550 containerd[1543]: time="2025-01-29T11:29:29.076522380Z" level=error msg="encountered an error cleaning up failed sandbox \"0e21b3d8f8afd4acb8853eaf7055630093d9d5001d30faf6e9f91a7d151d6862\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:29:29.076636 containerd[1543]: time="2025-01-29T11:29:29.076596383Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6bccbcd89f-p65ch,Uid:a3a7fc11-29e2-44d1-b25b-a10745cff7e7,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"0e21b3d8f8afd4acb8853eaf7055630093d9d5001d30faf6e9f91a7d151d6862\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:29:29.077211 kubelet[2780]: E0129 11:29:29.076830 2780 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0e21b3d8f8afd4acb8853eaf7055630093d9d5001d30faf6e9f91a7d151d6862\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:29:29.077211 kubelet[2780]: E0129 11:29:29.076889 2780 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0e21b3d8f8afd4acb8853eaf7055630093d9d5001d30faf6e9f91a7d151d6862\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6bccbcd89f-p65ch" Jan 29 11:29:29.077211 kubelet[2780]: E0129 11:29:29.076911 2780 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0e21b3d8f8afd4acb8853eaf7055630093d9d5001d30faf6e9f91a7d151d6862\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6bccbcd89f-p65ch" Jan 29 11:29:29.077342 kubelet[2780]: E0129 11:29:29.076953 2780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-6bccbcd89f-p65ch_calico-system(a3a7fc11-29e2-44d1-b25b-a10745cff7e7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"calico-kube-controllers-6bccbcd89f-p65ch_calico-system(a3a7fc11-29e2-44d1-b25b-a10745cff7e7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0e21b3d8f8afd4acb8853eaf7055630093d9d5001d30faf6e9f91a7d151d6862\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6bccbcd89f-p65ch" podUID="a3a7fc11-29e2-44d1-b25b-a10745cff7e7" Jan 29 11:29:29.082365 containerd[1543]: time="2025-01-29T11:29:29.082138053Z" level=error msg="Failed to destroy network for sandbox \"278001aa52062ee6fbf5d00b498751ede55fa1853804b1a4b028e9dda1639d35\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:29:29.082951 containerd[1543]: time="2025-01-29T11:29:29.082806792Z" level=error msg="encountered an error cleaning up failed sandbox \"278001aa52062ee6fbf5d00b498751ede55fa1853804b1a4b028e9dda1639d35\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:29:29.082951 containerd[1543]: time="2025-01-29T11:29:29.082886677Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-v9zp5,Uid:ad861abe-5500-44ec-ad63-2a1f0ef1f899,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"278001aa52062ee6fbf5d00b498751ede55fa1853804b1a4b028e9dda1639d35\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:29:29.083213 kubelet[2780]: E0129 11:29:29.083177 2780 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"278001aa52062ee6fbf5d00b498751ede55fa1853804b1a4b028e9dda1639d35\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:29:29.083430 kubelet[2780]: E0129 11:29:29.083307 2780 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"278001aa52062ee6fbf5d00b498751ede55fa1853804b1a4b028e9dda1639d35\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-v9zp5" Jan 29 11:29:29.083430 kubelet[2780]: E0129 11:29:29.083336 2780 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"278001aa52062ee6fbf5d00b498751ede55fa1853804b1a4b028e9dda1639d35\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-v9zp5" Jan 29 11:29:29.083430 kubelet[2780]: E0129 11:29:29.083398 2780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for 
\"csi-node-driver-v9zp5_calico-system(ad861abe-5500-44ec-ad63-2a1f0ef1f899)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-v9zp5_calico-system(ad861abe-5500-44ec-ad63-2a1f0ef1f899)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"278001aa52062ee6fbf5d00b498751ede55fa1853804b1a4b028e9dda1639d35\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-v9zp5" podUID="ad861abe-5500-44ec-ad63-2a1f0ef1f899" Jan 29 11:29:29.525421 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-278001aa52062ee6fbf5d00b498751ede55fa1853804b1a4b028e9dda1639d35-shm.mount: Deactivated successfully. Jan 29 11:29:29.525744 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-0e21b3d8f8afd4acb8853eaf7055630093d9d5001d30faf6e9f91a7d151d6862-shm.mount: Deactivated successfully. Jan 29 11:29:29.727007 kubelet[2780]: I0129 11:29:29.726750 2780 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="eb5b8233f88ec26fc4ce589382094db86b20d454323bc6d4defe25ea1e363500" Jan 29 11:29:29.728959 kubelet[2780]: I0129 11:29:29.728939 2780 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b4f0e8faf50ed15bee0abed9f91fb6b2e15a4c81e795cac4a007218f8bde60f3" Jan 29 11:29:29.875549 kubelet[2780]: I0129 11:29:29.875525 2780 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1335c4c1be4c603e540ffc9341ca82a3b73d4dcb2990fb3f252db9c214b08658" Jan 29 11:29:29.877027 kubelet[2780]: I0129 11:29:29.877002 2780 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="278001aa52062ee6fbf5d00b498751ede55fa1853804b1a4b028e9dda1639d35" Jan 29 11:29:29.877912 kubelet[2780]: I0129 11:29:29.877897 2780 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0e21b3d8f8afd4acb8853eaf7055630093d9d5001d30faf6e9f91a7d151d6862" Jan 29 11:29:29.878440 kubelet[2780]: I0129 11:29:29.878426 2780 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7879e8e66bca99f05db383134785ae8a457c0cf15c7c4919edf27c4e58b012cd" Jan 29 11:29:30.291446 containerd[1543]: time="2025-01-29T11:29:30.291203698Z" level=info msg="StopPodSandbox for \"1335c4c1be4c603e540ffc9341ca82a3b73d4dcb2990fb3f252db9c214b08658\"" Jan 29 11:29:30.291446 containerd[1543]: time="2025-01-29T11:29:30.291421204Z" level=info msg="StopPodSandbox for \"eb5b8233f88ec26fc4ce589382094db86b20d454323bc6d4defe25ea1e363500\"" Jan 29 11:29:30.298566 containerd[1543]: time="2025-01-29T11:29:30.292031255Z" level=info msg="StopPodSandbox for \"7879e8e66bca99f05db383134785ae8a457c0cf15c7c4919edf27c4e58b012cd\"" Jan 29 11:29:30.298566 containerd[1543]: time="2025-01-29T11:29:30.292108058Z" level=info msg="Ensure that sandbox 1335c4c1be4c603e540ffc9341ca82a3b73d4dcb2990fb3f252db9c214b08658 in task-service has been cleanup successfully" Jan 29 11:29:30.298566 containerd[1543]: time="2025-01-29T11:29:30.292132413Z" level=info msg="StopPodSandbox for \"b4f0e8faf50ed15bee0abed9f91fb6b2e15a4c81e795cac4a007218f8bde60f3\"" Jan 29 11:29:30.298566 containerd[1543]: time="2025-01-29T11:29:30.292222864Z" level=info msg="Ensure that sandbox b4f0e8faf50ed15bee0abed9f91fb6b2e15a4c81e795cac4a007218f8bde60f3 in task-service has been cleanup successfully" Jan 29 11:29:30.298566 containerd[1543]: 
time="2025-01-29T11:29:30.292237962Z" level=info msg="TearDown network for sandbox \"1335c4c1be4c603e540ffc9341ca82a3b73d4dcb2990fb3f252db9c214b08658\" successfully" Jan 29 11:29:30.298566 containerd[1543]: time="2025-01-29T11:29:30.292247501Z" level=info msg="StopPodSandbox for \"1335c4c1be4c603e540ffc9341ca82a3b73d4dcb2990fb3f252db9c214b08658\" returns successfully" Jan 29 11:29:30.298566 containerd[1543]: time="2025-01-29T11:29:30.291203698Z" level=info msg="StopPodSandbox for \"278001aa52062ee6fbf5d00b498751ede55fa1853804b1a4b028e9dda1639d35\"" Jan 29 11:29:30.298566 containerd[1543]: time="2025-01-29T11:29:30.292363017Z" level=info msg="Ensure that sandbox 278001aa52062ee6fbf5d00b498751ede55fa1853804b1a4b028e9dda1639d35 in task-service has been cleanup successfully" Jan 29 11:29:30.298566 containerd[1543]: time="2025-01-29T11:29:30.293732214Z" level=info msg="Ensure that sandbox eb5b8233f88ec26fc4ce589382094db86b20d454323bc6d4defe25ea1e363500 in task-service has been cleanup successfully" Jan 29 11:29:30.298566 containerd[1543]: time="2025-01-29T11:29:30.292114068Z" level=info msg="Ensure that sandbox 7879e8e66bca99f05db383134785ae8a457c0cf15c7c4919edf27c4e58b012cd in task-service has been cleanup successfully" Jan 29 11:29:30.298566 containerd[1543]: time="2025-01-29T11:29:30.295062872Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-v7xft,Uid:d903c164-6164-4123-875b-c2120fb387c7,Namespace:kube-system,Attempt:1,}" Jan 29 11:29:30.298566 containerd[1543]: time="2025-01-29T11:29:30.291216246Z" level=info msg="StopPodSandbox for \"0e21b3d8f8afd4acb8853eaf7055630093d9d5001d30faf6e9f91a7d151d6862\"" Jan 29 11:29:30.298566 containerd[1543]: time="2025-01-29T11:29:30.295420040Z" level=info msg="Ensure that sandbox 0e21b3d8f8afd4acb8853eaf7055630093d9d5001d30faf6e9f91a7d151d6862 in task-service has been cleanup successfully" Jan 29 11:29:30.298566 containerd[1543]: time="2025-01-29T11:29:30.295516957Z" level=info msg="TearDown network for sandbox \"b4f0e8faf50ed15bee0abed9f91fb6b2e15a4c81e795cac4a007218f8bde60f3\" successfully" Jan 29 11:29:30.298566 containerd[1543]: time="2025-01-29T11:29:30.295526732Z" level=info msg="StopPodSandbox for \"b4f0e8faf50ed15bee0abed9f91fb6b2e15a4c81e795cac4a007218f8bde60f3\" returns successfully" Jan 29 11:29:30.298566 containerd[1543]: time="2025-01-29T11:29:30.295544154Z" level=info msg="TearDown network for sandbox \"278001aa52062ee6fbf5d00b498751ede55fa1853804b1a4b028e9dda1639d35\" successfully" Jan 29 11:29:30.298566 containerd[1543]: time="2025-01-29T11:29:30.295556067Z" level=info msg="StopPodSandbox for \"278001aa52062ee6fbf5d00b498751ede55fa1853804b1a4b028e9dda1639d35\" returns successfully" Jan 29 11:29:30.298566 containerd[1543]: time="2025-01-29T11:29:30.295742308Z" level=info msg="TearDown network for sandbox \"7879e8e66bca99f05db383134785ae8a457c0cf15c7c4919edf27c4e58b012cd\" successfully" Jan 29 11:29:30.298566 containerd[1543]: time="2025-01-29T11:29:30.295751325Z" level=info msg="StopPodSandbox for \"7879e8e66bca99f05db383134785ae8a457c0cf15c7c4919edf27c4e58b012cd\" returns successfully" Jan 29 11:29:30.298566 containerd[1543]: time="2025-01-29T11:29:30.295764342Z" level=info msg="TearDown network for sandbox \"0e21b3d8f8afd4acb8853eaf7055630093d9d5001d30faf6e9f91a7d151d6862\" successfully" Jan 29 11:29:30.298566 containerd[1543]: time="2025-01-29T11:29:30.295773678Z" level=info msg="StopPodSandbox for \"0e21b3d8f8afd4acb8853eaf7055630093d9d5001d30faf6e9f91a7d151d6862\" returns successfully" Jan 29 
11:29:30.298566 containerd[1543]: time="2025-01-29T11:29:30.295796604Z" level=info msg="TearDown network for sandbox \"eb5b8233f88ec26fc4ce589382094db86b20d454323bc6d4defe25ea1e363500\" successfully" Jan 29 11:29:30.298566 containerd[1543]: time="2025-01-29T11:29:30.295802041Z" level=info msg="StopPodSandbox for \"eb5b8233f88ec26fc4ce589382094db86b20d454323bc6d4defe25ea1e363500\" returns successfully" Jan 29 11:29:30.298566 containerd[1543]: time="2025-01-29T11:29:30.295862183Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-f8fc55c6c-cvj7f,Uid:d14ebb17-ee65-46b9-8ef9-c5441011f365,Namespace:calico-apiserver,Attempt:1,}" Jan 29 11:29:30.298566 containerd[1543]: time="2025-01-29T11:29:30.296053344Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-v9zp5,Uid:ad861abe-5500-44ec-ad63-2a1f0ef1f899,Namespace:calico-system,Attempt:1,}" Jan 29 11:29:30.298566 containerd[1543]: time="2025-01-29T11:29:30.296157310Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6bccbcd89f-p65ch,Uid:a3a7fc11-29e2-44d1-b25b-a10745cff7e7,Namespace:calico-system,Attempt:1,}" Jan 29 11:29:30.298566 containerd[1543]: time="2025-01-29T11:29:30.296350739Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-z5wzg,Uid:c5612029-5997-4071-9364-2b02408150e5,Namespace:kube-system,Attempt:1,}" Jan 29 11:29:30.298566 containerd[1543]: time="2025-01-29T11:29:30.296447840Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-f8fc55c6c-8vp99,Uid:5c59bfc6-a9c3-4ac8-8013-4f73cd38040c,Namespace:calico-apiserver,Attempt:1,}" Jan 29 11:29:30.293464 systemd[1]: run-netns-cni\x2dd51cf738\x2d6b8d\x2d2367\x2d8864\x2d3861a8108544.mount: Deactivated successfully. Jan 29 11:29:30.293515 systemd[1]: run-netns-cni\x2d69c6a1ce\x2d4ff3\x2dacc6\x2d1711\x2d7c94cf18ee14.mount: Deactivated successfully. Jan 29 11:29:30.293548 systemd[1]: run-netns-cni\x2d2bda5462\x2da309\x2d9c66\x2d07e0\x2dcb5151770124.mount: Deactivated successfully. Jan 29 11:29:30.297065 systemd[1]: run-netns-cni\x2dfefd29e8\x2d9243\x2d397c\x2d3d71\x2dfd10a9cd3e22.mount: Deactivated successfully. Jan 29 11:29:30.298969 systemd[1]: run-netns-cni\x2df3cfca65\x2d15a2\x2d7601\x2d1b0f\x2d77d52380cb04.mount: Deactivated successfully. Jan 29 11:29:30.299017 systemd[1]: run-netns-cni\x2d3d24803d\x2dfe84\x2d4ce7\x2de75b\x2d51e9f34d4f7d.mount: Deactivated successfully. 
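[Editor's note] The block above shows the recovery cycle rather than new failures: each broken sandbox is stopped and torn down (StopPodSandbox / TearDown), systemd releases the per-sandbox shm and CNI netns mounts, and RunPodSandbox is reissued with the Attempt counter incremented (0 to 1 here). The stand-alone Go sketch below is a hypothetical illustration of that wait-and-retry pattern, not kubelet code; the path and the linear backoff are assumptions for the example.

```go
package main

import (
	"fmt"
	"os"
	"time"
)

func main() {
	const path = "/var/lib/calico/nodename"
	// Poll for the file the CNI plugin needs, retrying a bounded number of
	// times, mirroring the Attempt 0 -> 1 -> 2 progression in the log.
	for attempt := 0; attempt < 10; attempt++ {
		if _, err := os.Stat(path); err == nil {
			fmt.Printf("attempt %d: %s exists, CNI setup can now succeed\n", attempt, path)
			return
		}
		wait := time.Duration(attempt+1) * time.Second // crude linear backoff, for illustration only
		fmt.Printf("attempt %d: %s still missing, retrying in %s\n", attempt, path, wait)
		time.Sleep(wait)
	}
	fmt.Println("gave up; check the calico-node DaemonSet on this node")
}
```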
Jan 29 11:29:31.898457 containerd[1543]: time="2025-01-29T11:29:31.898410332Z" level=error msg="Failed to destroy network for sandbox \"98687d2218f6a2b009c5a2145dcb1ce73b4872a8ab06bcbeeacdd1d600a5054d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:29:31.898885 containerd[1543]: time="2025-01-29T11:29:31.898730979Z" level=error msg="encountered an error cleaning up failed sandbox \"98687d2218f6a2b009c5a2145dcb1ce73b4872a8ab06bcbeeacdd1d600a5054d\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:29:31.898885 containerd[1543]: time="2025-01-29T11:29:31.898780478Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-v7xft,Uid:d903c164-6164-4123-875b-c2120fb387c7,Namespace:kube-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"98687d2218f6a2b009c5a2145dcb1ce73b4872a8ab06bcbeeacdd1d600a5054d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:29:31.899637 kubelet[2780]: E0129 11:29:31.898998 2780 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"98687d2218f6a2b009c5a2145dcb1ce73b4872a8ab06bcbeeacdd1d600a5054d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:29:31.899637 kubelet[2780]: E0129 11:29:31.899035 2780 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"98687d2218f6a2b009c5a2145dcb1ce73b4872a8ab06bcbeeacdd1d600a5054d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-v7xft" Jan 29 11:29:31.899637 kubelet[2780]: E0129 11:29:31.899050 2780 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"98687d2218f6a2b009c5a2145dcb1ce73b4872a8ab06bcbeeacdd1d600a5054d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-v7xft" Jan 29 11:29:31.899869 kubelet[2780]: E0129 11:29:31.899080 2780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-v7xft_kube-system(d903c164-6164-4123-875b-c2120fb387c7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-v7xft_kube-system(d903c164-6164-4123-875b-c2120fb387c7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"98687d2218f6a2b009c5a2145dcb1ce73b4872a8ab06bcbeeacdd1d600a5054d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-v7xft" 
podUID="d903c164-6164-4123-875b-c2120fb387c7" Jan 29 11:29:31.922638 containerd[1543]: time="2025-01-29T11:29:31.922213783Z" level=error msg="Failed to destroy network for sandbox \"fd1eadf5206c8cd53c4a1ef66c21214899e6365f6d5ef93833547b5bef569865\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:29:31.924761 containerd[1543]: time="2025-01-29T11:29:31.924735877Z" level=error msg="encountered an error cleaning up failed sandbox \"fd1eadf5206c8cd53c4a1ef66c21214899e6365f6d5ef93833547b5bef569865\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:29:31.924833 containerd[1543]: time="2025-01-29T11:29:31.924782626Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-f8fc55c6c-cvj7f,Uid:d14ebb17-ee65-46b9-8ef9-c5441011f365,Namespace:calico-apiserver,Attempt:1,} failed, error" error="failed to setup network for sandbox \"fd1eadf5206c8cd53c4a1ef66c21214899e6365f6d5ef93833547b5bef569865\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:29:31.925089 kubelet[2780]: E0129 11:29:31.924913 2780 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fd1eadf5206c8cd53c4a1ef66c21214899e6365f6d5ef93833547b5bef569865\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:29:31.925089 kubelet[2780]: E0129 11:29:31.924953 2780 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fd1eadf5206c8cd53c4a1ef66c21214899e6365f6d5ef93833547b5bef569865\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-f8fc55c6c-cvj7f" Jan 29 11:29:31.925089 kubelet[2780]: E0129 11:29:31.924969 2780 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fd1eadf5206c8cd53c4a1ef66c21214899e6365f6d5ef93833547b5bef569865\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-f8fc55c6c-cvj7f" Jan 29 11:29:31.926054 kubelet[2780]: E0129 11:29:31.924998 2780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-f8fc55c6c-cvj7f_calico-apiserver(d14ebb17-ee65-46b9-8ef9-c5441011f365)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-f8fc55c6c-cvj7f_calico-apiserver(d14ebb17-ee65-46b9-8ef9-c5441011f365)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"fd1eadf5206c8cd53c4a1ef66c21214899e6365f6d5ef93833547b5bef569865\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-f8fc55c6c-cvj7f" podUID="d14ebb17-ee65-46b9-8ef9-c5441011f365" Jan 29 11:29:31.928735 kubelet[2780]: I0129 11:29:31.928309 2780 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fd1eadf5206c8cd53c4a1ef66c21214899e6365f6d5ef93833547b5bef569865" Jan 29 11:29:31.929224 containerd[1543]: time="2025-01-29T11:29:31.929204615Z" level=info msg="StopPodSandbox for \"fd1eadf5206c8cd53c4a1ef66c21214899e6365f6d5ef93833547b5bef569865\"" Jan 29 11:29:31.929496 containerd[1543]: time="2025-01-29T11:29:31.929375864Z" level=info msg="Ensure that sandbox fd1eadf5206c8cd53c4a1ef66c21214899e6365f6d5ef93833547b5bef569865 in task-service has been cleanup successfully" Jan 29 11:29:31.929542 containerd[1543]: time="2025-01-29T11:29:31.929526714Z" level=info msg="TearDown network for sandbox \"fd1eadf5206c8cd53c4a1ef66c21214899e6365f6d5ef93833547b5bef569865\" successfully" Jan 29 11:29:31.929963 containerd[1543]: time="2025-01-29T11:29:31.929919402Z" level=info msg="StopPodSandbox for \"fd1eadf5206c8cd53c4a1ef66c21214899e6365f6d5ef93833547b5bef569865\" returns successfully" Jan 29 11:29:31.930302 containerd[1543]: time="2025-01-29T11:29:31.930283527Z" level=info msg="StopPodSandbox for \"b4f0e8faf50ed15bee0abed9f91fb6b2e15a4c81e795cac4a007218f8bde60f3\"" Jan 29 11:29:31.930393 containerd[1543]: time="2025-01-29T11:29:31.930383394Z" level=info msg="TearDown network for sandbox \"b4f0e8faf50ed15bee0abed9f91fb6b2e15a4c81e795cac4a007218f8bde60f3\" successfully" Jan 29 11:29:31.930508 containerd[1543]: time="2025-01-29T11:29:31.930422626Z" level=info msg="StopPodSandbox for \"b4f0e8faf50ed15bee0abed9f91fb6b2e15a4c81e795cac4a007218f8bde60f3\" returns successfully" Jan 29 11:29:31.930681 kubelet[2780]: I0129 11:29:31.930540 2780 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="98687d2218f6a2b009c5a2145dcb1ce73b4872a8ab06bcbeeacdd1d600a5054d" Jan 29 11:29:31.930824 containerd[1543]: time="2025-01-29T11:29:31.930798818Z" level=info msg="StopPodSandbox for \"98687d2218f6a2b009c5a2145dcb1ce73b4872a8ab06bcbeeacdd1d600a5054d\"" Jan 29 11:29:31.931017 containerd[1543]: time="2025-01-29T11:29:31.931005039Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-f8fc55c6c-cvj7f,Uid:d14ebb17-ee65-46b9-8ef9-c5441011f365,Namespace:calico-apiserver,Attempt:2,}" Jan 29 11:29:31.931449 containerd[1543]: time="2025-01-29T11:29:31.931395279Z" level=info msg="Ensure that sandbox 98687d2218f6a2b009c5a2145dcb1ce73b4872a8ab06bcbeeacdd1d600a5054d in task-service has been cleanup successfully" Jan 29 11:29:31.931884 containerd[1543]: time="2025-01-29T11:29:31.931873865Z" level=info msg="TearDown network for sandbox \"98687d2218f6a2b009c5a2145dcb1ce73b4872a8ab06bcbeeacdd1d600a5054d\" successfully" Jan 29 11:29:31.931991 containerd[1543]: time="2025-01-29T11:29:31.931943335Z" level=info msg="StopPodSandbox for \"98687d2218f6a2b009c5a2145dcb1ce73b4872a8ab06bcbeeacdd1d600a5054d\" returns successfully" Jan 29 11:29:31.932494 containerd[1543]: time="2025-01-29T11:29:31.932414098Z" level=info msg="StopPodSandbox for \"1335c4c1be4c603e540ffc9341ca82a3b73d4dcb2990fb3f252db9c214b08658\"" Jan 29 11:29:31.932654 containerd[1543]: time="2025-01-29T11:29:31.932477833Z" level=info msg="TearDown network for sandbox \"1335c4c1be4c603e540ffc9341ca82a3b73d4dcb2990fb3f252db9c214b08658\" successfully" Jan 29 11:29:31.932654 containerd[1543]: 
time="2025-01-29T11:29:31.932611207Z" level=info msg="StopPodSandbox for \"1335c4c1be4c603e540ffc9341ca82a3b73d4dcb2990fb3f252db9c214b08658\" returns successfully" Jan 29 11:29:31.932956 containerd[1543]: time="2025-01-29T11:29:31.932911011Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-v7xft,Uid:d903c164-6164-4123-875b-c2120fb387c7,Namespace:kube-system,Attempt:2,}" Jan 29 11:29:31.949631 containerd[1543]: time="2025-01-29T11:29:31.949545151Z" level=error msg="Failed to destroy network for sandbox \"29d1e1e67af3d0e9c992f6da05b5aaab1775d6f881e285626bfde2fb51b9861c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:29:31.950012 containerd[1543]: time="2025-01-29T11:29:31.949891237Z" level=error msg="encountered an error cleaning up failed sandbox \"29d1e1e67af3d0e9c992f6da05b5aaab1775d6f881e285626bfde2fb51b9861c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:29:31.950012 containerd[1543]: time="2025-01-29T11:29:31.949932257Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6bccbcd89f-p65ch,Uid:a3a7fc11-29e2-44d1-b25b-a10745cff7e7,Namespace:calico-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"29d1e1e67af3d0e9c992f6da05b5aaab1775d6f881e285626bfde2fb51b9861c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:29:31.950295 kubelet[2780]: E0129 11:29:31.950068 2780 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"29d1e1e67af3d0e9c992f6da05b5aaab1775d6f881e285626bfde2fb51b9861c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:29:31.950295 kubelet[2780]: E0129 11:29:31.950117 2780 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"29d1e1e67af3d0e9c992f6da05b5aaab1775d6f881e285626bfde2fb51b9861c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6bccbcd89f-p65ch" Jan 29 11:29:31.950295 kubelet[2780]: E0129 11:29:31.950133 2780 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"29d1e1e67af3d0e9c992f6da05b5aaab1775d6f881e285626bfde2fb51b9861c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6bccbcd89f-p65ch" Jan 29 11:29:31.950381 kubelet[2780]: E0129 11:29:31.950163 2780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-6bccbcd89f-p65ch_calico-system(a3a7fc11-29e2-44d1-b25b-a10745cff7e7)\" with CreatePodSandboxError: \"Failed to 
create sandbox for pod \\\"calico-kube-controllers-6bccbcd89f-p65ch_calico-system(a3a7fc11-29e2-44d1-b25b-a10745cff7e7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"29d1e1e67af3d0e9c992f6da05b5aaab1775d6f881e285626bfde2fb51b9861c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6bccbcd89f-p65ch" podUID="a3a7fc11-29e2-44d1-b25b-a10745cff7e7" Jan 29 11:29:31.955237 containerd[1543]: time="2025-01-29T11:29:31.954596051Z" level=error msg="Failed to destroy network for sandbox \"64af12101239bda77bc09127055af2af5a4875930ddbf3d24e8576f8092f4ad8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:29:31.955833 containerd[1543]: time="2025-01-29T11:29:31.955805371Z" level=error msg="encountered an error cleaning up failed sandbox \"64af12101239bda77bc09127055af2af5a4875930ddbf3d24e8576f8092f4ad8\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:29:31.956655 containerd[1543]: time="2025-01-29T11:29:31.956631069Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-z5wzg,Uid:c5612029-5997-4071-9364-2b02408150e5,Namespace:kube-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"64af12101239bda77bc09127055af2af5a4875930ddbf3d24e8576f8092f4ad8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:29:31.956993 kubelet[2780]: E0129 11:29:31.956868 2780 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"64af12101239bda77bc09127055af2af5a4875930ddbf3d24e8576f8092f4ad8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:29:31.956993 kubelet[2780]: E0129 11:29:31.956906 2780 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"64af12101239bda77bc09127055af2af5a4875930ddbf3d24e8576f8092f4ad8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-z5wzg" Jan 29 11:29:31.956993 kubelet[2780]: E0129 11:29:31.956920 2780 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"64af12101239bda77bc09127055af2af5a4875930ddbf3d24e8576f8092f4ad8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-z5wzg" Jan 29 11:29:31.957079 kubelet[2780]: E0129 11:29:31.956949 2780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for 
\"coredns-668d6bf9bc-z5wzg_kube-system(c5612029-5997-4071-9364-2b02408150e5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-z5wzg_kube-system(c5612029-5997-4071-9364-2b02408150e5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"64af12101239bda77bc09127055af2af5a4875930ddbf3d24e8576f8092f4ad8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-z5wzg" podUID="c5612029-5997-4071-9364-2b02408150e5" Jan 29 11:29:31.957691 containerd[1543]: time="2025-01-29T11:29:31.957527730Z" level=error msg="Failed to destroy network for sandbox \"6033264e1976f1129ef2155afd3527be241324be2c025fd1f2f05088c51af8d0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:29:31.957964 containerd[1543]: time="2025-01-29T11:29:31.957946754Z" level=error msg="encountered an error cleaning up failed sandbox \"6033264e1976f1129ef2155afd3527be241324be2c025fd1f2f05088c51af8d0\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:29:31.957999 containerd[1543]: time="2025-01-29T11:29:31.957983113Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-v9zp5,Uid:ad861abe-5500-44ec-ad63-2a1f0ef1f899,Namespace:calico-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"6033264e1976f1129ef2155afd3527be241324be2c025fd1f2f05088c51af8d0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:29:31.958074 containerd[1543]: time="2025-01-29T11:29:31.958055544Z" level=error msg="Failed to destroy network for sandbox \"ca36f2be98657cffbbb3e6825a5691fa122db25f729f54cd86a95373c706af1c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:29:31.958246 containerd[1543]: time="2025-01-29T11:29:31.958230513Z" level=error msg="encountered an error cleaning up failed sandbox \"ca36f2be98657cffbbb3e6825a5691fa122db25f729f54cd86a95373c706af1c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:29:31.958275 containerd[1543]: time="2025-01-29T11:29:31.958258499Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-f8fc55c6c-8vp99,Uid:5c59bfc6-a9c3-4ac8-8013-4f73cd38040c,Namespace:calico-apiserver,Attempt:1,} failed, error" error="failed to setup network for sandbox \"ca36f2be98657cffbbb3e6825a5691fa122db25f729f54cd86a95373c706af1c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:29:31.958600 kubelet[2780]: E0129 11:29:31.958364 2780 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = 
Unknown desc = failed to setup network for sandbox \"ca36f2be98657cffbbb3e6825a5691fa122db25f729f54cd86a95373c706af1c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:29:31.958600 kubelet[2780]: E0129 11:29:31.958399 2780 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6033264e1976f1129ef2155afd3527be241324be2c025fd1f2f05088c51af8d0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:29:31.958600 kubelet[2780]: E0129 11:29:31.958425 2780 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6033264e1976f1129ef2155afd3527be241324be2c025fd1f2f05088c51af8d0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-v9zp5" Jan 29 11:29:31.958600 kubelet[2780]: E0129 11:29:31.958444 2780 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6033264e1976f1129ef2155afd3527be241324be2c025fd1f2f05088c51af8d0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-v9zp5" Jan 29 11:29:31.958739 kubelet[2780]: E0129 11:29:31.958479 2780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-v9zp5_calico-system(ad861abe-5500-44ec-ad63-2a1f0ef1f899)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-v9zp5_calico-system(ad861abe-5500-44ec-ad63-2a1f0ef1f899)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6033264e1976f1129ef2155afd3527be241324be2c025fd1f2f05088c51af8d0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-v9zp5" podUID="ad861abe-5500-44ec-ad63-2a1f0ef1f899" Jan 29 11:29:31.958739 kubelet[2780]: E0129 11:29:31.958513 2780 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ca36f2be98657cffbbb3e6825a5691fa122db25f729f54cd86a95373c706af1c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-f8fc55c6c-8vp99" Jan 29 11:29:31.958739 kubelet[2780]: E0129 11:29:31.958524 2780 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ca36f2be98657cffbbb3e6825a5691fa122db25f729f54cd86a95373c706af1c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-f8fc55c6c-8vp99" Jan 29 11:29:31.958815 
kubelet[2780]: E0129 11:29:31.958545 2780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-f8fc55c6c-8vp99_calico-apiserver(5c59bfc6-a9c3-4ac8-8013-4f73cd38040c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-f8fc55c6c-8vp99_calico-apiserver(5c59bfc6-a9c3-4ac8-8013-4f73cd38040c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ca36f2be98657cffbbb3e6825a5691fa122db25f729f54cd86a95373c706af1c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-f8fc55c6c-8vp99" podUID="5c59bfc6-a9c3-4ac8-8013-4f73cd38040c" Jan 29 11:29:32.278358 containerd[1543]: time="2025-01-29T11:29:32.278187714Z" level=error msg="Failed to destroy network for sandbox \"dc7b8f34b80a43e6615c97db52703c81a6060e99cea5f8607d67063c0d15d690\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:29:32.278924 containerd[1543]: time="2025-01-29T11:29:32.278903296Z" level=error msg="encountered an error cleaning up failed sandbox \"dc7b8f34b80a43e6615c97db52703c81a6060e99cea5f8607d67063c0d15d690\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:29:32.278957 containerd[1543]: time="2025-01-29T11:29:32.278949079Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-v7xft,Uid:d903c164-6164-4123-875b-c2120fb387c7,Namespace:kube-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"dc7b8f34b80a43e6615c97db52703c81a6060e99cea5f8607d67063c0d15d690\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:29:32.279556 kubelet[2780]: E0129 11:29:32.279393 2780 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dc7b8f34b80a43e6615c97db52703c81a6060e99cea5f8607d67063c0d15d690\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:29:32.279556 kubelet[2780]: E0129 11:29:32.279431 2780 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dc7b8f34b80a43e6615c97db52703c81a6060e99cea5f8607d67063c0d15d690\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-v7xft" Jan 29 11:29:32.279556 kubelet[2780]: E0129 11:29:32.279445 2780 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dc7b8f34b80a43e6615c97db52703c81a6060e99cea5f8607d67063c0d15d690\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-v7xft" Jan 29 11:29:32.280499 kubelet[2780]: E0129 11:29:32.279480 2780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-v7xft_kube-system(d903c164-6164-4123-875b-c2120fb387c7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-v7xft_kube-system(d903c164-6164-4123-875b-c2120fb387c7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"dc7b8f34b80a43e6615c97db52703c81a6060e99cea5f8607d67063c0d15d690\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-v7xft" podUID="d903c164-6164-4123-875b-c2120fb387c7" Jan 29 11:29:32.384867 containerd[1543]: time="2025-01-29T11:29:32.384799265Z" level=error msg="Failed to destroy network for sandbox \"3500b349d9e56990046dc3f9b2de6c1103c374e0845fe69b4679c2220ae55b85\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:29:32.385172 containerd[1543]: time="2025-01-29T11:29:32.385110469Z" level=error msg="encountered an error cleaning up failed sandbox \"3500b349d9e56990046dc3f9b2de6c1103c374e0845fe69b4679c2220ae55b85\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:29:32.385172 containerd[1543]: time="2025-01-29T11:29:32.385148312Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-f8fc55c6c-cvj7f,Uid:d14ebb17-ee65-46b9-8ef9-c5441011f365,Namespace:calico-apiserver,Attempt:2,} failed, error" error="failed to setup network for sandbox \"3500b349d9e56990046dc3f9b2de6c1103c374e0845fe69b4679c2220ae55b85\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:29:32.385500 kubelet[2780]: E0129 11:29:32.385475 2780 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3500b349d9e56990046dc3f9b2de6c1103c374e0845fe69b4679c2220ae55b85\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:29:32.385540 kubelet[2780]: E0129 11:29:32.385514 2780 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3500b349d9e56990046dc3f9b2de6c1103c374e0845fe69b4679c2220ae55b85\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-f8fc55c6c-cvj7f" Jan 29 11:29:32.385540 kubelet[2780]: E0129 11:29:32.385530 2780 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3500b349d9e56990046dc3f9b2de6c1103c374e0845fe69b4679c2220ae55b85\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-f8fc55c6c-cvj7f" Jan 29 11:29:32.385578 kubelet[2780]: E0129 11:29:32.385559 2780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-f8fc55c6c-cvj7f_calico-apiserver(d14ebb17-ee65-46b9-8ef9-c5441011f365)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-f8fc55c6c-cvj7f_calico-apiserver(d14ebb17-ee65-46b9-8ef9-c5441011f365)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3500b349d9e56990046dc3f9b2de6c1103c374e0845fe69b4679c2220ae55b85\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-f8fc55c6c-cvj7f" podUID="d14ebb17-ee65-46b9-8ef9-c5441011f365" Jan 29 11:29:32.731773 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-64af12101239bda77bc09127055af2af5a4875930ddbf3d24e8576f8092f4ad8-shm.mount: Deactivated successfully. Jan 29 11:29:32.732077 systemd[1]: run-netns-cni\x2df904f496\x2dda1d\x2d7b60\x2d77d3\x2d786986214e82.mount: Deactivated successfully. Jan 29 11:29:32.732196 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-fd1eadf5206c8cd53c4a1ef66c21214899e6365f6d5ef93833547b5bef569865-shm.mount: Deactivated successfully. Jan 29 11:29:32.732304 systemd[1]: run-netns-cni\x2d7642c788\x2d2007\x2ddae0\x2dfd65\x2d634cecec1131.mount: Deactivated successfully. Jan 29 11:29:32.732403 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-98687d2218f6a2b009c5a2145dcb1ce73b4872a8ab06bcbeeacdd1d600a5054d-shm.mount: Deactivated successfully. 
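[Editor's note] The systemd units deactivated above are the per-sandbox resources being cleaned up after each failed attempt: run-containerd-...-shm.mount is the sandbox's /dev/shm mount, and run-netns-cni\x2d....mount is the bind-mounted CNI network namespace. The \x2d sequences are systemd unit-name escapes for '-'. The small Go sketch below decodes those names back to their original form; it assumes only \xNN escapes occur, which is all that appears in this log.

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// unescapeUnit undoes systemd's \xNN unit-name escaping (e.g. \x2d -> '-').
func unescapeUnit(s string) string {
	var b strings.Builder
	for i := 0; i < len(s); {
		if i+3 < len(s) && s[i] == '\\' && s[i+1] == 'x' {
			if v, err := strconv.ParseUint(s[i+2:i+4], 16, 8); err == nil {
				b.WriteByte(byte(v))
				i += 4
				continue
			}
		}
		b.WriteByte(s[i])
		i++
	}
	return b.String()
}

func main() {
	unit := `run-netns-cni\x2df904f496\x2dda1d\x2d7b60\x2d77d3\x2d786986214e82.mount`
	fmt.Println(unescapeUnit(unit)) // run-netns-cni-f904f496-da1d-7b60-77d3-786986214e82.mount
}
```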
Jan 29 11:29:33.059684 kubelet[2780]: I0129 11:29:33.059589 2780 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="64af12101239bda77bc09127055af2af5a4875930ddbf3d24e8576f8092f4ad8" Jan 29 11:29:33.061039 containerd[1543]: time="2025-01-29T11:29:33.061017070Z" level=info msg="StopPodSandbox for \"64af12101239bda77bc09127055af2af5a4875930ddbf3d24e8576f8092f4ad8\"" Jan 29 11:29:33.061219 containerd[1543]: time="2025-01-29T11:29:33.061140240Z" level=info msg="Ensure that sandbox 64af12101239bda77bc09127055af2af5a4875930ddbf3d24e8576f8092f4ad8 in task-service has been cleanup successfully" Jan 29 11:29:33.063011 containerd[1543]: time="2025-01-29T11:29:33.062947582Z" level=info msg="TearDown network for sandbox \"64af12101239bda77bc09127055af2af5a4875930ddbf3d24e8576f8092f4ad8\" successfully" Jan 29 11:29:33.063011 containerd[1543]: time="2025-01-29T11:29:33.062990430Z" level=info msg="StopPodSandbox for \"64af12101239bda77bc09127055af2af5a4875930ddbf3d24e8576f8092f4ad8\" returns successfully" Jan 29 11:29:33.063426 containerd[1543]: time="2025-01-29T11:29:33.063409380Z" level=info msg="StopPodSandbox for \"eb5b8233f88ec26fc4ce589382094db86b20d454323bc6d4defe25ea1e363500\"" Jan 29 11:29:33.063492 containerd[1543]: time="2025-01-29T11:29:33.063476845Z" level=info msg="TearDown network for sandbox \"eb5b8233f88ec26fc4ce589382094db86b20d454323bc6d4defe25ea1e363500\" successfully" Jan 29 11:29:33.064751 containerd[1543]: time="2025-01-29T11:29:33.063489880Z" level=info msg="StopPodSandbox for \"eb5b8233f88ec26fc4ce589382094db86b20d454323bc6d4defe25ea1e363500\" returns successfully" Jan 29 11:29:33.064791 kubelet[2780]: I0129 11:29:33.064738 2780 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3500b349d9e56990046dc3f9b2de6c1103c374e0845fe69b4679c2220ae55b85" Jan 29 11:29:33.064400 systemd[1]: run-netns-cni\x2d99a1f5f2\x2df126\x2d0a34\x2dcef4\x2da444372f87d3.mount: Deactivated successfully. Jan 29 11:29:33.065033 containerd[1543]: time="2025-01-29T11:29:33.065002038Z" level=info msg="StopPodSandbox for \"3500b349d9e56990046dc3f9b2de6c1103c374e0845fe69b4679c2220ae55b85\"" Jan 29 11:29:33.065126 containerd[1543]: time="2025-01-29T11:29:33.065112687Z" level=info msg="Ensure that sandbox 3500b349d9e56990046dc3f9b2de6c1103c374e0845fe69b4679c2220ae55b85 in task-service has been cleanup successfully" Jan 29 11:29:33.065273 containerd[1543]: time="2025-01-29T11:29:33.065244500Z" level=info msg="TearDown network for sandbox \"3500b349d9e56990046dc3f9b2de6c1103c374e0845fe69b4679c2220ae55b85\" successfully" Jan 29 11:29:33.065273 containerd[1543]: time="2025-01-29T11:29:33.065255103Z" level=info msg="StopPodSandbox for \"3500b349d9e56990046dc3f9b2de6c1103c374e0845fe69b4679c2220ae55b85\" returns successfully" Jan 29 11:29:33.065392 containerd[1543]: time="2025-01-29T11:29:33.065379559Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-z5wzg,Uid:c5612029-5997-4071-9364-2b02408150e5,Namespace:kube-system,Attempt:2,}" Jan 29 11:29:33.068230 systemd[1]: run-netns-cni\x2d5d92bcda\x2d2b42\x2df06c\x2d5d49\x2d73ae6e43caf2.mount: Deactivated successfully. 
Jan 29 11:29:33.080188 containerd[1543]: time="2025-01-29T11:29:33.080145558Z" level=info msg="StopPodSandbox for \"fd1eadf5206c8cd53c4a1ef66c21214899e6365f6d5ef93833547b5bef569865\"" Jan 29 11:29:33.080252 containerd[1543]: time="2025-01-29T11:29:33.080203417Z" level=info msg="TearDown network for sandbox \"fd1eadf5206c8cd53c4a1ef66c21214899e6365f6d5ef93833547b5bef569865\" successfully" Jan 29 11:29:33.080252 containerd[1543]: time="2025-01-29T11:29:33.080212838Z" level=info msg="StopPodSandbox for \"fd1eadf5206c8cd53c4a1ef66c21214899e6365f6d5ef93833547b5bef569865\" returns successfully" Jan 29 11:29:33.080803 containerd[1543]: time="2025-01-29T11:29:33.080513424Z" level=info msg="StopPodSandbox for \"b4f0e8faf50ed15bee0abed9f91fb6b2e15a4c81e795cac4a007218f8bde60f3\"" Jan 29 11:29:33.080803 containerd[1543]: time="2025-01-29T11:29:33.080564117Z" level=info msg="TearDown network for sandbox \"b4f0e8faf50ed15bee0abed9f91fb6b2e15a4c81e795cac4a007218f8bde60f3\" successfully" Jan 29 11:29:33.080803 containerd[1543]: time="2025-01-29T11:29:33.080571318Z" level=info msg="StopPodSandbox for \"b4f0e8faf50ed15bee0abed9f91fb6b2e15a4c81e795cac4a007218f8bde60f3\" returns successfully" Jan 29 11:29:33.081123 containerd[1543]: time="2025-01-29T11:29:33.081071064Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-f8fc55c6c-cvj7f,Uid:d14ebb17-ee65-46b9-8ef9-c5441011f365,Namespace:calico-apiserver,Attempt:3,}" Jan 29 11:29:33.094410 kubelet[2780]: I0129 11:29:33.094379 2780 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dc7b8f34b80a43e6615c97db52703c81a6060e99cea5f8607d67063c0d15d690" Jan 29 11:29:33.119160 containerd[1543]: time="2025-01-29T11:29:33.118892741Z" level=info msg="StopPodSandbox for \"dc7b8f34b80a43e6615c97db52703c81a6060e99cea5f8607d67063c0d15d690\"" Jan 29 11:29:33.119160 containerd[1543]: time="2025-01-29T11:29:33.119029933Z" level=info msg="Ensure that sandbox dc7b8f34b80a43e6615c97db52703c81a6060e99cea5f8607d67063c0d15d690 in task-service has been cleanup successfully" Jan 29 11:29:33.121047 containerd[1543]: time="2025-01-29T11:29:33.120953652Z" level=info msg="TearDown network for sandbox \"dc7b8f34b80a43e6615c97db52703c81a6060e99cea5f8607d67063c0d15d690\" successfully" Jan 29 11:29:33.121047 containerd[1543]: time="2025-01-29T11:29:33.120965351Z" level=info msg="StopPodSandbox for \"dc7b8f34b80a43e6615c97db52703c81a6060e99cea5f8607d67063c0d15d690\" returns successfully" Jan 29 11:29:33.121517 containerd[1543]: time="2025-01-29T11:29:33.121465948Z" level=info msg="StopPodSandbox for \"98687d2218f6a2b009c5a2145dcb1ce73b4872a8ab06bcbeeacdd1d600a5054d\"" Jan 29 11:29:33.121562 containerd[1543]: time="2025-01-29T11:29:33.121507921Z" level=info msg="TearDown network for sandbox \"98687d2218f6a2b009c5a2145dcb1ce73b4872a8ab06bcbeeacdd1d600a5054d\" successfully" Jan 29 11:29:33.121649 containerd[1543]: time="2025-01-29T11:29:33.121596142Z" level=info msg="StopPodSandbox for \"98687d2218f6a2b009c5a2145dcb1ce73b4872a8ab06bcbeeacdd1d600a5054d\" returns successfully" Jan 29 11:29:33.121864 containerd[1543]: time="2025-01-29T11:29:33.121801997Z" level=info msg="StopPodSandbox for \"1335c4c1be4c603e540ffc9341ca82a3b73d4dcb2990fb3f252db9c214b08658\"" Jan 29 11:29:33.121864 containerd[1543]: time="2025-01-29T11:29:33.121837175Z" level=info msg="TearDown network for sandbox \"1335c4c1be4c603e540ffc9341ca82a3b73d4dcb2990fb3f252db9c214b08658\" successfully" Jan 29 11:29:33.121864 containerd[1543]: time="2025-01-29T11:29:33.121842832Z" 
level=info msg="StopPodSandbox for \"1335c4c1be4c603e540ffc9341ca82a3b73d4dcb2990fb3f252db9c214b08658\" returns successfully" Jan 29 11:29:33.121854 systemd[1]: run-netns-cni\x2d67136e1b\x2d8e7e\x2d37b8\x2d4ac2\x2d80503e5acaa2.mount: Deactivated successfully. Jan 29 11:29:33.122644 containerd[1543]: time="2025-01-29T11:29:33.122235752Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-v7xft,Uid:d903c164-6164-4123-875b-c2120fb387c7,Namespace:kube-system,Attempt:3,}" Jan 29 11:29:33.122689 kubelet[2780]: I0129 11:29:33.122428 2780 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6033264e1976f1129ef2155afd3527be241324be2c025fd1f2f05088c51af8d0" Jan 29 11:29:33.123314 containerd[1543]: time="2025-01-29T11:29:33.122846833Z" level=info msg="StopPodSandbox for \"6033264e1976f1129ef2155afd3527be241324be2c025fd1f2f05088c51af8d0\"" Jan 29 11:29:33.123314 containerd[1543]: time="2025-01-29T11:29:33.122975704Z" level=info msg="Ensure that sandbox 6033264e1976f1129ef2155afd3527be241324be2c025fd1f2f05088c51af8d0 in task-service has been cleanup successfully" Jan 29 11:29:33.124549 containerd[1543]: time="2025-01-29T11:29:33.124514626Z" level=info msg="TearDown network for sandbox \"6033264e1976f1129ef2155afd3527be241324be2c025fd1f2f05088c51af8d0\" successfully" Jan 29 11:29:33.124549 containerd[1543]: time="2025-01-29T11:29:33.124546359Z" level=info msg="StopPodSandbox for \"6033264e1976f1129ef2155afd3527be241324be2c025fd1f2f05088c51af8d0\" returns successfully" Jan 29 11:29:33.125128 containerd[1543]: time="2025-01-29T11:29:33.125112517Z" level=info msg="StopPodSandbox for \"278001aa52062ee6fbf5d00b498751ede55fa1853804b1a4b028e9dda1639d35\"" Jan 29 11:29:33.125178 containerd[1543]: time="2025-01-29T11:29:33.125166826Z" level=info msg="TearDown network for sandbox \"278001aa52062ee6fbf5d00b498751ede55fa1853804b1a4b028e9dda1639d35\" successfully" Jan 29 11:29:33.125178 containerd[1543]: time="2025-01-29T11:29:33.125176755Z" level=info msg="StopPodSandbox for \"278001aa52062ee6fbf5d00b498751ede55fa1853804b1a4b028e9dda1639d35\" returns successfully" Jan 29 11:29:33.125383 systemd[1]: run-netns-cni\x2d05919c2b\x2d06f0\x2d8594\x2d71df\x2dbb00ff16982e.mount: Deactivated successfully. 
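Each failed attempt is cleaned up before the next one: kubelet records the missing container (pod_container_deletor), containerd stops and tears down the dead sandbox, and systemd reports the matching run-netns-cni\x2d... and ...-shm.mount units as deactivated. The short sketch below lists whatever CNI network namespaces are still mounted; it assumes containerd keeps pod netns as cni-<uuid> bind mounts under /run/netns, which is what those escaped mount-unit names refer to.

    // A small sketch that lists CNI network namespaces still bind-mounted under
    // /run/netns. Assumption: containerd keeps pod netns there as cni-<uuid>
    // mounts, matching the escaped run-netns-cni\x2d... units in the log.
    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    func main() {
        entries, err := os.ReadDir("/run/netns")
        if err != nil {
            fmt.Fprintln(os.Stderr, "cannot read /run/netns:", err)
            os.Exit(1)
        }
        for _, e := range entries {
            if strings.HasPrefix(e.Name(), "cni-") {
                fmt.Println(e.Name()) // a leftover entry here means a sandbox netns was not unmounted
            }
        }
    }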
Jan 29 11:29:33.125884 containerd[1543]: time="2025-01-29T11:29:33.125666287Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-v9zp5,Uid:ad861abe-5500-44ec-ad63-2a1f0ef1f899,Namespace:calico-system,Attempt:2,}" Jan 29 11:29:33.126653 kubelet[2780]: I0129 11:29:33.126540 2780 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="29d1e1e67af3d0e9c992f6da05b5aaab1775d6f881e285626bfde2fb51b9861c" Jan 29 11:29:33.126946 containerd[1543]: time="2025-01-29T11:29:33.126785973Z" level=info msg="StopPodSandbox for \"29d1e1e67af3d0e9c992f6da05b5aaab1775d6f881e285626bfde2fb51b9861c\"" Jan 29 11:29:33.126946 containerd[1543]: time="2025-01-29T11:29:33.126884193Z" level=info msg="Ensure that sandbox 29d1e1e67af3d0e9c992f6da05b5aaab1775d6f881e285626bfde2fb51b9861c in task-service has been cleanup successfully" Jan 29 11:29:33.127067 containerd[1543]: time="2025-01-29T11:29:33.127058194Z" level=info msg="TearDown network for sandbox \"29d1e1e67af3d0e9c992f6da05b5aaab1775d6f881e285626bfde2fb51b9861c\" successfully" Jan 29 11:29:33.127105 containerd[1543]: time="2025-01-29T11:29:33.127098404Z" level=info msg="StopPodSandbox for \"29d1e1e67af3d0e9c992f6da05b5aaab1775d6f881e285626bfde2fb51b9861c\" returns successfully" Jan 29 11:29:33.127340 containerd[1543]: time="2025-01-29T11:29:33.127330362Z" level=info msg="StopPodSandbox for \"0e21b3d8f8afd4acb8853eaf7055630093d9d5001d30faf6e9f91a7d151d6862\"" Jan 29 11:29:33.127448 containerd[1543]: time="2025-01-29T11:29:33.127405399Z" level=info msg="TearDown network for sandbox \"0e21b3d8f8afd4acb8853eaf7055630093d9d5001d30faf6e9f91a7d151d6862\" successfully" Jan 29 11:29:33.127448 containerd[1543]: time="2025-01-29T11:29:33.127413425Z" level=info msg="StopPodSandbox for \"0e21b3d8f8afd4acb8853eaf7055630093d9d5001d30faf6e9f91a7d151d6862\" returns successfully" Jan 29 11:29:33.127874 containerd[1543]: time="2025-01-29T11:29:33.127744599Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6bccbcd89f-p65ch,Uid:a3a7fc11-29e2-44d1-b25b-a10745cff7e7,Namespace:calico-system,Attempt:2,}" Jan 29 11:29:33.128053 kubelet[2780]: I0129 11:29:33.128039 2780 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ca36f2be98657cffbbb3e6825a5691fa122db25f729f54cd86a95373c706af1c" Jan 29 11:29:33.130632 containerd[1543]: time="2025-01-29T11:29:33.130545544Z" level=info msg="StopPodSandbox for \"ca36f2be98657cffbbb3e6825a5691fa122db25f729f54cd86a95373c706af1c\"" Jan 29 11:29:33.185631 containerd[1543]: time="2025-01-29T11:29:33.185589699Z" level=info msg="Ensure that sandbox ca36f2be98657cffbbb3e6825a5691fa122db25f729f54cd86a95373c706af1c in task-service has been cleanup successfully" Jan 29 11:29:33.185782 containerd[1543]: time="2025-01-29T11:29:33.185766601Z" level=info msg="TearDown network for sandbox \"ca36f2be98657cffbbb3e6825a5691fa122db25f729f54cd86a95373c706af1c\" successfully" Jan 29 11:29:33.185782 containerd[1543]: time="2025-01-29T11:29:33.185779311Z" level=info msg="StopPodSandbox for \"ca36f2be98657cffbbb3e6825a5691fa122db25f729f54cd86a95373c706af1c\" returns successfully" Jan 29 11:29:33.186106 containerd[1543]: time="2025-01-29T11:29:33.186090671Z" level=info msg="StopPodSandbox for \"7879e8e66bca99f05db383134785ae8a457c0cf15c7c4919edf27c4e58b012cd\"" Jan 29 11:29:33.186167 containerd[1543]: time="2025-01-29T11:29:33.186154252Z" level=info msg="TearDown network for sandbox \"7879e8e66bca99f05db383134785ae8a457c0cf15c7c4919edf27c4e58b012cd\" 
successfully" Jan 29 11:29:33.186167 containerd[1543]: time="2025-01-29T11:29:33.186164933Z" level=info msg="StopPodSandbox for \"7879e8e66bca99f05db383134785ae8a457c0cf15c7c4919edf27c4e58b012cd\" returns successfully" Jan 29 11:29:33.198276 containerd[1543]: time="2025-01-29T11:29:33.186512311Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-f8fc55c6c-8vp99,Uid:5c59bfc6-a9c3-4ac8-8013-4f73cd38040c,Namespace:calico-apiserver,Attempt:2,}" Jan 29 11:29:33.730980 systemd[1]: run-netns-cni\x2da2978ff2\x2dea26\x2ddb5f\x2d586a\x2d23e5a60cd7ed.mount: Deactivated successfully. Jan 29 11:29:33.731064 systemd[1]: run-netns-cni\x2d9bf1dde2\x2da1c0\x2d057c\x2db2da\x2d0a8c30319d57.mount: Deactivated successfully. Jan 29 11:29:35.250189 containerd[1543]: time="2025-01-29T11:29:35.250151388Z" level=error msg="Failed to destroy network for sandbox \"cba1a3be38ef9b176260629671a5a04d891226b450a37875f671804bc183fe23\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:29:35.252480 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-cba1a3be38ef9b176260629671a5a04d891226b450a37875f671804bc183fe23-shm.mount: Deactivated successfully. Jan 29 11:29:35.255374 containerd[1543]: time="2025-01-29T11:29:35.252555824Z" level=error msg="encountered an error cleaning up failed sandbox \"cba1a3be38ef9b176260629671a5a04d891226b450a37875f671804bc183fe23\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:29:35.255374 containerd[1543]: time="2025-01-29T11:29:35.252645815Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-z5wzg,Uid:c5612029-5997-4071-9364-2b02408150e5,Namespace:kube-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"cba1a3be38ef9b176260629671a5a04d891226b450a37875f671804bc183fe23\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:29:35.258516 kubelet[2780]: E0129 11:29:35.253100 2780 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cba1a3be38ef9b176260629671a5a04d891226b450a37875f671804bc183fe23\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:29:35.258516 kubelet[2780]: E0129 11:29:35.253136 2780 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cba1a3be38ef9b176260629671a5a04d891226b450a37875f671804bc183fe23\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-z5wzg" Jan 29 11:29:35.258516 kubelet[2780]: E0129 11:29:35.253151 2780 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cba1a3be38ef9b176260629671a5a04d891226b450a37875f671804bc183fe23\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-z5wzg" Jan 29 11:29:35.258890 kubelet[2780]: E0129 11:29:35.253183 2780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-z5wzg_kube-system(c5612029-5997-4071-9364-2b02408150e5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-z5wzg_kube-system(c5612029-5997-4071-9364-2b02408150e5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"cba1a3be38ef9b176260629671a5a04d891226b450a37875f671804bc183fe23\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-z5wzg" podUID="c5612029-5997-4071-9364-2b02408150e5" Jan 29 11:29:35.393524 containerd[1543]: time="2025-01-29T11:29:35.393482975Z" level=error msg="Failed to destroy network for sandbox \"7467e873a9a000ed6fe591ccaf22e3ed81648c1345627ec2e152f18bd41776ca\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:29:35.393782 containerd[1543]: time="2025-01-29T11:29:35.393725362Z" level=error msg="encountered an error cleaning up failed sandbox \"7467e873a9a000ed6fe591ccaf22e3ed81648c1345627ec2e152f18bd41776ca\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:29:35.393782 containerd[1543]: time="2025-01-29T11:29:35.393764035Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-f8fc55c6c-cvj7f,Uid:d14ebb17-ee65-46b9-8ef9-c5441011f365,Namespace:calico-apiserver,Attempt:3,} failed, error" error="failed to setup network for sandbox \"7467e873a9a000ed6fe591ccaf22e3ed81648c1345627ec2e152f18bd41776ca\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:29:35.393913 kubelet[2780]: E0129 11:29:35.393891 2780 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7467e873a9a000ed6fe591ccaf22e3ed81648c1345627ec2e152f18bd41776ca\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:29:35.394783 kubelet[2780]: E0129 11:29:35.393928 2780 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7467e873a9a000ed6fe591ccaf22e3ed81648c1345627ec2e152f18bd41776ca\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-f8fc55c6c-cvj7f" Jan 29 11:29:35.394783 kubelet[2780]: E0129 11:29:35.393940 2780 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"7467e873a9a000ed6fe591ccaf22e3ed81648c1345627ec2e152f18bd41776ca\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-f8fc55c6c-cvj7f" Jan 29 11:29:35.394783 kubelet[2780]: E0129 11:29:35.393968 2780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-f8fc55c6c-cvj7f_calico-apiserver(d14ebb17-ee65-46b9-8ef9-c5441011f365)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-f8fc55c6c-cvj7f_calico-apiserver(d14ebb17-ee65-46b9-8ef9-c5441011f365)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7467e873a9a000ed6fe591ccaf22e3ed81648c1345627ec2e152f18bd41776ca\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-f8fc55c6c-cvj7f" podUID="d14ebb17-ee65-46b9-8ef9-c5441011f365" Jan 29 11:29:35.506466 containerd[1543]: time="2025-01-29T11:29:35.506344682Z" level=error msg="Failed to destroy network for sandbox \"16dceff0c83213b56eaf857d06a148b80e73e81fc135d1ff0aae7aa6a7d4a3ec\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:29:35.507867 containerd[1543]: time="2025-01-29T11:29:35.507769393Z" level=error msg="encountered an error cleaning up failed sandbox \"16dceff0c83213b56eaf857d06a148b80e73e81fc135d1ff0aae7aa6a7d4a3ec\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:29:35.507867 containerd[1543]: time="2025-01-29T11:29:35.507810990Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-v7xft,Uid:d903c164-6164-4123-875b-c2120fb387c7,Namespace:kube-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"16dceff0c83213b56eaf857d06a148b80e73e81fc135d1ff0aae7aa6a7d4a3ec\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:29:35.508172 kubelet[2780]: E0129 11:29:35.508011 2780 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"16dceff0c83213b56eaf857d06a148b80e73e81fc135d1ff0aae7aa6a7d4a3ec\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:29:35.508172 kubelet[2780]: E0129 11:29:35.508125 2780 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"16dceff0c83213b56eaf857d06a148b80e73e81fc135d1ff0aae7aa6a7d4a3ec\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-v7xft" Jan 29 11:29:35.508172 kubelet[2780]: E0129 11:29:35.508138 2780 kuberuntime_manager.go:1237] 
"CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"16dceff0c83213b56eaf857d06a148b80e73e81fc135d1ff0aae7aa6a7d4a3ec\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-v7xft" Jan 29 11:29:35.508637 kubelet[2780]: E0129 11:29:35.508270 2780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-v7xft_kube-system(d903c164-6164-4123-875b-c2120fb387c7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-v7xft_kube-system(d903c164-6164-4123-875b-c2120fb387c7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"16dceff0c83213b56eaf857d06a148b80e73e81fc135d1ff0aae7aa6a7d4a3ec\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-v7xft" podUID="d903c164-6164-4123-875b-c2120fb387c7" Jan 29 11:29:35.526247 containerd[1543]: time="2025-01-29T11:29:35.526217949Z" level=error msg="Failed to destroy network for sandbox \"2eadd6e3a2a049291b8168dc25f437d096fc4e663498d58251148419c84e5523\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:29:35.527646 containerd[1543]: time="2025-01-29T11:29:35.526994394Z" level=error msg="encountered an error cleaning up failed sandbox \"2eadd6e3a2a049291b8168dc25f437d096fc4e663498d58251148419c84e5523\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:29:35.527646 containerd[1543]: time="2025-01-29T11:29:35.527034048Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6bccbcd89f-p65ch,Uid:a3a7fc11-29e2-44d1-b25b-a10745cff7e7,Namespace:calico-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"2eadd6e3a2a049291b8168dc25f437d096fc4e663498d58251148419c84e5523\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:29:35.527741 kubelet[2780]: E0129 11:29:35.527190 2780 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2eadd6e3a2a049291b8168dc25f437d096fc4e663498d58251148419c84e5523\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:29:35.527741 kubelet[2780]: E0129 11:29:35.527229 2780 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2eadd6e3a2a049291b8168dc25f437d096fc4e663498d58251148419c84e5523\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6bccbcd89f-p65ch" Jan 29 
11:29:35.527741 kubelet[2780]: E0129 11:29:35.527243 2780 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2eadd6e3a2a049291b8168dc25f437d096fc4e663498d58251148419c84e5523\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6bccbcd89f-p65ch" Jan 29 11:29:35.527806 kubelet[2780]: E0129 11:29:35.527272 2780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-6bccbcd89f-p65ch_calico-system(a3a7fc11-29e2-44d1-b25b-a10745cff7e7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-6bccbcd89f-p65ch_calico-system(a3a7fc11-29e2-44d1-b25b-a10745cff7e7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2eadd6e3a2a049291b8168dc25f437d096fc4e663498d58251148419c84e5523\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6bccbcd89f-p65ch" podUID="a3a7fc11-29e2-44d1-b25b-a10745cff7e7" Jan 29 11:29:35.554724 containerd[1543]: time="2025-01-29T11:29:35.554550541Z" level=error msg="Failed to destroy network for sandbox \"391640c93f25a8a019a5df373a97b7898d420c3b6ea996c747098a5c60b15bfc\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:29:35.555670 containerd[1543]: time="2025-01-29T11:29:35.555386058Z" level=error msg="encountered an error cleaning up failed sandbox \"391640c93f25a8a019a5df373a97b7898d420c3b6ea996c747098a5c60b15bfc\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:29:35.555670 containerd[1543]: time="2025-01-29T11:29:35.555508716Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-v9zp5,Uid:ad861abe-5500-44ec-ad63-2a1f0ef1f899,Namespace:calico-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"391640c93f25a8a019a5df373a97b7898d420c3b6ea996c747098a5c60b15bfc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:29:35.555803 kubelet[2780]: E0129 11:29:35.555783 2780 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"391640c93f25a8a019a5df373a97b7898d420c3b6ea996c747098a5c60b15bfc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:29:35.568533 kubelet[2780]: E0129 11:29:35.555871 2780 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"391640c93f25a8a019a5df373a97b7898d420c3b6ea996c747098a5c60b15bfc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-v9zp5" Jan 29 11:29:35.568533 kubelet[2780]: E0129 11:29:35.555885 2780 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"391640c93f25a8a019a5df373a97b7898d420c3b6ea996c747098a5c60b15bfc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-v9zp5" Jan 29 11:29:35.568533 kubelet[2780]: E0129 11:29:35.555913 2780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-v9zp5_calico-system(ad861abe-5500-44ec-ad63-2a1f0ef1f899)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-v9zp5_calico-system(ad861abe-5500-44ec-ad63-2a1f0ef1f899)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"391640c93f25a8a019a5df373a97b7898d420c3b6ea996c747098a5c60b15bfc\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-v9zp5" podUID="ad861abe-5500-44ec-ad63-2a1f0ef1f899" Jan 29 11:29:35.568689 containerd[1543]: time="2025-01-29T11:29:35.564273327Z" level=error msg="Failed to destroy network for sandbox \"e19a8c4b8a81ed9013b30c37b5a01742e5557d745e671dba88669f5bea3ec310\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:29:35.568689 containerd[1543]: time="2025-01-29T11:29:35.564640595Z" level=error msg="encountered an error cleaning up failed sandbox \"e19a8c4b8a81ed9013b30c37b5a01742e5557d745e671dba88669f5bea3ec310\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:29:35.568689 containerd[1543]: time="2025-01-29T11:29:35.564683712Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-f8fc55c6c-8vp99,Uid:5c59bfc6-a9c3-4ac8-8013-4f73cd38040c,Namespace:calico-apiserver,Attempt:2,} failed, error" error="failed to setup network for sandbox \"e19a8c4b8a81ed9013b30c37b5a01742e5557d745e671dba88669f5bea3ec310\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:29:35.570730 kubelet[2780]: E0129 11:29:35.564791 2780 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e19a8c4b8a81ed9013b30c37b5a01742e5557d745e671dba88669f5bea3ec310\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:29:35.570730 kubelet[2780]: E0129 11:29:35.564838 2780 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e19a8c4b8a81ed9013b30c37b5a01742e5557d745e671dba88669f5bea3ec310\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-f8fc55c6c-8vp99" Jan 29 11:29:35.570730 kubelet[2780]: E0129 11:29:35.564861 2780 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e19a8c4b8a81ed9013b30c37b5a01742e5557d745e671dba88669f5bea3ec310\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-f8fc55c6c-8vp99" Jan 29 11:29:35.576585 kubelet[2780]: E0129 11:29:35.564889 2780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-f8fc55c6c-8vp99_calico-apiserver(5c59bfc6-a9c3-4ac8-8013-4f73cd38040c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-f8fc55c6c-8vp99_calico-apiserver(5c59bfc6-a9c3-4ac8-8013-4f73cd38040c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e19a8c4b8a81ed9013b30c37b5a01742e5557d745e671dba88669f5bea3ec310\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-f8fc55c6c-8vp99" podUID="5c59bfc6-a9c3-4ac8-8013-4f73cd38040c" Jan 29 11:29:36.128426 containerd[1543]: time="2025-01-29T11:29:36.123920082Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:29:36.129397 containerd[1543]: time="2025-01-29T11:29:36.129274644Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.1: active requests=0, bytes read=142742010" Jan 29 11:29:36.169857 containerd[1543]: time="2025-01-29T11:29:36.169814450Z" level=info msg="ImageCreate event name:\"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:29:36.179990 containerd[1543]: time="2025-01-29T11:29:36.179911726Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:29:36.191946 containerd[1543]: time="2025-01-29T11:29:36.191903516Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.1\" with image id \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\", size \"142741872\" in 8.453048394s" Jan 29 11:29:36.191946 containerd[1543]: time="2025-01-29T11:29:36.191944068Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\" returns image reference \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\"" Jan 29 11:29:36.195028 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-7467e873a9a000ed6fe591ccaf22e3ed81648c1345627ec2e152f18bd41776ca-shm.mount: Deactivated successfully. Jan 29 11:29:36.195087 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount832829480.mount: Deactivated successfully. 
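The pull of ghcr.io/flatcar/calico/node:v3.29.1 completing here (about 143 MB in 8.45s) is the turning point: once the calico-node container is created and started (see the CreateContainer entry below), it can write /var/lib/calico/nodename and the retry loop above can finally succeed. The following poll for that file is purely illustrative and not part of Calico or kubelet.

    // A hypothetical readiness poll (illustrative only): wait for
    // /var/lib/calico/nodename to appear, the point at which the sandbox
    // retries in this log stop failing.
    package main

    import (
        "fmt"
        "os"
        "time"
    )

    func main() {
        const path = "/var/lib/calico/nodename"
        deadline := time.Now().Add(2 * time.Minute)
        for time.Now().Before(deadline) {
            if _, err := os.Stat(path); err == nil {
                fmt.Println(path, "exists; calico/node is up")
                return
            }
            time.Sleep(2 * time.Second)
        }
        fmt.Fprintln(os.Stderr, "timed out waiting for", path)
        os.Exit(1)
    }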
Jan 29 11:29:36.247281 kubelet[2780]: I0129 11:29:36.247228 2780 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cba1a3be38ef9b176260629671a5a04d891226b450a37875f671804bc183fe23" Jan 29 11:29:36.248443 containerd[1543]: time="2025-01-29T11:29:36.248183820Z" level=info msg="StopPodSandbox for \"cba1a3be38ef9b176260629671a5a04d891226b450a37875f671804bc183fe23\"" Jan 29 11:29:36.249357 kubelet[2780]: I0129 11:29:36.249348 2780 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7467e873a9a000ed6fe591ccaf22e3ed81648c1345627ec2e152f18bd41776ca" Jan 29 11:29:36.250388 containerd[1543]: time="2025-01-29T11:29:36.250320118Z" level=info msg="StopPodSandbox for \"7467e873a9a000ed6fe591ccaf22e3ed81648c1345627ec2e152f18bd41776ca\"" Jan 29 11:29:36.251299 containerd[1543]: time="2025-01-29T11:29:36.250445709Z" level=info msg="Ensure that sandbox 7467e873a9a000ed6fe591ccaf22e3ed81648c1345627ec2e152f18bd41776ca in task-service has been cleanup successfully" Jan 29 11:29:36.251299 containerd[1543]: time="2025-01-29T11:29:36.250598545Z" level=info msg="TearDown network for sandbox \"7467e873a9a000ed6fe591ccaf22e3ed81648c1345627ec2e152f18bd41776ca\" successfully" Jan 29 11:29:36.251299 containerd[1543]: time="2025-01-29T11:29:36.250607111Z" level=info msg="StopPodSandbox for \"7467e873a9a000ed6fe591ccaf22e3ed81648c1345627ec2e152f18bd41776ca\" returns successfully" Jan 29 11:29:36.253842 containerd[1543]: time="2025-01-29T11:29:36.251973520Z" level=info msg="StopPodSandbox for \"3500b349d9e56990046dc3f9b2de6c1103c374e0845fe69b4679c2220ae55b85\"" Jan 29 11:29:36.253842 containerd[1543]: time="2025-01-29T11:29:36.252104715Z" level=info msg="TearDown network for sandbox \"3500b349d9e56990046dc3f9b2de6c1103c374e0845fe69b4679c2220ae55b85\" successfully" Jan 29 11:29:36.253842 containerd[1543]: time="2025-01-29T11:29:36.252113128Z" level=info msg="StopPodSandbox for \"3500b349d9e56990046dc3f9b2de6c1103c374e0845fe69b4679c2220ae55b85\" returns successfully" Jan 29 11:29:36.253842 containerd[1543]: time="2025-01-29T11:29:36.252821233Z" level=info msg="StopPodSandbox for \"fd1eadf5206c8cd53c4a1ef66c21214899e6365f6d5ef93833547b5bef569865\"" Jan 29 11:29:36.253842 containerd[1543]: time="2025-01-29T11:29:36.252873066Z" level=info msg="TearDown network for sandbox \"fd1eadf5206c8cd53c4a1ef66c21214899e6365f6d5ef93833547b5bef569865\" successfully" Jan 29 11:29:36.253842 containerd[1543]: time="2025-01-29T11:29:36.252881498Z" level=info msg="StopPodSandbox for \"fd1eadf5206c8cd53c4a1ef66c21214899e6365f6d5ef93833547b5bef569865\" returns successfully" Jan 29 11:29:36.253842 containerd[1543]: time="2025-01-29T11:29:36.253056136Z" level=info msg="Ensure that sandbox cba1a3be38ef9b176260629671a5a04d891226b450a37875f671804bc183fe23 in task-service has been cleanup successfully" Jan 29 11:29:36.253842 containerd[1543]: time="2025-01-29T11:29:36.253212347Z" level=info msg="StopPodSandbox for \"b4f0e8faf50ed15bee0abed9f91fb6b2e15a4c81e795cac4a007218f8bde60f3\"" Jan 29 11:29:36.253842 containerd[1543]: time="2025-01-29T11:29:36.253269660Z" level=info msg="TearDown network for sandbox \"b4f0e8faf50ed15bee0abed9f91fb6b2e15a4c81e795cac4a007218f8bde60f3\" successfully" Jan 29 11:29:36.253842 containerd[1543]: time="2025-01-29T11:29:36.253277588Z" level=info msg="StopPodSandbox for \"b4f0e8faf50ed15bee0abed9f91fb6b2e15a4c81e795cac4a007218f8bde60f3\" returns successfully" Jan 29 11:29:36.253842 containerd[1543]: time="2025-01-29T11:29:36.253448228Z" level=info 
msg="TearDown network for sandbox \"cba1a3be38ef9b176260629671a5a04d891226b450a37875f671804bc183fe23\" successfully" Jan 29 11:29:36.253842 containerd[1543]: time="2025-01-29T11:29:36.253459572Z" level=info msg="StopPodSandbox for \"cba1a3be38ef9b176260629671a5a04d891226b450a37875f671804bc183fe23\" returns successfully" Jan 29 11:29:36.252766 systemd[1]: run-netns-cni\x2d05cb8101\x2d33ff\x2dcfe8\x2dc64c\x2db716ef9fb0cb.mount: Deactivated successfully. Jan 29 11:29:36.257654 containerd[1543]: time="2025-01-29T11:29:36.254958047Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-f8fc55c6c-cvj7f,Uid:d14ebb17-ee65-46b9-8ef9-c5441011f365,Namespace:calico-apiserver,Attempt:4,}" Jan 29 11:29:36.257654 containerd[1543]: time="2025-01-29T11:29:36.255353404Z" level=info msg="StopPodSandbox for \"64af12101239bda77bc09127055af2af5a4875930ddbf3d24e8576f8092f4ad8\"" Jan 29 11:29:36.257654 containerd[1543]: time="2025-01-29T11:29:36.255394449Z" level=info msg="TearDown network for sandbox \"64af12101239bda77bc09127055af2af5a4875930ddbf3d24e8576f8092f4ad8\" successfully" Jan 29 11:29:36.257654 containerd[1543]: time="2025-01-29T11:29:36.255400493Z" level=info msg="StopPodSandbox for \"64af12101239bda77bc09127055af2af5a4875930ddbf3d24e8576f8092f4ad8\" returns successfully" Jan 29 11:29:36.257654 containerd[1543]: time="2025-01-29T11:29:36.256195885Z" level=info msg="StopPodSandbox for \"eb5b8233f88ec26fc4ce589382094db86b20d454323bc6d4defe25ea1e363500\"" Jan 29 11:29:36.257654 containerd[1543]: time="2025-01-29T11:29:36.256238420Z" level=info msg="TearDown network for sandbox \"eb5b8233f88ec26fc4ce589382094db86b20d454323bc6d4defe25ea1e363500\" successfully" Jan 29 11:29:36.257654 containerd[1543]: time="2025-01-29T11:29:36.256244536Z" level=info msg="StopPodSandbox for \"eb5b8233f88ec26fc4ce589382094db86b20d454323bc6d4defe25ea1e363500\" returns successfully" Jan 29 11:29:36.257654 containerd[1543]: time="2025-01-29T11:29:36.256910591Z" level=info msg="StopPodSandbox for \"16dceff0c83213b56eaf857d06a148b80e73e81fc135d1ff0aae7aa6a7d4a3ec\"" Jan 29 11:29:36.257654 containerd[1543]: time="2025-01-29T11:29:36.257272186Z" level=info msg="Ensure that sandbox 16dceff0c83213b56eaf857d06a148b80e73e81fc135d1ff0aae7aa6a7d4a3ec in task-service has been cleanup successfully" Jan 29 11:29:36.257870 kubelet[2780]: I0129 11:29:36.256345 2780 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="16dceff0c83213b56eaf857d06a148b80e73e81fc135d1ff0aae7aa6a7d4a3ec" Jan 29 11:29:36.257898 containerd[1543]: time="2025-01-29T11:29:36.257695555Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-z5wzg,Uid:c5612029-5997-4071-9364-2b02408150e5,Namespace:kube-system,Attempt:3,}" Jan 29 11:29:36.258172 systemd[1]: run-netns-cni\x2d5a086f7e\x2dd239\x2df76f\x2d02ab\x2d6ec3df28d711.mount: Deactivated successfully. 
Jan 29 11:29:36.258754 containerd[1543]: time="2025-01-29T11:29:36.258737618Z" level=info msg="TearDown network for sandbox \"16dceff0c83213b56eaf857d06a148b80e73e81fc135d1ff0aae7aa6a7d4a3ec\" successfully" Jan 29 11:29:36.258754 containerd[1543]: time="2025-01-29T11:29:36.258750710Z" level=info msg="StopPodSandbox for \"16dceff0c83213b56eaf857d06a148b80e73e81fc135d1ff0aae7aa6a7d4a3ec\" returns successfully" Jan 29 11:29:36.259814 containerd[1543]: time="2025-01-29T11:29:36.259738680Z" level=info msg="StopPodSandbox for \"dc7b8f34b80a43e6615c97db52703c81a6060e99cea5f8607d67063c0d15d690\"" Jan 29 11:29:36.259814 containerd[1543]: time="2025-01-29T11:29:36.259791823Z" level=info msg="TearDown network for sandbox \"dc7b8f34b80a43e6615c97db52703c81a6060e99cea5f8607d67063c0d15d690\" successfully" Jan 29 11:29:36.259881 containerd[1543]: time="2025-01-29T11:29:36.259826219Z" level=info msg="StopPodSandbox for \"dc7b8f34b80a43e6615c97db52703c81a6060e99cea5f8607d67063c0d15d690\" returns successfully" Jan 29 11:29:36.264127 containerd[1543]: time="2025-01-29T11:29:36.264105308Z" level=info msg="StopPodSandbox for \"98687d2218f6a2b009c5a2145dcb1ce73b4872a8ab06bcbeeacdd1d600a5054d\"" Jan 29 11:29:36.264327 containerd[1543]: time="2025-01-29T11:29:36.264254967Z" level=info msg="TearDown network for sandbox \"98687d2218f6a2b009c5a2145dcb1ce73b4872a8ab06bcbeeacdd1d600a5054d\" successfully" Jan 29 11:29:36.264327 containerd[1543]: time="2025-01-29T11:29:36.264268464Z" level=info msg="StopPodSandbox for \"98687d2218f6a2b009c5a2145dcb1ce73b4872a8ab06bcbeeacdd1d600a5054d\" returns successfully" Jan 29 11:29:36.290893 kubelet[2780]: I0129 11:29:36.290872 2780 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="391640c93f25a8a019a5df373a97b7898d420c3b6ea996c747098a5c60b15bfc" Jan 29 11:29:36.293151 containerd[1543]: time="2025-01-29T11:29:36.292507407Z" level=info msg="StopPodSandbox for \"1335c4c1be4c603e540ffc9341ca82a3b73d4dcb2990fb3f252db9c214b08658\"" Jan 29 11:29:36.293151 containerd[1543]: time="2025-01-29T11:29:36.292593120Z" level=info msg="StopPodSandbox for \"391640c93f25a8a019a5df373a97b7898d420c3b6ea996c747098a5c60b15bfc\"" Jan 29 11:29:36.293151 containerd[1543]: time="2025-01-29T11:29:36.292634599Z" level=info msg="TearDown network for sandbox \"1335c4c1be4c603e540ffc9341ca82a3b73d4dcb2990fb3f252db9c214b08658\" successfully" Jan 29 11:29:36.293151 containerd[1543]: time="2025-01-29T11:29:36.292647112Z" level=info msg="StopPodSandbox for \"1335c4c1be4c603e540ffc9341ca82a3b73d4dcb2990fb3f252db9c214b08658\" returns successfully" Jan 29 11:29:36.293151 containerd[1543]: time="2025-01-29T11:29:36.292773015Z" level=info msg="Ensure that sandbox 391640c93f25a8a019a5df373a97b7898d420c3b6ea996c747098a5c60b15bfc in task-service has been cleanup successfully" Jan 29 11:29:36.294506 containerd[1543]: time="2025-01-29T11:29:36.293782939Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-v7xft,Uid:d903c164-6164-4123-875b-c2120fb387c7,Namespace:kube-system,Attempt:4,}" Jan 29 11:29:36.297687 containerd[1543]: time="2025-01-29T11:29:36.297660423Z" level=info msg="TearDown network for sandbox \"391640c93f25a8a019a5df373a97b7898d420c3b6ea996c747098a5c60b15bfc\" successfully" Jan 29 11:29:36.297687 containerd[1543]: time="2025-01-29T11:29:36.297680190Z" level=info msg="StopPodSandbox for \"391640c93f25a8a019a5df373a97b7898d420c3b6ea996c747098a5c60b15bfc\" returns successfully" Jan 29 11:29:36.297801 containerd[1543]: 
time="2025-01-29T11:29:36.297726700Z" level=info msg="CreateContainer within sandbox \"eb63e7b9a802919a39ce95a96f23a149993cb34bd985f78cea1452ef2a283ab3\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jan 29 11:29:36.298443 containerd[1543]: time="2025-01-29T11:29:36.298424417Z" level=info msg="StopPodSandbox for \"6033264e1976f1129ef2155afd3527be241324be2c025fd1f2f05088c51af8d0\"" Jan 29 11:29:36.299153 containerd[1543]: time="2025-01-29T11:29:36.298495612Z" level=info msg="TearDown network for sandbox \"6033264e1976f1129ef2155afd3527be241324be2c025fd1f2f05088c51af8d0\" successfully" Jan 29 11:29:36.299153 containerd[1543]: time="2025-01-29T11:29:36.298506532Z" level=info msg="StopPodSandbox for \"6033264e1976f1129ef2155afd3527be241324be2c025fd1f2f05088c51af8d0\" returns successfully" Jan 29 11:29:36.299799 containerd[1543]: time="2025-01-29T11:29:36.299668109Z" level=info msg="StopPodSandbox for \"278001aa52062ee6fbf5d00b498751ede55fa1853804b1a4b028e9dda1639d35\"" Jan 29 11:29:36.299799 containerd[1543]: time="2025-01-29T11:29:36.299754118Z" level=info msg="TearDown network for sandbox \"278001aa52062ee6fbf5d00b498751ede55fa1853804b1a4b028e9dda1639d35\" successfully" Jan 29 11:29:36.299799 containerd[1543]: time="2025-01-29T11:29:36.299764010Z" level=info msg="StopPodSandbox for \"278001aa52062ee6fbf5d00b498751ede55fa1853804b1a4b028e9dda1639d35\" returns successfully" Jan 29 11:29:36.300590 kubelet[2780]: I0129 11:29:36.300159 2780 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2eadd6e3a2a049291b8168dc25f437d096fc4e663498d58251148419c84e5523" Jan 29 11:29:36.302372 containerd[1543]: time="2025-01-29T11:29:36.302237608Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-v9zp5,Uid:ad861abe-5500-44ec-ad63-2a1f0ef1f899,Namespace:calico-system,Attempt:3,}" Jan 29 11:29:36.302789 containerd[1543]: time="2025-01-29T11:29:36.302764251Z" level=info msg="StopPodSandbox for \"2eadd6e3a2a049291b8168dc25f437d096fc4e663498d58251148419c84e5523\"" Jan 29 11:29:36.303239 containerd[1543]: time="2025-01-29T11:29:36.302978115Z" level=info msg="Ensure that sandbox 2eadd6e3a2a049291b8168dc25f437d096fc4e663498d58251148419c84e5523 in task-service has been cleanup successfully" Jan 29 11:29:36.304939 containerd[1543]: time="2025-01-29T11:29:36.304908938Z" level=info msg="TearDown network for sandbox \"2eadd6e3a2a049291b8168dc25f437d096fc4e663498d58251148419c84e5523\" successfully" Jan 29 11:29:36.308640 containerd[1543]: time="2025-01-29T11:29:36.308542798Z" level=info msg="StopPodSandbox for \"2eadd6e3a2a049291b8168dc25f437d096fc4e663498d58251148419c84e5523\" returns successfully" Jan 29 11:29:36.311718 containerd[1543]: time="2025-01-29T11:29:36.311528100Z" level=info msg="StopPodSandbox for \"29d1e1e67af3d0e9c992f6da05b5aaab1775d6f881e285626bfde2fb51b9861c\"" Jan 29 11:29:36.311718 containerd[1543]: time="2025-01-29T11:29:36.311581866Z" level=info msg="TearDown network for sandbox \"29d1e1e67af3d0e9c992f6da05b5aaab1775d6f881e285626bfde2fb51b9861c\" successfully" Jan 29 11:29:36.311718 containerd[1543]: time="2025-01-29T11:29:36.311588760Z" level=info msg="StopPodSandbox for \"29d1e1e67af3d0e9c992f6da05b5aaab1775d6f881e285626bfde2fb51b9861c\" returns successfully" Jan 29 11:29:36.311856 kubelet[2780]: I0129 11:29:36.311714 2780 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e19a8c4b8a81ed9013b30c37b5a01742e5557d745e671dba88669f5bea3ec310" Jan 29 11:29:36.312023 containerd[1543]: 
time="2025-01-29T11:29:36.311966844Z" level=info msg="StopPodSandbox for \"e19a8c4b8a81ed9013b30c37b5a01742e5557d745e671dba88669f5bea3ec310\"" Jan 29 11:29:36.313375 containerd[1543]: time="2025-01-29T11:29:36.313362503Z" level=info msg="StopPodSandbox for \"0e21b3d8f8afd4acb8853eaf7055630093d9d5001d30faf6e9f91a7d151d6862\"" Jan 29 11:29:36.313490 containerd[1543]: time="2025-01-29T11:29:36.313480866Z" level=info msg="TearDown network for sandbox \"0e21b3d8f8afd4acb8853eaf7055630093d9d5001d30faf6e9f91a7d151d6862\" successfully" Jan 29 11:29:36.313606 containerd[1543]: time="2025-01-29T11:29:36.313522623Z" level=info msg="StopPodSandbox for \"0e21b3d8f8afd4acb8853eaf7055630093d9d5001d30faf6e9f91a7d151d6862\" returns successfully" Jan 29 11:29:36.314963 containerd[1543]: time="2025-01-29T11:29:36.314946149Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6bccbcd89f-p65ch,Uid:a3a7fc11-29e2-44d1-b25b-a10745cff7e7,Namespace:calico-system,Attempt:3,}" Jan 29 11:29:36.333991 containerd[1543]: time="2025-01-29T11:29:36.333909704Z" level=error msg="Failed to destroy network for sandbox \"9ab35698abedf4bcb0d0ba8167d964d1f85e29f78a91706d03070270e09dba42\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:29:36.334314 containerd[1543]: time="2025-01-29T11:29:36.334243989Z" level=error msg="encountered an error cleaning up failed sandbox \"9ab35698abedf4bcb0d0ba8167d964d1f85e29f78a91706d03070270e09dba42\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:29:36.334314 containerd[1543]: time="2025-01-29T11:29:36.334289456Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-z5wzg,Uid:c5612029-5997-4071-9364-2b02408150e5,Namespace:kube-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"9ab35698abedf4bcb0d0ba8167d964d1f85e29f78a91706d03070270e09dba42\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:29:36.334527 kubelet[2780]: E0129 11:29:36.334510 2780 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9ab35698abedf4bcb0d0ba8167d964d1f85e29f78a91706d03070270e09dba42\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:29:36.334625 kubelet[2780]: E0129 11:29:36.334597 2780 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9ab35698abedf4bcb0d0ba8167d964d1f85e29f78a91706d03070270e09dba42\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-z5wzg" Jan 29 11:29:36.334702 kubelet[2780]: E0129 11:29:36.334662 2780 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"9ab35698abedf4bcb0d0ba8167d964d1f85e29f78a91706d03070270e09dba42\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-z5wzg" Jan 29 11:29:36.334781 kubelet[2780]: E0129 11:29:36.334748 2780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-z5wzg_kube-system(c5612029-5997-4071-9364-2b02408150e5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-z5wzg_kube-system(c5612029-5997-4071-9364-2b02408150e5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9ab35698abedf4bcb0d0ba8167d964d1f85e29f78a91706d03070270e09dba42\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-z5wzg" podUID="c5612029-5997-4071-9364-2b02408150e5" Jan 29 11:29:36.339249 containerd[1543]: time="2025-01-29T11:29:36.339171270Z" level=error msg="Failed to destroy network for sandbox \"27663650868ee15e5b14142b643ed5bad73053c5dc36bdbe2022e62f1e0a0eea\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:29:36.339489 containerd[1543]: time="2025-01-29T11:29:36.339466502Z" level=error msg="encountered an error cleaning up failed sandbox \"27663650868ee15e5b14142b643ed5bad73053c5dc36bdbe2022e62f1e0a0eea\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:29:36.339623 containerd[1543]: time="2025-01-29T11:29:36.339562534Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-f8fc55c6c-cvj7f,Uid:d14ebb17-ee65-46b9-8ef9-c5441011f365,Namespace:calico-apiserver,Attempt:4,} failed, error" error="failed to setup network for sandbox \"27663650868ee15e5b14142b643ed5bad73053c5dc36bdbe2022e62f1e0a0eea\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:29:36.339721 kubelet[2780]: E0129 11:29:36.339693 2780 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"27663650868ee15e5b14142b643ed5bad73053c5dc36bdbe2022e62f1e0a0eea\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:29:36.339781 kubelet[2780]: E0129 11:29:36.339734 2780 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"27663650868ee15e5b14142b643ed5bad73053c5dc36bdbe2022e62f1e0a0eea\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-f8fc55c6c-cvj7f" Jan 29 11:29:36.339781 kubelet[2780]: E0129 11:29:36.339749 2780 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" 
err="rpc error: code = Unknown desc = failed to setup network for sandbox \"27663650868ee15e5b14142b643ed5bad73053c5dc36bdbe2022e62f1e0a0eea\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-f8fc55c6c-cvj7f" Jan 29 11:29:36.339855 kubelet[2780]: E0129 11:29:36.339779 2780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-f8fc55c6c-cvj7f_calico-apiserver(d14ebb17-ee65-46b9-8ef9-c5441011f365)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-f8fc55c6c-cvj7f_calico-apiserver(d14ebb17-ee65-46b9-8ef9-c5441011f365)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"27663650868ee15e5b14142b643ed5bad73053c5dc36bdbe2022e62f1e0a0eea\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-f8fc55c6c-cvj7f" podUID="d14ebb17-ee65-46b9-8ef9-c5441011f365" Jan 29 11:29:36.340809 containerd[1543]: time="2025-01-29T11:29:36.340554907Z" level=info msg="Ensure that sandbox e19a8c4b8a81ed9013b30c37b5a01742e5557d745e671dba88669f5bea3ec310 in task-service has been cleanup successfully" Jan 29 11:29:36.343697 containerd[1543]: time="2025-01-29T11:29:36.340916693Z" level=info msg="TearDown network for sandbox \"e19a8c4b8a81ed9013b30c37b5a01742e5557d745e671dba88669f5bea3ec310\" successfully" Jan 29 11:29:36.343697 containerd[1543]: time="2025-01-29T11:29:36.340926269Z" level=info msg="StopPodSandbox for \"e19a8c4b8a81ed9013b30c37b5a01742e5557d745e671dba88669f5bea3ec310\" returns successfully" Jan 29 11:29:36.343697 containerd[1543]: time="2025-01-29T11:29:36.341311507Z" level=info msg="StopPodSandbox for \"ca36f2be98657cffbbb3e6825a5691fa122db25f729f54cd86a95373c706af1c\"" Jan 29 11:29:36.343697 containerd[1543]: time="2025-01-29T11:29:36.341377510Z" level=info msg="TearDown network for sandbox \"ca36f2be98657cffbbb3e6825a5691fa122db25f729f54cd86a95373c706af1c\" successfully" Jan 29 11:29:36.343697 containerd[1543]: time="2025-01-29T11:29:36.341385133Z" level=info msg="StopPodSandbox for \"ca36f2be98657cffbbb3e6825a5691fa122db25f729f54cd86a95373c706af1c\" returns successfully" Jan 29 11:29:36.343697 containerd[1543]: time="2025-01-29T11:29:36.341546767Z" level=info msg="StopPodSandbox for \"7879e8e66bca99f05db383134785ae8a457c0cf15c7c4919edf27c4e58b012cd\"" Jan 29 11:29:36.343697 containerd[1543]: time="2025-01-29T11:29:36.341591469Z" level=info msg="TearDown network for sandbox \"7879e8e66bca99f05db383134785ae8a457c0cf15c7c4919edf27c4e58b012cd\" successfully" Jan 29 11:29:36.343697 containerd[1543]: time="2025-01-29T11:29:36.341597675Z" level=info msg="StopPodSandbox for \"7879e8e66bca99f05db383134785ae8a457c0cf15c7c4919edf27c4e58b012cd\" returns successfully" Jan 29 11:29:36.343697 containerd[1543]: time="2025-01-29T11:29:36.341842367Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-f8fc55c6c-8vp99,Uid:5c59bfc6-a9c3-4ac8-8013-4f73cd38040c,Namespace:calico-apiserver,Attempt:3,}" Jan 29 11:29:36.423793 containerd[1543]: time="2025-01-29T11:29:36.423723626Z" level=error msg="Failed to destroy network for sandbox \"805283f302690830f7a88987c313107329845fbbd9a4297506b3397a13c61110\"" error="plugin type=\"calico\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:29:36.425798 containerd[1543]: time="2025-01-29T11:29:36.425606531Z" level=error msg="encountered an error cleaning up failed sandbox \"805283f302690830f7a88987c313107329845fbbd9a4297506b3397a13c61110\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:29:36.425798 containerd[1543]: time="2025-01-29T11:29:36.425660792Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-v7xft,Uid:d903c164-6164-4123-875b-c2120fb387c7,Namespace:kube-system,Attempt:4,} failed, error" error="failed to setup network for sandbox \"805283f302690830f7a88987c313107329845fbbd9a4297506b3397a13c61110\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:29:36.426163 kubelet[2780]: E0129 11:29:36.425833 2780 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"805283f302690830f7a88987c313107329845fbbd9a4297506b3397a13c61110\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:29:36.426163 kubelet[2780]: E0129 11:29:36.425875 2780 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"805283f302690830f7a88987c313107329845fbbd9a4297506b3397a13c61110\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-v7xft" Jan 29 11:29:36.426163 kubelet[2780]: E0129 11:29:36.425891 2780 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"805283f302690830f7a88987c313107329845fbbd9a4297506b3397a13c61110\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-v7xft" Jan 29 11:29:36.426235 kubelet[2780]: E0129 11:29:36.425928 2780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-v7xft_kube-system(d903c164-6164-4123-875b-c2120fb387c7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-v7xft_kube-system(d903c164-6164-4123-875b-c2120fb387c7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"805283f302690830f7a88987c313107329845fbbd9a4297506b3397a13c61110\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-v7xft" podUID="d903c164-6164-4123-875b-c2120fb387c7" Jan 29 11:29:36.482220 containerd[1543]: time="2025-01-29T11:29:36.482169660Z" level=error msg="Failed to destroy network for sandbox \"b0dfc9d2b30b684f3230be5accdc607a2fd968dd99ec092b6774b94c59338394\"" 
error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:29:36.482409 containerd[1543]: time="2025-01-29T11:29:36.482391453Z" level=error msg="encountered an error cleaning up failed sandbox \"b0dfc9d2b30b684f3230be5accdc607a2fd968dd99ec092b6774b94c59338394\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:29:36.482449 containerd[1543]: time="2025-01-29T11:29:36.482438411Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-v9zp5,Uid:ad861abe-5500-44ec-ad63-2a1f0ef1f899,Namespace:calico-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"b0dfc9d2b30b684f3230be5accdc607a2fd968dd99ec092b6774b94c59338394\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:29:36.482622 kubelet[2780]: E0129 11:29:36.482589 2780 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b0dfc9d2b30b684f3230be5accdc607a2fd968dd99ec092b6774b94c59338394\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:29:36.482676 kubelet[2780]: E0129 11:29:36.482659 2780 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b0dfc9d2b30b684f3230be5accdc607a2fd968dd99ec092b6774b94c59338394\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-v9zp5" Jan 29 11:29:36.482710 kubelet[2780]: E0129 11:29:36.482680 2780 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b0dfc9d2b30b684f3230be5accdc607a2fd968dd99ec092b6774b94c59338394\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-v9zp5" Jan 29 11:29:36.482734 kubelet[2780]: E0129 11:29:36.482721 2780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-v9zp5_calico-system(ad861abe-5500-44ec-ad63-2a1f0ef1f899)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-v9zp5_calico-system(ad861abe-5500-44ec-ad63-2a1f0ef1f899)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b0dfc9d2b30b684f3230be5accdc607a2fd968dd99ec092b6774b94c59338394\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-v9zp5" podUID="ad861abe-5500-44ec-ad63-2a1f0ef1f899" Jan 29 11:29:36.518902 containerd[1543]: time="2025-01-29T11:29:36.518541034Z" level=info msg="CreateContainer within sandbox 
\"eb63e7b9a802919a39ce95a96f23a149993cb34bd985f78cea1452ef2a283ab3\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"2489e7eedc570faf06550b0aca6010a9aab56417644d4058fb8f13713084337b\"" Jan 29 11:29:36.533631 containerd[1543]: time="2025-01-29T11:29:36.533559685Z" level=info msg="StartContainer for \"2489e7eedc570faf06550b0aca6010a9aab56417644d4058fb8f13713084337b\"" Jan 29 11:29:36.536890 containerd[1543]: time="2025-01-29T11:29:36.536576600Z" level=error msg="Failed to destroy network for sandbox \"8676020bc7e9ced14f3e4a8fbfcef1902bdc2e9b62222fed7c56662e798ccbfc\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:29:36.537521 containerd[1543]: time="2025-01-29T11:29:36.537395899Z" level=error msg="encountered an error cleaning up failed sandbox \"8676020bc7e9ced14f3e4a8fbfcef1902bdc2e9b62222fed7c56662e798ccbfc\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:29:36.537521 containerd[1543]: time="2025-01-29T11:29:36.537439909Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6bccbcd89f-p65ch,Uid:a3a7fc11-29e2-44d1-b25b-a10745cff7e7,Namespace:calico-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"8676020bc7e9ced14f3e4a8fbfcef1902bdc2e9b62222fed7c56662e798ccbfc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:29:36.538259 kubelet[2780]: E0129 11:29:36.537686 2780 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8676020bc7e9ced14f3e4a8fbfcef1902bdc2e9b62222fed7c56662e798ccbfc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:29:36.538259 kubelet[2780]: E0129 11:29:36.537719 2780 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8676020bc7e9ced14f3e4a8fbfcef1902bdc2e9b62222fed7c56662e798ccbfc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6bccbcd89f-p65ch" Jan 29 11:29:36.538259 kubelet[2780]: E0129 11:29:36.537736 2780 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8676020bc7e9ced14f3e4a8fbfcef1902bdc2e9b62222fed7c56662e798ccbfc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6bccbcd89f-p65ch" Jan 29 11:29:36.538349 kubelet[2780]: E0129 11:29:36.537761 2780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-6bccbcd89f-p65ch_calico-system(a3a7fc11-29e2-44d1-b25b-a10745cff7e7)\" with CreatePodSandboxError: \"Failed to create 
sandbox for pod \\\"calico-kube-controllers-6bccbcd89f-p65ch_calico-system(a3a7fc11-29e2-44d1-b25b-a10745cff7e7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8676020bc7e9ced14f3e4a8fbfcef1902bdc2e9b62222fed7c56662e798ccbfc\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6bccbcd89f-p65ch" podUID="a3a7fc11-29e2-44d1-b25b-a10745cff7e7" Jan 29 11:29:36.552520 containerd[1543]: time="2025-01-29T11:29:36.552440399Z" level=error msg="Failed to destroy network for sandbox \"ff36e156383d905ea44dd2fe4cc975b52daa72fc976d0fae826b6571f2843c02\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:29:36.552816 containerd[1543]: time="2025-01-29T11:29:36.552733821Z" level=error msg="encountered an error cleaning up failed sandbox \"ff36e156383d905ea44dd2fe4cc975b52daa72fc976d0fae826b6571f2843c02\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:29:36.552816 containerd[1543]: time="2025-01-29T11:29:36.552779434Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-f8fc55c6c-8vp99,Uid:5c59bfc6-a9c3-4ac8-8013-4f73cd38040c,Namespace:calico-apiserver,Attempt:3,} failed, error" error="failed to setup network for sandbox \"ff36e156383d905ea44dd2fe4cc975b52daa72fc976d0fae826b6571f2843c02\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:29:36.553063 kubelet[2780]: E0129 11:29:36.553043 2780 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ff36e156383d905ea44dd2fe4cc975b52daa72fc976d0fae826b6571f2843c02\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:29:36.553655 kubelet[2780]: E0129 11:29:36.553099 2780 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ff36e156383d905ea44dd2fe4cc975b52daa72fc976d0fae826b6571f2843c02\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-f8fc55c6c-8vp99" Jan 29 11:29:36.553655 kubelet[2780]: E0129 11:29:36.553114 2780 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ff36e156383d905ea44dd2fe4cc975b52daa72fc976d0fae826b6571f2843c02\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-f8fc55c6c-8vp99" Jan 29 11:29:36.553655 kubelet[2780]: E0129 11:29:36.553139 2780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for 
\"calico-apiserver-f8fc55c6c-8vp99_calico-apiserver(5c59bfc6-a9c3-4ac8-8013-4f73cd38040c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-f8fc55c6c-8vp99_calico-apiserver(5c59bfc6-a9c3-4ac8-8013-4f73cd38040c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ff36e156383d905ea44dd2fe4cc975b52daa72fc976d0fae826b6571f2843c02\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-f8fc55c6c-8vp99" podUID="5c59bfc6-a9c3-4ac8-8013-4f73cd38040c" Jan 29 11:29:36.692755 systemd[1]: Started cri-containerd-2489e7eedc570faf06550b0aca6010a9aab56417644d4058fb8f13713084337b.scope - libcontainer container 2489e7eedc570faf06550b0aca6010a9aab56417644d4058fb8f13713084337b. Jan 29 11:29:36.715918 containerd[1543]: time="2025-01-29T11:29:36.715796112Z" level=info msg="StartContainer for \"2489e7eedc570faf06550b0aca6010a9aab56417644d4058fb8f13713084337b\" returns successfully" Jan 29 11:29:37.195496 systemd[1]: run-netns-cni\x2d6efc9afe\x2ddf47\x2d0964\x2d2ef1\x2d6d6a49bcec8c.mount: Deactivated successfully. Jan 29 11:29:37.195555 systemd[1]: run-netns-cni\x2d9505d998\x2dc944\x2dbaaf\x2df087\x2d643bfc80025c.mount: Deactivated successfully. Jan 29 11:29:37.195587 systemd[1]: run-netns-cni\x2df2327c20\x2db445\x2dcfed\x2d30f3\x2d789ea1d9cc83.mount: Deactivated successfully. Jan 29 11:29:37.195624 systemd[1]: run-netns-cni\x2df20ff02c\x2de0a6\x2d376c\x2d522a\x2d6c62bf20b3b2.mount: Deactivated successfully. Jan 29 11:29:37.248687 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jan 29 11:29:37.249230 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
Jan 29 11:29:37.316309 kubelet[2780]: I0129 11:29:37.316145 2780 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ff36e156383d905ea44dd2fe4cc975b52daa72fc976d0fae826b6571f2843c02" Jan 29 11:29:37.317406 containerd[1543]: time="2025-01-29T11:29:37.317202166Z" level=info msg="StopPodSandbox for \"ff36e156383d905ea44dd2fe4cc975b52daa72fc976d0fae826b6571f2843c02\"" Jan 29 11:29:37.317406 containerd[1543]: time="2025-01-29T11:29:37.317377649Z" level=info msg="Ensure that sandbox ff36e156383d905ea44dd2fe4cc975b52daa72fc976d0fae826b6571f2843c02 in task-service has been cleanup successfully" Jan 29 11:29:37.318400 containerd[1543]: time="2025-01-29T11:29:37.317847668Z" level=info msg="TearDown network for sandbox \"ff36e156383d905ea44dd2fe4cc975b52daa72fc976d0fae826b6571f2843c02\" successfully" Jan 29 11:29:37.318400 containerd[1543]: time="2025-01-29T11:29:37.317857189Z" level=info msg="StopPodSandbox for \"ff36e156383d905ea44dd2fe4cc975b52daa72fc976d0fae826b6571f2843c02\" returns successfully" Jan 29 11:29:37.318400 containerd[1543]: time="2025-01-29T11:29:37.318250412Z" level=info msg="StopPodSandbox for \"e19a8c4b8a81ed9013b30c37b5a01742e5557d745e671dba88669f5bea3ec310\"" Jan 29 11:29:37.318400 containerd[1543]: time="2025-01-29T11:29:37.318293497Z" level=info msg="TearDown network for sandbox \"e19a8c4b8a81ed9013b30c37b5a01742e5557d745e671dba88669f5bea3ec310\" successfully" Jan 29 11:29:37.318400 containerd[1543]: time="2025-01-29T11:29:37.318320494Z" level=info msg="StopPodSandbox for \"e19a8c4b8a81ed9013b30c37b5a01742e5557d745e671dba88669f5bea3ec310\" returns successfully" Jan 29 11:29:37.319898 systemd[1]: run-netns-cni\x2d144e027a\x2d89ef\x2d4a0c\x2de605\x2d6c7c568cac4a.mount: Deactivated successfully. Jan 29 11:29:37.321011 containerd[1543]: time="2025-01-29T11:29:37.320876490Z" level=info msg="StopPodSandbox for \"ca36f2be98657cffbbb3e6825a5691fa122db25f729f54cd86a95373c706af1c\"" Jan 29 11:29:37.321011 containerd[1543]: time="2025-01-29T11:29:37.320935058Z" level=info msg="TearDown network for sandbox \"ca36f2be98657cffbbb3e6825a5691fa122db25f729f54cd86a95373c706af1c\" successfully" Jan 29 11:29:37.321011 containerd[1543]: time="2025-01-29T11:29:37.320941468Z" level=info msg="StopPodSandbox for \"ca36f2be98657cffbbb3e6825a5691fa122db25f729f54cd86a95373c706af1c\" returns successfully" Jan 29 11:29:37.321971 containerd[1543]: time="2025-01-29T11:29:37.321943597Z" level=info msg="StopPodSandbox for \"7879e8e66bca99f05db383134785ae8a457c0cf15c7c4919edf27c4e58b012cd\"" Jan 29 11:29:37.322278 containerd[1543]: time="2025-01-29T11:29:37.322127523Z" level=info msg="TearDown network for sandbox \"7879e8e66bca99f05db383134785ae8a457c0cf15c7c4919edf27c4e58b012cd\" successfully" Jan 29 11:29:37.322278 containerd[1543]: time="2025-01-29T11:29:37.322135638Z" level=info msg="StopPodSandbox for \"7879e8e66bca99f05db383134785ae8a457c0cf15c7c4919edf27c4e58b012cd\" returns successfully" Jan 29 11:29:37.323844 containerd[1543]: time="2025-01-29T11:29:37.323760393Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-f8fc55c6c-8vp99,Uid:5c59bfc6-a9c3-4ac8-8013-4f73cd38040c,Namespace:calico-apiserver,Attempt:4,}" Jan 29 11:29:37.330083 kubelet[2780]: I0129 11:29:37.330051 2780 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b0dfc9d2b30b684f3230be5accdc607a2fd968dd99ec092b6774b94c59338394" Jan 29 11:29:37.334544 containerd[1543]: time="2025-01-29T11:29:37.332099893Z" level=info msg="StopPodSandbox for 
\"b0dfc9d2b30b684f3230be5accdc607a2fd968dd99ec092b6774b94c59338394\"" Jan 29 11:29:37.334544 containerd[1543]: time="2025-01-29T11:29:37.332260021Z" level=info msg="Ensure that sandbox b0dfc9d2b30b684f3230be5accdc607a2fd968dd99ec092b6774b94c59338394 in task-service has been cleanup successfully" Jan 29 11:29:37.334544 containerd[1543]: time="2025-01-29T11:29:37.333773291Z" level=info msg="TearDown network for sandbox \"b0dfc9d2b30b684f3230be5accdc607a2fd968dd99ec092b6774b94c59338394\" successfully" Jan 29 11:29:37.334420 systemd[1]: run-netns-cni\x2d34930124\x2d3221\x2d3041\x2d5a50\x2d7e5010511708.mount: Deactivated successfully. Jan 29 11:29:37.336940 containerd[1543]: time="2025-01-29T11:29:37.333787811Z" level=info msg="StopPodSandbox for \"b0dfc9d2b30b684f3230be5accdc607a2fd968dd99ec092b6774b94c59338394\" returns successfully" Jan 29 11:29:37.336940 containerd[1543]: time="2025-01-29T11:29:37.335424215Z" level=info msg="StopPodSandbox for \"391640c93f25a8a019a5df373a97b7898d420c3b6ea996c747098a5c60b15bfc\"" Jan 29 11:29:37.336940 containerd[1543]: time="2025-01-29T11:29:37.335474100Z" level=info msg="TearDown network for sandbox \"391640c93f25a8a019a5df373a97b7898d420c3b6ea996c747098a5c60b15bfc\" successfully" Jan 29 11:29:37.336940 containerd[1543]: time="2025-01-29T11:29:37.335480112Z" level=info msg="StopPodSandbox for \"391640c93f25a8a019a5df373a97b7898d420c3b6ea996c747098a5c60b15bfc\" returns successfully" Jan 29 11:29:37.336940 containerd[1543]: time="2025-01-29T11:29:37.335783922Z" level=info msg="StopPodSandbox for \"6033264e1976f1129ef2155afd3527be241324be2c025fd1f2f05088c51af8d0\"" Jan 29 11:29:37.336940 containerd[1543]: time="2025-01-29T11:29:37.335827661Z" level=info msg="TearDown network for sandbox \"6033264e1976f1129ef2155afd3527be241324be2c025fd1f2f05088c51af8d0\" successfully" Jan 29 11:29:37.336940 containerd[1543]: time="2025-01-29T11:29:37.335833871Z" level=info msg="StopPodSandbox for \"6033264e1976f1129ef2155afd3527be241324be2c025fd1f2f05088c51af8d0\" returns successfully" Jan 29 11:29:37.336940 containerd[1543]: time="2025-01-29T11:29:37.336452060Z" level=info msg="StopPodSandbox for \"278001aa52062ee6fbf5d00b498751ede55fa1853804b1a4b028e9dda1639d35\"" Jan 29 11:29:37.336940 containerd[1543]: time="2025-01-29T11:29:37.336493576Z" level=info msg="TearDown network for sandbox \"278001aa52062ee6fbf5d00b498751ede55fa1853804b1a4b028e9dda1639d35\" successfully" Jan 29 11:29:37.336940 containerd[1543]: time="2025-01-29T11:29:37.336499688Z" level=info msg="StopPodSandbox for \"278001aa52062ee6fbf5d00b498751ede55fa1853804b1a4b028e9dda1639d35\" returns successfully" Jan 29 11:29:37.338704 containerd[1543]: time="2025-01-29T11:29:37.338428212Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-v9zp5,Uid:ad861abe-5500-44ec-ad63-2a1f0ef1f899,Namespace:calico-system,Attempt:4,}" Jan 29 11:29:37.342207 kubelet[2780]: I0129 11:29:37.338970 2780 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8676020bc7e9ced14f3e4a8fbfcef1902bdc2e9b62222fed7c56662e798ccbfc" Jan 29 11:29:37.341411 systemd[1]: run-netns-cni\x2dc7f7331f\x2ddc25\x2d3ad2\x2dfac9\x2de6f70b3c6026.mount: Deactivated successfully. 
Jan 29 11:29:37.342314 containerd[1543]: time="2025-01-29T11:29:37.339204337Z" level=info msg="StopPodSandbox for \"8676020bc7e9ced14f3e4a8fbfcef1902bdc2e9b62222fed7c56662e798ccbfc\"" Jan 29 11:29:37.342314 containerd[1543]: time="2025-01-29T11:29:37.339307690Z" level=info msg="Ensure that sandbox 8676020bc7e9ced14f3e4a8fbfcef1902bdc2e9b62222fed7c56662e798ccbfc in task-service has been cleanup successfully" Jan 29 11:29:37.343843 containerd[1543]: time="2025-01-29T11:29:37.342958353Z" level=info msg="TearDown network for sandbox \"8676020bc7e9ced14f3e4a8fbfcef1902bdc2e9b62222fed7c56662e798ccbfc\" successfully" Jan 29 11:29:37.343843 containerd[1543]: time="2025-01-29T11:29:37.342972367Z" level=info msg="StopPodSandbox for \"8676020bc7e9ced14f3e4a8fbfcef1902bdc2e9b62222fed7c56662e798ccbfc\" returns successfully" Jan 29 11:29:37.345908 containerd[1543]: time="2025-01-29T11:29:37.345834159Z" level=info msg="StopPodSandbox for \"2eadd6e3a2a049291b8168dc25f437d096fc4e663498d58251148419c84e5523\"" Jan 29 11:29:37.345965 containerd[1543]: time="2025-01-29T11:29:37.345908891Z" level=info msg="TearDown network for sandbox \"2eadd6e3a2a049291b8168dc25f437d096fc4e663498d58251148419c84e5523\" successfully" Jan 29 11:29:37.345965 containerd[1543]: time="2025-01-29T11:29:37.345916905Z" level=info msg="StopPodSandbox for \"2eadd6e3a2a049291b8168dc25f437d096fc4e663498d58251148419c84e5523\" returns successfully" Jan 29 11:29:37.348827 kubelet[2780]: I0129 11:29:37.348529 2780 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9ab35698abedf4bcb0d0ba8167d964d1f85e29f78a91706d03070270e09dba42" Jan 29 11:29:37.349636 containerd[1543]: time="2025-01-29T11:29:37.349180603Z" level=info msg="StopPodSandbox for \"29d1e1e67af3d0e9c992f6da05b5aaab1775d6f881e285626bfde2fb51b9861c\"" Jan 29 11:29:37.349636 containerd[1543]: time="2025-01-29T11:29:37.349251287Z" level=info msg="TearDown network for sandbox \"29d1e1e67af3d0e9c992f6da05b5aaab1775d6f881e285626bfde2fb51b9861c\" successfully" Jan 29 11:29:37.349636 containerd[1543]: time="2025-01-29T11:29:37.349262454Z" level=info msg="StopPodSandbox for \"29d1e1e67af3d0e9c992f6da05b5aaab1775d6f881e285626bfde2fb51b9861c\" returns successfully" Jan 29 11:29:37.349636 containerd[1543]: time="2025-01-29T11:29:37.349394864Z" level=info msg="StopPodSandbox for \"9ab35698abedf4bcb0d0ba8167d964d1f85e29f78a91706d03070270e09dba42\"" Jan 29 11:29:37.349636 containerd[1543]: time="2025-01-29T11:29:37.349521530Z" level=info msg="Ensure that sandbox 9ab35698abedf4bcb0d0ba8167d964d1f85e29f78a91706d03070270e09dba42 in task-service has been cleanup successfully" Jan 29 11:29:37.351944 containerd[1543]: time="2025-01-29T11:29:37.350611693Z" level=info msg="TearDown network for sandbox \"9ab35698abedf4bcb0d0ba8167d964d1f85e29f78a91706d03070270e09dba42\" successfully" Jan 29 11:29:37.351944 containerd[1543]: time="2025-01-29T11:29:37.350861924Z" level=info msg="StopPodSandbox for \"9ab35698abedf4bcb0d0ba8167d964d1f85e29f78a91706d03070270e09dba42\" returns successfully" Jan 29 11:29:37.351944 containerd[1543]: time="2025-01-29T11:29:37.351423317Z" level=info msg="StopPodSandbox for \"cba1a3be38ef9b176260629671a5a04d891226b450a37875f671804bc183fe23\"" Jan 29 11:29:37.351944 containerd[1543]: time="2025-01-29T11:29:37.351538019Z" level=info msg="TearDown network for sandbox \"cba1a3be38ef9b176260629671a5a04d891226b450a37875f671804bc183fe23\" successfully" Jan 29 11:29:37.351944 containerd[1543]: time="2025-01-29T11:29:37.351545343Z" level=info msg="StopPodSandbox 
for \"cba1a3be38ef9b176260629671a5a04d891226b450a37875f671804bc183fe23\" returns successfully" Jan 29 11:29:37.356810 containerd[1543]: time="2025-01-29T11:29:37.352458163Z" level=info msg="StopPodSandbox for \"0e21b3d8f8afd4acb8853eaf7055630093d9d5001d30faf6e9f91a7d151d6862\"" Jan 29 11:29:37.356810 containerd[1543]: time="2025-01-29T11:29:37.352518873Z" level=info msg="TearDown network for sandbox \"0e21b3d8f8afd4acb8853eaf7055630093d9d5001d30faf6e9f91a7d151d6862\" successfully" Jan 29 11:29:37.356810 containerd[1543]: time="2025-01-29T11:29:37.352525869Z" level=info msg="StopPodSandbox for \"0e21b3d8f8afd4acb8853eaf7055630093d9d5001d30faf6e9f91a7d151d6862\" returns successfully" Jan 29 11:29:37.359881 containerd[1543]: time="2025-01-29T11:29:37.358538716Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6bccbcd89f-p65ch,Uid:a3a7fc11-29e2-44d1-b25b-a10745cff7e7,Namespace:calico-system,Attempt:4,}" Jan 29 11:29:37.360288 containerd[1543]: time="2025-01-29T11:29:37.360272616Z" level=info msg="StopPodSandbox for \"64af12101239bda77bc09127055af2af5a4875930ddbf3d24e8576f8092f4ad8\"" Jan 29 11:29:37.360348 containerd[1543]: time="2025-01-29T11:29:37.360336178Z" level=info msg="TearDown network for sandbox \"64af12101239bda77bc09127055af2af5a4875930ddbf3d24e8576f8092f4ad8\" successfully" Jan 29 11:29:37.360348 containerd[1543]: time="2025-01-29T11:29:37.360346958Z" level=info msg="StopPodSandbox for \"64af12101239bda77bc09127055af2af5a4875930ddbf3d24e8576f8092f4ad8\" returns successfully" Jan 29 11:29:37.363491 containerd[1543]: time="2025-01-29T11:29:37.363472905Z" level=info msg="StopPodSandbox for \"eb5b8233f88ec26fc4ce589382094db86b20d454323bc6d4defe25ea1e363500\"" Jan 29 11:29:37.363550 containerd[1543]: time="2025-01-29T11:29:37.363530948Z" level=info msg="TearDown network for sandbox \"eb5b8233f88ec26fc4ce589382094db86b20d454323bc6d4defe25ea1e363500\" successfully" Jan 29 11:29:37.363550 containerd[1543]: time="2025-01-29T11:29:37.363538387Z" level=info msg="StopPodSandbox for \"eb5b8233f88ec26fc4ce589382094db86b20d454323bc6d4defe25ea1e363500\" returns successfully" Jan 29 11:29:37.365661 containerd[1543]: time="2025-01-29T11:29:37.364564003Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-z5wzg,Uid:c5612029-5997-4071-9364-2b02408150e5,Namespace:kube-system,Attempt:4,}" Jan 29 11:29:37.371167 kubelet[2780]: I0129 11:29:37.370682 2780 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="27663650868ee15e5b14142b643ed5bad73053c5dc36bdbe2022e62f1e0a0eea" Jan 29 11:29:37.378598 containerd[1543]: time="2025-01-29T11:29:37.378572517Z" level=info msg="StopPodSandbox for \"27663650868ee15e5b14142b643ed5bad73053c5dc36bdbe2022e62f1e0a0eea\"" Jan 29 11:29:37.378742 containerd[1543]: time="2025-01-29T11:29:37.378727407Z" level=info msg="Ensure that sandbox 27663650868ee15e5b14142b643ed5bad73053c5dc36bdbe2022e62f1e0a0eea in task-service has been cleanup successfully" Jan 29 11:29:37.378891 containerd[1543]: time="2025-01-29T11:29:37.378873791Z" level=info msg="TearDown network for sandbox \"27663650868ee15e5b14142b643ed5bad73053c5dc36bdbe2022e62f1e0a0eea\" successfully" Jan 29 11:29:37.378891 containerd[1543]: time="2025-01-29T11:29:37.378887321Z" level=info msg="StopPodSandbox for \"27663650868ee15e5b14142b643ed5bad73053c5dc36bdbe2022e62f1e0a0eea\" returns successfully" Jan 29 11:29:37.386171 containerd[1543]: time="2025-01-29T11:29:37.386017713Z" level=info msg="StopPodSandbox for 
\"7467e873a9a000ed6fe591ccaf22e3ed81648c1345627ec2e152f18bd41776ca\"" Jan 29 11:29:37.386476 containerd[1543]: time="2025-01-29T11:29:37.386331918Z" level=info msg="TearDown network for sandbox \"7467e873a9a000ed6fe591ccaf22e3ed81648c1345627ec2e152f18bd41776ca\" successfully" Jan 29 11:29:37.386476 containerd[1543]: time="2025-01-29T11:29:37.386343200Z" level=info msg="StopPodSandbox for \"7467e873a9a000ed6fe591ccaf22e3ed81648c1345627ec2e152f18bd41776ca\" returns successfully" Jan 29 11:29:37.395308 containerd[1543]: time="2025-01-29T11:29:37.395250114Z" level=info msg="StopPodSandbox for \"3500b349d9e56990046dc3f9b2de6c1103c374e0845fe69b4679c2220ae55b85\"" Jan 29 11:29:37.400341 containerd[1543]: time="2025-01-29T11:29:37.399729965Z" level=info msg="TearDown network for sandbox \"3500b349d9e56990046dc3f9b2de6c1103c374e0845fe69b4679c2220ae55b85\" successfully" Jan 29 11:29:37.400341 containerd[1543]: time="2025-01-29T11:29:37.399751811Z" level=info msg="StopPodSandbox for \"3500b349d9e56990046dc3f9b2de6c1103c374e0845fe69b4679c2220ae55b85\" returns successfully" Jan 29 11:29:37.400488 containerd[1543]: time="2025-01-29T11:29:37.400450048Z" level=info msg="StopPodSandbox for \"fd1eadf5206c8cd53c4a1ef66c21214899e6365f6d5ef93833547b5bef569865\"" Jan 29 11:29:37.400557 containerd[1543]: time="2025-01-29T11:29:37.400539995Z" level=info msg="TearDown network for sandbox \"fd1eadf5206c8cd53c4a1ef66c21214899e6365f6d5ef93833547b5bef569865\" successfully" Jan 29 11:29:37.400557 containerd[1543]: time="2025-01-29T11:29:37.400553470Z" level=info msg="StopPodSandbox for \"fd1eadf5206c8cd53c4a1ef66c21214899e6365f6d5ef93833547b5bef569865\" returns successfully" Jan 29 11:29:37.402634 containerd[1543]: time="2025-01-29T11:29:37.401004568Z" level=info msg="StopPodSandbox for \"b4f0e8faf50ed15bee0abed9f91fb6b2e15a4c81e795cac4a007218f8bde60f3\"" Jan 29 11:29:37.402634 containerd[1543]: time="2025-01-29T11:29:37.401083521Z" level=info msg="TearDown network for sandbox \"b4f0e8faf50ed15bee0abed9f91fb6b2e15a4c81e795cac4a007218f8bde60f3\" successfully" Jan 29 11:29:37.402634 containerd[1543]: time="2025-01-29T11:29:37.401095706Z" level=info msg="StopPodSandbox for \"b4f0e8faf50ed15bee0abed9f91fb6b2e15a4c81e795cac4a007218f8bde60f3\" returns successfully" Jan 29 11:29:37.402634 containerd[1543]: time="2025-01-29T11:29:37.401785853Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-f8fc55c6c-cvj7f,Uid:d14ebb17-ee65-46b9-8ef9-c5441011f365,Namespace:calico-apiserver,Attempt:5,}" Jan 29 11:29:37.402634 containerd[1543]: time="2025-01-29T11:29:37.402549199Z" level=info msg="StopPodSandbox for \"805283f302690830f7a88987c313107329845fbbd9a4297506b3397a13c61110\"" Jan 29 11:29:37.402791 kubelet[2780]: I0129 11:29:37.402103 2780 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="805283f302690830f7a88987c313107329845fbbd9a4297506b3397a13c61110" Jan 29 11:29:37.402827 containerd[1543]: time="2025-01-29T11:29:37.402701014Z" level=info msg="Ensure that sandbox 805283f302690830f7a88987c313107329845fbbd9a4297506b3397a13c61110 in task-service has been cleanup successfully" Jan 29 11:29:37.403012 containerd[1543]: time="2025-01-29T11:29:37.402985390Z" level=info msg="TearDown network for sandbox \"805283f302690830f7a88987c313107329845fbbd9a4297506b3397a13c61110\" successfully" Jan 29 11:29:37.403012 containerd[1543]: time="2025-01-29T11:29:37.403008106Z" level=info msg="StopPodSandbox for \"805283f302690830f7a88987c313107329845fbbd9a4297506b3397a13c61110\" returns 
successfully" Jan 29 11:29:37.403515 containerd[1543]: time="2025-01-29T11:29:37.403353610Z" level=info msg="StopPodSandbox for \"16dceff0c83213b56eaf857d06a148b80e73e81fc135d1ff0aae7aa6a7d4a3ec\"" Jan 29 11:29:37.403515 containerd[1543]: time="2025-01-29T11:29:37.403409462Z" level=info msg="TearDown network for sandbox \"16dceff0c83213b56eaf857d06a148b80e73e81fc135d1ff0aae7aa6a7d4a3ec\" successfully" Jan 29 11:29:37.403515 containerd[1543]: time="2025-01-29T11:29:37.403429563Z" level=info msg="StopPodSandbox for \"16dceff0c83213b56eaf857d06a148b80e73e81fc135d1ff0aae7aa6a7d4a3ec\" returns successfully" Jan 29 11:29:37.405925 containerd[1543]: time="2025-01-29T11:29:37.403735359Z" level=info msg="StopPodSandbox for \"dc7b8f34b80a43e6615c97db52703c81a6060e99cea5f8607d67063c0d15d690\"" Jan 29 11:29:37.405925 containerd[1543]: time="2025-01-29T11:29:37.403777151Z" level=info msg="TearDown network for sandbox \"dc7b8f34b80a43e6615c97db52703c81a6060e99cea5f8607d67063c0d15d690\" successfully" Jan 29 11:29:37.405925 containerd[1543]: time="2025-01-29T11:29:37.403783684Z" level=info msg="StopPodSandbox for \"dc7b8f34b80a43e6615c97db52703c81a6060e99cea5f8607d67063c0d15d690\" returns successfully" Jan 29 11:29:37.405925 containerd[1543]: time="2025-01-29T11:29:37.404031372Z" level=info msg="StopPodSandbox for \"98687d2218f6a2b009c5a2145dcb1ce73b4872a8ab06bcbeeacdd1d600a5054d\"" Jan 29 11:29:37.405925 containerd[1543]: time="2025-01-29T11:29:37.404093512Z" level=info msg="TearDown network for sandbox \"98687d2218f6a2b009c5a2145dcb1ce73b4872a8ab06bcbeeacdd1d600a5054d\" successfully" Jan 29 11:29:37.405925 containerd[1543]: time="2025-01-29T11:29:37.404103478Z" level=info msg="StopPodSandbox for \"98687d2218f6a2b009c5a2145dcb1ce73b4872a8ab06bcbeeacdd1d600a5054d\" returns successfully" Jan 29 11:29:37.405925 containerd[1543]: time="2025-01-29T11:29:37.404342343Z" level=info msg="StopPodSandbox for \"1335c4c1be4c603e540ffc9341ca82a3b73d4dcb2990fb3f252db9c214b08658\"" Jan 29 11:29:37.405925 containerd[1543]: time="2025-01-29T11:29:37.404403635Z" level=info msg="TearDown network for sandbox \"1335c4c1be4c603e540ffc9341ca82a3b73d4dcb2990fb3f252db9c214b08658\" successfully" Jan 29 11:29:37.405925 containerd[1543]: time="2025-01-29T11:29:37.404413870Z" level=info msg="StopPodSandbox for \"1335c4c1be4c603e540ffc9341ca82a3b73d4dcb2990fb3f252db9c214b08658\" returns successfully" Jan 29 11:29:37.405925 containerd[1543]: time="2025-01-29T11:29:37.404701944Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-v7xft,Uid:d903c164-6164-4123-875b-c2120fb387c7,Namespace:kube-system,Attempt:5,}" Jan 29 11:29:37.448174 kubelet[2780]: I0129 11:29:37.439832 2780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-qkt58" podStartSLOduration=2.931964425 podStartE2EDuration="25.416245298s" podCreationTimestamp="2025-01-29 11:29:12 +0000 UTC" firstStartedPulling="2025-01-29 11:29:13.71967422 +0000 UTC m=+10.505663507" lastFinishedPulling="2025-01-29 11:29:36.203955093 +0000 UTC m=+32.989944380" observedRunningTime="2025-01-29 11:29:37.396563909 +0000 UTC m=+34.182553205" watchObservedRunningTime="2025-01-29 11:29:37.416245298 +0000 UTC m=+34.202234590" Jan 29 11:29:38.197126 systemd[1]: run-netns-cni\x2d8e1be06f\x2d4973\x2db655\x2d959d\x2d4a8582e0b498.mount: Deactivated successfully. Jan 29 11:29:38.197323 systemd[1]: run-netns-cni\x2de5a21ccd\x2d5744\x2db1e4\x2d137f\x2d4666d34764d6.mount: Deactivated successfully. 
Jan 29 11:29:38.197359 systemd[1]: run-netns-cni\x2d6fbb6f3b\x2d9647\x2d7e5a\x2d43c0\x2da241168f3e7f.mount: Deactivated successfully. Jan 29 11:29:38.769638 containerd[1543]: 2025-01-29 11:29:37.607 [INFO][4427] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="0114e37d930883b2a628a02403af93554a307b3023a0d617ef48e586ed76fee0" Jan 29 11:29:38.769638 containerd[1543]: 2025-01-29 11:29:37.607 [INFO][4427] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="0114e37d930883b2a628a02403af93554a307b3023a0d617ef48e586ed76fee0" iface="eth0" netns="/var/run/netns/cni-48c31ea0-762e-db18-7ee4-000efa120f28" Jan 29 11:29:38.769638 containerd[1543]: 2025-01-29 11:29:37.608 [INFO][4427] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="0114e37d930883b2a628a02403af93554a307b3023a0d617ef48e586ed76fee0" iface="eth0" netns="/var/run/netns/cni-48c31ea0-762e-db18-7ee4-000efa120f28" Jan 29 11:29:38.769638 containerd[1543]: 2025-01-29 11:29:37.610 [INFO][4427] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="0114e37d930883b2a628a02403af93554a307b3023a0d617ef48e586ed76fee0" iface="eth0" netns="/var/run/netns/cni-48c31ea0-762e-db18-7ee4-000efa120f28" Jan 29 11:29:38.769638 containerd[1543]: 2025-01-29 11:29:37.610 [INFO][4427] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="0114e37d930883b2a628a02403af93554a307b3023a0d617ef48e586ed76fee0" Jan 29 11:29:38.769638 containerd[1543]: 2025-01-29 11:29:37.611 [INFO][4427] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0114e37d930883b2a628a02403af93554a307b3023a0d617ef48e586ed76fee0" Jan 29 11:29:38.769638 containerd[1543]: 2025-01-29 11:29:38.749 [INFO][4447] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="0114e37d930883b2a628a02403af93554a307b3023a0d617ef48e586ed76fee0" HandleID="k8s-pod-network.0114e37d930883b2a628a02403af93554a307b3023a0d617ef48e586ed76fee0" Workload="localhost-k8s-csi--node--driver--v9zp5-eth0" Jan 29 11:29:38.769638 containerd[1543]: 2025-01-29 11:29:38.751 [INFO][4447] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 11:29:38.769638 containerd[1543]: 2025-01-29 11:29:38.752 [INFO][4447] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 29 11:29:38.769638 containerd[1543]: 2025-01-29 11:29:38.765 [WARNING][4447] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="0114e37d930883b2a628a02403af93554a307b3023a0d617ef48e586ed76fee0" HandleID="k8s-pod-network.0114e37d930883b2a628a02403af93554a307b3023a0d617ef48e586ed76fee0" Workload="localhost-k8s-csi--node--driver--v9zp5-eth0" Jan 29 11:29:38.769638 containerd[1543]: 2025-01-29 11:29:38.765 [INFO][4447] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="0114e37d930883b2a628a02403af93554a307b3023a0d617ef48e586ed76fee0" HandleID="k8s-pod-network.0114e37d930883b2a628a02403af93554a307b3023a0d617ef48e586ed76fee0" Workload="localhost-k8s-csi--node--driver--v9zp5-eth0" Jan 29 11:29:38.769638 containerd[1543]: 2025-01-29 11:29:38.766 [INFO][4447] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 11:29:38.769638 containerd[1543]: 2025-01-29 11:29:38.768 [INFO][4427] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="0114e37d930883b2a628a02403af93554a307b3023a0d617ef48e586ed76fee0" Jan 29 11:29:38.772848 systemd[1]: run-netns-cni\x2d48c31ea0\x2d762e\x2ddb18\x2d7ee4\x2d000efa120f28.mount: Deactivated successfully. 
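The teardown trace above names the netns as /var/run/netns/cni-48c31ea0-762e-db18-7ee4-000efa120f28, and the matching systemd cleanup appears as run-netns-cni\x2d48c31ea0\x2d762e\x2ddb18\x2d7ee4\x2d000efa120f28.mount. The \x2d sequences are simply escaped hyphens: when a path becomes a mount unit name, "/" turns into "-" and characters outside the allowed set (including a literal "-") are hex-escaped. The sketch below is a rough approximation of that escaping, my own and not a reimplementation of systemd-escape; it assumes /var/run resolves to /run, which is why the units start with "run-" rather than "var-run-".

```go
// unit_escape_sketch.go: approximate systemd path-to-unit-name escaping.
package main

import (
	"fmt"
	"strings"
)

func escapePath(p string) string {
	p = strings.Trim(p, "/")
	var b strings.Builder
	for i := 0; i < len(p); i++ {
		c := p[i]
		switch {
		case c == '/':
			b.WriteByte('-') // path separators become dashes
		case c >= 'a' && c <= 'z', c >= 'A' && c <= 'Z',
			c >= '0' && c <= '9', c == '_', c == '.', c == ':':
			b.WriteByte(c) // allowed characters pass through
		default:
			fmt.Fprintf(&b, `\x%02x`, c) // everything else, including "-", is hex-escaped
		}
	}
	return b.String()
}

func main() {
	// The netns path from the teardown trace above, with /var/run -> /run.
	fmt.Println(escapePath("/run/netns/cni-48c31ea0-762e-db18-7ee4-000efa120f28") + ".mount")
	// prints: run-netns-cni\x2d48c31ea0\x2d762e\x2ddb18\x2d7ee4\x2d000efa120f28.mount
}
```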
Jan 29 11:29:38.777422 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-0114e37d930883b2a628a02403af93554a307b3023a0d617ef48e586ed76fee0-shm.mount: Deactivated successfully. Jan 29 11:29:38.786066 containerd[1543]: 2025-01-29 11:29:37.618 [INFO][4414] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="fb0cdc7218cbba704d3e8e130c7796ca857271d48f3acb35527a8a82fbc5cf92" Jan 29 11:29:38.786066 containerd[1543]: 2025-01-29 11:29:37.619 [INFO][4414] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="fb0cdc7218cbba704d3e8e130c7796ca857271d48f3acb35527a8a82fbc5cf92" iface="eth0" netns="/var/run/netns/cni-cf8460c2-afed-9426-ac83-874ca511111f" Jan 29 11:29:38.786066 containerd[1543]: 2025-01-29 11:29:37.620 [INFO][4414] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="fb0cdc7218cbba704d3e8e130c7796ca857271d48f3acb35527a8a82fbc5cf92" iface="eth0" netns="/var/run/netns/cni-cf8460c2-afed-9426-ac83-874ca511111f" Jan 29 11:29:38.786066 containerd[1543]: 2025-01-29 11:29:37.620 [INFO][4414] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="fb0cdc7218cbba704d3e8e130c7796ca857271d48f3acb35527a8a82fbc5cf92" iface="eth0" netns="/var/run/netns/cni-cf8460c2-afed-9426-ac83-874ca511111f" Jan 29 11:29:38.786066 containerd[1543]: 2025-01-29 11:29:37.620 [INFO][4414] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="fb0cdc7218cbba704d3e8e130c7796ca857271d48f3acb35527a8a82fbc5cf92" Jan 29 11:29:38.786066 containerd[1543]: 2025-01-29 11:29:37.620 [INFO][4414] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="fb0cdc7218cbba704d3e8e130c7796ca857271d48f3acb35527a8a82fbc5cf92" Jan 29 11:29:38.786066 containerd[1543]: 2025-01-29 11:29:38.749 [INFO][4448] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="fb0cdc7218cbba704d3e8e130c7796ca857271d48f3acb35527a8a82fbc5cf92" HandleID="k8s-pod-network.fb0cdc7218cbba704d3e8e130c7796ca857271d48f3acb35527a8a82fbc5cf92" Workload="localhost-k8s-calico--apiserver--f8fc55c6c--8vp99-eth0" Jan 29 11:29:38.786066 containerd[1543]: 2025-01-29 11:29:38.752 [INFO][4448] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 11:29:38.786066 containerd[1543]: 2025-01-29 11:29:38.766 [INFO][4448] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 29 11:29:38.786066 containerd[1543]: 2025-01-29 11:29:38.779 [WARNING][4448] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="fb0cdc7218cbba704d3e8e130c7796ca857271d48f3acb35527a8a82fbc5cf92" HandleID="k8s-pod-network.fb0cdc7218cbba704d3e8e130c7796ca857271d48f3acb35527a8a82fbc5cf92" Workload="localhost-k8s-calico--apiserver--f8fc55c6c--8vp99-eth0" Jan 29 11:29:38.786066 containerd[1543]: 2025-01-29 11:29:38.779 [INFO][4448] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="fb0cdc7218cbba704d3e8e130c7796ca857271d48f3acb35527a8a82fbc5cf92" HandleID="k8s-pod-network.fb0cdc7218cbba704d3e8e130c7796ca857271d48f3acb35527a8a82fbc5cf92" Workload="localhost-k8s-calico--apiserver--f8fc55c6c--8vp99-eth0" Jan 29 11:29:38.786066 containerd[1543]: 2025-01-29 11:29:38.780 [INFO][4448] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 11:29:38.786066 containerd[1543]: 2025-01-29 11:29:38.784 [INFO][4414] cni-plugin/k8s.go 621: Teardown processing complete. 
ContainerID="fb0cdc7218cbba704d3e8e130c7796ca857271d48f3acb35527a8a82fbc5cf92" Jan 29 11:29:38.788370 systemd[1]: run-netns-cni\x2dcf8460c2\x2dafed\x2d9426\x2dac83\x2d874ca511111f.mount: Deactivated successfully. Jan 29 11:29:38.819145 containerd[1543]: time="2025-01-29T11:29:38.819050081Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-f8fc55c6c-8vp99,Uid:5c59bfc6-a9c3-4ac8-8013-4f73cd38040c,Namespace:calico-apiserver,Attempt:4,} failed, error" error="failed to setup network for sandbox \"fb0cdc7218cbba704d3e8e130c7796ca857271d48f3acb35527a8a82fbc5cf92\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:29:38.819352 kubelet[2780]: E0129 11:29:38.819329 2780 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fb0cdc7218cbba704d3e8e130c7796ca857271d48f3acb35527a8a82fbc5cf92\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:29:38.819938 kubelet[2780]: E0129 11:29:38.819520 2780 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fb0cdc7218cbba704d3e8e130c7796ca857271d48f3acb35527a8a82fbc5cf92\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-f8fc55c6c-8vp99" Jan 29 11:29:38.819938 kubelet[2780]: E0129 11:29:38.819538 2780 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fb0cdc7218cbba704d3e8e130c7796ca857271d48f3acb35527a8a82fbc5cf92\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-f8fc55c6c-8vp99" Jan 29 11:29:38.819938 kubelet[2780]: E0129 11:29:38.819706 2780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-f8fc55c6c-8vp99_calico-apiserver(5c59bfc6-a9c3-4ac8-8013-4f73cd38040c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-f8fc55c6c-8vp99_calico-apiserver(5c59bfc6-a9c3-4ac8-8013-4f73cd38040c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"fb0cdc7218cbba704d3e8e130c7796ca857271d48f3acb35527a8a82fbc5cf92\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-f8fc55c6c-8vp99" podUID="5c59bfc6-a9c3-4ac8-8013-4f73cd38040c" Jan 29 11:29:38.821398 containerd[1543]: time="2025-01-29T11:29:38.820920547Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-v9zp5,Uid:ad861abe-5500-44ec-ad63-2a1f0ef1f899,Namespace:calico-system,Attempt:4,} failed, error" error="failed to setup network for sandbox \"0114e37d930883b2a628a02403af93554a307b3023a0d617ef48e586ed76fee0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container 
is running and has mounted /var/lib/calico/" Jan 29 11:29:38.821909 kubelet[2780]: E0129 11:29:38.821129 2780 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0114e37d930883b2a628a02403af93554a307b3023a0d617ef48e586ed76fee0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:29:38.821909 kubelet[2780]: E0129 11:29:38.821328 2780 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0114e37d930883b2a628a02403af93554a307b3023a0d617ef48e586ed76fee0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-v9zp5" Jan 29 11:29:38.821909 kubelet[2780]: E0129 11:29:38.821348 2780 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0114e37d930883b2a628a02403af93554a307b3023a0d617ef48e586ed76fee0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-v9zp5" Jan 29 11:29:38.821996 kubelet[2780]: E0129 11:29:38.821493 2780 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-v9zp5_calico-system(ad861abe-5500-44ec-ad63-2a1f0ef1f899)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-v9zp5_calico-system(ad861abe-5500-44ec-ad63-2a1f0ef1f899)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0114e37d930883b2a628a02403af93554a307b3023a0d617ef48e586ed76fee0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-v9zp5" podUID="ad861abe-5500-44ec-ad63-2a1f0ef1f899" Jan 29 11:29:38.825427 systemd-networkd[1242]: cali18d308c66cd: Link UP Jan 29 11:29:38.825531 systemd-networkd[1242]: cali18d308c66cd: Gained carrier Jan 29 11:29:38.831453 containerd[1543]: 2025-01-29 11:29:37.552 [INFO][4389] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 29 11:29:38.831453 containerd[1543]: 2025-01-29 11:29:37.596 [INFO][4389] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--f8fc55c6c--cvj7f-eth0 calico-apiserver-f8fc55c6c- calico-apiserver d14ebb17-ee65-46b9-8ef9-c5441011f365 676 0 2025-01-29 11:29:12 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:f8fc55c6c projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-f8fc55c6c-cvj7f eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali18d308c66cd [] []}} ContainerID="e2f39cc6e195d4968dbfffca7803e4b98d72d973db22bd027c98e9105f2b7116" Namespace="calico-apiserver" Pod="calico-apiserver-f8fc55c6c-cvj7f" WorkloadEndpoint="localhost-k8s-calico--apiserver--f8fc55c6c--cvj7f-" Jan 29 
11:29:38.831453 containerd[1543]: 2025-01-29 11:29:37.596 [INFO][4389] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="e2f39cc6e195d4968dbfffca7803e4b98d72d973db22bd027c98e9105f2b7116" Namespace="calico-apiserver" Pod="calico-apiserver-f8fc55c6c-cvj7f" WorkloadEndpoint="localhost-k8s-calico--apiserver--f8fc55c6c--cvj7f-eth0" Jan 29 11:29:38.831453 containerd[1543]: 2025-01-29 11:29:38.749 [INFO][4459] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e2f39cc6e195d4968dbfffca7803e4b98d72d973db22bd027c98e9105f2b7116" HandleID="k8s-pod-network.e2f39cc6e195d4968dbfffca7803e4b98d72d973db22bd027c98e9105f2b7116" Workload="localhost-k8s-calico--apiserver--f8fc55c6c--cvj7f-eth0" Jan 29 11:29:38.831453 containerd[1543]: 2025-01-29 11:29:38.772 [INFO][4459] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="e2f39cc6e195d4968dbfffca7803e4b98d72d973db22bd027c98e9105f2b7116" HandleID="k8s-pod-network.e2f39cc6e195d4968dbfffca7803e4b98d72d973db22bd027c98e9105f2b7116" Workload="localhost-k8s-calico--apiserver--f8fc55c6c--cvj7f-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002c3050), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-f8fc55c6c-cvj7f", "timestamp":"2025-01-29 11:29:38.749494744 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 29 11:29:38.831453 containerd[1543]: 2025-01-29 11:29:38.772 [INFO][4459] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 11:29:38.831453 containerd[1543]: 2025-01-29 11:29:38.780 [INFO][4459] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
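The IPAM trace above prints the request the CNI plugin makes once calico-node is running: one IPv4 address, no IPv6, keyed by a handle built from the network name and the container ID, with the namespace, node, and pod recorded as attributes. The snippet below is a trimmed local mirror of that request shape for illustration only; it is not the real libcalico-go type.

```go
// autoassign_shape.go: a trimmed mirror of the AutoAssign request shown above.
package main

import "fmt"

type autoAssignArgs struct {
	Num4, Num6 int               // how many IPv4 / IPv6 addresses to assign
	HandleID   string            // "k8s-pod-network." + container ID in the trace
	Attrs      map[string]string // namespace, node, pod recorded with the allocation
}

func main() {
	req := autoAssignArgs{
		Num4:     1,
		Num6:     0,
		HandleID: "k8s-pod-network.e2f39cc6e195d4968dbfffca7803e4b98d72d973db22bd027c98e9105f2b7116",
		Attrs: map[string]string{
			"namespace": "calico-apiserver",
			"node":      "localhost",
			"pod":       "calico-apiserver-f8fc55c6c-cvj7f",
		},
	}
	fmt.Printf("requesting %d IPv4 / %d IPv6 for handle %s\n", req.Num4, req.Num6, req.HandleID)
}
```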
Jan 29 11:29:38.831453 containerd[1543]: 2025-01-29 11:29:38.780 [INFO][4459] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 29 11:29:38.831453 containerd[1543]: 2025-01-29 11:29:38.783 [INFO][4459] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.e2f39cc6e195d4968dbfffca7803e4b98d72d973db22bd027c98e9105f2b7116" host="localhost" Jan 29 11:29:38.831453 containerd[1543]: 2025-01-29 11:29:38.791 [INFO][4459] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Jan 29 11:29:38.831453 containerd[1543]: 2025-01-29 11:29:38.795 [INFO][4459] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Jan 29 11:29:38.831453 containerd[1543]: 2025-01-29 11:29:38.796 [INFO][4459] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 29 11:29:38.831453 containerd[1543]: 2025-01-29 11:29:38.797 [INFO][4459] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 29 11:29:38.831453 containerd[1543]: 2025-01-29 11:29:38.798 [INFO][4459] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.e2f39cc6e195d4968dbfffca7803e4b98d72d973db22bd027c98e9105f2b7116" host="localhost" Jan 29 11:29:38.831453 containerd[1543]: 2025-01-29 11:29:38.799 [INFO][4459] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.e2f39cc6e195d4968dbfffca7803e4b98d72d973db22bd027c98e9105f2b7116 Jan 29 11:29:38.831453 containerd[1543]: 2025-01-29 11:29:38.801 [INFO][4459] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.e2f39cc6e195d4968dbfffca7803e4b98d72d973db22bd027c98e9105f2b7116" host="localhost" Jan 29 11:29:38.831453 containerd[1543]: 2025-01-29 11:29:38.806 [INFO][4459] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.e2f39cc6e195d4968dbfffca7803e4b98d72d973db22bd027c98e9105f2b7116" host="localhost" Jan 29 11:29:38.831453 containerd[1543]: 2025-01-29 11:29:38.806 [INFO][4459] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.129/26] handle="k8s-pod-network.e2f39cc6e195d4968dbfffca7803e4b98d72d973db22bd027c98e9105f2b7116" host="localhost" Jan 29 11:29:38.831453 containerd[1543]: 2025-01-29 11:29:38.806 [INFO][4459] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
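The block math in the assignment above checks out: the affine block 192.168.88.128/26 spans 64 addresses (192.168.88.128 through 192.168.88.191), and the claimed 192.168.88.129 is the first usable address inside it. The snippet below is a quick sanity check of that containment using the standard library, my own and independent of Calico's IPAM.

```go
// block_check.go: verify the claimed address sits inside the affine block.
package main

import (
	"fmt"
	"net/netip"
)

func main() {
	block := netip.MustParsePrefix("192.168.88.128/26")
	claimed := netip.MustParseAddr("192.168.88.129")

	fmt.Println("block contains claimed address:", block.Contains(claimed)) // true
	fmt.Println("addresses in a /26:", 1<<(32-block.Bits()))                // 64
}
```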
Jan 29 11:29:38.831453 containerd[1543]: 2025-01-29 11:29:38.806 [INFO][4459] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.129/26] IPv6=[] ContainerID="e2f39cc6e195d4968dbfffca7803e4b98d72d973db22bd027c98e9105f2b7116" HandleID="k8s-pod-network.e2f39cc6e195d4968dbfffca7803e4b98d72d973db22bd027c98e9105f2b7116" Workload="localhost-k8s-calico--apiserver--f8fc55c6c--cvj7f-eth0" Jan 29 11:29:38.833295 containerd[1543]: 2025-01-29 11:29:38.807 [INFO][4389] cni-plugin/k8s.go 386: Populated endpoint ContainerID="e2f39cc6e195d4968dbfffca7803e4b98d72d973db22bd027c98e9105f2b7116" Namespace="calico-apiserver" Pod="calico-apiserver-f8fc55c6c-cvj7f" WorkloadEndpoint="localhost-k8s-calico--apiserver--f8fc55c6c--cvj7f-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--f8fc55c6c--cvj7f-eth0", GenerateName:"calico-apiserver-f8fc55c6c-", Namespace:"calico-apiserver", SelfLink:"", UID:"d14ebb17-ee65-46b9-8ef9-c5441011f365", ResourceVersion:"676", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 11, 29, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"f8fc55c6c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-f8fc55c6c-cvj7f", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali18d308c66cd", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 11:29:38.833295 containerd[1543]: 2025-01-29 11:29:38.808 [INFO][4389] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.129/32] ContainerID="e2f39cc6e195d4968dbfffca7803e4b98d72d973db22bd027c98e9105f2b7116" Namespace="calico-apiserver" Pod="calico-apiserver-f8fc55c6c-cvj7f" WorkloadEndpoint="localhost-k8s-calico--apiserver--f8fc55c6c--cvj7f-eth0" Jan 29 11:29:38.833295 containerd[1543]: 2025-01-29 11:29:38.808 [INFO][4389] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali18d308c66cd ContainerID="e2f39cc6e195d4968dbfffca7803e4b98d72d973db22bd027c98e9105f2b7116" Namespace="calico-apiserver" Pod="calico-apiserver-f8fc55c6c-cvj7f" WorkloadEndpoint="localhost-k8s-calico--apiserver--f8fc55c6c--cvj7f-eth0" Jan 29 11:29:38.833295 containerd[1543]: 2025-01-29 11:29:38.820 [INFO][4389] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e2f39cc6e195d4968dbfffca7803e4b98d72d973db22bd027c98e9105f2b7116" Namespace="calico-apiserver" Pod="calico-apiserver-f8fc55c6c-cvj7f" WorkloadEndpoint="localhost-k8s-calico--apiserver--f8fc55c6c--cvj7f-eth0" Jan 29 11:29:38.833295 containerd[1543]: 2025-01-29 11:29:38.820 [INFO][4389] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="e2f39cc6e195d4968dbfffca7803e4b98d72d973db22bd027c98e9105f2b7116" 
Namespace="calico-apiserver" Pod="calico-apiserver-f8fc55c6c-cvj7f" WorkloadEndpoint="localhost-k8s-calico--apiserver--f8fc55c6c--cvj7f-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--f8fc55c6c--cvj7f-eth0", GenerateName:"calico-apiserver-f8fc55c6c-", Namespace:"calico-apiserver", SelfLink:"", UID:"d14ebb17-ee65-46b9-8ef9-c5441011f365", ResourceVersion:"676", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 11, 29, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"f8fc55c6c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"e2f39cc6e195d4968dbfffca7803e4b98d72d973db22bd027c98e9105f2b7116", Pod:"calico-apiserver-f8fc55c6c-cvj7f", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali18d308c66cd", MAC:"3e:35:7a:4c:6b:dc", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 11:29:38.833295 containerd[1543]: 2025-01-29 11:29:38.829 [INFO][4389] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="e2f39cc6e195d4968dbfffca7803e4b98d72d973db22bd027c98e9105f2b7116" Namespace="calico-apiserver" Pod="calico-apiserver-f8fc55c6c-cvj7f" WorkloadEndpoint="localhost-k8s-calico--apiserver--f8fc55c6c--cvj7f-eth0" Jan 29 11:29:38.865885 containerd[1543]: time="2025-01-29T11:29:38.865826045Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 11:29:38.865996 containerd[1543]: time="2025-01-29T11:29:38.865867075Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 11:29:38.866047 containerd[1543]: time="2025-01-29T11:29:38.865988048Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:29:38.866131 containerd[1543]: time="2025-01-29T11:29:38.866111390Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:29:38.880831 systemd[1]: Started cri-containerd-e2f39cc6e195d4968dbfffca7803e4b98d72d973db22bd027c98e9105f2b7116.scope - libcontainer container e2f39cc6e195d4968dbfffca7803e4b98d72d973db22bd027c98e9105f2b7116. 
Jan 29 11:29:38.891920 systemd-resolved[1468]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 29 11:29:38.922848 systemd-networkd[1242]: cali36dfd8aadb7: Link UP Jan 29 11:29:38.926117 systemd-networkd[1242]: cali36dfd8aadb7: Gained carrier Jan 29 11:29:38.932592 containerd[1543]: time="2025-01-29T11:29:38.932565943Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-f8fc55c6c-cvj7f,Uid:d14ebb17-ee65-46b9-8ef9-c5441011f365,Namespace:calico-apiserver,Attempt:5,} returns sandbox id \"e2f39cc6e195d4968dbfffca7803e4b98d72d973db22bd027c98e9105f2b7116\"" Jan 29 11:29:38.935491 containerd[1543]: time="2025-01-29T11:29:38.935305593Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" Jan 29 11:29:38.944535 containerd[1543]: 2025-01-29 11:29:37.556 [INFO][4385] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 29 11:29:38.944535 containerd[1543]: 2025-01-29 11:29:37.593 [INFO][4385] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--668d6bf9bc--v7xft-eth0 coredns-668d6bf9bc- kube-system d903c164-6164-4123-875b-c2120fb387c7 673 0 2025-01-29 11:29:07 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-668d6bf9bc-v7xft eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali36dfd8aadb7 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="801a159309c5feab668a545ec59676ae61e19761627d0c084ea9c36378c7ee80" Namespace="kube-system" Pod="coredns-668d6bf9bc-v7xft" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--v7xft-" Jan 29 11:29:38.944535 containerd[1543]: 2025-01-29 11:29:37.593 [INFO][4385] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="801a159309c5feab668a545ec59676ae61e19761627d0c084ea9c36378c7ee80" Namespace="kube-system" Pod="coredns-668d6bf9bc-v7xft" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--v7xft-eth0" Jan 29 11:29:38.944535 containerd[1543]: 2025-01-29 11:29:38.749 [INFO][4458] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="801a159309c5feab668a545ec59676ae61e19761627d0c084ea9c36378c7ee80" HandleID="k8s-pod-network.801a159309c5feab668a545ec59676ae61e19761627d0c084ea9c36378c7ee80" Workload="localhost-k8s-coredns--668d6bf9bc--v7xft-eth0" Jan 29 11:29:38.944535 containerd[1543]: 2025-01-29 11:29:38.773 [INFO][4458] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="801a159309c5feab668a545ec59676ae61e19761627d0c084ea9c36378c7ee80" HandleID="k8s-pod-network.801a159309c5feab668a545ec59676ae61e19761627d0c084ea9c36378c7ee80" Workload="localhost-k8s-coredns--668d6bf9bc--v7xft-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000359d20), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-668d6bf9bc-v7xft", "timestamp":"2025-01-29 11:29:38.749600418 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 29 11:29:38.944535 containerd[1543]: 2025-01-29 11:29:38.773 [INFO][4458] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Jan 29 11:29:38.944535 containerd[1543]: 2025-01-29 11:29:38.806 [INFO][4458] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 29 11:29:38.944535 containerd[1543]: 2025-01-29 11:29:38.806 [INFO][4458] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 29 11:29:38.944535 containerd[1543]: 2025-01-29 11:29:38.884 [INFO][4458] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.801a159309c5feab668a545ec59676ae61e19761627d0c084ea9c36378c7ee80" host="localhost" Jan 29 11:29:38.944535 containerd[1543]: 2025-01-29 11:29:38.888 [INFO][4458] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Jan 29 11:29:38.944535 containerd[1543]: 2025-01-29 11:29:38.897 [INFO][4458] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Jan 29 11:29:38.944535 containerd[1543]: 2025-01-29 11:29:38.899 [INFO][4458] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 29 11:29:38.944535 containerd[1543]: 2025-01-29 11:29:38.901 [INFO][4458] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 29 11:29:38.944535 containerd[1543]: 2025-01-29 11:29:38.901 [INFO][4458] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.801a159309c5feab668a545ec59676ae61e19761627d0c084ea9c36378c7ee80" host="localhost" Jan 29 11:29:38.944535 containerd[1543]: 2025-01-29 11:29:38.903 [INFO][4458] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.801a159309c5feab668a545ec59676ae61e19761627d0c084ea9c36378c7ee80 Jan 29 11:29:38.944535 containerd[1543]: 2025-01-29 11:29:38.906 [INFO][4458] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.801a159309c5feab668a545ec59676ae61e19761627d0c084ea9c36378c7ee80" host="localhost" Jan 29 11:29:38.944535 containerd[1543]: 2025-01-29 11:29:38.912 [INFO][4458] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.801a159309c5feab668a545ec59676ae61e19761627d0c084ea9c36378c7ee80" host="localhost" Jan 29 11:29:38.944535 containerd[1543]: 2025-01-29 11:29:38.913 [INFO][4458] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.130/26] handle="k8s-pod-network.801a159309c5feab668a545ec59676ae61e19761627d0c084ea9c36378c7ee80" host="localhost" Jan 29 11:29:38.944535 containerd[1543]: 2025-01-29 11:29:38.913 [INFO][4458] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
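The same claim pattern repeats for each pod: the coredns-668d6bf9bc-v7xft endpoint gets 192.168.88.130 from the same block. If you need to pull these assignments out of a captured log, an ad-hoc regular expression over the "Successfully claimed IPs" records is enough; the sketch below uses two shortened fragments of the lines above (container IDs abbreviated) and is not an official log-format parser.

```go
package main

import (
	"fmt"
	"regexp"
)

// Shortened fragments of the ipam.go 1216 records above (container IDs abbreviated).
var samples = []string{
	`ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.129/26] block=192.168.88.128/26 handle="k8s-pod-network.e2f39cc6e195" host="localhost"`,
	`ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.130/26] block=192.168.88.128/26 handle="k8s-pod-network.801a159309c5" host="localhost"`,
}

var claimRe = regexp.MustCompile(`Successfully claimed IPs: \[([0-9./]+)\].*handle="k8s-pod-network\.([0-9a-f]+)"`)

func main() {
	assigned := map[string]string{} // container ID prefix -> claimed address
	for _, line := range samples {
		if m := claimRe.FindStringSubmatch(line); m != nil {
			assigned[m[2]] = m[1]
		}
	}
	// Prints map[801a159309c5:192.168.88.130/26 e2f39cc6e195:192.168.88.129/26]
	fmt.Println(assigned)
}
```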
Jan 29 11:29:38.944535 containerd[1543]: 2025-01-29 11:29:38.913 [INFO][4458] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.130/26] IPv6=[] ContainerID="801a159309c5feab668a545ec59676ae61e19761627d0c084ea9c36378c7ee80" HandleID="k8s-pod-network.801a159309c5feab668a545ec59676ae61e19761627d0c084ea9c36378c7ee80" Workload="localhost-k8s-coredns--668d6bf9bc--v7xft-eth0" Jan 29 11:29:38.948518 containerd[1543]: 2025-01-29 11:29:38.917 [INFO][4385] cni-plugin/k8s.go 386: Populated endpoint ContainerID="801a159309c5feab668a545ec59676ae61e19761627d0c084ea9c36378c7ee80" Namespace="kube-system" Pod="coredns-668d6bf9bc-v7xft" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--v7xft-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--v7xft-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"d903c164-6164-4123-875b-c2120fb387c7", ResourceVersion:"673", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 11, 29, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-668d6bf9bc-v7xft", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali36dfd8aadb7", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 11:29:38.948518 containerd[1543]: 2025-01-29 11:29:38.917 [INFO][4385] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.130/32] ContainerID="801a159309c5feab668a545ec59676ae61e19761627d0c084ea9c36378c7ee80" Namespace="kube-system" Pod="coredns-668d6bf9bc-v7xft" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--v7xft-eth0" Jan 29 11:29:38.948518 containerd[1543]: 2025-01-29 11:29:38.917 [INFO][4385] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali36dfd8aadb7 ContainerID="801a159309c5feab668a545ec59676ae61e19761627d0c084ea9c36378c7ee80" Namespace="kube-system" Pod="coredns-668d6bf9bc-v7xft" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--v7xft-eth0" Jan 29 11:29:38.948518 containerd[1543]: 2025-01-29 11:29:38.927 [INFO][4385] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="801a159309c5feab668a545ec59676ae61e19761627d0c084ea9c36378c7ee80" Namespace="kube-system" Pod="coredns-668d6bf9bc-v7xft" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--v7xft-eth0" Jan 29 11:29:38.948518 containerd[1543]: 2025-01-29 11:29:38.927 
[INFO][4385] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="801a159309c5feab668a545ec59676ae61e19761627d0c084ea9c36378c7ee80" Namespace="kube-system" Pod="coredns-668d6bf9bc-v7xft" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--v7xft-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--v7xft-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"d903c164-6164-4123-875b-c2120fb387c7", ResourceVersion:"673", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 11, 29, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"801a159309c5feab668a545ec59676ae61e19761627d0c084ea9c36378c7ee80", Pod:"coredns-668d6bf9bc-v7xft", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali36dfd8aadb7", MAC:"aa:07:60:5e:da:ba", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 11:29:38.948518 containerd[1543]: 2025-01-29 11:29:38.942 [INFO][4385] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="801a159309c5feab668a545ec59676ae61e19761627d0c084ea9c36378c7ee80" Namespace="kube-system" Pod="coredns-668d6bf9bc-v7xft" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--v7xft-eth0" Jan 29 11:29:38.962605 containerd[1543]: time="2025-01-29T11:29:38.962508041Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 11:29:38.962605 containerd[1543]: time="2025-01-29T11:29:38.962570938Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 11:29:38.962605 containerd[1543]: time="2025-01-29T11:29:38.962605895Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:29:38.962898 containerd[1543]: time="2025-01-29T11:29:38.962694626Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:29:38.978887 systemd[1]: Started cri-containerd-801a159309c5feab668a545ec59676ae61e19761627d0c084ea9c36378c7ee80.scope - libcontainer container 801a159309c5feab668a545ec59676ae61e19761627d0c084ea9c36378c7ee80. 
Jan 29 11:29:38.996608 systemd-resolved[1468]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 29 11:29:39.018968 systemd-networkd[1242]: cali6aad0f32d47: Link UP Jan 29 11:29:39.019709 systemd-networkd[1242]: cali6aad0f32d47: Gained carrier Jan 29 11:29:39.031986 containerd[1543]: time="2025-01-29T11:29:39.031834348Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-v7xft,Uid:d903c164-6164-4123-875b-c2120fb387c7,Namespace:kube-system,Attempt:5,} returns sandbox id \"801a159309c5feab668a545ec59676ae61e19761627d0c084ea9c36378c7ee80\"" Jan 29 11:29:39.037030 containerd[1543]: time="2025-01-29T11:29:39.036929224Z" level=info msg="CreateContainer within sandbox \"801a159309c5feab668a545ec59676ae61e19761627d0c084ea9c36378c7ee80\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 29 11:29:39.038765 containerd[1543]: 2025-01-29 11:29:37.549 [INFO][4365] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 29 11:29:39.038765 containerd[1543]: 2025-01-29 11:29:37.593 [INFO][4365] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-coredns--668d6bf9bc--z5wzg-eth0 coredns-668d6bf9bc- kube-system c5612029-5997-4071-9364-2b02408150e5 674 0 2025-01-29 11:29:07 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s localhost coredns-668d6bf9bc-z5wzg eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali6aad0f32d47 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] []}} ContainerID="099c7a0def61cfcf15770a2c6969edb70f7941c906edaf685db88648a5b8498f" Namespace="kube-system" Pod="coredns-668d6bf9bc-z5wzg" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--z5wzg-" Jan 29 11:29:39.038765 containerd[1543]: 2025-01-29 11:29:37.594 [INFO][4365] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="099c7a0def61cfcf15770a2c6969edb70f7941c906edaf685db88648a5b8498f" Namespace="kube-system" Pod="coredns-668d6bf9bc-z5wzg" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--z5wzg-eth0" Jan 29 11:29:39.038765 containerd[1543]: 2025-01-29 11:29:38.749 [INFO][4457] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="099c7a0def61cfcf15770a2c6969edb70f7941c906edaf685db88648a5b8498f" HandleID="k8s-pod-network.099c7a0def61cfcf15770a2c6969edb70f7941c906edaf685db88648a5b8498f" Workload="localhost-k8s-coredns--668d6bf9bc--z5wzg-eth0" Jan 29 11:29:39.038765 containerd[1543]: 2025-01-29 11:29:38.778 [INFO][4457] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="099c7a0def61cfcf15770a2c6969edb70f7941c906edaf685db88648a5b8498f" HandleID="k8s-pod-network.099c7a0def61cfcf15770a2c6969edb70f7941c906edaf685db88648a5b8498f" Workload="localhost-k8s-coredns--668d6bf9bc--z5wzg-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002400d0), Attrs:map[string]string{"namespace":"kube-system", "node":"localhost", "pod":"coredns-668d6bf9bc-z5wzg", "timestamp":"2025-01-29 11:29:38.749558834 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 29 11:29:39.038765 containerd[1543]: 2025-01-29 11:29:38.779 [INFO][4457] ipam/ipam_plugin.go 353: About to 
acquire host-wide IPAM lock. Jan 29 11:29:39.038765 containerd[1543]: 2025-01-29 11:29:38.913 [INFO][4457] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 29 11:29:39.038765 containerd[1543]: 2025-01-29 11:29:38.913 [INFO][4457] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 29 11:29:39.038765 containerd[1543]: 2025-01-29 11:29:38.983 [INFO][4457] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.099c7a0def61cfcf15770a2c6969edb70f7941c906edaf685db88648a5b8498f" host="localhost" Jan 29 11:29:39.038765 containerd[1543]: 2025-01-29 11:29:38.990 [INFO][4457] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Jan 29 11:29:39.038765 containerd[1543]: 2025-01-29 11:29:38.997 [INFO][4457] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Jan 29 11:29:39.038765 containerd[1543]: 2025-01-29 11:29:38.998 [INFO][4457] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 29 11:29:39.038765 containerd[1543]: 2025-01-29 11:29:39.000 [INFO][4457] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 29 11:29:39.038765 containerd[1543]: 2025-01-29 11:29:39.001 [INFO][4457] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.099c7a0def61cfcf15770a2c6969edb70f7941c906edaf685db88648a5b8498f" host="localhost" Jan 29 11:29:39.038765 containerd[1543]: 2025-01-29 11:29:39.002 [INFO][4457] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.099c7a0def61cfcf15770a2c6969edb70f7941c906edaf685db88648a5b8498f Jan 29 11:29:39.038765 containerd[1543]: 2025-01-29 11:29:39.006 [INFO][4457] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.099c7a0def61cfcf15770a2c6969edb70f7941c906edaf685db88648a5b8498f" host="localhost" Jan 29 11:29:39.038765 containerd[1543]: 2025-01-29 11:29:39.012 [INFO][4457] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.131/26] block=192.168.88.128/26 handle="k8s-pod-network.099c7a0def61cfcf15770a2c6969edb70f7941c906edaf685db88648a5b8498f" host="localhost" Jan 29 11:29:39.038765 containerd[1543]: 2025-01-29 11:29:39.012 [INFO][4457] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.131/26] handle="k8s-pod-network.099c7a0def61cfcf15770a2c6969edb70f7941c906edaf685db88648a5b8498f" host="localhost" Jan 29 11:29:39.038765 containerd[1543]: 2025-01-29 11:29:39.012 [INFO][4457] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
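The lock timestamps explain why the three parallel CNI ADDs complete back to back: [4459] releases the host-wide IPAM lock at 38.806, [4458] acquires it at 38.806 and releases it at 38.913, and [4457] acquires it at 38.913. The sketch below is an illustrative model of that serialization using a plain sync.Mutex; it is not Calico's implementation, only a picture of the queueing behaviour the timestamps suggest.

```go
package main

import (
	"fmt"
	"sync"
	"time"
)

func main() {
	// Illustrative model only: three concurrent CNI ADD requests (compare the
	// [4457]/[4458]/[4459] goroutines in the log) queue on one host-wide lock,
	// so each address assignment starts only after the previous one finishes.
	var hostWideIPAMLock sync.Mutex
	var wg sync.WaitGroup

	for _, id := range []string{"4459", "4458", "4457"} {
		wg.Add(1)
		go func(id string) {
			defer wg.Done()
			fmt.Printf("[%s] about to acquire host-wide IPAM lock\n", id)
			hostWideIPAMLock.Lock()
			fmt.Printf("[%s] acquired lock, assigning one address\n", id)
			time.Sleep(10 * time.Millisecond) // stand-in for block lookup and datastore write
			hostWideIPAMLock.Unlock()
			fmt.Printf("[%s] released host-wide IPAM lock\n", id)
		}(id)
	}
	wg.Wait()
}
```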
Jan 29 11:29:39.038765 containerd[1543]: 2025-01-29 11:29:39.012 [INFO][4457] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.131/26] IPv6=[] ContainerID="099c7a0def61cfcf15770a2c6969edb70f7941c906edaf685db88648a5b8498f" HandleID="k8s-pod-network.099c7a0def61cfcf15770a2c6969edb70f7941c906edaf685db88648a5b8498f" Workload="localhost-k8s-coredns--668d6bf9bc--z5wzg-eth0" Jan 29 11:29:39.039437 containerd[1543]: 2025-01-29 11:29:39.015 [INFO][4365] cni-plugin/k8s.go 386: Populated endpoint ContainerID="099c7a0def61cfcf15770a2c6969edb70f7941c906edaf685db88648a5b8498f" Namespace="kube-system" Pod="coredns-668d6bf9bc-z5wzg" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--z5wzg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--z5wzg-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"c5612029-5997-4071-9364-2b02408150e5", ResourceVersion:"674", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 11, 29, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"coredns-668d6bf9bc-z5wzg", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali6aad0f32d47", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 11:29:39.039437 containerd[1543]: 2025-01-29 11:29:39.015 [INFO][4365] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.131/32] ContainerID="099c7a0def61cfcf15770a2c6969edb70f7941c906edaf685db88648a5b8498f" Namespace="kube-system" Pod="coredns-668d6bf9bc-z5wzg" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--z5wzg-eth0" Jan 29 11:29:39.039437 containerd[1543]: 2025-01-29 11:29:39.015 [INFO][4365] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali6aad0f32d47 ContainerID="099c7a0def61cfcf15770a2c6969edb70f7941c906edaf685db88648a5b8498f" Namespace="kube-system" Pod="coredns-668d6bf9bc-z5wzg" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--z5wzg-eth0" Jan 29 11:29:39.039437 containerd[1543]: 2025-01-29 11:29:39.019 [INFO][4365] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="099c7a0def61cfcf15770a2c6969edb70f7941c906edaf685db88648a5b8498f" Namespace="kube-system" Pod="coredns-668d6bf9bc-z5wzg" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--z5wzg-eth0" Jan 29 11:29:39.039437 containerd[1543]: 2025-01-29 11:29:39.020 
[INFO][4365] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="099c7a0def61cfcf15770a2c6969edb70f7941c906edaf685db88648a5b8498f" Namespace="kube-system" Pod="coredns-668d6bf9bc-z5wzg" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--z5wzg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-coredns--668d6bf9bc--z5wzg-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"c5612029-5997-4071-9364-2b02408150e5", ResourceVersion:"674", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 11, 29, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"099c7a0def61cfcf15770a2c6969edb70f7941c906edaf685db88648a5b8498f", Pod:"coredns-668d6bf9bc-z5wzg", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.88.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali6aad0f32d47", MAC:"fe:c0:53:fa:33:57", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 11:29:39.039437 containerd[1543]: 2025-01-29 11:29:39.034 [INFO][4365] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="099c7a0def61cfcf15770a2c6969edb70f7941c906edaf685db88648a5b8498f" Namespace="kube-system" Pod="coredns-668d6bf9bc-z5wzg" WorkloadEndpoint="localhost-k8s-coredns--668d6bf9bc--z5wzg-eth0" Jan 29 11:29:39.057204 containerd[1543]: time="2025-01-29T11:29:39.055841836Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 11:29:39.057204 containerd[1543]: time="2025-01-29T11:29:39.055891372Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 11:29:39.057204 containerd[1543]: time="2025-01-29T11:29:39.055901265Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:29:39.057204 containerd[1543]: time="2025-01-29T11:29:39.056360155Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:29:39.072198 containerd[1543]: time="2025-01-29T11:29:39.072159971Z" level=info msg="CreateContainer within sandbox \"801a159309c5feab668a545ec59676ae61e19761627d0c084ea9c36378c7ee80\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"4277fd38f7559d696926453637e6e4eae073601a57acd87daef5056933b829a2\"" Jan 29 11:29:39.072794 containerd[1543]: time="2025-01-29T11:29:39.072650079Z" level=info msg="StartContainer for \"4277fd38f7559d696926453637e6e4eae073601a57acd87daef5056933b829a2\"" Jan 29 11:29:39.074923 systemd[1]: Started cri-containerd-099c7a0def61cfcf15770a2c6969edb70f7941c906edaf685db88648a5b8498f.scope - libcontainer container 099c7a0def61cfcf15770a2c6969edb70f7941c906edaf685db88648a5b8498f. Jan 29 11:29:39.092815 systemd-resolved[1468]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 29 11:29:39.097830 systemd[1]: Started cri-containerd-4277fd38f7559d696926453637e6e4eae073601a57acd87daef5056933b829a2.scope - libcontainer container 4277fd38f7559d696926453637e6e4eae073601a57acd87daef5056933b829a2. Jan 29 11:29:39.120437 containerd[1543]: time="2025-01-29T11:29:39.120413850Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-z5wzg,Uid:c5612029-5997-4071-9364-2b02408150e5,Namespace:kube-system,Attempt:4,} returns sandbox id \"099c7a0def61cfcf15770a2c6969edb70f7941c906edaf685db88648a5b8498f\"" Jan 29 11:29:39.123727 containerd[1543]: time="2025-01-29T11:29:39.123640684Z" level=info msg="CreateContainer within sandbox \"099c7a0def61cfcf15770a2c6969edb70f7941c906edaf685db88648a5b8498f\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 29 11:29:39.134763 systemd-networkd[1242]: cali146840157b2: Link UP Jan 29 11:29:39.134901 systemd-networkd[1242]: cali146840157b2: Gained carrier Jan 29 11:29:39.145383 containerd[1543]: time="2025-01-29T11:29:39.140680557Z" level=info msg="CreateContainer within sandbox \"099c7a0def61cfcf15770a2c6969edb70f7941c906edaf685db88648a5b8498f\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"7e746243f5fed33de609082a689429f272d3ee52f171e88136b599da9e69baeb\"" Jan 29 11:29:39.145598 containerd[1543]: time="2025-01-29T11:29:39.145582913Z" level=info msg="StartContainer for \"7e746243f5fed33de609082a689429f272d3ee52f171e88136b599da9e69baeb\"" Jan 29 11:29:39.167966 containerd[1543]: time="2025-01-29T11:29:39.167842763Z" level=info msg="StartContainer for \"4277fd38f7559d696926453637e6e4eae073601a57acd87daef5056933b829a2\" returns successfully" Jan 29 11:29:39.169800 containerd[1543]: 2025-01-29 11:29:37.530 [INFO][4348] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 29 11:29:39.169800 containerd[1543]: 2025-01-29 11:29:37.593 [INFO][4348] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--kube--controllers--6bccbcd89f--p65ch-eth0 calico-kube-controllers-6bccbcd89f- calico-system a3a7fc11-29e2-44d1-b25b-a10745cff7e7 675 0 2025-01-29 11:29:13 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:6bccbcd89f projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s localhost calico-kube-controllers-6bccbcd89f-p65ch eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] 
cali146840157b2 [] []}} ContainerID="11a6905644ad06f922a718327d5bfa5ca28bb8336ea652e5771c4decbbd33c9d" Namespace="calico-system" Pod="calico-kube-controllers-6bccbcd89f-p65ch" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6bccbcd89f--p65ch-" Jan 29 11:29:39.169800 containerd[1543]: 2025-01-29 11:29:37.593 [INFO][4348] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="11a6905644ad06f922a718327d5bfa5ca28bb8336ea652e5771c4decbbd33c9d" Namespace="calico-system" Pod="calico-kube-controllers-6bccbcd89f-p65ch" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6bccbcd89f--p65ch-eth0" Jan 29 11:29:39.169800 containerd[1543]: 2025-01-29 11:29:38.749 [INFO][4460] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="11a6905644ad06f922a718327d5bfa5ca28bb8336ea652e5771c4decbbd33c9d" HandleID="k8s-pod-network.11a6905644ad06f922a718327d5bfa5ca28bb8336ea652e5771c4decbbd33c9d" Workload="localhost-k8s-calico--kube--controllers--6bccbcd89f--p65ch-eth0" Jan 29 11:29:39.169800 containerd[1543]: 2025-01-29 11:29:38.778 [INFO][4460] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="11a6905644ad06f922a718327d5bfa5ca28bb8336ea652e5771c4decbbd33c9d" HandleID="k8s-pod-network.11a6905644ad06f922a718327d5bfa5ca28bb8336ea652e5771c4decbbd33c9d" Workload="localhost-k8s-calico--kube--controllers--6bccbcd89f--p65ch-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002437b0), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"calico-kube-controllers-6bccbcd89f-p65ch", "timestamp":"2025-01-29 11:29:38.749556854 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 29 11:29:39.169800 containerd[1543]: 2025-01-29 11:29:38.778 [INFO][4460] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 11:29:39.169800 containerd[1543]: 2025-01-29 11:29:39.012 [INFO][4460] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 29 11:29:39.169800 containerd[1543]: 2025-01-29 11:29:39.012 [INFO][4460] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 29 11:29:39.169800 containerd[1543]: 2025-01-29 11:29:39.084 [INFO][4460] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.11a6905644ad06f922a718327d5bfa5ca28bb8336ea652e5771c4decbbd33c9d" host="localhost" Jan 29 11:29:39.169800 containerd[1543]: 2025-01-29 11:29:39.091 [INFO][4460] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Jan 29 11:29:39.169800 containerd[1543]: 2025-01-29 11:29:39.099 [INFO][4460] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Jan 29 11:29:39.169800 containerd[1543]: 2025-01-29 11:29:39.103 [INFO][4460] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 29 11:29:39.169800 containerd[1543]: 2025-01-29 11:29:39.107 [INFO][4460] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 29 11:29:39.169800 containerd[1543]: 2025-01-29 11:29:39.107 [INFO][4460] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.11a6905644ad06f922a718327d5bfa5ca28bb8336ea652e5771c4decbbd33c9d" host="localhost" Jan 29 11:29:39.169800 containerd[1543]: 2025-01-29 11:29:39.110 [INFO][4460] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.11a6905644ad06f922a718327d5bfa5ca28bb8336ea652e5771c4decbbd33c9d Jan 29 11:29:39.169800 containerd[1543]: 2025-01-29 11:29:39.117 [INFO][4460] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.11a6905644ad06f922a718327d5bfa5ca28bb8336ea652e5771c4decbbd33c9d" host="localhost" Jan 29 11:29:39.169800 containerd[1543]: 2025-01-29 11:29:39.127 [INFO][4460] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.132/26] block=192.168.88.128/26 handle="k8s-pod-network.11a6905644ad06f922a718327d5bfa5ca28bb8336ea652e5771c4decbbd33c9d" host="localhost" Jan 29 11:29:39.169800 containerd[1543]: 2025-01-29 11:29:39.127 [INFO][4460] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.132/26] handle="k8s-pod-network.11a6905644ad06f922a718327d5bfa5ca28bb8336ea652e5771c4decbbd33c9d" host="localhost" Jan 29 11:29:39.169800 containerd[1543]: 2025-01-29 11:29:39.127 [INFO][4460] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jan 29 11:29:39.169800 containerd[1543]: 2025-01-29 11:29:39.127 [INFO][4460] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.132/26] IPv6=[] ContainerID="11a6905644ad06f922a718327d5bfa5ca28bb8336ea652e5771c4decbbd33c9d" HandleID="k8s-pod-network.11a6905644ad06f922a718327d5bfa5ca28bb8336ea652e5771c4decbbd33c9d" Workload="localhost-k8s-calico--kube--controllers--6bccbcd89f--p65ch-eth0" Jan 29 11:29:39.170218 containerd[1543]: 2025-01-29 11:29:39.131 [INFO][4348] cni-plugin/k8s.go 386: Populated endpoint ContainerID="11a6905644ad06f922a718327d5bfa5ca28bb8336ea652e5771c4decbbd33c9d" Namespace="calico-system" Pod="calico-kube-controllers-6bccbcd89f-p65ch" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6bccbcd89f--p65ch-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--6bccbcd89f--p65ch-eth0", GenerateName:"calico-kube-controllers-6bccbcd89f-", Namespace:"calico-system", SelfLink:"", UID:"a3a7fc11-29e2-44d1-b25b-a10745cff7e7", ResourceVersion:"675", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 11, 29, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6bccbcd89f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-kube-controllers-6bccbcd89f-p65ch", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali146840157b2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 11:29:39.170218 containerd[1543]: 2025-01-29 11:29:39.131 [INFO][4348] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.132/32] ContainerID="11a6905644ad06f922a718327d5bfa5ca28bb8336ea652e5771c4decbbd33c9d" Namespace="calico-system" Pod="calico-kube-controllers-6bccbcd89f-p65ch" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6bccbcd89f--p65ch-eth0" Jan 29 11:29:39.170218 containerd[1543]: 2025-01-29 11:29:39.131 [INFO][4348] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali146840157b2 ContainerID="11a6905644ad06f922a718327d5bfa5ca28bb8336ea652e5771c4decbbd33c9d" Namespace="calico-system" Pod="calico-kube-controllers-6bccbcd89f-p65ch" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6bccbcd89f--p65ch-eth0" Jan 29 11:29:39.170218 containerd[1543]: 2025-01-29 11:29:39.135 [INFO][4348] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="11a6905644ad06f922a718327d5bfa5ca28bb8336ea652e5771c4decbbd33c9d" Namespace="calico-system" Pod="calico-kube-controllers-6bccbcd89f-p65ch" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6bccbcd89f--p65ch-eth0" Jan 29 11:29:39.170218 containerd[1543]: 2025-01-29 11:29:39.137 [INFO][4348] cni-plugin/k8s.go 414: Added Mac, interface name, and active container 
ID to endpoint ContainerID="11a6905644ad06f922a718327d5bfa5ca28bb8336ea652e5771c4decbbd33c9d" Namespace="calico-system" Pod="calico-kube-controllers-6bccbcd89f-p65ch" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6bccbcd89f--p65ch-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--kube--controllers--6bccbcd89f--p65ch-eth0", GenerateName:"calico-kube-controllers-6bccbcd89f-", Namespace:"calico-system", SelfLink:"", UID:"a3a7fc11-29e2-44d1-b25b-a10745cff7e7", ResourceVersion:"675", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 11, 29, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6bccbcd89f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"11a6905644ad06f922a718327d5bfa5ca28bb8336ea652e5771c4decbbd33c9d", Pod:"calico-kube-controllers-6bccbcd89f-p65ch", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.88.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali146840157b2", MAC:"d2:4b:8f:19:51:68", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 11:29:39.170218 containerd[1543]: 2025-01-29 11:29:39.168 [INFO][4348] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="11a6905644ad06f922a718327d5bfa5ca28bb8336ea652e5771c4decbbd33c9d" Namespace="calico-system" Pod="calico-kube-controllers-6bccbcd89f-p65ch" WorkloadEndpoint="localhost-k8s-calico--kube--controllers--6bccbcd89f--p65ch-eth0" Jan 29 11:29:39.188932 systemd[1]: Started cri-containerd-7e746243f5fed33de609082a689429f272d3ee52f171e88136b599da9e69baeb.scope - libcontainer container 7e746243f5fed33de609082a689429f272d3ee52f171e88136b599da9e69baeb. Jan 29 11:29:39.201378 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-fb0cdc7218cbba704d3e8e130c7796ca857271d48f3acb35527a8a82fbc5cf92-shm.mount: Deactivated successfully. Jan 29 11:29:39.234345 containerd[1543]: time="2025-01-29T11:29:39.234315360Z" level=info msg="StartContainer for \"7e746243f5fed33de609082a689429f272d3ee52f171e88136b599da9e69baeb\" returns successfully" Jan 29 11:29:39.252248 containerd[1543]: time="2025-01-29T11:29:39.251704837Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 11:29:39.252407 containerd[1543]: time="2025-01-29T11:29:39.252280126Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 11:29:39.252521 containerd[1543]: time="2025-01-29T11:29:39.252415531Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:29:39.253688 containerd[1543]: time="2025-01-29T11:29:39.252822601Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:29:39.275788 systemd[1]: Started cri-containerd-11a6905644ad06f922a718327d5bfa5ca28bb8336ea652e5771c4decbbd33c9d.scope - libcontainer container 11a6905644ad06f922a718327d5bfa5ca28bb8336ea652e5771c4decbbd33c9d. Jan 29 11:29:39.295346 systemd-resolved[1468]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 29 11:29:39.331667 containerd[1543]: time="2025-01-29T11:29:39.331637287Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6bccbcd89f-p65ch,Uid:a3a7fc11-29e2-44d1-b25b-a10745cff7e7,Namespace:calico-system,Attempt:4,} returns sandbox id \"11a6905644ad06f922a718327d5bfa5ca28bb8336ea652e5771c4decbbd33c9d\"" Jan 29 11:29:39.412750 containerd[1543]: time="2025-01-29T11:29:39.412722781Z" level=info msg="StopPodSandbox for \"b0dfc9d2b30b684f3230be5accdc607a2fd968dd99ec092b6774b94c59338394\"" Jan 29 11:29:39.412842 containerd[1543]: time="2025-01-29T11:29:39.412780278Z" level=info msg="TearDown network for sandbox \"b0dfc9d2b30b684f3230be5accdc607a2fd968dd99ec092b6774b94c59338394\" successfully" Jan 29 11:29:39.412842 containerd[1543]: time="2025-01-29T11:29:39.412806372Z" level=info msg="StopPodSandbox for \"b0dfc9d2b30b684f3230be5accdc607a2fd968dd99ec092b6774b94c59338394\" returns successfully" Jan 29 11:29:39.412880 containerd[1543]: time="2025-01-29T11:29:39.412856236Z" level=info msg="StopPodSandbox for \"ff36e156383d905ea44dd2fe4cc975b52daa72fc976d0fae826b6571f2843c02\"" Jan 29 11:29:39.412896 containerd[1543]: time="2025-01-29T11:29:39.412890845Z" level=info msg="TearDown network for sandbox \"ff36e156383d905ea44dd2fe4cc975b52daa72fc976d0fae826b6571f2843c02\" successfully" Jan 29 11:29:39.412914 containerd[1543]: time="2025-01-29T11:29:39.412896253Z" level=info msg="StopPodSandbox for \"ff36e156383d905ea44dd2fe4cc975b52daa72fc976d0fae826b6571f2843c02\" returns successfully" Jan 29 11:29:39.413053 containerd[1543]: time="2025-01-29T11:29:39.413038760Z" level=info msg="StopPodSandbox for \"391640c93f25a8a019a5df373a97b7898d420c3b6ea996c747098a5c60b15bfc\"" Jan 29 11:29:39.413127 containerd[1543]: time="2025-01-29T11:29:39.413113802Z" level=info msg="StopPodSandbox for \"e19a8c4b8a81ed9013b30c37b5a01742e5557d745e671dba88669f5bea3ec310\"" Jan 29 11:29:39.413177 containerd[1543]: time="2025-01-29T11:29:39.413164643Z" level=info msg="TearDown network for sandbox \"e19a8c4b8a81ed9013b30c37b5a01742e5557d745e671dba88669f5bea3ec310\" successfully" Jan 29 11:29:39.413177 containerd[1543]: time="2025-01-29T11:29:39.413174749Z" level=info msg="StopPodSandbox for \"e19a8c4b8a81ed9013b30c37b5a01742e5557d745e671dba88669f5bea3ec310\" returns successfully" Jan 29 11:29:39.413293 containerd[1543]: time="2025-01-29T11:29:39.413279531Z" level=info msg="StopPodSandbox for \"ca36f2be98657cffbbb3e6825a5691fa122db25f729f54cd86a95373c706af1c\"" Jan 29 11:29:39.413334 containerd[1543]: time="2025-01-29T11:29:39.413318210Z" level=info msg="TearDown network for sandbox \"ca36f2be98657cffbbb3e6825a5691fa122db25f729f54cd86a95373c706af1c\" successfully" Jan 29 11:29:39.413334 containerd[1543]: time="2025-01-29T11:29:39.413327143Z" level=info msg="StopPodSandbox for \"ca36f2be98657cffbbb3e6825a5691fa122db25f729f54cd86a95373c706af1c\" returns successfully" Jan 29 
11:29:39.413493 containerd[1543]: time="2025-01-29T11:29:39.413479826Z" level=info msg="TearDown network for sandbox \"391640c93f25a8a019a5df373a97b7898d420c3b6ea996c747098a5c60b15bfc\" successfully" Jan 29 11:29:39.413518 containerd[1543]: time="2025-01-29T11:29:39.413491557Z" level=info msg="StopPodSandbox for \"391640c93f25a8a019a5df373a97b7898d420c3b6ea996c747098a5c60b15bfc\" returns successfully" Jan 29 11:29:39.413540 containerd[1543]: time="2025-01-29T11:29:39.413534855Z" level=info msg="StopPodSandbox for \"7879e8e66bca99f05db383134785ae8a457c0cf15c7c4919edf27c4e58b012cd\"" Jan 29 11:29:39.413586 containerd[1543]: time="2025-01-29T11:29:39.413572926Z" level=info msg="TearDown network for sandbox \"7879e8e66bca99f05db383134785ae8a457c0cf15c7c4919edf27c4e58b012cd\" successfully" Jan 29 11:29:39.413586 containerd[1543]: time="2025-01-29T11:29:39.413581525Z" level=info msg="StopPodSandbox for \"7879e8e66bca99f05db383134785ae8a457c0cf15c7c4919edf27c4e58b012cd\" returns successfully" Jan 29 11:29:39.413757 containerd[1543]: time="2025-01-29T11:29:39.413744205Z" level=info msg="StopPodSandbox for \"6033264e1976f1129ef2155afd3527be241324be2c025fd1f2f05088c51af8d0\"" Jan 29 11:29:39.413794 containerd[1543]: time="2025-01-29T11:29:39.413784920Z" level=info msg="TearDown network for sandbox \"6033264e1976f1129ef2155afd3527be241324be2c025fd1f2f05088c51af8d0\" successfully" Jan 29 11:29:39.413794 containerd[1543]: time="2025-01-29T11:29:39.413791722Z" level=info msg="StopPodSandbox for \"6033264e1976f1129ef2155afd3527be241324be2c025fd1f2f05088c51af8d0\" returns successfully" Jan 29 11:29:39.414174 containerd[1543]: time="2025-01-29T11:29:39.414161041Z" level=info msg="StopPodSandbox for \"278001aa52062ee6fbf5d00b498751ede55fa1853804b1a4b028e9dda1639d35\"" Jan 29 11:29:39.426398 containerd[1543]: time="2025-01-29T11:29:39.414215422Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-f8fc55c6c-8vp99,Uid:5c59bfc6-a9c3-4ac8-8013-4f73cd38040c,Namespace:calico-apiserver,Attempt:4,}" Jan 29 11:29:39.426398 containerd[1543]: time="2025-01-29T11:29:39.414579412Z" level=info msg="TearDown network for sandbox \"278001aa52062ee6fbf5d00b498751ede55fa1853804b1a4b028e9dda1639d35\" successfully" Jan 29 11:29:39.426398 containerd[1543]: time="2025-01-29T11:29:39.414592265Z" level=info msg="StopPodSandbox for \"278001aa52062ee6fbf5d00b498751ede55fa1853804b1a4b028e9dda1639d35\" returns successfully" Jan 29 11:29:39.426398 containerd[1543]: time="2025-01-29T11:29:39.414779572Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-v9zp5,Uid:ad861abe-5500-44ec-ad63-2a1f0ef1f899,Namespace:calico-system,Attempt:4,}" Jan 29 11:29:39.493791 kubelet[2780]: I0129 11:29:39.493748 2780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-v7xft" podStartSLOduration=32.493732876 podStartE2EDuration="32.493732876s" podCreationTimestamp="2025-01-29 11:29:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 11:29:39.493564167 +0000 UTC m=+36.279553464" watchObservedRunningTime="2025-01-29 11:29:39.493732876 +0000 UTC m=+36.279722166" Jan 29 11:29:39.500315 kubelet[2780]: I0129 11:29:39.493856 2780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-z5wzg" podStartSLOduration=32.493850727 podStartE2EDuration="32.493850727s" podCreationTimestamp="2025-01-29 11:29:07 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 11:29:39.466373909 +0000 UTC m=+36.252363200" watchObservedRunningTime="2025-01-29 11:29:39.493850727 +0000 UTC m=+36.279840017" Jan 29 11:29:39.615706 kernel: bpftool[4927]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Jan 29 11:29:39.661710 systemd-networkd[1242]: calief4989be211: Link UP Jan 29 11:29:39.661939 systemd-networkd[1242]: calief4989be211: Gained carrier Jan 29 11:29:39.685638 containerd[1543]: 2025-01-29 11:29:39.585 [INFO][4904] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-csi--node--driver--v9zp5-eth0 csi-node-driver- calico-system ad861abe-5500-44ec-ad63-2a1f0ef1f899 758 0 2025-01-29 11:29:13 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:84cddb44f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s localhost csi-node-driver-v9zp5 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calief4989be211 [] []}} ContainerID="1a43b5577f7e016df7cb107ae43b9a36d7dc193c1bb6ed2a2e245bd609b05235" Namespace="calico-system" Pod="csi-node-driver-v9zp5" WorkloadEndpoint="localhost-k8s-csi--node--driver--v9zp5-" Jan 29 11:29:39.685638 containerd[1543]: 2025-01-29 11:29:39.585 [INFO][4904] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="1a43b5577f7e016df7cb107ae43b9a36d7dc193c1bb6ed2a2e245bd609b05235" Namespace="calico-system" Pod="csi-node-driver-v9zp5" WorkloadEndpoint="localhost-k8s-csi--node--driver--v9zp5-eth0" Jan 29 11:29:39.685638 containerd[1543]: 2025-01-29 11:29:39.619 [INFO][4928] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1a43b5577f7e016df7cb107ae43b9a36d7dc193c1bb6ed2a2e245bd609b05235" HandleID="k8s-pod-network.1a43b5577f7e016df7cb107ae43b9a36d7dc193c1bb6ed2a2e245bd609b05235" Workload="localhost-k8s-csi--node--driver--v9zp5-eth0" Jan 29 11:29:39.685638 containerd[1543]: 2025-01-29 11:29:39.629 [INFO][4928] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="1a43b5577f7e016df7cb107ae43b9a36d7dc193c1bb6ed2a2e245bd609b05235" HandleID="k8s-pod-network.1a43b5577f7e016df7cb107ae43b9a36d7dc193c1bb6ed2a2e245bd609b05235" Workload="localhost-k8s-csi--node--driver--v9zp5-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00031a880), Attrs:map[string]string{"namespace":"calico-system", "node":"localhost", "pod":"csi-node-driver-v9zp5", "timestamp":"2025-01-29 11:29:39.618502405 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 29 11:29:39.685638 containerd[1543]: 2025-01-29 11:29:39.629 [INFO][4928] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 11:29:39.685638 containerd[1543]: 2025-01-29 11:29:39.629 [INFO][4928] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 29 11:29:39.685638 containerd[1543]: 2025-01-29 11:29:39.629 [INFO][4928] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 29 11:29:39.685638 containerd[1543]: 2025-01-29 11:29:39.632 [INFO][4928] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.1a43b5577f7e016df7cb107ae43b9a36d7dc193c1bb6ed2a2e245bd609b05235" host="localhost" Jan 29 11:29:39.685638 containerd[1543]: 2025-01-29 11:29:39.635 [INFO][4928] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Jan 29 11:29:39.685638 containerd[1543]: 2025-01-29 11:29:39.639 [INFO][4928] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Jan 29 11:29:39.685638 containerd[1543]: 2025-01-29 11:29:39.642 [INFO][4928] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 29 11:29:39.685638 containerd[1543]: 2025-01-29 11:29:39.645 [INFO][4928] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 29 11:29:39.685638 containerd[1543]: 2025-01-29 11:29:39.646 [INFO][4928] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.1a43b5577f7e016df7cb107ae43b9a36d7dc193c1bb6ed2a2e245bd609b05235" host="localhost" Jan 29 11:29:39.685638 containerd[1543]: 2025-01-29 11:29:39.647 [INFO][4928] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.1a43b5577f7e016df7cb107ae43b9a36d7dc193c1bb6ed2a2e245bd609b05235 Jan 29 11:29:39.685638 containerd[1543]: 2025-01-29 11:29:39.650 [INFO][4928] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.1a43b5577f7e016df7cb107ae43b9a36d7dc193c1bb6ed2a2e245bd609b05235" host="localhost" Jan 29 11:29:39.685638 containerd[1543]: 2025-01-29 11:29:39.657 [INFO][4928] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.133/26] block=192.168.88.128/26 handle="k8s-pod-network.1a43b5577f7e016df7cb107ae43b9a36d7dc193c1bb6ed2a2e245bd609b05235" host="localhost" Jan 29 11:29:39.685638 containerd[1543]: 2025-01-29 11:29:39.657 [INFO][4928] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.133/26] handle="k8s-pod-network.1a43b5577f7e016df7cb107ae43b9a36d7dc193c1bb6ed2a2e245bd609b05235" host="localhost" Jan 29 11:29:39.685638 containerd[1543]: 2025-01-29 11:29:39.657 [INFO][4928] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
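The ipam.go lines above trace the block-affinity path: take the host-wide IPAM lock, look up the host's affine blocks, load the affine /26 (192.168.88.128/26 here), claim the next free address in it, and write the block back before releasing the lock. A compact sketch of that flow with simplified in-memory types standing in for Calico's datastore (the names below are illustrative, not Calico's API):

package main

import (
    "fmt"
    "net/netip"
    "sync"
)

// block models one IPAM block: a CIDR plus the addresses already handed out.
type block struct {
    cidr      netip.Prefix
    allocated map[netip.Addr]string // address -> handle
}

type ipam struct {
    mu       sync.Mutex                // stand-in for the "host-wide IPAM lock"
    affinity map[string][]netip.Prefix // host -> affine block CIDRs
    blocks   map[netip.Prefix]*block
}

// autoAssign claims one IPv4 address for the given host/handle, preferring
// blocks affine to the host, mirroring the logged sequence of steps.
func (p *ipam) autoAssign(host, handle string) (netip.Addr, error) {
    p.mu.Lock()         // "About to acquire host-wide IPAM lock."
    defer p.mu.Unlock() // "Released host-wide IPAM lock."

    for _, cidr := range p.affinity[host] { // "Looking up existing affinities for host"
        b := p.blocks[cidr] // "Attempting to load block"
        if b == nil {
            continue
        }
        for a := b.cidr.Addr(); b.cidr.Contains(a); a = a.Next() { // scan the /26
            if _, taken := b.allocated[a]; !taken {
                b.allocated[a] = handle // "Writing block in order to claim IPs"
                return a, nil           // "Successfully claimed IPs"
            }
        }
    }
    return netip.Addr{}, fmt.Errorf("no affine block with free addresses for %s", host)
}

func main() {
    cidr := netip.MustParsePrefix("192.168.88.128/26")
    p := &ipam{
        affinity: map[string][]netip.Prefix{"localhost": {cidr}},
        blocks: map[netip.Prefix]*block{cidr: {
            cidr: cidr,
            // Pretend .128-.132 are already in use, as in this log.
            allocated: map[netip.Addr]string{
                netip.MustParseAddr("192.168.88.128"): "x",
                netip.MustParseAddr("192.168.88.129"): "x",
                netip.MustParseAddr("192.168.88.130"): "x",
                netip.MustParseAddr("192.168.88.131"): "x",
                netip.MustParseAddr("192.168.88.132"): "x",
            },
        }},
    }
    a, err := p.autoAssign("localhost", "k8s-pod-network.1a43b5577f7e...")
    fmt.Println(a, err) // 192.168.88.133 <nil>
}

With 192.168.88.128 through .132 already taken, the next claim lands on 192.168.88.133, matching the address the log assigns to csi-node-driver-v9zp5.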
Jan 29 11:29:39.685638 containerd[1543]: 2025-01-29 11:29:39.657 [INFO][4928] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.133/26] IPv6=[] ContainerID="1a43b5577f7e016df7cb107ae43b9a36d7dc193c1bb6ed2a2e245bd609b05235" HandleID="k8s-pod-network.1a43b5577f7e016df7cb107ae43b9a36d7dc193c1bb6ed2a2e245bd609b05235" Workload="localhost-k8s-csi--node--driver--v9zp5-eth0" Jan 29 11:29:39.686313 containerd[1543]: 2025-01-29 11:29:39.658 [INFO][4904] cni-plugin/k8s.go 386: Populated endpoint ContainerID="1a43b5577f7e016df7cb107ae43b9a36d7dc193c1bb6ed2a2e245bd609b05235" Namespace="calico-system" Pod="csi-node-driver-v9zp5" WorkloadEndpoint="localhost-k8s-csi--node--driver--v9zp5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--v9zp5-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"ad861abe-5500-44ec-ad63-2a1f0ef1f899", ResourceVersion:"758", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 11, 29, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"84cddb44f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"csi-node-driver-v9zp5", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calief4989be211", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 11:29:39.686313 containerd[1543]: 2025-01-29 11:29:39.658 [INFO][4904] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.133/32] ContainerID="1a43b5577f7e016df7cb107ae43b9a36d7dc193c1bb6ed2a2e245bd609b05235" Namespace="calico-system" Pod="csi-node-driver-v9zp5" WorkloadEndpoint="localhost-k8s-csi--node--driver--v9zp5-eth0" Jan 29 11:29:39.686313 containerd[1543]: 2025-01-29 11:29:39.658 [INFO][4904] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calief4989be211 ContainerID="1a43b5577f7e016df7cb107ae43b9a36d7dc193c1bb6ed2a2e245bd609b05235" Namespace="calico-system" Pod="csi-node-driver-v9zp5" WorkloadEndpoint="localhost-k8s-csi--node--driver--v9zp5-eth0" Jan 29 11:29:39.686313 containerd[1543]: 2025-01-29 11:29:39.661 [INFO][4904] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="1a43b5577f7e016df7cb107ae43b9a36d7dc193c1bb6ed2a2e245bd609b05235" Namespace="calico-system" Pod="csi-node-driver-v9zp5" WorkloadEndpoint="localhost-k8s-csi--node--driver--v9zp5-eth0" Jan 29 11:29:39.686313 containerd[1543]: 2025-01-29 11:29:39.663 [INFO][4904] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="1a43b5577f7e016df7cb107ae43b9a36d7dc193c1bb6ed2a2e245bd609b05235" Namespace="calico-system" Pod="csi-node-driver-v9zp5" WorkloadEndpoint="localhost-k8s-csi--node--driver--v9zp5-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-csi--node--driver--v9zp5-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"ad861abe-5500-44ec-ad63-2a1f0ef1f899", ResourceVersion:"758", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 11, 29, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"84cddb44f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"1a43b5577f7e016df7cb107ae43b9a36d7dc193c1bb6ed2a2e245bd609b05235", Pod:"csi-node-driver-v9zp5", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.88.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calief4989be211", MAC:"22:42:59:32:b4:dd", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 11:29:39.686313 containerd[1543]: 2025-01-29 11:29:39.676 [INFO][4904] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="1a43b5577f7e016df7cb107ae43b9a36d7dc193c1bb6ed2a2e245bd609b05235" Namespace="calico-system" Pod="csi-node-driver-v9zp5" WorkloadEndpoint="localhost-k8s-csi--node--driver--v9zp5-eth0" Jan 29 11:29:39.716187 containerd[1543]: time="2025-01-29T11:29:39.716114790Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 11:29:39.716318 containerd[1543]: time="2025-01-29T11:29:39.716200969Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 11:29:39.716318 containerd[1543]: time="2025-01-29T11:29:39.716212044Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:29:39.716604 containerd[1543]: time="2025-01-29T11:29:39.716388398Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:29:39.736453 systemd[1]: Started cri-containerd-1a43b5577f7e016df7cb107ae43b9a36d7dc193c1bb6ed2a2e245bd609b05235.scope - libcontainer container 1a43b5577f7e016df7cb107ae43b9a36d7dc193c1bb6ed2a2e245bd609b05235. 
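The dataplane_linux.go entry above names the host-side veth calief4989be211: a cali prefix plus eleven characters derived from the endpoint identity, trimmed to the kernel's 15-character interface-name limit. The log does not show the derivation itself; the sketch below uses one plausible scheme (a truncated hash of the workload endpoint ID) purely as an illustration, not as Calico's actual algorithm:

package main

import (
    "crypto/sha1"
    "encoding/hex"
    "fmt"
)

// vethName builds an interface name from a prefix plus a hash of the endpoint
// identity, truncated to IFNAMSIZ-1 (15 characters). Purely illustrative; the
// real naming scheme may differ.
func vethName(prefix, endpointID string) string {
    sum := sha1.Sum([]byte(endpointID))
    name := prefix + hex.EncodeToString(sum[:])
    if len(name) > 15 {
        name = name[:15]
    }
    return name
}

func main() {
    fmt.Println(vethName("cali", "calico-system/csi-node-driver-v9zp5/eth0"))
    // Prints a name of the same shape as the logged calief4989be211:
    // a 4-character prefix followed by 11 derived characters.
}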
Jan 29 11:29:39.763232 systemd-resolved[1468]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 29 11:29:39.764621 systemd-networkd[1242]: calidff7c68b982: Link UP Jan 29 11:29:39.765972 systemd-networkd[1242]: calidff7c68b982: Gained carrier Jan 29 11:29:39.781887 containerd[1543]: time="2025-01-29T11:29:39.781859604Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-v9zp5,Uid:ad861abe-5500-44ec-ad63-2a1f0ef1f899,Namespace:calico-system,Attempt:4,} returns sandbox id \"1a43b5577f7e016df7cb107ae43b9a36d7dc193c1bb6ed2a2e245bd609b05235\"" Jan 29 11:29:39.784629 containerd[1543]: 2025-01-29 11:29:39.627 [INFO][4902] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {localhost-k8s-calico--apiserver--f8fc55c6c--8vp99-eth0 calico-apiserver-f8fc55c6c- calico-apiserver 5c59bfc6-a9c3-4ac8-8013-4f73cd38040c 759 0 2025-01-29 11:29:12 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:f8fc55c6c projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s localhost calico-apiserver-f8fc55c6c-8vp99 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calidff7c68b982 [] []}} ContainerID="bfb199fe29d5093e2c3c49c1c862171606fe32a80f52c834d8d9485b97af7f22" Namespace="calico-apiserver" Pod="calico-apiserver-f8fc55c6c-8vp99" WorkloadEndpoint="localhost-k8s-calico--apiserver--f8fc55c6c--8vp99-" Jan 29 11:29:39.784629 containerd[1543]: 2025-01-29 11:29:39.627 [INFO][4902] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="bfb199fe29d5093e2c3c49c1c862171606fe32a80f52c834d8d9485b97af7f22" Namespace="calico-apiserver" Pod="calico-apiserver-f8fc55c6c-8vp99" WorkloadEndpoint="localhost-k8s-calico--apiserver--f8fc55c6c--8vp99-eth0" Jan 29 11:29:39.784629 containerd[1543]: 2025-01-29 11:29:39.680 [INFO][4943] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="bfb199fe29d5093e2c3c49c1c862171606fe32a80f52c834d8d9485b97af7f22" HandleID="k8s-pod-network.bfb199fe29d5093e2c3c49c1c862171606fe32a80f52c834d8d9485b97af7f22" Workload="localhost-k8s-calico--apiserver--f8fc55c6c--8vp99-eth0" Jan 29 11:29:39.784629 containerd[1543]: 2025-01-29 11:29:39.713 [INFO][4943] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="bfb199fe29d5093e2c3c49c1c862171606fe32a80f52c834d8d9485b97af7f22" HandleID="k8s-pod-network.bfb199fe29d5093e2c3c49c1c862171606fe32a80f52c834d8d9485b97af7f22" Workload="localhost-k8s-calico--apiserver--f8fc55c6c--8vp99-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000050a20), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"localhost", "pod":"calico-apiserver-f8fc55c6c-8vp99", "timestamp":"2025-01-29 11:29:39.673315078 +0000 UTC"}, Hostname:"localhost", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 29 11:29:39.784629 containerd[1543]: 2025-01-29 11:29:39.713 [INFO][4943] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 11:29:39.784629 containerd[1543]: 2025-01-29 11:29:39.713 [INFO][4943] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 29 11:29:39.784629 containerd[1543]: 2025-01-29 11:29:39.713 [INFO][4943] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'localhost' Jan 29 11:29:39.784629 containerd[1543]: 2025-01-29 11:29:39.733 [INFO][4943] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.bfb199fe29d5093e2c3c49c1c862171606fe32a80f52c834d8d9485b97af7f22" host="localhost" Jan 29 11:29:39.784629 containerd[1543]: 2025-01-29 11:29:39.736 [INFO][4943] ipam/ipam.go 372: Looking up existing affinities for host host="localhost" Jan 29 11:29:39.784629 containerd[1543]: 2025-01-29 11:29:39.740 [INFO][4943] ipam/ipam.go 489: Trying affinity for 192.168.88.128/26 host="localhost" Jan 29 11:29:39.784629 containerd[1543]: 2025-01-29 11:29:39.742 [INFO][4943] ipam/ipam.go 155: Attempting to load block cidr=192.168.88.128/26 host="localhost" Jan 29 11:29:39.784629 containerd[1543]: 2025-01-29 11:29:39.744 [INFO][4943] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.88.128/26 host="localhost" Jan 29 11:29:39.784629 containerd[1543]: 2025-01-29 11:29:39.744 [INFO][4943] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.88.128/26 handle="k8s-pod-network.bfb199fe29d5093e2c3c49c1c862171606fe32a80f52c834d8d9485b97af7f22" host="localhost" Jan 29 11:29:39.784629 containerd[1543]: 2025-01-29 11:29:39.745 [INFO][4943] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.bfb199fe29d5093e2c3c49c1c862171606fe32a80f52c834d8d9485b97af7f22 Jan 29 11:29:39.784629 containerd[1543]: 2025-01-29 11:29:39.749 [INFO][4943] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.88.128/26 handle="k8s-pod-network.bfb199fe29d5093e2c3c49c1c862171606fe32a80f52c834d8d9485b97af7f22" host="localhost" Jan 29 11:29:39.784629 containerd[1543]: 2025-01-29 11:29:39.756 [INFO][4943] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.88.134/26] block=192.168.88.128/26 handle="k8s-pod-network.bfb199fe29d5093e2c3c49c1c862171606fe32a80f52c834d8d9485b97af7f22" host="localhost" Jan 29 11:29:39.784629 containerd[1543]: 2025-01-29 11:29:39.756 [INFO][4943] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.88.134/26] handle="k8s-pod-network.bfb199fe29d5093e2c3c49c1c862171606fe32a80f52c834d8d9485b97af7f22" host="localhost" Jan 29 11:29:39.784629 containerd[1543]: 2025-01-29 11:29:39.756 [INFO][4943] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jan 29 11:29:39.784629 containerd[1543]: 2025-01-29 11:29:39.756 [INFO][4943] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.88.134/26] IPv6=[] ContainerID="bfb199fe29d5093e2c3c49c1c862171606fe32a80f52c834d8d9485b97af7f22" HandleID="k8s-pod-network.bfb199fe29d5093e2c3c49c1c862171606fe32a80f52c834d8d9485b97af7f22" Workload="localhost-k8s-calico--apiserver--f8fc55c6c--8vp99-eth0" Jan 29 11:29:39.785698 containerd[1543]: 2025-01-29 11:29:39.760 [INFO][4902] cni-plugin/k8s.go 386: Populated endpoint ContainerID="bfb199fe29d5093e2c3c49c1c862171606fe32a80f52c834d8d9485b97af7f22" Namespace="calico-apiserver" Pod="calico-apiserver-f8fc55c6c-8vp99" WorkloadEndpoint="localhost-k8s-calico--apiserver--f8fc55c6c--8vp99-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--f8fc55c6c--8vp99-eth0", GenerateName:"calico-apiserver-f8fc55c6c-", Namespace:"calico-apiserver", SelfLink:"", UID:"5c59bfc6-a9c3-4ac8-8013-4f73cd38040c", ResourceVersion:"759", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 11, 29, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"f8fc55c6c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"", Pod:"calico-apiserver-f8fc55c6c-8vp99", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calidff7c68b982", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 11:29:39.785698 containerd[1543]: 2025-01-29 11:29:39.761 [INFO][4902] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.88.134/32] ContainerID="bfb199fe29d5093e2c3c49c1c862171606fe32a80f52c834d8d9485b97af7f22" Namespace="calico-apiserver" Pod="calico-apiserver-f8fc55c6c-8vp99" WorkloadEndpoint="localhost-k8s-calico--apiserver--f8fc55c6c--8vp99-eth0" Jan 29 11:29:39.785698 containerd[1543]: 2025-01-29 11:29:39.761 [INFO][4902] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calidff7c68b982 ContainerID="bfb199fe29d5093e2c3c49c1c862171606fe32a80f52c834d8d9485b97af7f22" Namespace="calico-apiserver" Pod="calico-apiserver-f8fc55c6c-8vp99" WorkloadEndpoint="localhost-k8s-calico--apiserver--f8fc55c6c--8vp99-eth0" Jan 29 11:29:39.785698 containerd[1543]: 2025-01-29 11:29:39.769 [INFO][4902] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="bfb199fe29d5093e2c3c49c1c862171606fe32a80f52c834d8d9485b97af7f22" Namespace="calico-apiserver" Pod="calico-apiserver-f8fc55c6c-8vp99" WorkloadEndpoint="localhost-k8s-calico--apiserver--f8fc55c6c--8vp99-eth0" Jan 29 11:29:39.785698 containerd[1543]: 2025-01-29 11:29:39.771 [INFO][4902] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="bfb199fe29d5093e2c3c49c1c862171606fe32a80f52c834d8d9485b97af7f22" 
Namespace="calico-apiserver" Pod="calico-apiserver-f8fc55c6c-8vp99" WorkloadEndpoint="localhost-k8s-calico--apiserver--f8fc55c6c--8vp99-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"localhost-k8s-calico--apiserver--f8fc55c6c--8vp99-eth0", GenerateName:"calico-apiserver-f8fc55c6c-", Namespace:"calico-apiserver", SelfLink:"", UID:"5c59bfc6-a9c3-4ac8-8013-4f73cd38040c", ResourceVersion:"759", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 11, 29, 12, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"f8fc55c6c", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"localhost", ContainerID:"bfb199fe29d5093e2c3c49c1c862171606fe32a80f52c834d8d9485b97af7f22", Pod:"calico-apiserver-f8fc55c6c-8vp99", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.88.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calidff7c68b982", MAC:"ce:27:43:49:19:c0", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 11:29:39.785698 containerd[1543]: 2025-01-29 11:29:39.779 [INFO][4902] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="bfb199fe29d5093e2c3c49c1c862171606fe32a80f52c834d8d9485b97af7f22" Namespace="calico-apiserver" Pod="calico-apiserver-f8fc55c6c-8vp99" WorkloadEndpoint="localhost-k8s-calico--apiserver--f8fc55c6c--8vp99-eth0" Jan 29 11:29:39.805328 containerd[1543]: time="2025-01-29T11:29:39.805259073Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 11:29:39.805328 containerd[1543]: time="2025-01-29T11:29:39.805301907Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 11:29:39.805328 containerd[1543]: time="2025-01-29T11:29:39.805310043Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:29:39.805491 containerd[1543]: time="2025-01-29T11:29:39.805359378Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:29:39.820533 systemd[1]: Started cri-containerd-bfb199fe29d5093e2c3c49c1c862171606fe32a80f52c834d8d9485b97af7f22.scope - libcontainer container bfb199fe29d5093e2c3c49c1c862171606fe32a80f52c834d8d9485b97af7f22. 
Jan 29 11:29:39.830270 systemd-resolved[1468]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Jan 29 11:29:39.855010 containerd[1543]: time="2025-01-29T11:29:39.854879578Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-f8fc55c6c-8vp99,Uid:5c59bfc6-a9c3-4ac8-8013-4f73cd38040c,Namespace:calico-apiserver,Attempt:4,} returns sandbox id \"bfb199fe29d5093e2c3c49c1c862171606fe32a80f52c834d8d9485b97af7f22\"" Jan 29 11:29:39.928460 systemd-networkd[1242]: vxlan.calico: Link UP Jan 29 11:29:39.928466 systemd-networkd[1242]: vxlan.calico: Gained carrier Jan 29 11:29:40.170760 systemd-networkd[1242]: cali6aad0f32d47: Gained IPv6LL Jan 29 11:29:40.491737 systemd-networkd[1242]: cali146840157b2: Gained IPv6LL Jan 29 11:29:40.682972 systemd-networkd[1242]: cali18d308c66cd: Gained IPv6LL Jan 29 11:29:40.875120 systemd-networkd[1242]: cali36dfd8aadb7: Gained IPv6LL Jan 29 11:29:41.130990 systemd-networkd[1242]: calidff7c68b982: Gained IPv6LL Jan 29 11:29:41.194747 systemd-networkd[1242]: vxlan.calico: Gained IPv6LL Jan 29 11:29:41.212984 containerd[1543]: time="2025-01-29T11:29:41.212950413Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:29:41.213751 containerd[1543]: time="2025-01-29T11:29:41.213725590Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=42001404" Jan 29 11:29:41.214479 containerd[1543]: time="2025-01-29T11:29:41.214461921Z" level=info msg="ImageCreate event name:\"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:29:41.215455 containerd[1543]: time="2025-01-29T11:29:41.215431093Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:29:41.216113 containerd[1543]: time="2025-01-29T11:29:41.215860613Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"43494504\" in 2.280532319s" Jan 29 11:29:41.216113 containerd[1543]: time="2025-01-29T11:29:41.215879112Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\"" Jan 29 11:29:41.217060 containerd[1543]: time="2025-01-29T11:29:41.217041065Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\"" Jan 29 11:29:41.217794 containerd[1543]: time="2025-01-29T11:29:41.217718669Z" level=info msg="CreateContainer within sandbox \"e2f39cc6e195d4968dbfffca7803e4b98d72d973db22bd027c98e9105f2b7116\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jan 29 11:29:41.226208 containerd[1543]: time="2025-01-29T11:29:41.226137368Z" level=info msg="CreateContainer within sandbox \"e2f39cc6e195d4968dbfffca7803e4b98d72d973db22bd027c98e9105f2b7116\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"d3fcc65ba2e66c87bcf75e85bb764feb9005f9ab7e123b2066dac3f332168a94\"" Jan 29 11:29:41.226886 
containerd[1543]: time="2025-01-29T11:29:41.226590636Z" level=info msg="StartContainer for \"d3fcc65ba2e66c87bcf75e85bb764feb9005f9ab7e123b2066dac3f332168a94\"" Jan 29 11:29:41.248706 systemd[1]: Started cri-containerd-d3fcc65ba2e66c87bcf75e85bb764feb9005f9ab7e123b2066dac3f332168a94.scope - libcontainer container d3fcc65ba2e66c87bcf75e85bb764feb9005f9ab7e123b2066dac3f332168a94. Jan 29 11:29:41.279258 containerd[1543]: time="2025-01-29T11:29:41.279224678Z" level=info msg="StartContainer for \"d3fcc65ba2e66c87bcf75e85bb764feb9005f9ab7e123b2066dac3f332168a94\" returns successfully" Jan 29 11:29:41.437190 kubelet[2780]: I0129 11:29:41.437060 2780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-f8fc55c6c-cvj7f" podStartSLOduration=27.155657755 podStartE2EDuration="29.437047201s" podCreationTimestamp="2025-01-29 11:29:12 +0000 UTC" firstStartedPulling="2025-01-29 11:29:38.935041699 +0000 UTC m=+35.721030986" lastFinishedPulling="2025-01-29 11:29:41.216431145 +0000 UTC m=+38.002420432" observedRunningTime="2025-01-29 11:29:41.436933033 +0000 UTC m=+38.222922323" watchObservedRunningTime="2025-01-29 11:29:41.437047201 +0000 UTC m=+38.223036490" Jan 29 11:29:41.578828 systemd-networkd[1242]: calief4989be211: Gained IPv6LL Jan 29 11:29:42.426827 kubelet[2780]: I0129 11:29:42.426685 2780 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 29 11:29:43.613104 containerd[1543]: time="2025-01-29T11:29:43.613062790Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:29:43.616458 containerd[1543]: time="2025-01-29T11:29:43.616389471Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.1: active requests=0, bytes read=34141192" Jan 29 11:29:43.617671 containerd[1543]: time="2025-01-29T11:29:43.617640766Z" level=info msg="ImageCreate event name:\"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:29:43.639444 containerd[1543]: time="2025-01-29T11:29:43.638095916Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:29:43.639444 containerd[1543]: time="2025-01-29T11:29:43.639291101Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" with image id \"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\", size \"35634244\" in 2.422217703s" Jan 29 11:29:43.639444 containerd[1543]: time="2025-01-29T11:29:43.639326835Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" returns image reference \"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\"" Jan 29 11:29:43.640426 containerd[1543]: time="2025-01-29T11:29:43.640395756Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\"" Jan 29 11:29:43.661217 containerd[1543]: time="2025-01-29T11:29:43.661055186Z" level=info msg="CreateContainer within sandbox \"11a6905644ad06f922a718327d5bfa5ca28bb8336ea652e5771c4decbbd33c9d\" for container 
&ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Jan 29 11:29:43.677953 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2776274494.mount: Deactivated successfully. Jan 29 11:29:43.688009 containerd[1543]: time="2025-01-29T11:29:43.687947240Z" level=info msg="CreateContainer within sandbox \"11a6905644ad06f922a718327d5bfa5ca28bb8336ea652e5771c4decbbd33c9d\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"81de533c6f9914b93a14f90b22f4645f796f7aee951221a33a2072c9db75a2c9\"" Jan 29 11:29:43.688732 containerd[1543]: time="2025-01-29T11:29:43.688572266Z" level=info msg="StartContainer for \"81de533c6f9914b93a14f90b22f4645f796f7aee951221a33a2072c9db75a2c9\"" Jan 29 11:29:43.792804 systemd[1]: Started cri-containerd-81de533c6f9914b93a14f90b22f4645f796f7aee951221a33a2072c9db75a2c9.scope - libcontainer container 81de533c6f9914b93a14f90b22f4645f796f7aee951221a33a2072c9db75a2c9. Jan 29 11:29:43.847739 containerd[1543]: time="2025-01-29T11:29:43.847713049Z" level=info msg="StartContainer for \"81de533c6f9914b93a14f90b22f4645f796f7aee951221a33a2072c9db75a2c9\" returns successfully" Jan 29 11:29:44.714674 kubelet[2780]: I0129 11:29:44.712734 2780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-6bccbcd89f-p65ch" podStartSLOduration=27.26962272 podStartE2EDuration="31.576460003s" podCreationTimestamp="2025-01-29 11:29:13 +0000 UTC" firstStartedPulling="2025-01-29 11:29:39.333483372 +0000 UTC m=+36.119472659" lastFinishedPulling="2025-01-29 11:29:43.640320652 +0000 UTC m=+40.426309942" observedRunningTime="2025-01-29 11:29:44.576353198 +0000 UTC m=+41.362342495" watchObservedRunningTime="2025-01-29 11:29:44.576460003 +0000 UTC m=+41.362449296" Jan 29 11:29:45.220574 containerd[1543]: time="2025-01-29T11:29:45.220081151Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:29:45.220574 containerd[1543]: time="2025-01-29T11:29:45.220464276Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.1: active requests=0, bytes read=7902632" Jan 29 11:29:45.220574 containerd[1543]: time="2025-01-29T11:29:45.220540595Z" level=info msg="ImageCreate event name:\"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:29:45.222271 containerd[1543]: time="2025-01-29T11:29:45.221706671Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:29:45.222553 containerd[1543]: time="2025-01-29T11:29:45.222105186Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.1\" with image id \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\", size \"9395716\" in 1.581683056s" Jan 29 11:29:45.222604 containerd[1543]: time="2025-01-29T11:29:45.222594608Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\" returns image reference \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\"" Jan 29 11:29:45.223729 containerd[1543]: time="2025-01-29T11:29:45.223596141Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\"" 
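The "Pulled image ghcr.io/flatcar/calico/csi:v3.29.1 ... in 1.581683056s" entry above pairs the bytes reported as read during the pull (bytes read=7902632) with a wall-clock duration, which gives an effective pull rate; the separate size field is what containerd's image store records and need not equal the bytes actually fetched, since layers may already be cached. A short sketch of that arithmetic with the csi figures from the log:

package main

import (
    "fmt"
    "time"
)

func main() {
    // Figures quoted from the containerd entries above.
    bytesRead := 7902632.0                       // "bytes read" reported for the csi pull
    dur, _ := time.ParseDuration("1.581683056s") // reported pull duration

    mib := bytesRead / (1 << 20)
    rate := mib / dur.Seconds()
    fmt.Printf("pulled %.1f MiB in %s (%.1f MiB/s)\n", mib, dur, rate)
}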
Jan 29 11:29:45.224771 containerd[1543]: time="2025-01-29T11:29:45.224685646Z" level=info msg="CreateContainer within sandbox \"1a43b5577f7e016df7cb107ae43b9a36d7dc193c1bb6ed2a2e245bd609b05235\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Jan 29 11:29:45.238160 containerd[1543]: time="2025-01-29T11:29:45.235796133Z" level=info msg="CreateContainer within sandbox \"1a43b5577f7e016df7cb107ae43b9a36d7dc193c1bb6ed2a2e245bd609b05235\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"ec0e7f86d45043f6f8d6733e00cafaec41ae6cff41da8ce915d7f3262672c590\"" Jan 29 11:29:45.238160 containerd[1543]: time="2025-01-29T11:29:45.236202225Z" level=info msg="StartContainer for \"ec0e7f86d45043f6f8d6733e00cafaec41ae6cff41da8ce915d7f3262672c590\"" Jan 29 11:29:45.259717 systemd[1]: Started cri-containerd-ec0e7f86d45043f6f8d6733e00cafaec41ae6cff41da8ce915d7f3262672c590.scope - libcontainer container ec0e7f86d45043f6f8d6733e00cafaec41ae6cff41da8ce915d7f3262672c590. Jan 29 11:29:45.285000 containerd[1543]: time="2025-01-29T11:29:45.284965260Z" level=info msg="StartContainer for \"ec0e7f86d45043f6f8d6733e00cafaec41ae6cff41da8ce915d7f3262672c590\" returns successfully" Jan 29 11:29:45.774630 containerd[1543]: time="2025-01-29T11:29:45.774591227Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/calico/apiserver:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:29:45.775343 containerd[1543]: time="2025-01-29T11:29:45.775275969Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.29.1: active requests=0, bytes read=77" Jan 29 11:29:45.779131 containerd[1543]: time="2025-01-29T11:29:45.779072750Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" with image id \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\", repo tag \"ghcr.io/flatcar/calico/apiserver:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/apiserver@sha256:b8c43e264fe52e0c327b0bf3ac882a0224b33bdd7f4ff58a74242da7d9b00486\", size \"43494504\" in 555.460052ms" Jan 29 11:29:45.779131 containerd[1543]: time="2025-01-29T11:29:45.779095393Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.29.1\" returns image reference \"sha256:421726ace5ed13894f7edf594dd3a462947aedc13d0f69d08525d7369477fb70\"" Jan 29 11:29:45.779871 containerd[1543]: time="2025-01-29T11:29:45.779645705Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\"" Jan 29 11:29:45.781225 containerd[1543]: time="2025-01-29T11:29:45.781141861Z" level=info msg="CreateContainer within sandbox \"bfb199fe29d5093e2c3c49c1c862171606fe32a80f52c834d8d9485b97af7f22\" for container &ContainerMetadata{Name:calico-apiserver,Attempt:0,}" Jan 29 11:29:45.788434 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3643782045.mount: Deactivated successfully. 
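The CreateContainer/StartContainer pairs in these entries follow the usual CRI order: create a container inside an existing pod sandbox, then start it by the container ID returned from the create call, with systemd tracking the runtime shim under a cri-containerd-<id>.scope unit. A compact sketch of that call order against a hypothetical runtime client (simplified types, not the real CRI API):

package main

import (
    "errors"
    "fmt"
)

// runtime is a stand-in for a CRI-style runtime service; the method names
// mirror the log messages but the types are simplified for illustration.
type runtime interface {
    CreateContainer(sandboxID, name, image string) (string, error)
    StartContainer(containerID string) error
}

// runInSandbox reproduces the Create -> Start order seen in the log: the ID
// returned by CreateContainer is what StartContainer is called with.
func runInSandbox(r runtime, sandboxID, name, image string) (string, error) {
    id, err := r.CreateContainer(sandboxID, name, image)
    if err != nil {
        return "", fmt.Errorf("CreateContainer %q: %w", name, err)
    }
    if err := r.StartContainer(id); err != nil {
        return "", fmt.Errorf("StartContainer %q: %w", id, err)
    }
    return id, nil // "StartContainer ... returns successfully"
}

// fakeRuntime lets the sketch run without a containerd socket.
type fakeRuntime struct{ n int }

func (f *fakeRuntime) CreateContainer(sandboxID, name, image string) (string, error) {
    if sandboxID == "" {
        return "", errors.New("no sandbox")
    }
    f.n++
    return fmt.Sprintf("ctr-%d-in-%s", f.n, sandboxID[:8]), nil
}

func (f *fakeRuntime) StartContainer(id string) error { return nil }

func main() {
    id, err := runInSandbox(&fakeRuntime{}, "1a43b5577f7e016d", "calico-csi",
        "ghcr.io/flatcar/calico/csi:v3.29.1")
    fmt.Println(id, err)
}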
Jan 29 11:29:45.801397 containerd[1543]: time="2025-01-29T11:29:45.801325118Z" level=info msg="CreateContainer within sandbox \"bfb199fe29d5093e2c3c49c1c862171606fe32a80f52c834d8d9485b97af7f22\" for &ContainerMetadata{Name:calico-apiserver,Attempt:0,} returns container id \"7bab4cd63cf831a28ba0b368dcd93c30e4bc8e78e0c2fa36518a4e6560c3688c\"" Jan 29 11:29:45.802562 containerd[1543]: time="2025-01-29T11:29:45.801762736Z" level=info msg="StartContainer for \"7bab4cd63cf831a28ba0b368dcd93c30e4bc8e78e0c2fa36518a4e6560c3688c\"" Jan 29 11:29:45.824752 systemd[1]: Started cri-containerd-7bab4cd63cf831a28ba0b368dcd93c30e4bc8e78e0c2fa36518a4e6560c3688c.scope - libcontainer container 7bab4cd63cf831a28ba0b368dcd93c30e4bc8e78e0c2fa36518a4e6560c3688c. Jan 29 11:29:45.852586 containerd[1543]: time="2025-01-29T11:29:45.852559650Z" level=info msg="StartContainer for \"7bab4cd63cf831a28ba0b368dcd93c30e4bc8e78e0c2fa36518a4e6560c3688c\" returns successfully" Jan 29 11:29:47.499989 containerd[1543]: time="2025-01-29T11:29:47.499422465Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:29:47.499989 containerd[1543]: time="2025-01-29T11:29:47.499924017Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1: active requests=0, bytes read=10501081" Jan 29 11:29:47.499989 containerd[1543]: time="2025-01-29T11:29:47.499957337Z" level=info msg="ImageCreate event name:\"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:29:47.501178 containerd[1543]: time="2025-01-29T11:29:47.501160014Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:29:47.501609 containerd[1543]: time="2025-01-29T11:29:47.501585708Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" with image id \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\", size \"11994117\" in 1.72192308s" Jan 29 11:29:47.501609 containerd[1543]: time="2025-01-29T11:29:47.501604137Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" returns image reference \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\"" Jan 29 11:29:47.503278 containerd[1543]: time="2025-01-29T11:29:47.503260062Z" level=info msg="CreateContainer within sandbox \"1a43b5577f7e016df7cb107ae43b9a36d7dc193c1bb6ed2a2e245bd609b05235\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Jan 29 11:29:47.519955 kubelet[2780]: I0129 11:29:47.519937 2780 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 29 11:29:47.520661 containerd[1543]: time="2025-01-29T11:29:47.520639472Z" level=info msg="CreateContainer within sandbox \"1a43b5577f7e016df7cb107ae43b9a36d7dc193c1bb6ed2a2e245bd609b05235\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"10760b981506c3be2f1a0e4a8c8ae7c431e4cc24d8d8f3ceb30421f6d540d21a\"" Jan 29 11:29:47.521752 containerd[1543]: time="2025-01-29T11:29:47.520960341Z" level=info 
msg="StartContainer for \"10760b981506c3be2f1a0e4a8c8ae7c431e4cc24d8d8f3ceb30421f6d540d21a\"" Jan 29 11:29:47.544735 systemd[1]: Started cri-containerd-10760b981506c3be2f1a0e4a8c8ae7c431e4cc24d8d8f3ceb30421f6d540d21a.scope - libcontainer container 10760b981506c3be2f1a0e4a8c8ae7c431e4cc24d8d8f3ceb30421f6d540d21a. Jan 29 11:29:47.563303 containerd[1543]: time="2025-01-29T11:29:47.563278060Z" level=info msg="StartContainer for \"10760b981506c3be2f1a0e4a8c8ae7c431e4cc24d8d8f3ceb30421f6d540d21a\" returns successfully" Jan 29 11:29:48.631320 kubelet[2780]: I0129 11:29:48.631166 2780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-apiserver/calico-apiserver-f8fc55c6c-8vp99" podStartSLOduration=30.707726426 podStartE2EDuration="36.631155003s" podCreationTimestamp="2025-01-29 11:29:12 +0000 UTC" firstStartedPulling="2025-01-29 11:29:39.856158038 +0000 UTC m=+36.642147326" lastFinishedPulling="2025-01-29 11:29:45.779586616 +0000 UTC m=+42.565575903" observedRunningTime="2025-01-29 11:29:46.530497727 +0000 UTC m=+43.316487016" watchObservedRunningTime="2025-01-29 11:29:48.631155003 +0000 UTC m=+45.417144294" Jan 29 11:29:49.260318 kubelet[2780]: I0129 11:29:49.232197 2780 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Jan 29 11:29:49.286123 kubelet[2780]: I0129 11:29:49.286087 2780 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Jan 29 11:29:50.008651 kubelet[2780]: I0129 11:29:50.008071 2780 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 29 11:29:50.087384 kubelet[2780]: I0129 11:29:50.087193 2780 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-v9zp5" podStartSLOduration=29.368205383 podStartE2EDuration="37.087179966s" podCreationTimestamp="2025-01-29 11:29:13 +0000 UTC" firstStartedPulling="2025-01-29 11:29:39.783304395 +0000 UTC m=+36.569293682" lastFinishedPulling="2025-01-29 11:29:47.502278978 +0000 UTC m=+44.288268265" observedRunningTime="2025-01-29 11:29:48.632540881 +0000 UTC m=+45.418530176" watchObservedRunningTime="2025-01-29 11:29:50.087179966 +0000 UTC m=+46.873169264" Jan 29 11:29:50.891249 kubelet[2780]: I0129 11:29:50.891119 2780 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 29 11:30:03.320279 containerd[1543]: time="2025-01-29T11:30:03.320234934Z" level=info msg="StopPodSandbox for \"0e21b3d8f8afd4acb8853eaf7055630093d9d5001d30faf6e9f91a7d151d6862\"" Jan 29 11:30:03.326969 containerd[1543]: time="2025-01-29T11:30:03.320353519Z" level=info msg="TearDown network for sandbox \"0e21b3d8f8afd4acb8853eaf7055630093d9d5001d30faf6e9f91a7d151d6862\" successfully" Jan 29 11:30:03.327013 containerd[1543]: time="2025-01-29T11:30:03.326967515Z" level=info msg="StopPodSandbox for \"0e21b3d8f8afd4acb8853eaf7055630093d9d5001d30faf6e9f91a7d151d6862\" returns successfully" Jan 29 11:30:03.346903 containerd[1543]: time="2025-01-29T11:30:03.346739886Z" level=info msg="RemovePodSandbox for \"0e21b3d8f8afd4acb8853eaf7055630093d9d5001d30faf6e9f91a7d151d6862\"" Jan 29 11:30:03.353694 containerd[1543]: time="2025-01-29T11:30:03.353280514Z" level=info msg="Forcibly stopping sandbox \"0e21b3d8f8afd4acb8853eaf7055630093d9d5001d30faf6e9f91a7d151d6862\"" Jan 29 11:30:03.353694 containerd[1543]: time="2025-01-29T11:30:03.353360147Z" level=info 
msg="TearDown network for sandbox \"0e21b3d8f8afd4acb8853eaf7055630093d9d5001d30faf6e9f91a7d151d6862\" successfully" Jan 29 11:30:03.360286 containerd[1543]: time="2025-01-29T11:30:03.359955405Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"0e21b3d8f8afd4acb8853eaf7055630093d9d5001d30faf6e9f91a7d151d6862\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 29 11:30:03.368009 containerd[1543]: time="2025-01-29T11:30:03.367728867Z" level=info msg="RemovePodSandbox \"0e21b3d8f8afd4acb8853eaf7055630093d9d5001d30faf6e9f91a7d151d6862\" returns successfully" Jan 29 11:30:03.387050 containerd[1543]: time="2025-01-29T11:30:03.387025263Z" level=info msg="StopPodSandbox for \"29d1e1e67af3d0e9c992f6da05b5aaab1775d6f881e285626bfde2fb51b9861c\"" Jan 29 11:30:03.387143 containerd[1543]: time="2025-01-29T11:30:03.387109399Z" level=info msg="TearDown network for sandbox \"29d1e1e67af3d0e9c992f6da05b5aaab1775d6f881e285626bfde2fb51b9861c\" successfully" Jan 29 11:30:03.387143 containerd[1543]: time="2025-01-29T11:30:03.387120464Z" level=info msg="StopPodSandbox for \"29d1e1e67af3d0e9c992f6da05b5aaab1775d6f881e285626bfde2fb51b9861c\" returns successfully" Jan 29 11:30:03.388061 containerd[1543]: time="2025-01-29T11:30:03.387325101Z" level=info msg="RemovePodSandbox for \"29d1e1e67af3d0e9c992f6da05b5aaab1775d6f881e285626bfde2fb51b9861c\"" Jan 29 11:30:03.388061 containerd[1543]: time="2025-01-29T11:30:03.387338086Z" level=info msg="Forcibly stopping sandbox \"29d1e1e67af3d0e9c992f6da05b5aaab1775d6f881e285626bfde2fb51b9861c\"" Jan 29 11:30:03.388061 containerd[1543]: time="2025-01-29T11:30:03.387375473Z" level=info msg="TearDown network for sandbox \"29d1e1e67af3d0e9c992f6da05b5aaab1775d6f881e285626bfde2fb51b9861c\" successfully" Jan 29 11:30:03.388892 containerd[1543]: time="2025-01-29T11:30:03.388719806Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"29d1e1e67af3d0e9c992f6da05b5aaab1775d6f881e285626bfde2fb51b9861c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 29 11:30:03.388892 containerd[1543]: time="2025-01-29T11:30:03.388751909Z" level=info msg="RemovePodSandbox \"29d1e1e67af3d0e9c992f6da05b5aaab1775d6f881e285626bfde2fb51b9861c\" returns successfully" Jan 29 11:30:03.388934 containerd[1543]: time="2025-01-29T11:30:03.388895415Z" level=info msg="StopPodSandbox for \"2eadd6e3a2a049291b8168dc25f437d096fc4e663498d58251148419c84e5523\"" Jan 29 11:30:03.388968 containerd[1543]: time="2025-01-29T11:30:03.388955701Z" level=info msg="TearDown network for sandbox \"2eadd6e3a2a049291b8168dc25f437d096fc4e663498d58251148419c84e5523\" successfully" Jan 29 11:30:03.388968 containerd[1543]: time="2025-01-29T11:30:03.388965555Z" level=info msg="StopPodSandbox for \"2eadd6e3a2a049291b8168dc25f437d096fc4e663498d58251148419c84e5523\" returns successfully" Jan 29 11:30:03.389105 containerd[1543]: time="2025-01-29T11:30:03.389090722Z" level=info msg="RemovePodSandbox for \"2eadd6e3a2a049291b8168dc25f437d096fc4e663498d58251148419c84e5523\"" Jan 29 11:30:03.389129 containerd[1543]: time="2025-01-29T11:30:03.389107335Z" level=info msg="Forcibly stopping sandbox \"2eadd6e3a2a049291b8168dc25f437d096fc4e663498d58251148419c84e5523\"" Jan 29 11:30:03.389172 containerd[1543]: time="2025-01-29T11:30:03.389145215Z" level=info msg="TearDown network for sandbox \"2eadd6e3a2a049291b8168dc25f437d096fc4e663498d58251148419c84e5523\" successfully" Jan 29 11:30:03.390697 containerd[1543]: time="2025-01-29T11:30:03.390490096Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"2eadd6e3a2a049291b8168dc25f437d096fc4e663498d58251148419c84e5523\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 29 11:30:03.390697 containerd[1543]: time="2025-01-29T11:30:03.390516331Z" level=info msg="RemovePodSandbox \"2eadd6e3a2a049291b8168dc25f437d096fc4e663498d58251148419c84e5523\" returns successfully" Jan 29 11:30:03.390697 containerd[1543]: time="2025-01-29T11:30:03.390680314Z" level=info msg="StopPodSandbox for \"8676020bc7e9ced14f3e4a8fbfcef1902bdc2e9b62222fed7c56662e798ccbfc\"" Jan 29 11:30:03.391553 containerd[1543]: time="2025-01-29T11:30:03.390777867Z" level=info msg="TearDown network for sandbox \"8676020bc7e9ced14f3e4a8fbfcef1902bdc2e9b62222fed7c56662e798ccbfc\" successfully" Jan 29 11:30:03.391553 containerd[1543]: time="2025-01-29T11:30:03.390796306Z" level=info msg="StopPodSandbox for \"8676020bc7e9ced14f3e4a8fbfcef1902bdc2e9b62222fed7c56662e798ccbfc\" returns successfully" Jan 29 11:30:03.391553 containerd[1543]: time="2025-01-29T11:30:03.390910023Z" level=info msg="RemovePodSandbox for \"8676020bc7e9ced14f3e4a8fbfcef1902bdc2e9b62222fed7c56662e798ccbfc\"" Jan 29 11:30:03.391553 containerd[1543]: time="2025-01-29T11:30:03.390925135Z" level=info msg="Forcibly stopping sandbox \"8676020bc7e9ced14f3e4a8fbfcef1902bdc2e9b62222fed7c56662e798ccbfc\"" Jan 29 11:30:03.391553 containerd[1543]: time="2025-01-29T11:30:03.391005509Z" level=info msg="TearDown network for sandbox \"8676020bc7e9ced14f3e4a8fbfcef1902bdc2e9b62222fed7c56662e798ccbfc\" successfully" Jan 29 11:30:03.392356 containerd[1543]: time="2025-01-29T11:30:03.392259179Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"8676020bc7e9ced14f3e4a8fbfcef1902bdc2e9b62222fed7c56662e798ccbfc\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 29 11:30:03.392356 containerd[1543]: time="2025-01-29T11:30:03.392286622Z" level=info msg="RemovePodSandbox \"8676020bc7e9ced14f3e4a8fbfcef1902bdc2e9b62222fed7c56662e798ccbfc\" returns successfully" Jan 29 11:30:03.394791 containerd[1543]: time="2025-01-29T11:30:03.392470817Z" level=info msg="StopPodSandbox for \"b4f0e8faf50ed15bee0abed9f91fb6b2e15a4c81e795cac4a007218f8bde60f3\"" Jan 29 11:30:03.394791 containerd[1543]: time="2025-01-29T11:30:03.392574861Z" level=info msg="TearDown network for sandbox \"b4f0e8faf50ed15bee0abed9f91fb6b2e15a4c81e795cac4a007218f8bde60f3\" successfully" Jan 29 11:30:03.394791 containerd[1543]: time="2025-01-29T11:30:03.392582878Z" level=info msg="StopPodSandbox for \"b4f0e8faf50ed15bee0abed9f91fb6b2e15a4c81e795cac4a007218f8bde60f3\" returns successfully" Jan 29 11:30:03.394791 containerd[1543]: time="2025-01-29T11:30:03.392740520Z" level=info msg="RemovePodSandbox for \"b4f0e8faf50ed15bee0abed9f91fb6b2e15a4c81e795cac4a007218f8bde60f3\"" Jan 29 11:30:03.394791 containerd[1543]: time="2025-01-29T11:30:03.392753553Z" level=info msg="Forcibly stopping sandbox \"b4f0e8faf50ed15bee0abed9f91fb6b2e15a4c81e795cac4a007218f8bde60f3\"" Jan 29 11:30:03.394791 containerd[1543]: time="2025-01-29T11:30:03.392844303Z" level=info msg="TearDown network for sandbox \"b4f0e8faf50ed15bee0abed9f91fb6b2e15a4c81e795cac4a007218f8bde60f3\" successfully" Jan 29 11:30:03.394791 containerd[1543]: time="2025-01-29T11:30:03.394052475Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b4f0e8faf50ed15bee0abed9f91fb6b2e15a4c81e795cac4a007218f8bde60f3\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 29 11:30:03.394791 containerd[1543]: time="2025-01-29T11:30:03.394078349Z" level=info msg="RemovePodSandbox \"b4f0e8faf50ed15bee0abed9f91fb6b2e15a4c81e795cac4a007218f8bde60f3\" returns successfully" Jan 29 11:30:03.394791 containerd[1543]: time="2025-01-29T11:30:03.394220476Z" level=info msg="StopPodSandbox for \"fd1eadf5206c8cd53c4a1ef66c21214899e6365f6d5ef93833547b5bef569865\"" Jan 29 11:30:03.394791 containerd[1543]: time="2025-01-29T11:30:03.394308642Z" level=info msg="TearDown network for sandbox \"fd1eadf5206c8cd53c4a1ef66c21214899e6365f6d5ef93833547b5bef569865\" successfully" Jan 29 11:30:03.394791 containerd[1543]: time="2025-01-29T11:30:03.394315671Z" level=info msg="StopPodSandbox for \"fd1eadf5206c8cd53c4a1ef66c21214899e6365f6d5ef93833547b5bef569865\" returns successfully" Jan 29 11:30:03.394791 containerd[1543]: time="2025-01-29T11:30:03.394463602Z" level=info msg="RemovePodSandbox for \"fd1eadf5206c8cd53c4a1ef66c21214899e6365f6d5ef93833547b5bef569865\"" Jan 29 11:30:03.394791 containerd[1543]: time="2025-01-29T11:30:03.394473202Z" level=info msg="Forcibly stopping sandbox \"fd1eadf5206c8cd53c4a1ef66c21214899e6365f6d5ef93833547b5bef569865\"" Jan 29 11:30:03.394791 containerd[1543]: time="2025-01-29T11:30:03.394547682Z" level=info msg="TearDown network for sandbox \"fd1eadf5206c8cd53c4a1ef66c21214899e6365f6d5ef93833547b5bef569865\" successfully" Jan 29 11:30:03.397001 containerd[1543]: time="2025-01-29T11:30:03.395957544Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"fd1eadf5206c8cd53c4a1ef66c21214899e6365f6d5ef93833547b5bef569865\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 29 11:30:03.397001 containerd[1543]: time="2025-01-29T11:30:03.395995080Z" level=info msg="RemovePodSandbox \"fd1eadf5206c8cd53c4a1ef66c21214899e6365f6d5ef93833547b5bef569865\" returns successfully" Jan 29 11:30:03.397001 containerd[1543]: time="2025-01-29T11:30:03.396130958Z" level=info msg="StopPodSandbox for \"3500b349d9e56990046dc3f9b2de6c1103c374e0845fe69b4679c2220ae55b85\"" Jan 29 11:30:03.397001 containerd[1543]: time="2025-01-29T11:30:03.396203308Z" level=info msg="TearDown network for sandbox \"3500b349d9e56990046dc3f9b2de6c1103c374e0845fe69b4679c2220ae55b85\" successfully" Jan 29 11:30:03.397001 containerd[1543]: time="2025-01-29T11:30:03.396210025Z" level=info msg="StopPodSandbox for \"3500b349d9e56990046dc3f9b2de6c1103c374e0845fe69b4679c2220ae55b85\" returns successfully" Jan 29 11:30:03.397001 containerd[1543]: time="2025-01-29T11:30:03.396400078Z" level=info msg="RemovePodSandbox for \"3500b349d9e56990046dc3f9b2de6c1103c374e0845fe69b4679c2220ae55b85\"" Jan 29 11:30:03.397001 containerd[1543]: time="2025-01-29T11:30:03.396409816Z" level=info msg="Forcibly stopping sandbox \"3500b349d9e56990046dc3f9b2de6c1103c374e0845fe69b4679c2220ae55b85\"" Jan 29 11:30:03.397001 containerd[1543]: time="2025-01-29T11:30:03.396486820Z" level=info msg="TearDown network for sandbox \"3500b349d9e56990046dc3f9b2de6c1103c374e0845fe69b4679c2220ae55b85\" successfully" Jan 29 11:30:03.404658 containerd[1543]: time="2025-01-29T11:30:03.404546778Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"3500b349d9e56990046dc3f9b2de6c1103c374e0845fe69b4679c2220ae55b85\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 29 11:30:03.404931 containerd[1543]: time="2025-01-29T11:30:03.404631824Z" level=info msg="RemovePodSandbox \"3500b349d9e56990046dc3f9b2de6c1103c374e0845fe69b4679c2220ae55b85\" returns successfully" Jan 29 11:30:03.408908 containerd[1543]: time="2025-01-29T11:30:03.408885780Z" level=info msg="StopPodSandbox for \"7467e873a9a000ed6fe591ccaf22e3ed81648c1345627ec2e152f18bd41776ca\"" Jan 29 11:30:03.409067 containerd[1543]: time="2025-01-29T11:30:03.409036978Z" level=info msg="TearDown network for sandbox \"7467e873a9a000ed6fe591ccaf22e3ed81648c1345627ec2e152f18bd41776ca\" successfully" Jan 29 11:30:03.409196 containerd[1543]: time="2025-01-29T11:30:03.409146579Z" level=info msg="StopPodSandbox for \"7467e873a9a000ed6fe591ccaf22e3ed81648c1345627ec2e152f18bd41776ca\" returns successfully" Jan 29 11:30:03.409804 containerd[1543]: time="2025-01-29T11:30:03.409397075Z" level=info msg="RemovePodSandbox for \"7467e873a9a000ed6fe591ccaf22e3ed81648c1345627ec2e152f18bd41776ca\"" Jan 29 11:30:03.409804 containerd[1543]: time="2025-01-29T11:30:03.409410330Z" level=info msg="Forcibly stopping sandbox \"7467e873a9a000ed6fe591ccaf22e3ed81648c1345627ec2e152f18bd41776ca\"" Jan 29 11:30:03.409804 containerd[1543]: time="2025-01-29T11:30:03.409444065Z" level=info msg="TearDown network for sandbox \"7467e873a9a000ed6fe591ccaf22e3ed81648c1345627ec2e152f18bd41776ca\" successfully" Jan 29 11:30:03.411010 containerd[1543]: time="2025-01-29T11:30:03.410987112Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"7467e873a9a000ed6fe591ccaf22e3ed81648c1345627ec2e152f18bd41776ca\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 29 11:30:03.411046 containerd[1543]: time="2025-01-29T11:30:03.411018553Z" level=info msg="RemovePodSandbox \"7467e873a9a000ed6fe591ccaf22e3ed81648c1345627ec2e152f18bd41776ca\" returns successfully" Jan 29 11:30:03.411404 containerd[1543]: time="2025-01-29T11:30:03.411287614Z" level=info msg="StopPodSandbox for \"27663650868ee15e5b14142b643ed5bad73053c5dc36bdbe2022e62f1e0a0eea\"" Jan 29 11:30:03.412570 containerd[1543]: time="2025-01-29T11:30:03.411372169Z" level=info msg="TearDown network for sandbox \"27663650868ee15e5b14142b643ed5bad73053c5dc36bdbe2022e62f1e0a0eea\" successfully" Jan 29 11:30:03.412570 containerd[1543]: time="2025-01-29T11:30:03.411486802Z" level=info msg="StopPodSandbox for \"27663650868ee15e5b14142b643ed5bad73053c5dc36bdbe2022e62f1e0a0eea\" returns successfully" Jan 29 11:30:03.412570 containerd[1543]: time="2025-01-29T11:30:03.411632718Z" level=info msg="RemovePodSandbox for \"27663650868ee15e5b14142b643ed5bad73053c5dc36bdbe2022e62f1e0a0eea\"" Jan 29 11:30:03.412570 containerd[1543]: time="2025-01-29T11:30:03.411645156Z" level=info msg="Forcibly stopping sandbox \"27663650868ee15e5b14142b643ed5bad73053c5dc36bdbe2022e62f1e0a0eea\"" Jan 29 11:30:03.412570 containerd[1543]: time="2025-01-29T11:30:03.411678588Z" level=info msg="TearDown network for sandbox \"27663650868ee15e5b14142b643ed5bad73053c5dc36bdbe2022e62f1e0a0eea\" successfully" Jan 29 11:30:03.414796 containerd[1543]: time="2025-01-29T11:30:03.414775799Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"27663650868ee15e5b14142b643ed5bad73053c5dc36bdbe2022e62f1e0a0eea\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 29 11:30:03.415733 containerd[1543]: time="2025-01-29T11:30:03.414805680Z" level=info msg="RemovePodSandbox \"27663650868ee15e5b14142b643ed5bad73053c5dc36bdbe2022e62f1e0a0eea\" returns successfully" Jan 29 11:30:03.415733 containerd[1543]: time="2025-01-29T11:30:03.415006233Z" level=info msg="StopPodSandbox for \"7879e8e66bca99f05db383134785ae8a457c0cf15c7c4919edf27c4e58b012cd\"" Jan 29 11:30:03.415733 containerd[1543]: time="2025-01-29T11:30:03.415064315Z" level=info msg="TearDown network for sandbox \"7879e8e66bca99f05db383134785ae8a457c0cf15c7c4919edf27c4e58b012cd\" successfully" Jan 29 11:30:03.415733 containerd[1543]: time="2025-01-29T11:30:03.415110233Z" level=info msg="StopPodSandbox for \"7879e8e66bca99f05db383134785ae8a457c0cf15c7c4919edf27c4e58b012cd\" returns successfully" Jan 29 11:30:03.417177 containerd[1543]: time="2025-01-29T11:30:03.415764369Z" level=info msg="RemovePodSandbox for \"7879e8e66bca99f05db383134785ae8a457c0cf15c7c4919edf27c4e58b012cd\"" Jan 29 11:30:03.417177 containerd[1543]: time="2025-01-29T11:30:03.415775157Z" level=info msg="Forcibly stopping sandbox \"7879e8e66bca99f05db383134785ae8a457c0cf15c7c4919edf27c4e58b012cd\"" Jan 29 11:30:03.417177 containerd[1543]: time="2025-01-29T11:30:03.415813254Z" level=info msg="TearDown network for sandbox \"7879e8e66bca99f05db383134785ae8a457c0cf15c7c4919edf27c4e58b012cd\" successfully" Jan 29 11:30:03.421555 containerd[1543]: time="2025-01-29T11:30:03.421528496Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"7879e8e66bca99f05db383134785ae8a457c0cf15c7c4919edf27c4e58b012cd\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 29 11:30:03.422241 containerd[1543]: time="2025-01-29T11:30:03.421578541Z" level=info msg="RemovePodSandbox \"7879e8e66bca99f05db383134785ae8a457c0cf15c7c4919edf27c4e58b012cd\" returns successfully" Jan 29 11:30:03.422441 containerd[1543]: time="2025-01-29T11:30:03.422357284Z" level=info msg="StopPodSandbox for \"ca36f2be98657cffbbb3e6825a5691fa122db25f729f54cd86a95373c706af1c\"" Jan 29 11:30:03.422712 containerd[1543]: time="2025-01-29T11:30:03.422658754Z" level=info msg="TearDown network for sandbox \"ca36f2be98657cffbbb3e6825a5691fa122db25f729f54cd86a95373c706af1c\" successfully" Jan 29 11:30:03.422712 containerd[1543]: time="2025-01-29T11:30:03.422668667Z" level=info msg="StopPodSandbox for \"ca36f2be98657cffbbb3e6825a5691fa122db25f729f54cd86a95373c706af1c\" returns successfully" Jan 29 11:30:03.423071 containerd[1543]: time="2025-01-29T11:30:03.423040305Z" level=info msg="RemovePodSandbox for \"ca36f2be98657cffbbb3e6825a5691fa122db25f729f54cd86a95373c706af1c\"" Jan 29 11:30:03.423071 containerd[1543]: time="2025-01-29T11:30:03.423053886Z" level=info msg="Forcibly stopping sandbox \"ca36f2be98657cffbbb3e6825a5691fa122db25f729f54cd86a95373c706af1c\"" Jan 29 11:30:03.423279 containerd[1543]: time="2025-01-29T11:30:03.423209129Z" level=info msg="TearDown network for sandbox \"ca36f2be98657cffbbb3e6825a5691fa122db25f729f54cd86a95373c706af1c\" successfully" Jan 29 11:30:03.424642 containerd[1543]: time="2025-01-29T11:30:03.424598609Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"ca36f2be98657cffbbb3e6825a5691fa122db25f729f54cd86a95373c706af1c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 29 11:30:03.424706 containerd[1543]: time="2025-01-29T11:30:03.424658758Z" level=info msg="RemovePodSandbox \"ca36f2be98657cffbbb3e6825a5691fa122db25f729f54cd86a95373c706af1c\" returns successfully" Jan 29 11:30:03.424879 containerd[1543]: time="2025-01-29T11:30:03.424867632Z" level=info msg="StopPodSandbox for \"e19a8c4b8a81ed9013b30c37b5a01742e5557d745e671dba88669f5bea3ec310\"" Jan 29 11:30:03.425058 containerd[1543]: time="2025-01-29T11:30:03.425001676Z" level=info msg="TearDown network for sandbox \"e19a8c4b8a81ed9013b30c37b5a01742e5557d745e671dba88669f5bea3ec310\" successfully" Jan 29 11:30:03.425058 containerd[1543]: time="2025-01-29T11:30:03.425017801Z" level=info msg="StopPodSandbox for \"e19a8c4b8a81ed9013b30c37b5a01742e5557d745e671dba88669f5bea3ec310\" returns successfully" Jan 29 11:30:03.425841 containerd[1543]: time="2025-01-29T11:30:03.425172633Z" level=info msg="RemovePodSandbox for \"e19a8c4b8a81ed9013b30c37b5a01742e5557d745e671dba88669f5bea3ec310\"" Jan 29 11:30:03.425841 containerd[1543]: time="2025-01-29T11:30:03.425184782Z" level=info msg="Forcibly stopping sandbox \"e19a8c4b8a81ed9013b30c37b5a01742e5557d745e671dba88669f5bea3ec310\"" Jan 29 11:30:03.425841 containerd[1543]: time="2025-01-29T11:30:03.425214431Z" level=info msg="TearDown network for sandbox \"e19a8c4b8a81ed9013b30c37b5a01742e5557d745e671dba88669f5bea3ec310\" successfully" Jan 29 11:30:03.426888 containerd[1543]: time="2025-01-29T11:30:03.426844805Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"e19a8c4b8a81ed9013b30c37b5a01742e5557d745e671dba88669f5bea3ec310\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 29 11:30:03.426888 containerd[1543]: time="2025-01-29T11:30:03.426872569Z" level=info msg="RemovePodSandbox \"e19a8c4b8a81ed9013b30c37b5a01742e5557d745e671dba88669f5bea3ec310\" returns successfully" Jan 29 11:30:03.427163 containerd[1543]: time="2025-01-29T11:30:03.427068502Z" level=info msg="StopPodSandbox for \"ff36e156383d905ea44dd2fe4cc975b52daa72fc976d0fae826b6571f2843c02\"" Jan 29 11:30:03.427163 containerd[1543]: time="2025-01-29T11:30:03.427124812Z" level=info msg="TearDown network for sandbox \"ff36e156383d905ea44dd2fe4cc975b52daa72fc976d0fae826b6571f2843c02\" successfully" Jan 29 11:30:03.427163 containerd[1543]: time="2025-01-29T11:30:03.427131931Z" level=info msg="StopPodSandbox for \"ff36e156383d905ea44dd2fe4cc975b52daa72fc976d0fae826b6571f2843c02\" returns successfully" Jan 29 11:30:03.427450 containerd[1543]: time="2025-01-29T11:30:03.427351242Z" level=info msg="RemovePodSandbox for \"ff36e156383d905ea44dd2fe4cc975b52daa72fc976d0fae826b6571f2843c02\"" Jan 29 11:30:03.427450 containerd[1543]: time="2025-01-29T11:30:03.427384689Z" level=info msg="Forcibly stopping sandbox \"ff36e156383d905ea44dd2fe4cc975b52daa72fc976d0fae826b6571f2843c02\"" Jan 29 11:30:03.427450 containerd[1543]: time="2025-01-29T11:30:03.427418270Z" level=info msg="TearDown network for sandbox \"ff36e156383d905ea44dd2fe4cc975b52daa72fc976d0fae826b6571f2843c02\" successfully" Jan 29 11:30:03.428779 containerd[1543]: time="2025-01-29T11:30:03.428671517Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"ff36e156383d905ea44dd2fe4cc975b52daa72fc976d0fae826b6571f2843c02\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 29 11:30:03.428779 containerd[1543]: time="2025-01-29T11:30:03.428729374Z" level=info msg="RemovePodSandbox \"ff36e156383d905ea44dd2fe4cc975b52daa72fc976d0fae826b6571f2843c02\" returns successfully" Jan 29 11:30:03.429702 containerd[1543]: time="2025-01-29T11:30:03.428913023Z" level=info msg="StopPodSandbox for \"1335c4c1be4c603e540ffc9341ca82a3b73d4dcb2990fb3f252db9c214b08658\"" Jan 29 11:30:03.429702 containerd[1543]: time="2025-01-29T11:30:03.428954556Z" level=info msg="TearDown network for sandbox \"1335c4c1be4c603e540ffc9341ca82a3b73d4dcb2990fb3f252db9c214b08658\" successfully" Jan 29 11:30:03.429702 containerd[1543]: time="2025-01-29T11:30:03.428960483Z" level=info msg="StopPodSandbox for \"1335c4c1be4c603e540ffc9341ca82a3b73d4dcb2990fb3f252db9c214b08658\" returns successfully" Jan 29 11:30:03.429702 containerd[1543]: time="2025-01-29T11:30:03.429140204Z" level=info msg="RemovePodSandbox for \"1335c4c1be4c603e540ffc9341ca82a3b73d4dcb2990fb3f252db9c214b08658\"" Jan 29 11:30:03.429702 containerd[1543]: time="2025-01-29T11:30:03.429152592Z" level=info msg="Forcibly stopping sandbox \"1335c4c1be4c603e540ffc9341ca82a3b73d4dcb2990fb3f252db9c214b08658\"" Jan 29 11:30:03.429702 containerd[1543]: time="2025-01-29T11:30:03.429183949Z" level=info msg="TearDown network for sandbox \"1335c4c1be4c603e540ffc9341ca82a3b73d4dcb2990fb3f252db9c214b08658\" successfully" Jan 29 11:30:03.430271 containerd[1543]: time="2025-01-29T11:30:03.430251695Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"1335c4c1be4c603e540ffc9341ca82a3b73d4dcb2990fb3f252db9c214b08658\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 29 11:30:03.430307 containerd[1543]: time="2025-01-29T11:30:03.430275002Z" level=info msg="RemovePodSandbox \"1335c4c1be4c603e540ffc9341ca82a3b73d4dcb2990fb3f252db9c214b08658\" returns successfully" Jan 29 11:30:03.430538 containerd[1543]: time="2025-01-29T11:30:03.430457748Z" level=info msg="StopPodSandbox for \"98687d2218f6a2b009c5a2145dcb1ce73b4872a8ab06bcbeeacdd1d600a5054d\"" Jan 29 11:30:03.430538 containerd[1543]: time="2025-01-29T11:30:03.430507815Z" level=info msg="TearDown network for sandbox \"98687d2218f6a2b009c5a2145dcb1ce73b4872a8ab06bcbeeacdd1d600a5054d\" successfully" Jan 29 11:30:03.430538 containerd[1543]: time="2025-01-29T11:30:03.430514483Z" level=info msg="StopPodSandbox for \"98687d2218f6a2b009c5a2145dcb1ce73b4872a8ab06bcbeeacdd1d600a5054d\" returns successfully" Jan 29 11:30:03.431153 containerd[1543]: time="2025-01-29T11:30:03.430722871Z" level=info msg="RemovePodSandbox for \"98687d2218f6a2b009c5a2145dcb1ce73b4872a8ab06bcbeeacdd1d600a5054d\"" Jan 29 11:30:03.431153 containerd[1543]: time="2025-01-29T11:30:03.430735586Z" level=info msg="Forcibly stopping sandbox \"98687d2218f6a2b009c5a2145dcb1ce73b4872a8ab06bcbeeacdd1d600a5054d\"" Jan 29 11:30:03.431153 containerd[1543]: time="2025-01-29T11:30:03.430773465Z" level=info msg="TearDown network for sandbox \"98687d2218f6a2b009c5a2145dcb1ce73b4872a8ab06bcbeeacdd1d600a5054d\" successfully" Jan 29 11:30:03.435880 containerd[1543]: time="2025-01-29T11:30:03.435851916Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"98687d2218f6a2b009c5a2145dcb1ce73b4872a8ab06bcbeeacdd1d600a5054d\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 29 11:30:03.435954 containerd[1543]: time="2025-01-29T11:30:03.435895038Z" level=info msg="RemovePodSandbox \"98687d2218f6a2b009c5a2145dcb1ce73b4872a8ab06bcbeeacdd1d600a5054d\" returns successfully" Jan 29 11:30:03.436415 containerd[1543]: time="2025-01-29T11:30:03.436196825Z" level=info msg="StopPodSandbox for \"dc7b8f34b80a43e6615c97db52703c81a6060e99cea5f8607d67063c0d15d690\"" Jan 29 11:30:03.436415 containerd[1543]: time="2025-01-29T11:30:03.436259512Z" level=info msg="TearDown network for sandbox \"dc7b8f34b80a43e6615c97db52703c81a6060e99cea5f8607d67063c0d15d690\" successfully" Jan 29 11:30:03.436415 containerd[1543]: time="2025-01-29T11:30:03.436266489Z" level=info msg="StopPodSandbox for \"dc7b8f34b80a43e6615c97db52703c81a6060e99cea5f8607d67063c0d15d690\" returns successfully" Jan 29 11:30:03.436696 containerd[1543]: time="2025-01-29T11:30:03.436568536Z" level=info msg="RemovePodSandbox for \"dc7b8f34b80a43e6615c97db52703c81a6060e99cea5f8607d67063c0d15d690\"" Jan 29 11:30:03.436696 containerd[1543]: time="2025-01-29T11:30:03.436582419Z" level=info msg="Forcibly stopping sandbox \"dc7b8f34b80a43e6615c97db52703c81a6060e99cea5f8607d67063c0d15d690\"" Jan 29 11:30:03.436847 containerd[1543]: time="2025-01-29T11:30:03.436663373Z" level=info msg="TearDown network for sandbox \"dc7b8f34b80a43e6615c97db52703c81a6060e99cea5f8607d67063c0d15d690\" successfully" Jan 29 11:30:03.444094 containerd[1543]: time="2025-01-29T11:30:03.444021019Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"dc7b8f34b80a43e6615c97db52703c81a6060e99cea5f8607d67063c0d15d690\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 29 11:30:03.444094 containerd[1543]: time="2025-01-29T11:30:03.444050454Z" level=info msg="RemovePodSandbox \"dc7b8f34b80a43e6615c97db52703c81a6060e99cea5f8607d67063c0d15d690\" returns successfully" Jan 29 11:30:03.444506 containerd[1543]: time="2025-01-29T11:30:03.444359028Z" level=info msg="StopPodSandbox for \"16dceff0c83213b56eaf857d06a148b80e73e81fc135d1ff0aae7aa6a7d4a3ec\"" Jan 29 11:30:03.444506 containerd[1543]: time="2025-01-29T11:30:03.444406889Z" level=info msg="TearDown network for sandbox \"16dceff0c83213b56eaf857d06a148b80e73e81fc135d1ff0aae7aa6a7d4a3ec\" successfully" Jan 29 11:30:03.444506 containerd[1543]: time="2025-01-29T11:30:03.444412970Z" level=info msg="StopPodSandbox for \"16dceff0c83213b56eaf857d06a148b80e73e81fc135d1ff0aae7aa6a7d4a3ec\" returns successfully" Jan 29 11:30:03.444734 containerd[1543]: time="2025-01-29T11:30:03.444724890Z" level=info msg="RemovePodSandbox for \"16dceff0c83213b56eaf857d06a148b80e73e81fc135d1ff0aae7aa6a7d4a3ec\"" Jan 29 11:30:03.444781 containerd[1543]: time="2025-01-29T11:30:03.444773646Z" level=info msg="Forcibly stopping sandbox \"16dceff0c83213b56eaf857d06a148b80e73e81fc135d1ff0aae7aa6a7d4a3ec\"" Jan 29 11:30:03.444871 containerd[1543]: time="2025-01-29T11:30:03.444837165Z" level=info msg="TearDown network for sandbox \"16dceff0c83213b56eaf857d06a148b80e73e81fc135d1ff0aae7aa6a7d4a3ec\" successfully" Jan 29 11:30:03.446191 containerd[1543]: time="2025-01-29T11:30:03.446138188Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"16dceff0c83213b56eaf857d06a148b80e73e81fc135d1ff0aae7aa6a7d4a3ec\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 29 11:30:03.446191 containerd[1543]: time="2025-01-29T11:30:03.446158533Z" level=info msg="RemovePodSandbox \"16dceff0c83213b56eaf857d06a148b80e73e81fc135d1ff0aae7aa6a7d4a3ec\" returns successfully" Jan 29 11:30:03.446325 containerd[1543]: time="2025-01-29T11:30:03.446310062Z" level=info msg="StopPodSandbox for \"805283f302690830f7a88987c313107329845fbbd9a4297506b3397a13c61110\"" Jan 29 11:30:03.446371 containerd[1543]: time="2025-01-29T11:30:03.446358383Z" level=info msg="TearDown network for sandbox \"805283f302690830f7a88987c313107329845fbbd9a4297506b3397a13c61110\" successfully" Jan 29 11:30:03.446371 containerd[1543]: time="2025-01-29T11:30:03.446369088Z" level=info msg="StopPodSandbox for \"805283f302690830f7a88987c313107329845fbbd9a4297506b3397a13c61110\" returns successfully" Jan 29 11:30:03.446509 containerd[1543]: time="2025-01-29T11:30:03.446496269Z" level=info msg="RemovePodSandbox for \"805283f302690830f7a88987c313107329845fbbd9a4297506b3397a13c61110\"" Jan 29 11:30:03.446535 containerd[1543]: time="2025-01-29T11:30:03.446510293Z" level=info msg="Forcibly stopping sandbox \"805283f302690830f7a88987c313107329845fbbd9a4297506b3397a13c61110\"" Jan 29 11:30:03.446608 containerd[1543]: time="2025-01-29T11:30:03.446585566Z" level=info msg="TearDown network for sandbox \"805283f302690830f7a88987c313107329845fbbd9a4297506b3397a13c61110\" successfully" Jan 29 11:30:03.447602 containerd[1543]: time="2025-01-29T11:30:03.447586564Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"805283f302690830f7a88987c313107329845fbbd9a4297506b3397a13c61110\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 29 11:30:03.447658 containerd[1543]: time="2025-01-29T11:30:03.447609873Z" level=info msg="RemovePodSandbox \"805283f302690830f7a88987c313107329845fbbd9a4297506b3397a13c61110\" returns successfully" Jan 29 11:30:03.447842 containerd[1543]: time="2025-01-29T11:30:03.447795231Z" level=info msg="StopPodSandbox for \"278001aa52062ee6fbf5d00b498751ede55fa1853804b1a4b028e9dda1639d35\"" Jan 29 11:30:03.448670 containerd[1543]: time="2025-01-29T11:30:03.447842075Z" level=info msg="TearDown network for sandbox \"278001aa52062ee6fbf5d00b498751ede55fa1853804b1a4b028e9dda1639d35\" successfully" Jan 29 11:30:03.448670 containerd[1543]: time="2025-01-29T11:30:03.447848372Z" level=info msg="StopPodSandbox for \"278001aa52062ee6fbf5d00b498751ede55fa1853804b1a4b028e9dda1639d35\" returns successfully" Jan 29 11:30:03.448670 containerd[1543]: time="2025-01-29T11:30:03.447967737Z" level=info msg="RemovePodSandbox for \"278001aa52062ee6fbf5d00b498751ede55fa1853804b1a4b028e9dda1639d35\"" Jan 29 11:30:03.448670 containerd[1543]: time="2025-01-29T11:30:03.447978770Z" level=info msg="Forcibly stopping sandbox \"278001aa52062ee6fbf5d00b498751ede55fa1853804b1a4b028e9dda1639d35\"" Jan 29 11:30:03.448670 containerd[1543]: time="2025-01-29T11:30:03.448022290Z" level=info msg="TearDown network for sandbox \"278001aa52062ee6fbf5d00b498751ede55fa1853804b1a4b028e9dda1639d35\" successfully" Jan 29 11:30:03.449033 containerd[1543]: time="2025-01-29T11:30:03.449017658Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"278001aa52062ee6fbf5d00b498751ede55fa1853804b1a4b028e9dda1639d35\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 29 11:30:03.451538 containerd[1543]: time="2025-01-29T11:30:03.451520710Z" level=info msg="RemovePodSandbox \"278001aa52062ee6fbf5d00b498751ede55fa1853804b1a4b028e9dda1639d35\" returns successfully" Jan 29 11:30:03.451784 containerd[1543]: time="2025-01-29T11:30:03.451773754Z" level=info msg="StopPodSandbox for \"6033264e1976f1129ef2155afd3527be241324be2c025fd1f2f05088c51af8d0\"" Jan 29 11:30:03.451866 containerd[1543]: time="2025-01-29T11:30:03.451858043Z" level=info msg="TearDown network for sandbox \"6033264e1976f1129ef2155afd3527be241324be2c025fd1f2f05088c51af8d0\" successfully" Jan 29 11:30:03.451936 containerd[1543]: time="2025-01-29T11:30:03.451902317Z" level=info msg="StopPodSandbox for \"6033264e1976f1129ef2155afd3527be241324be2c025fd1f2f05088c51af8d0\" returns successfully" Jan 29 11:30:03.452763 containerd[1543]: time="2025-01-29T11:30:03.452017087Z" level=info msg="RemovePodSandbox for \"6033264e1976f1129ef2155afd3527be241324be2c025fd1f2f05088c51af8d0\"" Jan 29 11:30:03.452763 containerd[1543]: time="2025-01-29T11:30:03.452028520Z" level=info msg="Forcibly stopping sandbox \"6033264e1976f1129ef2155afd3527be241324be2c025fd1f2f05088c51af8d0\"" Jan 29 11:30:03.452763 containerd[1543]: time="2025-01-29T11:30:03.452060410Z" level=info msg="TearDown network for sandbox \"6033264e1976f1129ef2155afd3527be241324be2c025fd1f2f05088c51af8d0\" successfully" Jan 29 11:30:03.453200 containerd[1543]: time="2025-01-29T11:30:03.453188017Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"6033264e1976f1129ef2155afd3527be241324be2c025fd1f2f05088c51af8d0\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 29 11:30:03.453257 containerd[1543]: time="2025-01-29T11:30:03.453248186Z" level=info msg="RemovePodSandbox \"6033264e1976f1129ef2155afd3527be241324be2c025fd1f2f05088c51af8d0\" returns successfully" Jan 29 11:30:03.453427 containerd[1543]: time="2025-01-29T11:30:03.453412231Z" level=info msg="StopPodSandbox for \"391640c93f25a8a019a5df373a97b7898d420c3b6ea996c747098a5c60b15bfc\"" Jan 29 11:30:03.453478 containerd[1543]: time="2025-01-29T11:30:03.453467679Z" level=info msg="TearDown network for sandbox \"391640c93f25a8a019a5df373a97b7898d420c3b6ea996c747098a5c60b15bfc\" successfully" Jan 29 11:30:03.453506 containerd[1543]: time="2025-01-29T11:30:03.453500716Z" level=info msg="StopPodSandbox for \"391640c93f25a8a019a5df373a97b7898d420c3b6ea996c747098a5c60b15bfc\" returns successfully" Jan 29 11:30:03.454313 containerd[1543]: time="2025-01-29T11:30:03.453628052Z" level=info msg="RemovePodSandbox for \"391640c93f25a8a019a5df373a97b7898d420c3b6ea996c747098a5c60b15bfc\"" Jan 29 11:30:03.454313 containerd[1543]: time="2025-01-29T11:30:03.453639137Z" level=info msg="Forcibly stopping sandbox \"391640c93f25a8a019a5df373a97b7898d420c3b6ea996c747098a5c60b15bfc\"" Jan 29 11:30:03.454313 containerd[1543]: time="2025-01-29T11:30:03.453674229Z" level=info msg="TearDown network for sandbox \"391640c93f25a8a019a5df373a97b7898d420c3b6ea996c747098a5c60b15bfc\" successfully" Jan 29 11:30:03.455310 containerd[1543]: time="2025-01-29T11:30:03.455295090Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"391640c93f25a8a019a5df373a97b7898d420c3b6ea996c747098a5c60b15bfc\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 29 11:30:03.455334 containerd[1543]: time="2025-01-29T11:30:03.455319511Z" level=info msg="RemovePodSandbox \"391640c93f25a8a019a5df373a97b7898d420c3b6ea996c747098a5c60b15bfc\" returns successfully" Jan 29 11:30:03.455544 containerd[1543]: time="2025-01-29T11:30:03.455456945Z" level=info msg="StopPodSandbox for \"b0dfc9d2b30b684f3230be5accdc607a2fd968dd99ec092b6774b94c59338394\"" Jan 29 11:30:03.455544 containerd[1543]: time="2025-01-29T11:30:03.455497676Z" level=info msg="TearDown network for sandbox \"b0dfc9d2b30b684f3230be5accdc607a2fd968dd99ec092b6774b94c59338394\" successfully" Jan 29 11:30:03.455544 containerd[1543]: time="2025-01-29T11:30:03.455503884Z" level=info msg="StopPodSandbox for \"b0dfc9d2b30b684f3230be5accdc607a2fd968dd99ec092b6774b94c59338394\" returns successfully" Jan 29 11:30:03.455697 containerd[1543]: time="2025-01-29T11:30:03.455678208Z" level=info msg="RemovePodSandbox for \"b0dfc9d2b30b684f3230be5accdc607a2fd968dd99ec092b6774b94c59338394\"" Jan 29 11:30:03.455774 containerd[1543]: time="2025-01-29T11:30:03.455765335Z" level=info msg="Forcibly stopping sandbox \"b0dfc9d2b30b684f3230be5accdc607a2fd968dd99ec092b6774b94c59338394\"" Jan 29 11:30:03.455858 containerd[1543]: time="2025-01-29T11:30:03.455841093Z" level=info msg="TearDown network for sandbox \"b0dfc9d2b30b684f3230be5accdc607a2fd968dd99ec092b6774b94c59338394\" successfully" Jan 29 11:30:03.457003 containerd[1543]: time="2025-01-29T11:30:03.456883657Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b0dfc9d2b30b684f3230be5accdc607a2fd968dd99ec092b6774b94c59338394\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 29 11:30:03.457003 containerd[1543]: time="2025-01-29T11:30:03.456904082Z" level=info msg="RemovePodSandbox \"b0dfc9d2b30b684f3230be5accdc607a2fd968dd99ec092b6774b94c59338394\" returns successfully" Jan 29 11:30:03.457071 containerd[1543]: time="2025-01-29T11:30:03.457035818Z" level=info msg="StopPodSandbox for \"eb5b8233f88ec26fc4ce589382094db86b20d454323bc6d4defe25ea1e363500\"" Jan 29 11:30:03.457094 containerd[1543]: time="2025-01-29T11:30:03.457074157Z" level=info msg="TearDown network for sandbox \"eb5b8233f88ec26fc4ce589382094db86b20d454323bc6d4defe25ea1e363500\" successfully" Jan 29 11:30:03.457094 containerd[1543]: time="2025-01-29T11:30:03.457080193Z" level=info msg="StopPodSandbox for \"eb5b8233f88ec26fc4ce589382094db86b20d454323bc6d4defe25ea1e363500\" returns successfully" Jan 29 11:30:03.457488 containerd[1543]: time="2025-01-29T11:30:03.457401352Z" level=info msg="RemovePodSandbox for \"eb5b8233f88ec26fc4ce589382094db86b20d454323bc6d4defe25ea1e363500\"" Jan 29 11:30:03.457488 containerd[1543]: time="2025-01-29T11:30:03.457411009Z" level=info msg="Forcibly stopping sandbox \"eb5b8233f88ec26fc4ce589382094db86b20d454323bc6d4defe25ea1e363500\"" Jan 29 11:30:03.457539 containerd[1543]: time="2025-01-29T11:30:03.457482042Z" level=info msg="TearDown network for sandbox \"eb5b8233f88ec26fc4ce589382094db86b20d454323bc6d4defe25ea1e363500\" successfully" Jan 29 11:30:03.458489 containerd[1543]: time="2025-01-29T11:30:03.458473444Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"eb5b8233f88ec26fc4ce589382094db86b20d454323bc6d4defe25ea1e363500\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 29 11:30:03.458519 containerd[1543]: time="2025-01-29T11:30:03.458494988Z" level=info msg="RemovePodSandbox \"eb5b8233f88ec26fc4ce589382094db86b20d454323bc6d4defe25ea1e363500\" returns successfully" Jan 29 11:30:03.458690 containerd[1543]: time="2025-01-29T11:30:03.458676984Z" level=info msg="StopPodSandbox for \"64af12101239bda77bc09127055af2af5a4875930ddbf3d24e8576f8092f4ad8\"" Jan 29 11:30:03.458726 containerd[1543]: time="2025-01-29T11:30:03.458719687Z" level=info msg="TearDown network for sandbox \"64af12101239bda77bc09127055af2af5a4875930ddbf3d24e8576f8092f4ad8\" successfully" Jan 29 11:30:03.458812 containerd[1543]: time="2025-01-29T11:30:03.458725896Z" level=info msg="StopPodSandbox for \"64af12101239bda77bc09127055af2af5a4875930ddbf3d24e8576f8092f4ad8\" returns successfully" Jan 29 11:30:03.458843 containerd[1543]: time="2025-01-29T11:30:03.458832829Z" level=info msg="RemovePodSandbox for \"64af12101239bda77bc09127055af2af5a4875930ddbf3d24e8576f8092f4ad8\"" Jan 29 11:30:03.458864 containerd[1543]: time="2025-01-29T11:30:03.458842792Z" level=info msg="Forcibly stopping sandbox \"64af12101239bda77bc09127055af2af5a4875930ddbf3d24e8576f8092f4ad8\"" Jan 29 11:30:03.458895 containerd[1543]: time="2025-01-29T11:30:03.458870211Z" level=info msg="TearDown network for sandbox \"64af12101239bda77bc09127055af2af5a4875930ddbf3d24e8576f8092f4ad8\" successfully" Jan 29 11:30:03.459869 containerd[1543]: time="2025-01-29T11:30:03.459852425Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"64af12101239bda77bc09127055af2af5a4875930ddbf3d24e8576f8092f4ad8\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 29 11:30:03.459896 containerd[1543]: time="2025-01-29T11:30:03.459874341Z" level=info msg="RemovePodSandbox \"64af12101239bda77bc09127055af2af5a4875930ddbf3d24e8576f8092f4ad8\" returns successfully" Jan 29 11:30:03.460041 containerd[1543]: time="2025-01-29T11:30:03.460026828Z" level=info msg="StopPodSandbox for \"cba1a3be38ef9b176260629671a5a04d891226b450a37875f671804bc183fe23\"" Jan 29 11:30:03.460090 containerd[1543]: time="2025-01-29T11:30:03.460068897Z" level=info msg="TearDown network for sandbox \"cba1a3be38ef9b176260629671a5a04d891226b450a37875f671804bc183fe23\" successfully" Jan 29 11:30:03.460090 containerd[1543]: time="2025-01-29T11:30:03.460074626Z" level=info msg="StopPodSandbox for \"cba1a3be38ef9b176260629671a5a04d891226b450a37875f671804bc183fe23\" returns successfully" Jan 29 11:30:03.460825 containerd[1543]: time="2025-01-29T11:30:03.460230740Z" level=info msg="RemovePodSandbox for \"cba1a3be38ef9b176260629671a5a04d891226b450a37875f671804bc183fe23\"" Jan 29 11:30:03.460825 containerd[1543]: time="2025-01-29T11:30:03.460243995Z" level=info msg="Forcibly stopping sandbox \"cba1a3be38ef9b176260629671a5a04d891226b450a37875f671804bc183fe23\"" Jan 29 11:30:03.460825 containerd[1543]: time="2025-01-29T11:30:03.460275282Z" level=info msg="TearDown network for sandbox \"cba1a3be38ef9b176260629671a5a04d891226b450a37875f671804bc183fe23\" successfully" Jan 29 11:30:03.461771 containerd[1543]: time="2025-01-29T11:30:03.461759071Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"cba1a3be38ef9b176260629671a5a04d891226b450a37875f671804bc183fe23\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 29 11:30:03.461829 containerd[1543]: time="2025-01-29T11:30:03.461821304Z" level=info msg="RemovePodSandbox \"cba1a3be38ef9b176260629671a5a04d891226b450a37875f671804bc183fe23\" returns successfully" Jan 29 11:30:03.462001 containerd[1543]: time="2025-01-29T11:30:03.461985613Z" level=info msg="StopPodSandbox for \"9ab35698abedf4bcb0d0ba8167d964d1f85e29f78a91706d03070270e09dba42\"" Jan 29 11:30:03.462034 containerd[1543]: time="2025-01-29T11:30:03.462027343Z" level=info msg="TearDown network for sandbox \"9ab35698abedf4bcb0d0ba8167d964d1f85e29f78a91706d03070270e09dba42\" successfully" Jan 29 11:30:03.462034 containerd[1543]: time="2025-01-29T11:30:03.462033116Z" level=info msg="StopPodSandbox for \"9ab35698abedf4bcb0d0ba8167d964d1f85e29f78a91706d03070270e09dba42\" returns successfully" Jan 29 11:30:03.462161 containerd[1543]: time="2025-01-29T11:30:03.462146244Z" level=info msg="RemovePodSandbox for \"9ab35698abedf4bcb0d0ba8167d964d1f85e29f78a91706d03070270e09dba42\"" Jan 29 11:30:03.462161 containerd[1543]: time="2025-01-29T11:30:03.462156405Z" level=info msg="Forcibly stopping sandbox \"9ab35698abedf4bcb0d0ba8167d964d1f85e29f78a91706d03070270e09dba42\"" Jan 29 11:30:03.462196 containerd[1543]: time="2025-01-29T11:30:03.462182363Z" level=info msg="TearDown network for sandbox \"9ab35698abedf4bcb0d0ba8167d964d1f85e29f78a91706d03070270e09dba42\" successfully" Jan 29 11:30:03.463278 containerd[1543]: time="2025-01-29T11:30:03.463263994Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"9ab35698abedf4bcb0d0ba8167d964d1f85e29f78a91706d03070270e09dba42\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 29 11:30:03.463308 containerd[1543]: time="2025-01-29T11:30:03.463284066Z" level=info msg="RemovePodSandbox \"9ab35698abedf4bcb0d0ba8167d964d1f85e29f78a91706d03070270e09dba42\" returns successfully" Jan 29 11:30:08.445042 systemd[1]: run-containerd-runc-k8s.io-2489e7eedc570faf06550b0aca6010a9aab56417644d4058fb8f13713084337b-runc.eXGxby.mount: Deactivated successfully. Jan 29 11:30:14.525086 systemd[1]: run-containerd-runc-k8s.io-81de533c6f9914b93a14f90b22f4645f796f7aee951221a33a2072c9db75a2c9-runc.iC2hKf.mount: Deactivated successfully. Jan 29 11:30:38.085347 systemd[1]: Started sshd@7-139.178.70.104:22-139.178.89.65:33340.service - OpenSSH per-connection server daemon (139.178.89.65:33340). Jan 29 11:30:38.351846 sshd[5508]: Accepted publickey for core from 139.178.89.65 port 33340 ssh2: RSA SHA256:pqivdTXUTfPw/L8mgWhZzkKRG+7xJKY2sjPKJpqIlJ0 Jan 29 11:30:38.354478 sshd-session[5508]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:30:38.358233 systemd-logind[1523]: New session 10 of user core. Jan 29 11:30:38.368796 systemd[1]: Started session-10.scope - Session 10 of User core. Jan 29 11:30:39.357604 sshd[5510]: Connection closed by 139.178.89.65 port 33340 Jan 29 11:30:39.359720 systemd[1]: sshd@7-139.178.70.104:22-139.178.89.65:33340.service: Deactivated successfully. Jan 29 11:30:39.358120 sshd-session[5508]: pam_unix(sshd:session): session closed for user core Jan 29 11:30:39.361277 systemd[1]: session-10.scope: Deactivated successfully. Jan 29 11:30:39.362345 systemd-logind[1523]: Session 10 logged out. Waiting for processes to exit. Jan 29 11:30:39.362941 systemd-logind[1523]: Removed session 10. Jan 29 11:30:44.368584 systemd[1]: Started sshd@8-139.178.70.104:22-139.178.89.65:57314.service - OpenSSH per-connection server daemon (139.178.89.65:57314). Jan 29 11:30:44.463604 sshd[5549]: Accepted publickey for core from 139.178.89.65 port 57314 ssh2: RSA SHA256:pqivdTXUTfPw/L8mgWhZzkKRG+7xJKY2sjPKJpqIlJ0 Jan 29 11:30:44.470449 sshd-session[5549]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:30:44.475551 systemd-logind[1523]: New session 11 of user core. Jan 29 11:30:44.484791 systemd[1]: Started session-11.scope - Session 11 of User core. Jan 29 11:30:44.815434 sshd[5551]: Connection closed by 139.178.89.65 port 57314 Jan 29 11:30:44.820718 systemd[1]: sshd@8-139.178.70.104:22-139.178.89.65:57314.service: Deactivated successfully. Jan 29 11:30:44.815678 sshd-session[5549]: pam_unix(sshd:session): session closed for user core Jan 29 11:30:44.821855 systemd[1]: session-11.scope: Deactivated successfully. Jan 29 11:30:44.822725 systemd-logind[1523]: Session 11 logged out. Waiting for processes to exit. Jan 29 11:30:44.824399 systemd-logind[1523]: Removed session 11. Jan 29 11:30:49.834826 systemd[1]: Started sshd@9-139.178.70.104:22-139.178.89.65:57322.service - OpenSSH per-connection server daemon (139.178.89.65:57322). Jan 29 11:30:50.100760 sshd[5583]: Accepted publickey for core from 139.178.89.65 port 57322 ssh2: RSA SHA256:pqivdTXUTfPw/L8mgWhZzkKRG+7xJKY2sjPKJpqIlJ0 Jan 29 11:30:50.102502 sshd-session[5583]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:30:50.106105 systemd-logind[1523]: New session 12 of user core. Jan 29 11:30:50.114801 systemd[1]: Started session-12.scope - Session 12 of User core. 
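Note on the containerd entries above: each listed sandbox ID goes through the same cleanup sequence (StopPodSandbox, network TearDown, RemovePodSandbox), followed by a forced stop pass in which the status lookup fails with "not found" and the container event is sent with a nil podSandboxStatus, yet removal still returns successfully. The sketch below is illustrative only and is not containerd's implementation or API; every helper and data structure in it is a hypothetical stand-in chosen to mirror the ordering and tolerance-to-"not found" visible in the log.

# Illustrative sketch only: models the stop -> teardown -> force-remove flow
# seen in the containerd log above. All helpers are hypothetical stand-ins,
# not containerd APIs.
import logging

logging.basicConfig(level=logging.INFO, format="%(levelname)s %(message)s")
log = logging.getLogger("sandbox-cleanup")

# Hypothetical in-memory state standing in for the sandbox store / CNI layer.
sandbox_store = {}           # sandbox_id -> status dict (empty here: already gone)
network_attachments = set()  # sandbox_ids with a network namespace still attached


def tear_down_network(sandbox_id: str) -> None:
    network_attachments.discard(sandbox_id)  # idempotent, like a repeated TearDown
    log.info('TearDown network for sandbox "%s" successfully', sandbox_id)


def stop_pod_sandbox(sandbox_id: str) -> None:
    log.info('StopPodSandbox for "%s"', sandbox_id)
    tear_down_network(sandbox_id)
    log.info('StopPodSandbox for "%s" returns successfully', sandbox_id)


def remove_pod_sandbox(sandbox_id: str, force: bool = False) -> None:
    log.info('RemovePodSandbox for "%s"', sandbox_id)
    if force:
        log.info('Forcibly stopping sandbox "%s"', sandbox_id)
        tear_down_network(sandbox_id)
    # The status lookup can fail after an earlier cleanup or reboot; the log
    # shows containerd warning about this and emitting the event with a nil
    # status rather than aborting the removal.
    if sandbox_store.get(sandbox_id) is None:
        log.warning('Failed to get podSandbox status for "%s": not found. '
                    'Sending the event with nil podSandboxStatus.', sandbox_id)
    sandbox_store.pop(sandbox_id, None)
    log.info('RemovePodSandbox "%s" returns successfully', sandbox_id)


if __name__ == "__main__":
    # One of the sandbox IDs from the log, used purely as sample input.
    sid = "9ab35698abedf4bcb0d0ba8167d964d1f85e29f78a91706d03070270e09dba42"
    stop_pod_sandbox(sid)
    remove_pod_sandbox(sid, force=True)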
Jan 29 11:30:50.272939 sshd[5585]: Connection closed by 139.178.89.65 port 57322 Jan 29 11:30:50.289247 sshd-session[5583]: pam_unix(sshd:session): session closed for user core Jan 29 11:30:50.315256 systemd[1]: sshd@9-139.178.70.104:22-139.178.89.65:57322.service: Deactivated successfully. Jan 29 11:30:50.316533 systemd[1]: session-12.scope: Deactivated successfully. Jan 29 11:30:50.317611 systemd-logind[1523]: Session 12 logged out. Waiting for processes to exit. Jan 29 11:30:50.318271 systemd-logind[1523]: Removed session 12. Jan 29 11:30:55.283533 systemd[1]: Started sshd@10-139.178.70.104:22-139.178.89.65:46946.service - OpenSSH per-connection server daemon (139.178.89.65:46946). Jan 29 11:30:55.424279 sshd[5596]: Accepted publickey for core from 139.178.89.65 port 46946 ssh2: RSA SHA256:pqivdTXUTfPw/L8mgWhZzkKRG+7xJKY2sjPKJpqIlJ0 Jan 29 11:30:55.425389 sshd-session[5596]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:30:55.428743 systemd-logind[1523]: New session 13 of user core. Jan 29 11:30:55.432771 systemd[1]: Started session-13.scope - Session 13 of User core. Jan 29 11:30:55.709064 sshd[5598]: Connection closed by 139.178.89.65 port 46946 Jan 29 11:30:55.709364 sshd-session[5596]: pam_unix(sshd:session): session closed for user core Jan 29 11:30:55.717231 systemd[1]: Started sshd@11-139.178.70.104:22-139.178.89.65:46956.service - OpenSSH per-connection server daemon (139.178.89.65:46956). Jan 29 11:30:55.728911 systemd[1]: sshd@10-139.178.70.104:22-139.178.89.65:46946.service: Deactivated successfully. Jan 29 11:30:55.730359 systemd[1]: session-13.scope: Deactivated successfully. Jan 29 11:30:55.731039 systemd-logind[1523]: Session 13 logged out. Waiting for processes to exit. Jan 29 11:30:55.731697 systemd-logind[1523]: Removed session 13. Jan 29 11:30:55.916882 sshd[5608]: Accepted publickey for core from 139.178.89.65 port 46956 ssh2: RSA SHA256:pqivdTXUTfPw/L8mgWhZzkKRG+7xJKY2sjPKJpqIlJ0 Jan 29 11:30:55.917726 sshd-session[5608]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:30:55.920553 systemd-logind[1523]: New session 14 of user core. Jan 29 11:30:55.928773 systemd[1]: Started session-14.scope - Session 14 of User core. Jan 29 11:30:56.743560 sshd[5612]: Connection closed by 139.178.89.65 port 46956 Jan 29 11:30:56.750016 systemd[1]: Started sshd@12-139.178.70.104:22-139.178.89.65:46964.service - OpenSSH per-connection server daemon (139.178.89.65:46964). Jan 29 11:30:56.759589 sshd-session[5608]: pam_unix(sshd:session): session closed for user core Jan 29 11:30:56.801740 systemd[1]: sshd@11-139.178.70.104:22-139.178.89.65:46956.service: Deactivated successfully. Jan 29 11:30:56.802827 systemd[1]: session-14.scope: Deactivated successfully. Jan 29 11:30:56.803632 systemd-logind[1523]: Session 14 logged out. Waiting for processes to exit. Jan 29 11:30:56.804243 systemd-logind[1523]: Removed session 14. Jan 29 11:30:56.870571 sshd[5619]: Accepted publickey for core from 139.178.89.65 port 46964 ssh2: RSA SHA256:pqivdTXUTfPw/L8mgWhZzkKRG+7xJKY2sjPKJpqIlJ0 Jan 29 11:30:56.871407 sshd-session[5619]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:30:56.875870 systemd-logind[1523]: New session 15 of user core. Jan 29 11:30:56.882766 systemd[1]: Started session-15.scope - Session 15 of User core. 
Jan 29 11:30:57.181994 sshd[5623]: Connection closed by 139.178.89.65 port 46964 Jan 29 11:30:57.182453 sshd-session[5619]: pam_unix(sshd:session): session closed for user core Jan 29 11:30:57.184590 systemd[1]: sshd@12-139.178.70.104:22-139.178.89.65:46964.service: Deactivated successfully. Jan 29 11:30:57.185701 systemd[1]: session-15.scope: Deactivated successfully. Jan 29 11:30:57.186183 systemd-logind[1523]: Session 15 logged out. Waiting for processes to exit. Jan 29 11:30:57.186728 systemd-logind[1523]: Removed session 15. Jan 29 11:31:02.195134 systemd[1]: Started sshd@13-139.178.70.104:22-139.178.89.65:52250.service - OpenSSH per-connection server daemon (139.178.89.65:52250). Jan 29 11:31:02.230899 sshd[5649]: Accepted publickey for core from 139.178.89.65 port 52250 ssh2: RSA SHA256:pqivdTXUTfPw/L8mgWhZzkKRG+7xJKY2sjPKJpqIlJ0 Jan 29 11:31:02.231760 sshd-session[5649]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:31:02.234234 systemd-logind[1523]: New session 16 of user core. Jan 29 11:31:02.241760 systemd[1]: Started session-16.scope - Session 16 of User core. Jan 29 11:31:02.366157 sshd[5651]: Connection closed by 139.178.89.65 port 52250 Jan 29 11:31:02.370371 sshd-session[5649]: pam_unix(sshd:session): session closed for user core Jan 29 11:31:02.420641 systemd[1]: sshd@13-139.178.70.104:22-139.178.89.65:52250.service: Deactivated successfully. Jan 29 11:31:02.421933 systemd[1]: session-16.scope: Deactivated successfully. Jan 29 11:31:02.422518 systemd-logind[1523]: Session 16 logged out. Waiting for processes to exit. Jan 29 11:31:02.423299 systemd-logind[1523]: Removed session 16. Jan 29 11:31:07.376434 systemd[1]: Started sshd@14-139.178.70.104:22-139.178.89.65:52264.service - OpenSSH per-connection server daemon (139.178.89.65:52264). Jan 29 11:31:07.442210 sshd[5664]: Accepted publickey for core from 139.178.89.65 port 52264 ssh2: RSA SHA256:pqivdTXUTfPw/L8mgWhZzkKRG+7xJKY2sjPKJpqIlJ0 Jan 29 11:31:07.442801 sshd-session[5664]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:31:07.445226 systemd-logind[1523]: New session 17 of user core. Jan 29 11:31:07.447724 systemd[1]: Started session-17.scope - Session 17 of User core. Jan 29 11:31:07.575262 sshd[5666]: Connection closed by 139.178.89.65 port 52264 Jan 29 11:31:07.576215 sshd-session[5664]: pam_unix(sshd:session): session closed for user core Jan 29 11:31:07.581244 systemd[1]: sshd@14-139.178.70.104:22-139.178.89.65:52264.service: Deactivated successfully. Jan 29 11:31:07.582374 systemd[1]: session-17.scope: Deactivated successfully. Jan 29 11:31:07.582868 systemd-logind[1523]: Session 17 logged out. Waiting for processes to exit. Jan 29 11:31:07.584807 systemd[1]: Started sshd@15-139.178.70.104:22-139.178.89.65:52274.service - OpenSSH per-connection server daemon (139.178.89.65:52274). Jan 29 11:31:07.585736 systemd-logind[1523]: Removed session 17. Jan 29 11:31:07.625320 sshd[5677]: Accepted publickey for core from 139.178.89.65 port 52274 ssh2: RSA SHA256:pqivdTXUTfPw/L8mgWhZzkKRG+7xJKY2sjPKJpqIlJ0 Jan 29 11:31:07.626580 sshd-session[5677]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:31:07.629424 systemd-logind[1523]: New session 18 of user core. Jan 29 11:31:07.632708 systemd[1]: Started session-18.scope - Session 18 of User core. 
Jan 29 11:31:08.430282 sshd[5679]: Connection closed by 139.178.89.65 port 52274 Jan 29 11:31:08.432797 sshd-session[5677]: pam_unix(sshd:session): session closed for user core Jan 29 11:31:08.439793 systemd[1]: sshd@15-139.178.70.104:22-139.178.89.65:52274.service: Deactivated successfully. Jan 29 11:31:08.441901 systemd[1]: session-18.scope: Deactivated successfully. Jan 29 11:31:08.444020 systemd-logind[1523]: Session 18 logged out. Waiting for processes to exit. Jan 29 11:31:08.451901 systemd[1]: Started sshd@16-139.178.70.104:22-139.178.89.65:52278.service - OpenSSH per-connection server daemon (139.178.89.65:52278). Jan 29 11:31:08.455096 systemd-logind[1523]: Removed session 18. Jan 29 11:31:08.475125 systemd[1]: run-containerd-runc-k8s.io-2489e7eedc570faf06550b0aca6010a9aab56417644d4058fb8f13713084337b-runc.NzC2mq.mount: Deactivated successfully. Jan 29 11:31:08.574391 sshd[5688]: Accepted publickey for core from 139.178.89.65 port 52278 ssh2: RSA SHA256:pqivdTXUTfPw/L8mgWhZzkKRG+7xJKY2sjPKJpqIlJ0 Jan 29 11:31:08.577253 sshd-session[5688]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:31:08.581558 systemd-logind[1523]: New session 19 of user core. Jan 29 11:31:08.585713 systemd[1]: Started session-19.scope - Session 19 of User core. Jan 29 11:31:09.625246 sshd[5713]: Connection closed by 139.178.89.65 port 52278 Jan 29 11:31:09.625835 sshd-session[5688]: pam_unix(sshd:session): session closed for user core Jan 29 11:31:09.638878 systemd[1]: Started sshd@17-139.178.70.104:22-139.178.89.65:52288.service - OpenSSH per-connection server daemon (139.178.89.65:52288). Jan 29 11:31:09.639187 systemd[1]: sshd@16-139.178.70.104:22-139.178.89.65:52278.service: Deactivated successfully. Jan 29 11:31:09.641927 systemd[1]: session-19.scope: Deactivated successfully. Jan 29 11:31:09.643539 systemd-logind[1523]: Session 19 logged out. Waiting for processes to exit. Jan 29 11:31:09.645422 systemd-logind[1523]: Removed session 19. Jan 29 11:31:09.686629 sshd[5727]: Accepted publickey for core from 139.178.89.65 port 52288 ssh2: RSA SHA256:pqivdTXUTfPw/L8mgWhZzkKRG+7xJKY2sjPKJpqIlJ0 Jan 29 11:31:09.686554 sshd-session[5727]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:31:09.690872 systemd-logind[1523]: New session 20 of user core. Jan 29 11:31:09.696179 systemd[1]: Started session-20.scope - Session 20 of User core. Jan 29 11:31:10.261991 sshd[5732]: Connection closed by 139.178.89.65 port 52288 Jan 29 11:31:10.262219 sshd-session[5727]: pam_unix(sshd:session): session closed for user core Jan 29 11:31:10.268621 systemd[1]: sshd@17-139.178.70.104:22-139.178.89.65:52288.service: Deactivated successfully. Jan 29 11:31:10.271463 systemd[1]: session-20.scope: Deactivated successfully. Jan 29 11:31:10.273047 systemd-logind[1523]: Session 20 logged out. Waiting for processes to exit. Jan 29 11:31:10.278892 systemd[1]: Started sshd@18-139.178.70.104:22-139.178.89.65:52290.service - OpenSSH per-connection server daemon (139.178.89.65:52290). Jan 29 11:31:10.280851 systemd-logind[1523]: Removed session 20. Jan 29 11:31:10.312706 sshd[5743]: Accepted publickey for core from 139.178.89.65 port 52290 ssh2: RSA SHA256:pqivdTXUTfPw/L8mgWhZzkKRG+7xJKY2sjPKJpqIlJ0 Jan 29 11:31:10.313625 sshd-session[5743]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:31:10.317008 systemd-logind[1523]: New session 21 of user core. 
Jan 29 11:31:10.323735 systemd[1]: Started session-21.scope - Session 21 of User core. Jan 29 11:31:10.424515 sshd[5745]: Connection closed by 139.178.89.65 port 52290 Jan 29 11:31:10.424212 sshd-session[5743]: pam_unix(sshd:session): session closed for user core Jan 29 11:31:10.425818 systemd[1]: sshd@18-139.178.70.104:22-139.178.89.65:52290.service: Deactivated successfully. Jan 29 11:31:10.427037 systemd[1]: session-21.scope: Deactivated successfully. Jan 29 11:31:10.428100 systemd-logind[1523]: Session 21 logged out. Waiting for processes to exit. Jan 29 11:31:10.428919 systemd-logind[1523]: Removed session 21. Jan 29 11:31:15.433861 systemd[1]: Started sshd@19-139.178.70.104:22-139.178.89.65:38724.service - OpenSSH per-connection server daemon (139.178.89.65:38724). Jan 29 11:31:15.651813 sshd[5813]: Accepted publickey for core from 139.178.89.65 port 38724 ssh2: RSA SHA256:pqivdTXUTfPw/L8mgWhZzkKRG+7xJKY2sjPKJpqIlJ0 Jan 29 11:31:15.659562 sshd-session[5813]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:31:15.662974 systemd-logind[1523]: New session 22 of user core. Jan 29 11:31:15.668730 systemd[1]: Started session-22.scope - Session 22 of User core. Jan 29 11:31:16.255143 sshd[5815]: Connection closed by 139.178.89.65 port 38724 Jan 29 11:31:16.255771 sshd-session[5813]: pam_unix(sshd:session): session closed for user core Jan 29 11:31:16.257868 systemd-logind[1523]: Session 22 logged out. Waiting for processes to exit. Jan 29 11:31:16.258048 systemd[1]: sshd@19-139.178.70.104:22-139.178.89.65:38724.service: Deactivated successfully. Jan 29 11:31:16.258992 systemd[1]: session-22.scope: Deactivated successfully. Jan 29 11:31:16.259432 systemd-logind[1523]: Removed session 22. Jan 29 11:31:21.264526 systemd[1]: Started sshd@20-139.178.70.104:22-139.178.89.65:40102.service - OpenSSH per-connection server daemon (139.178.89.65:40102). Jan 29 11:31:21.306504 sshd[5826]: Accepted publickey for core from 139.178.89.65 port 40102 ssh2: RSA SHA256:pqivdTXUTfPw/L8mgWhZzkKRG+7xJKY2sjPKJpqIlJ0 Jan 29 11:31:21.307723 sshd-session[5826]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:31:21.310425 systemd-logind[1523]: New session 23 of user core. Jan 29 11:31:21.316695 systemd[1]: Started session-23.scope - Session 23 of User core. Jan 29 11:31:21.420070 sshd[5828]: Connection closed by 139.178.89.65 port 40102 Jan 29 11:31:21.419493 sshd-session[5826]: pam_unix(sshd:session): session closed for user core Jan 29 11:31:21.422183 systemd[1]: sshd@20-139.178.70.104:22-139.178.89.65:40102.service: Deactivated successfully. Jan 29 11:31:21.422353 systemd-logind[1523]: Session 23 logged out. Waiting for processes to exit. Jan 29 11:31:21.424217 systemd[1]: session-23.scope: Deactivated successfully. Jan 29 11:31:21.425249 systemd-logind[1523]: Removed session 23. Jan 29 11:31:26.427538 systemd[1]: Started sshd@21-139.178.70.104:22-139.178.89.65:40106.service - OpenSSH per-connection server daemon (139.178.89.65:40106). Jan 29 11:31:26.515151 sshd[5839]: Accepted publickey for core from 139.178.89.65 port 40106 ssh2: RSA SHA256:pqivdTXUTfPw/L8mgWhZzkKRG+7xJKY2sjPKJpqIlJ0 Jan 29 11:31:26.516023 sshd-session[5839]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:31:26.518454 systemd-logind[1523]: New session 24 of user core. Jan 29 11:31:26.523714 systemd[1]: Started session-24.scope - Session 24 of User core. 
Jan 29 11:31:26.684192 sshd[5841]: Connection closed by 139.178.89.65 port 40106 Jan 29 11:31:26.684643 sshd-session[5839]: pam_unix(sshd:session): session closed for user core Jan 29 11:31:26.686703 systemd[1]: sshd@21-139.178.70.104:22-139.178.89.65:40106.service: Deactivated successfully. Jan 29 11:31:26.687817 systemd[1]: session-24.scope: Deactivated successfully. Jan 29 11:31:26.688254 systemd-logind[1523]: Session 24 logged out. Waiting for processes to exit. Jan 29 11:31:26.688811 systemd-logind[1523]: Removed session 24.
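Note on the SSH entries above: each connection is handled by a per-connection sshd@... service, and systemd-logind brackets the login with a matching "New session N of user core." / "Removed session N." pair. The sketch below is a minimal, illustrative way to pair those two journal lines and estimate session durations; it assumes the single-line "Jan 29 HH:MM:SS.ffffff unit[pid]: message" layout used throughout this log, and the regexes, function names, and hard-coded year are assumptions made for the example, not part of any tool shown in the log.

# Illustrative sketch only: pairs the "New session N" and "Removed session N."
# systemd-logind lines shown above to estimate SSH session durations.
import re
from datetime import datetime

OPEN_RE = re.compile(
    r"^(?P<ts>\w{3} \d{2} \d{2}:\d{2}:\d{2}\.\d+) systemd-logind\[\d+\]: "
    r"New session (?P<sid>\d+) of user (?P<user>\S+)\.")
CLOSE_RE = re.compile(
    r"^(?P<ts>\w{3} \d{2} \d{2}:\d{2}:\d{2}\.\d+) systemd-logind\[\d+\]: "
    r"Removed session (?P<sid>\d+)\.")


def parse_ts(ts: str, year: int = 2025) -> datetime:
    # The journal lines carry no year, so one is supplied explicitly (assumption).
    return datetime.strptime(f"{year} {ts}", "%Y %b %d %H:%M:%S.%f")


def session_durations(lines):
    opened = {}
    for line in lines:
        m = OPEN_RE.match(line)
        if m:
            opened[m["sid"]] = parse_ts(m["ts"])
            continue
        m = CLOSE_RE.match(line)
        if m and m["sid"] in opened:
            yield m["sid"], parse_ts(m["ts"]) - opened.pop(m["sid"])


if __name__ == "__main__":
    # Sample lines copied from the log above.
    sample = [
        "Jan 29 11:30:38.358233 systemd-logind[1523]: New session 10 of user core.",
        "Jan 29 11:30:39.362941 systemd-logind[1523]: Removed session 10.",
    ]
    for sid, dur in session_durations(sample):
        print(f"session {sid}: {dur.total_seconds():.3f}s")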