Dec 13 13:19:23.955023 kernel: Linux version 6.6.65-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241116 p3) 14.2.1 20241116, GNU ld (Gentoo 2.42 p6) 2.42.0) #1 SMP PREEMPT_DYNAMIC Fri Dec 13 11:52:04 -00 2024
Dec 13 13:19:23.955068 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=7e85177266c631d417c820ba09a3204c451316d6fcf9e4e21017322aee9df3f4
Dec 13 13:19:23.955084 kernel: BIOS-provided physical RAM map:
Dec 13 13:19:23.955093 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Dec 13 13:19:23.955101 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000007fffff] usable
Dec 13 13:19:23.955109 kernel: BIOS-e820: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
Dec 13 13:19:23.955119 kernel: BIOS-e820: [mem 0x0000000000808000-0x000000000080afff] usable
Dec 13 13:19:23.955128 kernel: BIOS-e820: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
Dec 13 13:19:23.955137 kernel: BIOS-e820: [mem 0x000000000080c000-0x0000000000810fff] usable
Dec 13 13:19:23.955146 kernel: BIOS-e820: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS
Dec 13 13:19:23.955166 kernel: BIOS-e820: [mem 0x0000000000900000-0x000000009bd3efff] usable
Dec 13 13:19:23.955180 kernel: BIOS-e820: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved
Dec 13 13:19:23.955188 kernel: BIOS-e820: [mem 0x000000009be00000-0x000000009c8ecfff] usable
Dec 13 13:19:23.955196 kernel: BIOS-e820: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved
Dec 13 13:19:23.955210 kernel: BIOS-e820: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data
Dec 13 13:19:23.955220 kernel: BIOS-e820: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
Dec 13 13:19:23.955232 kernel: BIOS-e820: [mem 0x000000009cbff000-0x000000009ce91fff] usable
Dec 13 13:19:23.955241 kernel: BIOS-e820: [mem 0x000000009ce92000-0x000000009ce95fff] reserved
Dec 13 13:19:23.955250 kernel: BIOS-e820: [mem 0x000000009ce96000-0x000000009ce97fff] ACPI NVS
Dec 13 13:19:23.955259 kernel: BIOS-e820: [mem 0x000000009ce98000-0x000000009cedbfff] usable
Dec 13 13:19:23.955268 kernel: BIOS-e820: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved
Dec 13 13:19:23.955277 kernel: BIOS-e820: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
Dec 13 13:19:23.955287 kernel: BIOS-e820: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Dec 13 13:19:23.955296 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Dec 13 13:19:23.955305 kernel: BIOS-e820: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved
Dec 13 13:19:23.955314 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Dec 13 13:19:23.955322 kernel: NX (Execute Disable) protection: active
Dec 13 13:19:23.955336 kernel: APIC: Static calls initialized
Dec 13 13:19:23.955345 kernel: e820: update [mem 0x9b351018-0x9b35ac57] usable ==> usable
Dec 13 13:19:23.955355 kernel: e820: update [mem 0x9b351018-0x9b35ac57] usable ==> usable
Dec 13 13:19:23.955364 kernel: e820: update [mem 0x9b314018-0x9b350e57] usable ==> usable
Dec 13 13:19:23.955373 kernel: e820: update [mem 0x9b314018-0x9b350e57] usable ==> usable
Dec 13 13:19:23.955382 kernel: extended physical RAM map:
Dec 13 13:19:23.955391 kernel: reserve setup_data: [mem 0x0000000000000000-0x000000000009ffff] usable
Dec 13 13:19:23.955400 kernel: reserve setup_data: [mem 0x0000000000100000-0x00000000007fffff] usable
Dec 13 13:19:23.955410 kernel: reserve setup_data: [mem 0x0000000000800000-0x0000000000807fff] ACPI NVS
Dec 13 13:19:23.955419 kernel: reserve setup_data: [mem 0x0000000000808000-0x000000000080afff] usable
Dec 13 13:19:23.955428 kernel: reserve setup_data: [mem 0x000000000080b000-0x000000000080bfff] ACPI NVS
Dec 13 13:19:23.955437 kernel: reserve setup_data: [mem 0x000000000080c000-0x0000000000810fff] usable
Dec 13 13:19:23.955451 kernel: reserve setup_data: [mem 0x0000000000811000-0x00000000008fffff] ACPI NVS
Dec 13 13:19:23.955514 kernel: reserve setup_data: [mem 0x0000000000900000-0x000000009b314017] usable
Dec 13 13:19:23.955522 kernel: reserve setup_data: [mem 0x000000009b314018-0x000000009b350e57] usable
Dec 13 13:19:23.955529 kernel: reserve setup_data: [mem 0x000000009b350e58-0x000000009b351017] usable
Dec 13 13:19:23.955537 kernel: reserve setup_data: [mem 0x000000009b351018-0x000000009b35ac57] usable
Dec 13 13:19:23.955544 kernel: reserve setup_data: [mem 0x000000009b35ac58-0x000000009bd3efff] usable
Dec 13 13:19:23.955555 kernel: reserve setup_data: [mem 0x000000009bd3f000-0x000000009bdfffff] reserved
Dec 13 13:19:23.955562 kernel: reserve setup_data: [mem 0x000000009be00000-0x000000009c8ecfff] usable
Dec 13 13:19:23.955569 kernel: reserve setup_data: [mem 0x000000009c8ed000-0x000000009cb6cfff] reserved
Dec 13 13:19:23.955577 kernel: reserve setup_data: [mem 0x000000009cb6d000-0x000000009cb7efff] ACPI data
Dec 13 13:19:23.955584 kernel: reserve setup_data: [mem 0x000000009cb7f000-0x000000009cbfefff] ACPI NVS
Dec 13 13:19:23.955591 kernel: reserve setup_data: [mem 0x000000009cbff000-0x000000009ce91fff] usable
Dec 13 13:19:23.955599 kernel: reserve setup_data: [mem 0x000000009ce92000-0x000000009ce95fff] reserved
Dec 13 13:19:23.955606 kernel: reserve setup_data: [mem 0x000000009ce96000-0x000000009ce97fff] ACPI NVS
Dec 13 13:19:23.955613 kernel: reserve setup_data: [mem 0x000000009ce98000-0x000000009cedbfff] usable
Dec 13 13:19:23.955623 kernel: reserve setup_data: [mem 0x000000009cedc000-0x000000009cf5ffff] reserved
Dec 13 13:19:23.955631 kernel: reserve setup_data: [mem 0x000000009cf60000-0x000000009cffffff] ACPI NVS
Dec 13 13:19:23.955638 kernel: reserve setup_data: [mem 0x00000000e0000000-0x00000000efffffff] reserved
Dec 13 13:19:23.955648 kernel: reserve setup_data: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Dec 13 13:19:23.955655 kernel: reserve setup_data: [mem 0x00000000ffc00000-0x00000000ffffffff] reserved
Dec 13 13:19:23.955665 kernel: reserve setup_data: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Dec 13 13:19:23.955672 kernel: efi: EFI v2.7 by EDK II
Dec 13 13:19:23.955680 kernel: efi: SMBIOS=0x9c988000 ACPI=0x9cb7e000 ACPI 2.0=0x9cb7e014 MEMATTR=0x9ba0d198 RNG=0x9cb73018
Dec 13 13:19:23.955687 kernel: random: crng init done
Dec 13 13:19:23.955695 kernel: efi: Remove mem142: MMIO range=[0xffc00000-0xffffffff] (4MB) from e820 map
Dec 13 13:19:23.955702 kernel: e820: remove [mem 0xffc00000-0xffffffff] reserved
Dec 13 13:19:23.955712 kernel: secureboot: Secure boot disabled
Dec 13 13:19:23.955722 kernel: SMBIOS 2.8 present.
Dec 13 13:19:23.955730 kernel: DMI: QEMU Standard PC (Q35 + ICH9, 2009), BIOS unknown 02/02/2022
Dec 13 13:19:23.955737 kernel: Hypervisor detected: KVM
Dec 13 13:19:23.955744 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Dec 13 13:19:23.955752 kernel: kvm-clock: using sched offset of 3795254872 cycles
Dec 13 13:19:23.955760 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Dec 13 13:19:23.955767 kernel: tsc: Detected 2794.748 MHz processor
Dec 13 13:19:23.955791 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Dec 13 13:19:23.955799 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Dec 13 13:19:23.955807 kernel: last_pfn = 0x9cedc max_arch_pfn = 0x400000000
Dec 13 13:19:23.955818 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs
Dec 13 13:19:23.955828 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Dec 13 13:19:23.955846 kernel: Using GB pages for direct mapping
Dec 13 13:19:23.955856 kernel: ACPI: Early table checksum verification disabled
Dec 13 13:19:23.955865 kernel: ACPI: RSDP 0x000000009CB7E014 000024 (v02 BOCHS )
Dec 13 13:19:23.955874 kernel: ACPI: XSDT 0x000000009CB7D0E8 000054 (v01 BOCHS BXPC 00000001 01000013)
Dec 13 13:19:23.955883 kernel: ACPI: FACP 0x000000009CB79000 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 13:19:23.955891 kernel: ACPI: DSDT 0x000000009CB7A000 0021A8 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 13:19:23.955898 kernel: ACPI: FACS 0x000000009CBDD000 000040
Dec 13 13:19:23.955910 kernel: ACPI: APIC 0x000000009CB78000 000090 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 13:19:23.955917 kernel: ACPI: HPET 0x000000009CB77000 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 13:19:23.955925 kernel: ACPI: MCFG 0x000000009CB76000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 13:19:23.955933 kernel: ACPI: WAET 0x000000009CB75000 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 13:19:23.955940 kernel: ACPI: BGRT 0x000000009CB74000 000038 (v01 INTEL EDK2 00000002 01000013)
Dec 13 13:19:23.955950 kernel: ACPI: Reserving FACP table memory at [mem 0x9cb79000-0x9cb790f3]
Dec 13 13:19:23.955958 kernel: ACPI: Reserving DSDT table memory at [mem 0x9cb7a000-0x9cb7c1a7]
Dec 13 13:19:23.955965 kernel: ACPI: Reserving FACS table memory at [mem 0x9cbdd000-0x9cbdd03f]
Dec 13 13:19:23.955973 kernel: ACPI: Reserving APIC table memory at [mem 0x9cb78000-0x9cb7808f]
Dec 13 13:19:23.955984 kernel: ACPI: Reserving HPET table memory at [mem 0x9cb77000-0x9cb77037]
Dec 13 13:19:23.955994 kernel: ACPI: Reserving MCFG table memory at [mem 0x9cb76000-0x9cb7603b]
Dec 13 13:19:23.956004 kernel: ACPI: Reserving WAET table memory at [mem 0x9cb75000-0x9cb75027]
Dec 13 13:19:23.956014 kernel: ACPI: Reserving BGRT table memory at [mem 0x9cb74000-0x9cb74037]
Dec 13 13:19:23.956022 kernel: No NUMA configuration found
Dec 13 13:19:23.956030 kernel: Faking a node at [mem 0x0000000000000000-0x000000009cedbfff]
Dec 13 13:19:23.956037 kernel: NODE_DATA(0) allocated [mem 0x9ce3a000-0x9ce3ffff]
Dec 13 13:19:23.956045 kernel: Zone ranges:
Dec 13 13:19:23.956052 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Dec 13 13:19:23.956063 kernel: DMA32 [mem 0x0000000001000000-0x000000009cedbfff]
Dec 13 13:19:23.956074 kernel: Normal empty
Dec 13 13:19:23.956082 kernel: Movable zone start for each node
Dec 13 13:19:23.956089 kernel: Early memory node ranges
Dec 13 13:19:23.956097 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Dec 13 13:19:23.956104 kernel: node 0: [mem 0x0000000000100000-0x00000000007fffff]
Dec 13 13:19:23.956112 kernel: node 0: [mem 0x0000000000808000-0x000000000080afff]
Dec 13 13:19:23.956119 kernel: node 0: [mem 0x000000000080c000-0x0000000000810fff]
Dec 13 13:19:23.956127 kernel: node 0: [mem 0x0000000000900000-0x000000009bd3efff]
Dec 13 13:19:23.956137 kernel: node 0: [mem 0x000000009be00000-0x000000009c8ecfff]
Dec 13 13:19:23.956144 kernel: node 0: [mem 0x000000009cbff000-0x000000009ce91fff]
Dec 13 13:19:23.956151 kernel: node 0: [mem 0x000000009ce98000-0x000000009cedbfff]
Dec 13 13:19:23.956159 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000009cedbfff]
Dec 13 13:19:23.956166 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Dec 13 13:19:23.956174 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Dec 13 13:19:23.956190 kernel: On node 0, zone DMA: 8 pages in unavailable ranges
Dec 13 13:19:23.956200 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Dec 13 13:19:23.956207 kernel: On node 0, zone DMA: 239 pages in unavailable ranges
Dec 13 13:19:23.956215 kernel: On node 0, zone DMA32: 193 pages in unavailable ranges
Dec 13 13:19:23.956223 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges
Dec 13 13:19:23.956234 kernel: On node 0, zone DMA32: 6 pages in unavailable ranges
Dec 13 13:19:23.956244 kernel: On node 0, zone DMA32: 12580 pages in unavailable ranges
Dec 13 13:19:23.956252 kernel: ACPI: PM-Timer IO Port: 0x608
Dec 13 13:19:23.956260 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Dec 13 13:19:23.956268 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Dec 13 13:19:23.956275 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Dec 13 13:19:23.956286 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Dec 13 13:19:23.956294 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Dec 13 13:19:23.956301 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Dec 13 13:19:23.956309 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Dec 13 13:19:23.956317 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Dec 13 13:19:23.956325 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Dec 13 13:19:23.956332 kernel: TSC deadline timer available
Dec 13 13:19:23.956340 kernel: smpboot: Allowing 4 CPUs, 0 hotplug CPUs
Dec 13 13:19:23.956348 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Dec 13 13:19:23.956358 kernel: kvm-guest: KVM setup pv remote TLB flush
Dec 13 13:19:23.956366 kernel: kvm-guest: setup PV sched yield
Dec 13 13:19:23.956381 kernel: [mem 0x9d000000-0xdfffffff] available for PCI devices
Dec 13 13:19:23.956403 kernel: Booting paravirtualized kernel on KVM
Dec 13 13:19:23.956420 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Dec 13 13:19:23.956434 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:4 nr_cpu_ids:4 nr_node_ids:1
Dec 13 13:19:23.956451 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u524288
Dec 13 13:19:23.956461 kernel: pcpu-alloc: s197032 r8192 d32344 u524288 alloc=1*2097152
Dec 13 13:19:23.956479 kernel: pcpu-alloc: [0] 0 1 2 3
Dec 13 13:19:23.956514 kernel: kvm-guest: PV spinlocks enabled
Dec 13 13:19:23.956533 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Dec 13 13:19:23.956557 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=7e85177266c631d417c820ba09a3204c451316d6fcf9e4e21017322aee9df3f4
Dec 13 13:19:23.956572 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Dec 13 13:19:23.956582 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Dec 13 13:19:23.956593 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Dec 13 13:19:23.956603 kernel: Fallback order for Node 0: 0
Dec 13 13:19:23.956614 kernel: Built 1 zonelists, mobility grouping on. Total pages: 629460
Dec 13 13:19:23.956624 kernel: Policy zone: DMA32
Dec 13 13:19:23.956640 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Dec 13 13:19:23.956653 kernel: Memory: 2387720K/2565800K available (14336K kernel code, 2299K rwdata, 22800K rodata, 43328K init, 1748K bss, 177824K reserved, 0K cma-reserved)
Dec 13 13:19:23.956664 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Dec 13 13:19:23.956675 kernel: ftrace: allocating 37874 entries in 148 pages
Dec 13 13:19:23.956685 kernel: ftrace: allocated 148 pages with 3 groups
Dec 13 13:19:23.956698 kernel: Dynamic Preempt: voluntary
Dec 13 13:19:23.956708 kernel: rcu: Preemptible hierarchical RCU implementation.
Dec 13 13:19:23.956727 kernel: rcu: RCU event tracing is enabled.
Dec 13 13:19:23.956741 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Dec 13 13:19:23.956751 kernel: Trampoline variant of Tasks RCU enabled.
Dec 13 13:19:23.956761 kernel: Rude variant of Tasks RCU enabled.
Dec 13 13:19:23.956772 kernel: Tracing variant of Tasks RCU enabled.
Dec 13 13:19:23.956812 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Dec 13 13:19:23.956822 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Dec 13 13:19:23.956832 kernel: NR_IRQS: 33024, nr_irqs: 456, preallocated irqs: 16
Dec 13 13:19:23.956842 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Dec 13 13:19:23.956850 kernel: Console: colour dummy device 80x25
Dec 13 13:19:23.956858 kernel: printk: console [ttyS0] enabled
Dec 13 13:19:23.956911 kernel: ACPI: Core revision 20230628
Dec 13 13:19:23.956921 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Dec 13 13:19:23.956931 kernel: APIC: Switch to symmetric I/O mode setup
Dec 13 13:19:23.956939 kernel: x2apic enabled
Dec 13 13:19:23.956947 kernel: APIC: Switched APIC routing to: physical x2apic
Dec 13 13:19:23.956959 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Dec 13 13:19:23.956967 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Dec 13 13:19:23.956976 kernel: kvm-guest: setup PV IPIs
Dec 13 13:19:23.956985 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Dec 13 13:19:23.957000 kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Dec 13 13:19:23.957011 kernel: Calibrating delay loop (skipped) preset value.. 5589.49 BogoMIPS (lpj=2794748)
Dec 13 13:19:23.957021 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Dec 13 13:19:23.957029 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Dec 13 13:19:23.957036 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Dec 13 13:19:23.957044 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Dec 13 13:19:23.957052 kernel: Spectre V2 : Mitigation: Retpolines
Dec 13 13:19:23.957060 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Dec 13 13:19:23.957069 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Dec 13 13:19:23.957083 kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Dec 13 13:19:23.957093 kernel: RETBleed: Mitigation: untrained return thunk
Dec 13 13:19:23.957104 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Dec 13 13:19:23.957114 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Dec 13 13:19:23.957124 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Dec 13 13:19:23.957139 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Dec 13 13:19:23.957149 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Dec 13 13:19:23.957160 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Dec 13 13:19:23.957174 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Dec 13 13:19:23.957185 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Dec 13 13:19:23.957195 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Dec 13 13:19:23.957206 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Dec 13 13:19:23.957216 kernel: Freeing SMP alternatives memory: 32K
Dec 13 13:19:23.957227 kernel: pid_max: default: 32768 minimum: 301
Dec 13 13:19:23.957237 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Dec 13 13:19:23.957247 kernel: landlock: Up and running.
Dec 13 13:19:23.957257 kernel: SELinux: Initializing.
Dec 13 13:19:23.957272 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Dec 13 13:19:23.957282 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Dec 13 13:19:23.957293 kernel: smpboot: CPU0: AMD EPYC 7402P 24-Core Processor (family: 0x17, model: 0x31, stepping: 0x0)
Dec 13 13:19:23.957303 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Dec 13 13:19:23.957314 kernel: RCU Tasks Rude: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Dec 13 13:19:23.957324 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Dec 13 13:19:23.957335 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Dec 13 13:19:23.957352 kernel: ... version: 0
Dec 13 13:19:23.957362 kernel: ... bit width: 48
Dec 13 13:19:23.957377 kernel: ... generic registers: 6
Dec 13 13:19:23.957388 kernel: ... value mask: 0000ffffffffffff
Dec 13 13:19:23.957398 kernel: ... max period: 00007fffffffffff
Dec 13 13:19:23.957409 kernel: ... fixed-purpose events: 0
Dec 13 13:19:23.957419 kernel: ... event mask: 000000000000003f
Dec 13 13:19:23.957429 kernel: signal: max sigframe size: 1776
Dec 13 13:19:23.957439 kernel: rcu: Hierarchical SRCU implementation.
Dec 13 13:19:23.957450 kernel: rcu: Max phase no-delay instances is 400.
Dec 13 13:19:23.957461 kernel: smp: Bringing up secondary CPUs ...
Dec 13 13:19:23.957485 kernel: smpboot: x86: Booting SMP configuration:
Dec 13 13:19:23.957496 kernel: .... node #0, CPUs: #1 #2 #3
Dec 13 13:19:23.957506 kernel: smp: Brought up 1 node, 4 CPUs
Dec 13 13:19:23.957517 kernel: smpboot: Max logical packages: 1
Dec 13 13:19:23.957527 kernel: smpboot: Total of 4 processors activated (22357.98 BogoMIPS)
Dec 13 13:19:23.957537 kernel: devtmpfs: initialized
Dec 13 13:19:23.957548 kernel: x86/mm: Memory block size: 128MB
Dec 13 13:19:23.957559 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00800000-0x00807fff] (32768 bytes)
Dec 13 13:19:23.957569 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x0080b000-0x0080bfff] (4096 bytes)
Dec 13 13:19:23.957584 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x00811000-0x008fffff] (978944 bytes)
Dec 13 13:19:23.957595 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cb7f000-0x9cbfefff] (524288 bytes)
Dec 13 13:19:23.957605 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9ce96000-0x9ce97fff] (8192 bytes)
Dec 13 13:19:23.957615 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x9cf60000-0x9cffffff] (655360 bytes)
Dec 13 13:19:23.957633 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Dec 13 13:19:23.957669 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Dec 13 13:19:23.957681 kernel: pinctrl core: initialized pinctrl subsystem
Dec 13 13:19:23.957691 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Dec 13 13:19:23.957708 kernel: audit: initializing netlink subsys (disabled)
Dec 13 13:19:23.957729 kernel: audit: type=2000 audit(1734095963.125:1): state=initialized audit_enabled=0 res=1
Dec 13 13:19:23.957738 kernel: thermal_sys: Registered thermal governor 'step_wise'
Dec 13 13:19:23.957750 kernel: thermal_sys: Registered thermal governor 'user_space'
Dec 13 13:19:23.957767 kernel: cpuidle: using governor menu
Dec 13 13:19:23.957835 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Dec 13 13:19:23.957857 kernel: dca service started, version 1.12.1
Dec 13 13:19:23.957873 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xe0000000-0xefffffff] (base 0xe0000000)
Dec 13 13:19:23.957883 kernel: PCI: Using configuration type 1 for base access
Dec 13 13:19:23.957902 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Dec 13 13:19:23.957924 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Dec 13 13:19:23.957936 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Dec 13 13:19:23.957953 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Dec 13 13:19:23.957969 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Dec 13 13:19:23.957986 kernel: ACPI: Added _OSI(Module Device)
Dec 13 13:19:23.958008 kernel: ACPI: Added _OSI(Processor Device)
Dec 13 13:19:23.958028 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Dec 13 13:19:23.958047 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Dec 13 13:19:23.958069 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Dec 13 13:19:23.958101 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Dec 13 13:19:23.958122 kernel: ACPI: Interpreter enabled
Dec 13 13:19:23.958140 kernel: ACPI: PM: (supports S0 S3 S5)
Dec 13 13:19:23.958160 kernel: ACPI: Using IOAPIC for interrupt routing
Dec 13 13:19:23.958179 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Dec 13 13:19:23.958201 kernel: PCI: Using E820 reservations for host bridge windows
Dec 13 13:19:23.958219 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Dec 13 13:19:23.958237 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Dec 13 13:19:23.958556 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Dec 13 13:19:23.958747 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Dec 13 13:19:23.958913 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Dec 13 13:19:23.958945 kernel: PCI host bridge to bus 0000:00
Dec 13 13:19:23.959190 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Dec 13 13:19:23.959360 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Dec 13 13:19:23.959525 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Dec 13 13:19:23.959672 kernel: pci_bus 0000:00: root bus resource [mem 0x9d000000-0xdfffffff window]
Dec 13 13:19:23.959829 kernel: pci_bus 0000:00: root bus resource [mem 0xf0000000-0xfebfffff window]
Dec 13 13:19:23.959963 kernel: pci_bus 0000:00: root bus resource [mem 0x380000000000-0x3807ffffffff window]
Dec 13 13:19:23.960096 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Dec 13 13:19:23.960279 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Dec 13 13:19:23.960500 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000
Dec 13 13:19:23.960676 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xc0000000-0xc0ffffff pref]
Dec 13 13:19:23.960849 kernel: pci 0000:00:01.0: reg 0x18: [mem 0xc1044000-0xc1044fff]
Dec 13 13:19:23.961011 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xffff0000-0xffffffff pref]
Dec 13 13:19:23.961184 kernel: pci 0000:00:01.0: BAR 0: assigned to efifb
Dec 13 13:19:23.961355 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Dec 13 13:19:23.961569 kernel: pci 0000:00:02.0: [1af4:1005] type 00 class 0x00ff00
Dec 13 13:19:23.961712 kernel: pci 0000:00:02.0: reg 0x10: [io 0x6100-0x611f]
Dec 13 13:19:23.962223 kernel: pci 0000:00:02.0: reg 0x14: [mem 0xc1043000-0xc1043fff]
Dec 13 13:19:23.962391 kernel: pci 0000:00:02.0: reg 0x20: [mem 0x380000000000-0x380000003fff 64bit pref]
Dec 13 13:19:23.962615 kernel: pci 0000:00:03.0: [1af4:1001] type 00 class 0x010000
Dec 13 13:19:23.962755 kernel: pci 0000:00:03.0: reg 0x10: [io 0x6000-0x607f]
Dec 13 13:19:23.962964 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc1042000-0xc1042fff]
Dec 13 13:19:23.963166 kernel: pci 0000:00:03.0: reg 0x20: [mem 0x380000004000-0x380000007fff 64bit pref]
Dec 13 13:19:23.963368 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Dec 13 13:19:23.963532 kernel: pci 0000:00:04.0: reg 0x10: [io 0x60e0-0x60ff]
Dec 13 13:19:23.963663 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc1041000-0xc1041fff]
Dec 13 13:19:23.963882 kernel: pci 0000:00:04.0: reg 0x20: [mem 0x380000008000-0x38000000bfff 64bit pref]
Dec 13 13:19:23.964056 kernel: pci 0000:00:04.0: reg 0x30: [mem 0xfffc0000-0xffffffff pref]
Dec 13 13:19:23.964249 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Dec 13 13:19:23.964489 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Dec 13 13:19:23.964647 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Dec 13 13:19:23.964893 kernel: pci 0000:00:1f.2: reg 0x20: [io 0x60c0-0x60df]
Dec 13 13:19:23.965074 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xc1040000-0xc1040fff]
Dec 13 13:19:23.965300 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Dec 13 13:19:23.965450 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x6080-0x60bf]
Dec 13 13:19:23.965463 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Dec 13 13:19:23.965480 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Dec 13 13:19:23.965489 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Dec 13 13:19:23.965503 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Dec 13 13:19:23.965511 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Dec 13 13:19:23.965519 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Dec 13 13:19:23.965527 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Dec 13 13:19:23.965535 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Dec 13 13:19:23.965543 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Dec 13 13:19:23.965551 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Dec 13 13:19:23.965559 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Dec 13 13:19:23.965567 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Dec 13 13:19:23.965579 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Dec 13 13:19:23.965587 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Dec 13 13:19:23.965595 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Dec 13 13:19:23.965603 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Dec 13 13:19:23.965611 kernel: iommu: Default domain type: Translated
Dec 13 13:19:23.965619 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Dec 13 13:19:23.965627 kernel: efivars: Registered efivars operations
Dec 13 13:19:23.965635 kernel: PCI: Using ACPI for IRQ routing
Dec 13 13:19:23.965643 kernel: PCI: pci_cache_line_size set to 64 bytes
Dec 13 13:19:23.965659 kernel: e820: reserve RAM buffer [mem 0x0080b000-0x008fffff]
Dec 13 13:19:23.965669 kernel: e820: reserve RAM buffer [mem 0x00811000-0x008fffff]
Dec 13 13:19:23.965677 kernel: e820: reserve RAM buffer [mem 0x9b314018-0x9bffffff]
Dec 13 13:19:23.965694 kernel: e820: reserve RAM buffer [mem 0x9b351018-0x9bffffff]
Dec 13 13:19:23.965709 kernel: e820: reserve RAM buffer [mem 0x9bd3f000-0x9bffffff]
Dec 13 13:19:23.965721 kernel: e820: reserve RAM buffer [mem 0x9c8ed000-0x9fffffff]
Dec 13 13:19:23.965729 kernel: e820: reserve RAM buffer [mem 0x9ce92000-0x9fffffff]
Dec 13 13:19:23.965737 kernel: e820: reserve RAM buffer [mem 0x9cedc000-0x9fffffff]
Dec 13 13:19:23.965897 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Dec 13 13:19:23.966065 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Dec 13 13:19:23.966232 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Dec 13 13:19:23.966247 kernel: vgaarb: loaded
Dec 13 13:19:23.966258 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Dec 13 13:19:23.966269 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Dec 13 13:19:23.966280 kernel: clocksource: Switched to clocksource kvm-clock
Dec 13 13:19:23.966288 kernel: VFS: Disk quotas dquot_6.6.0
Dec 13 13:19:23.966296 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Dec 13 13:19:23.966318 kernel: pnp: PnP ACPI init
Dec 13 13:19:23.966631 kernel: system 00:05: [mem 0xe0000000-0xefffffff window] has been reserved
Dec 13 13:19:23.966655 kernel: pnp: PnP ACPI: found 6 devices
Dec 13 13:19:23.966667 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Dec 13 13:19:23.966678 kernel: NET: Registered PF_INET protocol family
Dec 13 13:19:23.966719 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Dec 13 13:19:23.966734 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Dec 13 13:19:23.966746 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Dec 13 13:19:23.966761 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Dec 13 13:19:23.966773 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Dec 13 13:19:23.966821 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Dec 13 13:19:23.966832 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Dec 13 13:19:23.966843 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Dec 13 13:19:23.966853 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Dec 13 13:19:23.966865 kernel: NET: Registered PF_XDP protocol family
Dec 13 13:19:23.967049 kernel: pci 0000:00:04.0: can't claim BAR 6 [mem 0xfffc0000-0xffffffff pref]: no compatible bridge window
Dec 13 13:19:23.967203 kernel: pci 0000:00:04.0: BAR 6: assigned [mem 0x9d000000-0x9d03ffff pref]
Dec 13 13:19:23.967346 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Dec 13 13:19:23.967497 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Dec 13 13:19:23.967647 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Dec 13 13:19:23.967898 kernel: pci_bus 0000:00: resource 7 [mem 0x9d000000-0xdfffffff window]
Dec 13 13:19:23.968056 kernel: pci_bus 0000:00: resource 8 [mem 0xf0000000-0xfebfffff window]
Dec 13 13:19:23.968193 kernel: pci_bus 0000:00: resource 9 [mem 0x380000000000-0x3807ffffffff window]
Dec 13 13:19:23.968206 kernel: PCI: CLS 0 bytes, default 64
Dec 13 13:19:23.968215 kernel: Initialise system trusted keyrings
Dec 13 13:19:23.968229 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Dec 13 13:19:23.968238 kernel: Key type asymmetric registered
Dec 13 13:19:23.968246 kernel: Asymmetric key parser 'x509' registered
Dec 13 13:19:23.968255 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Dec 13 13:19:23.968263 kernel: io scheduler mq-deadline registered
Dec 13 13:19:23.968272 kernel: io scheduler kyber registered
Dec 13 13:19:23.968286 kernel: io scheduler bfq registered
Dec 13 13:19:23.968297 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Dec 13 13:19:23.968309 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Dec 13 13:19:23.968325 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Dec 13 13:19:23.968342 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20
Dec 13 13:19:23.968353 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Dec 13 13:19:23.968364 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Dec 13 13:19:23.968375 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Dec 13 13:19:23.968393 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Dec 13 13:19:23.968404 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Dec 13 13:19:23.968607 kernel: rtc_cmos 00:04: RTC can wake from S4
Dec 13 13:19:23.968801 kernel: rtc_cmos 00:04: registered as rtc0
Dec 13 13:19:23.968818 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input1
Dec 13 13:19:23.969010 kernel: rtc_cmos 00:04: setting system clock to 2024-12-13T13:19:23 UTC (1734095963)
Dec 13 13:19:23.969143 kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
Dec 13 13:19:23.969155 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Dec 13 13:19:23.969169 kernel: efifb: probing for efifb
Dec 13 13:19:23.969178 kernel: efifb: framebuffer at 0xc0000000, using 4000k, total 4000k
Dec 13 13:19:23.969186 kernel: efifb: mode is 1280x800x32, linelength=5120, pages=1
Dec 13 13:19:23.969195 kernel: efifb: scrolling: redraw
Dec 13 13:19:23.969203 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Dec 13 13:19:23.969211 kernel: Console: switching to colour frame buffer device 160x50
Dec 13 13:19:23.969220 kernel: fb0: EFI VGA frame buffer device
Dec 13 13:19:23.969228 kernel: pstore: Using crash dump compression: deflate
Dec 13 13:19:23.969236 kernel: pstore: Registered efi_pstore as persistent store backend
Dec 13 13:19:23.969248 kernel: NET: Registered PF_INET6 protocol family
Dec 13 13:19:23.969256 kernel: Segment Routing with IPv6
Dec 13 13:19:23.969264 kernel: In-situ OAM (IOAM) with IPv6
Dec 13 13:19:23.969273 kernel: NET: Registered PF_PACKET protocol family
Dec 13 13:19:23.969281 kernel: Key type dns_resolver registered
Dec 13 13:19:23.969289 kernel: IPI shorthand broadcast: enabled
Dec 13 13:19:23.969298 kernel: sched_clock: Marking stable (1214002876, 189128515)->(1440127273, -36995882)
Dec 13 13:19:23.969306 kernel: registered taskstats version 1
Dec 13 13:19:23.969314 kernel: Loading compiled-in X.509 certificates
Dec 13 13:19:23.969323 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.65-flatcar: 87a680e70013684f1bdd04e047addefc714bd162'
Dec 13 13:19:23.969335 kernel: Key type .fscrypt registered
Dec 13 13:19:23.969343 kernel: Key type fscrypt-provisioning registered
Dec 13 13:19:23.969351 kernel: ima: No TPM chip found, activating TPM-bypass!
Dec 13 13:19:23.969360 kernel: ima: Allocated hash algorithm: sha1
Dec 13 13:19:23.969368 kernel: ima: No architecture policies found
Dec 13 13:19:23.969376 kernel: clk: Disabling unused clocks
Dec 13 13:19:23.969384 kernel: Freeing unused kernel image (initmem) memory: 43328K
Dec 13 13:19:23.969393 kernel: Write protecting the kernel read-only data: 38912k
Dec 13 13:19:23.969405 kernel: Freeing unused kernel image (rodata/data gap) memory: 1776K
Dec 13 13:19:23.969413 kernel: Run /init as init process
Dec 13 13:19:23.969421 kernel: with arguments:
Dec 13 13:19:23.969430 kernel: /init
Dec 13 13:19:23.969438 kernel: with environment:
Dec 13 13:19:23.969446 kernel: HOME=/
Dec 13 13:19:23.969454 kernel: TERM=linux
Dec 13 13:19:23.969463 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Dec 13 13:19:23.969495 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Dec 13 13:19:23.969510 systemd[1]: Detected virtualization kvm.
Dec 13 13:19:23.969520 systemd[1]: Detected architecture x86-64.
Dec 13 13:19:23.969529 systemd[1]: Running in initrd.
Dec 13 13:19:23.969537 systemd[1]: No hostname configured, using default hostname.
Dec 13 13:19:23.969546 systemd[1]: Hostname set to .
Dec 13 13:19:23.969555 systemd[1]: Initializing machine ID from VM UUID.
Dec 13 13:19:23.969564 systemd[1]: Queued start job for default target initrd.target.
Dec 13 13:19:23.969576 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Dec 13 13:19:23.969585 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Dec 13 13:19:23.969595 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Dec 13 13:19:23.969604 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Dec 13 13:19:23.969614 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Dec 13 13:19:23.969625 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Dec 13 13:19:23.969640 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Dec 13 13:19:23.969657 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Dec 13 13:19:23.969669 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Dec 13 13:19:23.969681 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Dec 13 13:19:23.969693 systemd[1]: Reached target paths.target - Path Units.
Dec 13 13:19:23.969706 systemd[1]: Reached target slices.target - Slice Units.
Dec 13 13:19:23.969718 systemd[1]: Reached target swap.target - Swaps.
Dec 13 13:19:23.969729 systemd[1]: Reached target timers.target - Timer Units.
Dec 13 13:19:23.969741 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Dec 13 13:19:23.969753 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Dec 13 13:19:23.969769 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Dec 13 13:19:23.969800 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Dec 13 13:19:23.969811 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Dec 13 13:19:23.969823 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Dec 13 13:19:23.969835 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Dec 13 13:19:23.969847 systemd[1]: Reached target sockets.target - Socket Units.
Dec 13 13:19:23.969859 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Dec 13 13:19:23.969871 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Dec 13 13:19:23.969887 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Dec 13 13:19:23.969899 systemd[1]: Starting systemd-fsck-usr.service...
Dec 13 13:19:23.969911 systemd[1]: Starting systemd-journald.service - Journal Service...
Dec 13 13:19:23.969923 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Dec 13 13:19:23.969935 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 13 13:19:23.969946 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Dec 13 13:19:23.969958 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Dec 13 13:19:23.969973 systemd[1]: Finished systemd-fsck-usr.service.
Dec 13 13:19:23.970029 systemd-journald[194]: Collecting audit messages is disabled.
Dec 13 13:19:23.970056 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Dec 13 13:19:23.970066 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Dec 13 13:19:23.970075 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Dec 13 13:19:23.970087 systemd-journald[194]: Journal started
Dec 13 13:19:23.970107 systemd-journald[194]: Runtime Journal (/run/log/journal/d833bb0f5bb4412fba98951be0803e95) is 6.0M, max 48.2M, 42.2M free.
Dec 13 13:19:23.952289 systemd-modules-load[195]: Inserted module 'overlay'
Dec 13 13:19:23.972241 systemd[1]: Started systemd-journald.service - Journal Service.
Dec 13 13:19:23.972678 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Dec 13 13:19:23.981366 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Dec 13 13:19:23.986879 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Dec 13 13:19:23.992801 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Dec 13 13:19:23.995481 systemd-modules-load[195]: Inserted module 'br_netfilter'
Dec 13 13:19:23.997527 kernel: Bridge firewalling registered
Dec 13 13:19:23.998369 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Dec 13 13:19:24.001553 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Dec 13 13:19:24.003017 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Dec 13 13:19:24.003567 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Dec 13 13:19:24.008965 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Dec 13 13:19:24.015746 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Dec 13 13:19:24.018667 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Dec 13 13:19:24.024014 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Dec 13 13:19:24.026286 dracut-cmdline[225]: dracut-dracut-053
Dec 13 13:19:24.030291 dracut-cmdline[225]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected verity.usrhash=7e85177266c631d417c820ba09a3204c451316d6fcf9e4e21017322aee9df3f4
Dec 13 13:19:24.076312 systemd-resolved[235]: Positive Trust Anchors:
Dec 13 13:19:24.076336 systemd-resolved[235]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Dec 13 13:19:24.076366 systemd-resolved[235]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Dec 13 13:19:24.081563 systemd-resolved[235]: Defaulting to hostname 'linux'.
Dec 13 13:19:24.084031 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Dec 13 13:19:24.086060 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Dec 13 13:19:24.146820 kernel: SCSI subsystem initialized
Dec 13 13:19:24.155815 kernel: Loading iSCSI transport class v2.0-870.
Dec 13 13:19:24.166837 kernel: iscsi: registered transport (tcp)
Dec 13 13:19:24.187848 kernel: iscsi: registered transport (qla4xxx)
Dec 13 13:19:24.187938 kernel: QLogic iSCSI HBA Driver
Dec 13 13:19:24.260168 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Dec 13 13:19:24.268946 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Dec 13 13:19:24.294815 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Dec 13 13:19:24.294882 kernel: device-mapper: uevent: version 1.0.3
Dec 13 13:19:24.296470 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Dec 13 13:19:24.344833 kernel: raid6: avx2x4 gen() 29211 MB/s
Dec 13 13:19:24.361832 kernel: raid6: avx2x2 gen() 30533 MB/s
Dec 13 13:19:24.378945 kernel: raid6: avx2x1 gen() 24890 MB/s
Dec 13 13:19:24.379044 kernel: raid6: using algorithm avx2x2 gen() 30533 MB/s
Dec 13 13:19:24.396936 kernel: raid6: .... xor() 19770 MB/s, rmw enabled
Dec 13 13:19:24.396982 kernel: raid6: using avx2x2 recovery algorithm
Dec 13 13:19:24.419833 kernel: xor: automatically using best checksumming function avx
Dec 13 13:19:24.580836 kernel: Btrfs loaded, zoned=no, fsverity=no
Dec 13 13:19:24.597929 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Dec 13 13:19:24.607999 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Dec 13 13:19:24.620953 systemd-udevd[413]: Using default interface naming scheme 'v255'.
Dec 13 13:19:24.626325 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Dec 13 13:19:24.638027 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Dec 13 13:19:24.656677 dracut-pre-trigger[418]: rd.md=0: removing MD RAID activation
Dec 13 13:19:24.699530 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Dec 13 13:19:24.710985 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Dec 13 13:19:24.781323 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Dec 13 13:19:24.796265 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Dec 13 13:19:24.809854 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Dec 13 13:19:24.813851 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Dec 13 13:19:24.817130 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Dec 13 13:19:24.820008 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Dec 13 13:19:24.824824 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
Dec 13 13:19:24.859247 kernel: virtio_blk virtio1: [vda] 19775488 512-byte logical blocks (10.1 GB/9.43 GiB)
Dec 13 13:19:24.859641 kernel: cryptd: max_cpu_qlen set to 1000
Dec 13 13:19:24.859655 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Dec 13 13:19:24.859666 kernel: GPT:9289727 != 19775487
Dec 13 13:19:24.859677 kernel: GPT:Alternate GPT header not at the end of the disk.
Dec 13 13:19:24.859693 kernel: GPT:9289727 != 19775487
Dec 13 13:19:24.859703 kernel: GPT: Use GNU Parted to correct GPT errors.
Dec 13 13:19:24.859714 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Dec 13 13:19:24.859724 kernel: libata version 3.00 loaded.
Dec 13 13:19:24.859738 kernel: AVX2 version of gcm_enc/dec engaged.
Dec 13 13:19:24.859749 kernel: AES CTR mode by8 optimization enabled
Dec 13 13:19:24.833367 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Dec 13 13:19:24.851042 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Dec 13 13:19:24.868632 kernel: ahci 0000:00:1f.2: version 3.0
Dec 13 13:19:24.898631 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Dec 13 13:19:24.898663 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode
Dec 13 13:19:24.898876 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Dec 13 13:19:24.899038 kernel: scsi host0: ahci
Dec 13 13:19:24.899267 kernel: scsi host1: ahci
Dec 13 13:19:24.899482 kernel: scsi host2: ahci
Dec 13 13:19:24.899690 kernel: scsi host3: ahci
Dec 13 13:19:24.899923 kernel: BTRFS: device fsid 79c74448-2326-4c98-b9ff-09542b30ea52 devid 1 transid 36 /dev/vda3 scanned by (udev-worker) (459)
Dec 13 13:19:24.899948 kernel: scsi host4: ahci
Dec 13 13:19:24.900151 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by (udev-worker) (474)
Dec 13 13:19:24.900168 kernel: scsi host5: ahci
Dec 13 13:19:24.900397 kernel: ata1: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040100 irq 34
Dec 13 13:19:24.900411 kernel: ata2: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040180 irq 34
Dec 13 13:19:24.900422 kernel: ata3: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040200 irq 34
Dec 13 13:19:24.900432 kernel: ata4: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040280 irq 34
Dec 13 13:19:24.900472 kernel: ata5: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040300 irq 34
Dec 13 13:19:24.900486 kernel: ata6: SATA max UDMA/133 abar m4096@0xc1040000 port 0xc1040380 irq 34
Dec 13 13:19:24.872938 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Dec 13 13:19:24.873106 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Dec 13 13:19:24.875221 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Dec 13 13:19:24.876680 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Dec 13 13:19:24.876919 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Dec 13 13:19:24.879870 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Dec 13 13:19:24.890126 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 13 13:19:24.912194 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Dec 13 13:19:24.913788 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Dec 13 13:19:24.926465 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Dec 13 13:19:24.931762 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Dec 13 13:19:24.933277 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Dec 13 13:19:24.944662 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Dec 13 13:19:24.953971 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Dec 13 13:19:24.956239 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Dec 13 13:19:24.978957 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Dec 13 13:19:24.992908 disk-uuid[557]: Primary Header is updated.
Dec 13 13:19:24.992908 disk-uuid[557]: Secondary Entries is updated.
Dec 13 13:19:24.992908 disk-uuid[557]: Secondary Header is updated.
Dec 13 13:19:24.996836 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Dec 13 13:19:25.001809 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Dec 13 13:19:25.209822 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Dec 13 13:19:25.209901 kernel: ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
Dec 13 13:19:25.211268 kernel: ata3.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Dec 13 13:19:25.211293 kernel: ata3.00: applying bridge limits
Dec 13 13:19:25.212411 kernel: ata3.00: configured for UDMA/100
Dec 13 13:19:25.212839 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Dec 13 13:19:25.213838 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Dec 13 13:19:25.214817 kernel: scsi 2:0:0:0: CD-ROM QEMU QEMU DVD-ROM 2.5+ PQ: 0 ANSI: 5
Dec 13 13:19:25.219827 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Dec 13 13:19:25.219861 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Dec 13 13:19:25.270861 kernel: sr 2:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Dec 13 13:19:25.284636 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Dec 13 13:19:25.284664 kernel: sr 2:0:0:0: Attached scsi CD-ROM sr0
Dec 13 13:19:26.002603 disk-uuid[566]: The operation has completed successfully.
Dec 13 13:19:26.004746 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Dec 13 13:19:26.036049 systemd[1]: disk-uuid.service: Deactivated successfully.
Dec 13 13:19:26.036182 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Dec 13 13:19:26.064199 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Dec 13 13:19:26.071096 sh[593]: Success
Dec 13 13:19:26.086879 kernel: device-mapper: verity: sha256 using implementation "sha256-ni"
Dec 13 13:19:26.129572 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Dec 13 13:19:26.140792 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Dec 13 13:19:26.144070 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Dec 13 13:19:26.156003 kernel: BTRFS info (device dm-0): first mount of filesystem 79c74448-2326-4c98-b9ff-09542b30ea52
Dec 13 13:19:26.156055 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Dec 13 13:19:26.156074 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Dec 13 13:19:26.157016 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Dec 13 13:19:26.158356 kernel: BTRFS info (device dm-0): using free space tree
Dec 13 13:19:26.163506 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Dec 13 13:19:26.165045 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Dec 13 13:19:26.171991 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Dec 13 13:19:26.174148 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Dec 13 13:19:26.184452 kernel: BTRFS info (device vda6): first mount of filesystem 05186a9a-6409-45c2-9e20-2eaf7a0548f0
Dec 13 13:19:26.184491 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Dec 13 13:19:26.184509 kernel: BTRFS info (device vda6): using free space tree
Dec 13 13:19:26.187806 kernel: BTRFS info (device vda6): auto enabling async discard
Dec 13 13:19:26.198826 systemd[1]: mnt-oem.mount: Deactivated successfully.
Dec 13 13:19:26.201607 kernel: BTRFS info (device vda6): last unmount of filesystem 05186a9a-6409-45c2-9e20-2eaf7a0548f0
Dec 13 13:19:26.211199 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Dec 13 13:19:26.215972 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Dec 13 13:19:26.462732 ignition[686]: Ignition 2.20.0
Dec 13 13:19:26.462746 ignition[686]: Stage: fetch-offline
Dec 13 13:19:26.462841 ignition[686]: no configs at "/usr/lib/ignition/base.d"
Dec 13 13:19:26.462853 ignition[686]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Dec 13 13:19:26.466141 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Dec 13 13:19:26.462965 ignition[686]: parsed url from cmdline: ""
Dec 13 13:19:26.462969 ignition[686]: no config URL provided
Dec 13 13:19:26.462975 ignition[686]: reading system config file "/usr/lib/ignition/user.ign"
Dec 13 13:19:26.462984 ignition[686]: no config at "/usr/lib/ignition/user.ign"
Dec 13 13:19:26.463023 ignition[686]: op(1): [started] loading QEMU firmware config module
Dec 13 13:19:26.475182 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Dec 13 13:19:26.463028 ignition[686]: op(1): executing: "modprobe" "qemu_fw_cfg"
Dec 13 13:19:26.472242 ignition[686]: op(1): [finished] loading QEMU firmware config module
Dec 13 13:19:26.472718 ignition[686]: parsing config with SHA512: 9a9949feaf56cacab36b2ae8c6d7440345e36f90547e30a5d658e21f2b5cb23b68c75a341dde68f3c0debb3510f6b7fec8d7801e03e63567b4f70aa190f71cb7
Dec 13 13:19:26.481208 unknown[686]: fetched base config from "system"
Dec 13 13:19:26.481222 unknown[686]: fetched user config from "qemu"
Dec 13 13:19:26.482992 ignition[686]: fetch-offline: fetch-offline passed
Dec 13 13:19:26.483894 ignition[686]: Ignition finished successfully
Dec 13 13:19:26.486137 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Dec 13 13:19:26.512319 systemd-networkd[783]: lo: Link UP
Dec 13 13:19:26.512330 systemd-networkd[783]: lo: Gained carrier
Dec 13 13:19:26.516807 systemd-networkd[783]: Enumeration completed
Dec 13 13:19:26.517026 systemd[1]: Started systemd-networkd.service - Network Configuration.
Dec 13 13:19:26.517424 systemd-networkd[783]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Dec 13 13:19:26.517429 systemd-networkd[783]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Dec 13 13:19:26.518793 systemd-networkd[783]: eth0: Link UP
Dec 13 13:19:26.518798 systemd-networkd[783]: eth0: Gained carrier
Dec 13 13:19:26.518805 systemd-networkd[783]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Dec 13 13:19:26.519598 systemd[1]: Reached target network.target - Network.
Dec 13 13:19:26.521797 systemd[1]: ignition-fetch.service - Ignition (fetch) was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Dec 13 13:19:26.530930 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Dec 13 13:19:26.538827 systemd-networkd[783]: eth0: DHCPv4 address 10.0.0.30/16, gateway 10.0.0.1 acquired from 10.0.0.1
Dec 13 13:19:26.555910 ignition[786]: Ignition 2.20.0
Dec 13 13:19:26.555925 ignition[786]: Stage: kargs
Dec 13 13:19:26.556146 ignition[786]: no configs at "/usr/lib/ignition/base.d"
Dec 13 13:19:26.556162 ignition[786]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Dec 13 13:19:26.557095 ignition[786]: kargs: kargs passed
Dec 13 13:19:26.557154 ignition[786]: Ignition finished successfully
Dec 13 13:19:26.560877 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Dec 13 13:19:26.569033 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Dec 13 13:19:26.591479 ignition[795]: Ignition 2.20.0
Dec 13 13:19:26.591493 ignition[795]: Stage: disks
Dec 13 13:19:26.591697 ignition[795]: no configs at "/usr/lib/ignition/base.d"
Dec 13 13:19:26.591712 ignition[795]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Dec 13 13:19:26.595030 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Dec 13 13:19:26.592610 ignition[795]: disks: disks passed
Dec 13 13:19:26.596529 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Dec 13 13:19:26.592669 ignition[795]: Ignition finished successfully
Dec 13 13:19:26.598455 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Dec 13 13:19:26.600480 systemd[1]: Reached target local-fs.target - Local File Systems.
Dec 13 13:19:26.601095 systemd[1]: Reached target sysinit.target - System Initialization.
Dec 13 13:19:26.601421 systemd[1]: Reached target basic.target - Basic System.
Dec 13 13:19:26.615048 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Dec 13 13:19:26.630678 systemd-fsck[806]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Dec 13 13:19:26.639556 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Dec 13 13:19:26.651998 systemd[1]: Mounting sysroot.mount - /sysroot...
Dec 13 13:19:26.761848 kernel: EXT4-fs (vda9): mounted filesystem 8801d4fe-2f40-4e12-9140-c192f2e7d668 r/w with ordered data mode. Quota mode: none.
Dec 13 13:19:26.762759 systemd[1]: Mounted sysroot.mount - /sysroot.
Dec 13 13:19:26.764702 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Dec 13 13:19:26.780876 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Dec 13 13:19:26.782898 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Dec 13 13:19:26.784544 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Dec 13 13:19:26.793659 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 scanned by mount (815)
Dec 13 13:19:26.793689 kernel: BTRFS info (device vda6): first mount of filesystem 05186a9a-6409-45c2-9e20-2eaf7a0548f0
Dec 13 13:19:26.793701 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Dec 13 13:19:26.793712 kernel: BTRFS info (device vda6): using free space tree
Dec 13 13:19:26.784590 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Dec 13 13:19:26.798552 kernel: BTRFS info (device vda6): auto enabling async discard
Dec 13 13:19:26.784613 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Dec 13 13:19:26.794450 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Dec 13 13:19:26.799803 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Dec 13 13:19:26.807935 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Dec 13 13:19:26.848916 initrd-setup-root[840]: cut: /sysroot/etc/passwd: No such file or directory
Dec 13 13:19:26.853647 initrd-setup-root[847]: cut: /sysroot/etc/group: No such file or directory
Dec 13 13:19:26.858764 initrd-setup-root[854]: cut: /sysroot/etc/shadow: No such file or directory
Dec 13 13:19:26.862673 initrd-setup-root[861]: cut: /sysroot/etc/gshadow: No such file or directory
Dec 13 13:19:26.969950 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Dec 13 13:19:26.985044 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Dec 13 13:19:26.986872 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Dec 13 13:19:26.999812 kernel: BTRFS info (device vda6): last unmount of filesystem 05186a9a-6409-45c2-9e20-2eaf7a0548f0
Dec 13 13:19:27.021714 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Dec 13 13:19:27.046755 ignition[930]: INFO : Ignition 2.20.0
Dec 13 13:19:27.046755 ignition[930]: INFO : Stage: mount
Dec 13 13:19:27.049725 ignition[930]: INFO : no configs at "/usr/lib/ignition/base.d"
Dec 13 13:19:27.049725 ignition[930]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Dec 13 13:19:27.049725 ignition[930]: INFO : mount: mount passed
Dec 13 13:19:27.049725 ignition[930]: INFO : Ignition finished successfully
Dec 13 13:19:27.055515 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Dec 13 13:19:27.073027 systemd[1]: Starting ignition-files.service - Ignition (files)...
Dec 13 13:19:27.155674 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Dec 13 13:19:27.202991 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Dec 13 13:19:27.230811 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/vda6 scanned by mount (943)
Dec 13 13:19:27.230885 kernel: BTRFS info (device vda6): first mount of filesystem 05186a9a-6409-45c2-9e20-2eaf7a0548f0
Dec 13 13:19:27.232649 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Dec 13 13:19:27.232671 kernel: BTRFS info (device vda6): using free space tree
Dec 13 13:19:27.235804 kernel: BTRFS info (device vda6): auto enabling async discard
Dec 13 13:19:27.237494 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Dec 13 13:19:27.267700 ignition[960]: INFO : Ignition 2.20.0
Dec 13 13:19:27.267700 ignition[960]: INFO : Stage: files
Dec 13 13:19:27.269990 ignition[960]: INFO : no configs at "/usr/lib/ignition/base.d"
Dec 13 13:19:27.269990 ignition[960]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Dec 13 13:19:27.269990 ignition[960]: DEBUG : files: compiled without relabeling support, skipping
Dec 13 13:19:27.269990 ignition[960]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Dec 13 13:19:27.269990 ignition[960]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Dec 13 13:19:27.277898 ignition[960]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Dec 13 13:19:27.277898 ignition[960]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Dec 13 13:19:27.277898 ignition[960]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Dec 13 13:19:27.277898 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/home/core/install.sh"
Dec 13 13:19:27.277898 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/home/core/install.sh"
Dec 13 13:19:27.277898 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/etc/flatcar/update.conf"
Dec 13 13:19:27.277898 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Dec 13 13:19:27.277898 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Dec 13 13:19:27.277898 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Dec 13 13:19:27.277898 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Dec 13 13:19:27.277898 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.29.2-x86-64.raw: attempt #1
Dec 13 13:19:27.272413 unknown[960]: wrote ssh authorized keys file for user: core
Dec 13 13:19:27.647189 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET result: OK
Dec 13 13:19:27.787071 systemd-networkd[783]: eth0: Gained IPv6LL
Dec 13 13:19:28.259273 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw"
Dec 13 13:19:28.259273 ignition[960]: INFO : files: op(7): [started] processing unit "coreos-metadata.service"
Dec 13 13:19:28.263461 ignition[960]: INFO : files: op(7): op(8): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Dec 13 13:19:28.263461 ignition[960]: INFO : files: op(7): op(8): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Dec 13 13:19:28.263461 ignition[960]: INFO : files: op(7): [finished] processing unit "coreos-metadata.service"
Dec 13 13:19:28.263461 ignition[960]: INFO : files: op(9): [started] setting preset to disabled for "coreos-metadata.service"
Dec 13 13:19:28.288712 ignition[960]: INFO : files: op(9): op(a): [started] removing enablement symlink(s) for "coreos-metadata.service"
Dec 13 13:19:28.294185 ignition[960]: INFO : files: op(9): op(a): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Dec 13 13:19:28.295941 ignition[960]: INFO : files: op(9): [finished] setting preset to disabled for "coreos-metadata.service"
Dec 13 13:19:28.295941 ignition[960]: INFO : files: createResultFile: createFiles: op(b): [started] writing file "/sysroot/etc/.ignition-result.json"
Dec 13 13:19:28.295941 ignition[960]: INFO : files: createResultFile: createFiles: op(b): [finished] writing file "/sysroot/etc/.ignition-result.json"
Dec 13 13:19:28.295941 ignition[960]: INFO : files: files passed
Dec 13 13:19:28.295941 ignition[960]: INFO : Ignition finished successfully
Dec 13 13:19:28.297693 systemd[1]: Finished ignition-files.service - Ignition (files).
Dec 13 13:19:28.314946 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Dec 13 13:19:28.317921 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Dec 13 13:19:28.320495 systemd[1]: ignition-quench.service: Deactivated successfully.
Dec 13 13:19:28.320637 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Dec 13 13:19:28.329303 initrd-setup-root-after-ignition[989]: grep: /sysroot/oem/oem-release: No such file or directory
Dec 13 13:19:28.332556 initrd-setup-root-after-ignition[991]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Dec 13 13:19:28.332556 initrd-setup-root-after-ignition[991]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Dec 13 13:19:28.337120 initrd-setup-root-after-ignition[995]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Dec 13 13:19:28.335609 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Dec 13 13:19:28.337303 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Dec 13 13:19:28.345928 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Dec 13 13:19:28.374864 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Dec 13 13:19:28.374998 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Dec 13 13:19:28.377309 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Dec 13 13:19:28.379417 systemd[1]: Reached target initrd.target - Initrd Default Target.
Dec 13 13:19:28.381459 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Dec 13 13:19:28.394921 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Dec 13 13:19:28.411266 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Dec 13 13:19:28.435907 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Dec 13 13:19:28.457998 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Dec 13 13:19:28.469355 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Dec 13 13:19:28.469919 systemd[1]: Stopped target timers.target - Timer Units.
Dec 13 13:19:28.470400 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Dec 13 13:19:28.470564 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Dec 13 13:19:28.474116 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Dec 13 13:19:28.474507 systemd[1]: Stopped target basic.target - Basic System.
Dec 13 13:19:28.474863 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Dec 13 13:19:28.475423 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Dec 13 13:19:28.476341 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Dec 13 13:19:28.476702 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Dec 13 13:19:28.477284 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Dec 13 13:19:28.477643 systemd[1]: Stopped target sysinit.target - System Initialization.
Dec 13 13:19:28.478165 systemd[1]: Stopped target local-fs.target - Local File Systems.
Dec 13 13:19:28.478541 systemd[1]: Stopped target swap.target - Swaps.
Dec 13 13:19:28.479075 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Dec 13 13:19:28.479267 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Dec 13 13:19:28.496510 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Dec 13 13:19:28.496862 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Dec 13 13:19:28.497324 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Dec 13 13:19:28.497475 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Dec 13 13:19:28.503160 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Dec 13 13:19:28.503278 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Dec 13 13:19:28.507438 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Dec 13 13:19:28.507562 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Dec 13 13:19:28.508356 systemd[1]: Stopped target paths.target - Path Units.
Dec 13 13:19:28.508640 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Dec 13 13:19:28.516885 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Dec 13 13:19:28.519847 systemd[1]: Stopped target slices.target - Slice Units.
Dec 13 13:19:28.521014 systemd[1]: Stopped target sockets.target - Socket Units.
Dec 13 13:19:28.522993 systemd[1]: iscsid.socket: Deactivated successfully.
Dec 13 13:19:28.523145 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Dec 13 13:19:28.525100 systemd[1]: iscsiuio.socket: Deactivated successfully.
Dec 13 13:19:28.525227 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Dec 13 13:19:28.527214 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Dec 13 13:19:28.527395 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Dec 13 13:19:28.529611 systemd[1]: ignition-files.service: Deactivated successfully.
Dec 13 13:19:28.529763 systemd[1]: Stopped ignition-files.service - Ignition (files).
Dec 13 13:19:28.548070 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Dec 13 13:19:28.551062 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Dec 13 13:19:28.551564 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Dec 13 13:19:28.551730 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Dec 13 13:19:28.552585 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Dec 13 13:19:28.552723 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Dec 13 13:19:28.558225 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Dec 13 13:19:28.558368 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Dec 13 13:19:28.566424 ignition[1016]: INFO : Ignition 2.20.0
Dec 13 13:19:28.566424 ignition[1016]: INFO : Stage: umount
Dec 13 13:19:28.566424 ignition[1016]: INFO : no configs at "/usr/lib/ignition/base.d"
Dec 13 13:19:28.566424 ignition[1016]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Dec 13 13:19:28.566424 ignition[1016]: INFO : umount: umount passed
Dec 13 13:19:28.566424 ignition[1016]: INFO : Ignition finished successfully
Dec 13 13:19:28.574554 systemd[1]: ignition-mount.service: Deactivated successfully.
Dec 13 13:19:28.574717 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Dec 13 13:19:28.575502 systemd[1]: Stopped target network.target - Network.
Dec 13 13:19:28.579763 systemd[1]: ignition-disks.service: Deactivated successfully.
Dec 13 13:19:28.580796 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Dec 13 13:19:28.583095 systemd[1]: ignition-kargs.service: Deactivated successfully.
Dec 13 13:19:28.584137 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Dec 13 13:19:28.586309 systemd[1]: ignition-setup.service: Deactivated successfully.
Dec 13 13:19:28.587326 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Dec 13 13:19:28.589456 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Dec 13 13:19:28.590533 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Dec 13 13:19:28.593004 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Dec 13 13:19:28.595492 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Dec 13 13:19:28.598992 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Dec 13 13:19:28.601863 systemd-networkd[783]: eth0: DHCPv6 lease lost
Dec 13 13:19:28.604020 systemd[1]: systemd-networkd.service: Deactivated successfully.
Dec 13 13:19:28.604193 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Dec 13 13:19:28.606699 systemd[1]: systemd-resolved.service: Deactivated successfully.
Dec 13 13:19:28.606843 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Dec 13 13:19:28.610213 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Dec 13 13:19:28.610313 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Dec 13 13:19:28.616926 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Dec 13 13:19:28.619104 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Dec 13 13:19:28.620348 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Dec 13 13:19:28.623157 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Dec 13 13:19:28.623216 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Dec 13 13:19:28.626426 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Dec 13 13:19:28.626485 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Dec 13 13:19:28.629598 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Dec 13 13:19:28.629654 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Dec 13 13:19:28.633249 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Dec 13 13:19:28.646008 systemd[1]: network-cleanup.service: Deactivated successfully.
Dec 13 13:19:28.647230 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Dec 13 13:19:28.651745 systemd[1]: systemd-udevd.service: Deactivated successfully.
Dec 13 13:19:28.653122 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Dec 13 13:19:28.656536 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Dec 13 13:19:28.656631 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Dec 13 13:19:28.660083 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Dec 13 13:19:28.660165 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Dec 13 13:19:28.663572 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Dec 13 13:19:28.664602 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Dec 13 13:19:28.666750 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Dec 13 13:19:28.667676 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Dec 13 13:19:28.669736 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Dec 13 13:19:28.670719 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Dec 13 13:19:28.685912 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Dec 13 13:19:28.688167 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Dec 13 13:19:28.688230 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Dec 13 13:19:28.690557 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Dec 13 13:19:28.690608 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Dec 13 13:19:28.695588 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Dec 13 13:19:28.695649 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Dec 13 13:19:28.699158 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Dec 13 13:19:28.699216 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Dec 13 13:19:28.703332 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Dec 13 13:19:28.704894 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Dec 13 13:19:28.713942 systemd[1]: sysroot-boot.service: Deactivated successfully.
Dec 13 13:19:28.714985 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Dec 13 13:19:28.716965 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Dec 13 13:19:28.718963 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Dec 13 13:19:28.719914 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Dec 13 13:19:28.734907 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Dec 13 13:19:28.743001 systemd[1]: Switching root.
Dec 13 13:19:28.777852 systemd-journald[194]: Journal stopped
Dec 13 13:19:29.837769 systemd-journald[194]: Received SIGTERM from PID 1 (systemd).
Dec 13 13:19:29.837855 kernel: SELinux: policy capability network_peer_controls=1
Dec 13 13:19:29.837869 kernel: SELinux: policy capability open_perms=1
Dec 13 13:19:29.837886 kernel: SELinux: policy capability extended_socket_class=1
Dec 13 13:19:29.837898 kernel: SELinux: policy capability always_check_network=0
Dec 13 13:19:29.837911 kernel: SELinux: policy capability cgroup_seclabel=1
Dec 13 13:19:29.837928 kernel: SELinux: policy capability nnp_nosuid_transition=1
Dec 13 13:19:29.837939 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Dec 13 13:19:29.837951 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Dec 13 13:19:29.837963 kernel: audit: type=1403 audit(1734095969.091:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Dec 13 13:19:29.837976 systemd[1]: Successfully loaded SELinux policy in 40.166ms.
Dec 13 13:19:29.838003 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 13.194ms.
Dec 13 13:19:29.838021 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Dec 13 13:19:29.838034 systemd[1]: Detected virtualization kvm.
Dec 13 13:19:29.838047 systemd[1]: Detected architecture x86-64.
Dec 13 13:19:29.838059 systemd[1]: Detected first boot.
Dec 13 13:19:29.838072 systemd[1]: Initializing machine ID from VM UUID.
Dec 13 13:19:29.838087 zram_generator::config[1060]: No configuration found.
Dec 13 13:19:29.838101 systemd[1]: Populated /etc with preset unit settings.
Dec 13 13:19:29.838114 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Dec 13 13:19:29.838126 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Dec 13 13:19:29.838139 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Dec 13 13:19:29.838153 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Dec 13 13:19:29.838170 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Dec 13 13:19:29.838183 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Dec 13 13:19:29.838198 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Dec 13 13:19:29.838212 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Dec 13 13:19:29.838225 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Dec 13 13:19:29.838238 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Dec 13 13:19:29.838251 systemd[1]: Created slice user.slice - User and Session Slice.
Dec 13 13:19:29.838263 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Dec 13 13:19:29.838276 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Dec 13 13:19:29.838289 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Dec 13 13:19:29.838301 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Dec 13 13:19:29.838324 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Dec 13 13:19:29.838337 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Dec 13 13:19:29.838349 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Dec 13 13:19:29.838362 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Dec 13 13:19:29.838375 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Dec 13 13:19:29.838387 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Dec 13 13:19:29.838400 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Dec 13 13:19:29.838415 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Dec 13 13:19:29.838428 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Dec 13 13:19:29.838440 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Dec 13 13:19:29.838453 systemd[1]: Reached target slices.target - Slice Units.
Dec 13 13:19:29.838466 systemd[1]: Reached target swap.target - Swaps.
Dec 13 13:19:29.838479 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Dec 13 13:19:29.838493 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Dec 13 13:19:29.838506 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Dec 13 13:19:29.838519 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Dec 13 13:19:29.838531 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Dec 13 13:19:29.838551 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Dec 13 13:19:29.838564 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Dec 13 13:19:29.838577 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Dec 13 13:19:29.838590 systemd[1]: Mounting media.mount - External Media Directory...
Dec 13 13:19:29.838603 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 13:19:29.838616 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Dec 13 13:19:29.838628 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Dec 13 13:19:29.838641 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Dec 13 13:19:29.838657 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Dec 13 13:19:29.838669 systemd[1]: Reached target machines.target - Containers.
Dec 13 13:19:29.838682 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Dec 13 13:19:29.838695 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Dec 13 13:19:29.838708 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Dec 13 13:19:29.838726 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Dec 13 13:19:29.838739 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Dec 13 13:19:29.838752 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Dec 13 13:19:29.838764 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Dec 13 13:19:29.838812 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Dec 13 13:19:29.838826 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Dec 13 13:19:29.838839 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Dec 13 13:19:29.838852 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Dec 13 13:19:29.838865 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Dec 13 13:19:29.838877 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Dec 13 13:19:29.838890 systemd[1]: Stopped systemd-fsck-usr.service.
Dec 13 13:19:29.838902 systemd[1]: Starting systemd-journald.service - Journal Service...
Dec 13 13:19:29.838918 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Dec 13 13:19:29.838931 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Dec 13 13:19:29.838944 kernel: loop: module loaded
Dec 13 13:19:29.838980 systemd-journald[1129]: Collecting audit messages is disabled.
Dec 13 13:19:29.839004 kernel: fuse: init (API version 7.39)
Dec 13 13:19:29.839016 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Dec 13 13:19:29.839028 systemd-journald[1129]: Journal started
Dec 13 13:19:29.839055 systemd-journald[1129]: Runtime Journal (/run/log/journal/d833bb0f5bb4412fba98951be0803e95) is 6.0M, max 48.2M, 42.2M free.
Dec 13 13:19:29.626044 systemd[1]: Queued start job for default target multi-user.target.
Dec 13 13:19:29.647504 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Dec 13 13:19:29.648000 systemd[1]: systemd-journald.service: Deactivated successfully.
Dec 13 13:19:29.860908 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Dec 13 13:19:29.895073 systemd[1]: verity-setup.service: Deactivated successfully.
Dec 13 13:19:29.895159 systemd[1]: Stopped verity-setup.service.
Dec 13 13:19:29.895176 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 13:19:29.903890 systemd[1]: Started systemd-journald.service - Journal Service.
Dec 13 13:19:29.903953 kernel: ACPI: bus type drm_connector registered
Dec 13 13:19:29.899164 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Dec 13 13:19:29.900377 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Dec 13 13:19:29.901722 systemd[1]: Mounted media.mount - External Media Directory.
Dec 13 13:19:29.902925 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Dec 13 13:19:29.904789 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Dec 13 13:19:29.906131 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Dec 13 13:19:29.907402 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Dec 13 13:19:29.908962 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Dec 13 13:19:29.910620 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Dec 13 13:19:29.910808 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Dec 13 13:19:29.912441 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 13 13:19:29.912611 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Dec 13 13:19:29.914074 systemd[1]: modprobe@drm.service: Deactivated successfully.
Dec 13 13:19:29.914250 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Dec 13 13:19:29.915647 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 13 13:19:29.915834 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Dec 13 13:19:29.917467 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Dec 13 13:19:29.917639 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Dec 13 13:19:29.919071 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 13 13:19:29.919241 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Dec 13 13:19:29.920655 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Dec 13 13:19:29.922096 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Dec 13 13:19:29.923741 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Dec 13 13:19:29.938246 systemd[1]: Reached target network-pre.target - Preparation for Network.
Dec 13 13:19:29.951884 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Dec 13 13:19:29.954187 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Dec 13 13:19:29.955384 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Dec 13 13:19:29.955415 systemd[1]: Reached target local-fs.target - Local File Systems.
Dec 13 13:19:29.957402 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Dec 13 13:19:29.959724 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Dec 13 13:19:29.965575 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Dec 13 13:19:29.966138 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Dec 13 13:19:29.968387 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Dec 13 13:19:29.974817 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Dec 13 13:19:29.976319 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Dec 13 13:19:29.977938 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Dec 13 13:19:29.979089 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Dec 13 13:19:29.982954 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Dec 13 13:19:29.985436 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Dec 13 13:19:29.990887 systemd-journald[1129]: Time spent on flushing to /var/log/journal/d833bb0f5bb4412fba98951be0803e95 is 19.545ms for 1023 entries.
Dec 13 13:19:29.990887 systemd-journald[1129]: System Journal (/var/log/journal/d833bb0f5bb4412fba98951be0803e95) is 8.0M, max 195.6M, 187.6M free.
Dec 13 13:19:30.047284 systemd-journald[1129]: Received client request to flush runtime journal.
Dec 13 13:19:30.047329 kernel: loop0: detected capacity change from 0 to 138184
Dec 13 13:19:29.992318 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Dec 13 13:19:29.995418 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Dec 13 13:19:30.000200 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Dec 13 13:19:30.001821 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Dec 13 13:19:30.003378 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Dec 13 13:19:30.030073 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Dec 13 13:19:30.031765 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Dec 13 13:19:30.033702 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Dec 13 13:19:30.039237 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Dec 13 13:19:30.051275 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Dec 13 13:19:30.060439 udevadm[1181]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Dec 13 13:19:30.063428 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Dec 13 13:19:30.066803 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Dec 13 13:19:30.070419 systemd-tmpfiles[1175]: ACLs are not supported, ignoring.
Dec 13 13:19:30.070440 systemd-tmpfiles[1175]: ACLs are not supported, ignoring.
Dec 13 13:19:30.078353 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Dec 13 13:19:30.079445 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Dec 13 13:19:30.081314 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Dec 13 13:19:30.089197 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Dec 13 13:19:30.093707 kernel: loop1: detected capacity change from 0 to 141000
Dec 13 13:19:30.118940 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Dec 13 13:19:30.199455 kernel: loop2: detected capacity change from 0 to 211296
Dec 13 13:19:30.197854 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Dec 13 13:19:30.217318 systemd-tmpfiles[1198]: ACLs are not supported, ignoring.
Dec 13 13:19:30.217340 systemd-tmpfiles[1198]: ACLs are not supported, ignoring.
Dec 13 13:19:30.223911 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Dec 13 13:19:30.235797 kernel: loop3: detected capacity change from 0 to 138184
Dec 13 13:19:30.252815 kernel: loop4: detected capacity change from 0 to 141000
Dec 13 13:19:30.268823 kernel: loop5: detected capacity change from 0 to 211296
Dec 13 13:19:30.277729 (sd-merge)[1202]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes'.
Dec 13 13:19:30.279404 (sd-merge)[1202]: Merged extensions into '/usr'.
Dec 13 13:19:30.348965 systemd[1]: Reloading requested from client PID 1174 ('systemd-sysext') (unit systemd-sysext.service)...
Dec 13 13:19:30.348980 systemd[1]: Reloading...
Dec 13 13:19:30.439817 zram_generator::config[1224]: No configuration found.
Dec 13 13:19:30.501095 ldconfig[1169]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Dec 13 13:19:30.575397 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Dec 13 13:19:30.634097 systemd[1]: Reloading finished in 284 ms.
Dec 13 13:19:30.673800 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Dec 13 13:19:30.675533 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Dec 13 13:19:30.692107 systemd[1]: Starting ensure-sysext.service...
Dec 13 13:19:30.694879 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Dec 13 13:19:30.702485 systemd[1]: Reloading requested from client PID 1265 ('systemctl') (unit ensure-sysext.service)...
Dec 13 13:19:30.702505 systemd[1]: Reloading...
Dec 13 13:19:30.740602 systemd-tmpfiles[1267]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Dec 13 13:19:30.741004 systemd-tmpfiles[1267]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Dec 13 13:19:30.742319 systemd-tmpfiles[1267]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Dec 13 13:19:30.742722 systemd-tmpfiles[1267]: ACLs are not supported, ignoring.
Dec 13 13:19:30.745317 systemd-tmpfiles[1267]: ACLs are not supported, ignoring.
Dec 13 13:19:30.750709 systemd-tmpfiles[1267]: Detected autofs mount point /boot during canonicalization of boot.
Dec 13 13:19:30.750730 systemd-tmpfiles[1267]: Skipping /boot
Dec 13 13:19:30.773911 systemd-tmpfiles[1267]: Detected autofs mount point /boot during canonicalization of boot.
Dec 13 13:19:30.773936 systemd-tmpfiles[1267]: Skipping /boot
Dec 13 13:19:30.900594 zram_generator::config[1297]: No configuration found.
Dec 13 13:19:31.072343 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Dec 13 13:19:31.152842 systemd[1]: Reloading finished in 449 ms.
Dec 13 13:19:31.175593 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Dec 13 13:19:31.186759 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Dec 13 13:19:31.209186 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Dec 13 13:19:31.212847 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Dec 13 13:19:31.216162 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Dec 13 13:19:31.222623 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Dec 13 13:19:31.230086 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Dec 13 13:19:31.236076 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Dec 13 13:19:31.249419 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Dec 13 13:19:31.253634 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 13:19:31.253864 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Dec 13 13:19:31.262378 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Dec 13 13:19:31.270928 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Dec 13 13:19:31.278201 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Dec 13 13:19:31.280123 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Dec 13 13:19:31.280288 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 13:19:31.282335 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Dec 13 13:19:31.285693 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 13 13:19:31.285942 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Dec 13 13:19:31.287163 systemd-udevd[1341]: Using default interface naming scheme 'v255'.
Dec 13 13:19:31.291426 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 13 13:19:31.292019 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Dec 13 13:19:31.295056 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 13 13:19:31.296054 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Dec 13 13:19:31.308424 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 13:19:31.309013 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Dec 13 13:19:31.311177 augenrules[1366]: No rules
Dec 13 13:19:31.316835 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Dec 13 13:19:31.321138 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Dec 13 13:19:31.324468 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Dec 13 13:19:31.325828 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Dec 13 13:19:31.332988 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Dec 13 13:19:31.334453 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 13:19:31.336339 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Dec 13 13:19:31.338995 systemd[1]: audit-rules.service: Deactivated successfully.
Dec 13 13:19:31.339354 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Dec 13 13:19:31.341299 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Dec 13 13:19:31.343528 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Dec 13 13:19:31.346155 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Dec 13 13:19:31.354252 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 13 13:19:31.354575 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Dec 13 13:19:31.358358 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 13 13:19:31.358642 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Dec 13 13:19:31.361374 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 13 13:19:31.361620 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Dec 13 13:19:31.384296 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Dec 13 13:19:31.392867 systemd[1]: Finished ensure-sysext.service.
Dec 13 13:19:31.404234 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 13:19:31.417137 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Dec 13 13:19:31.418703 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Dec 13 13:19:31.424050 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Dec 13 13:19:31.431805 kernel: BTRFS info: devid 1 device path /dev/mapper/usr changed to /dev/dm-0 scanned by (udev-worker) (1390)
Dec 13 13:19:31.439678 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Dec 13 13:19:31.441844 kernel: BTRFS info: devid 1 device path /dev/dm-0 changed to /dev/mapper/usr scanned by (udev-worker) (1390)
Dec 13 13:19:31.446652 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Dec 13 13:19:31.450521 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Dec 13 13:19:31.453175 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Dec 13 13:19:31.456703 augenrules[1408]: /sbin/augenrules: No change
Dec 13 13:19:31.457157 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Dec 13 13:19:31.462056 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Dec 13 13:19:31.464031 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Dec 13 13:19:31.464083 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 13:19:31.466044 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Dec 13 13:19:31.466359 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Dec 13 13:19:31.468457 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Dec 13 13:19:31.468741 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Dec 13 13:19:31.475432 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Dec 13 13:19:31.476236 systemd-resolved[1337]: Positive Trust Anchors:
Dec 13 13:19:31.476250 systemd-resolved[1337]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Dec 13 13:19:31.476292 systemd-resolved[1337]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Dec 13 13:19:31.477205 systemd[1]: modprobe@drm.service: Deactivated successfully.
Dec 13 13:19:31.477917 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Dec 13 13:19:31.481689 augenrules[1438]: No rules
Dec 13 13:19:31.484826 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (1385)
Dec 13 13:19:31.487389 systemd[1]: audit-rules.service: Deactivated successfully.
Dec 13 13:19:31.487826 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Dec 13 13:19:31.543546 systemd-resolved[1337]: Defaulting to hostname 'linux'.
Dec 13 13:19:31.715853 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3
Dec 13 13:19:31.720112 kernel: i801_smbus 0000:00:1f.3: Enabling SMBus device
Dec 13 13:19:31.720440 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Dec 13 13:19:31.720625 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Dec 13 13:19:31.722292 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Dec 13 13:19:31.719225 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Dec 13 13:19:31.724181 systemd[1]: modprobe@loop.service: Deactivated successfully.
Dec 13 13:19:31.724475 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Dec 13 13:19:31.736967 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input4
Dec 13 13:19:31.745840 kernel: ACPI: button: Power Button [PWRF]
Dec 13 13:19:31.747500 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Dec 13 13:19:31.749085 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Dec 13 13:19:31.749196 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Dec 13 13:19:31.779025 systemd-networkd[1427]: lo: Link UP
Dec 13 13:19:31.779037 systemd-networkd[1427]: lo: Gained carrier
Dec 13 13:19:31.783468 systemd-networkd[1427]: Enumeration completed
Dec 13 13:19:31.784304 systemd-networkd[1427]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Dec 13 13:19:31.784309 systemd-networkd[1427]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Dec 13 13:19:31.785046 systemd[1]: Started systemd-networkd.service - Network Configuration.
Dec 13 13:19:31.785767 systemd-networkd[1427]: eth0: Link UP
Dec 13 13:19:31.785771 systemd-networkd[1427]: eth0: Gained carrier
Dec 13 13:19:31.785818 systemd-networkd[1427]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Dec 13 13:19:31.786493 systemd[1]: Reached target network.target - Network.
Dec 13 13:19:31.794008 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Dec 13 13:19:31.877058 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Dec 13 13:19:31.948817 kernel: mousedev: PS/2 mouse device common for all mice
Dec 13 13:19:31.953012 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Dec 13 13:19:31.955639 systemd-networkd[1427]: eth0: DHCPv4 address 10.0.0.30/16, gateway 10.0.0.1 acquired from 10.0.0.1
Dec 13 13:19:31.956707 systemd-timesyncd[1429]: Network configuration changed, trying to establish connection.
Dec 13 13:19:32.710404 systemd[1]: Reached target time-set.target - System Time Set.
Dec 13 13:19:32.710667 systemd-resolved[1337]: Clock change detected. Flushing caches.
Dec 13 13:19:32.711145 systemd-timesyncd[1429]: Contacted time server 10.0.0.1:123 (10.0.0.1).
Dec 13 13:19:32.711195 systemd-timesyncd[1429]: Initial clock synchronization to Fri 2024-12-13 13:19:32.709358 UTC.
Dec 13 13:19:32.719843 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Dec 13 13:19:32.723087 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 13 13:19:32.738355 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Dec 13 13:19:32.783982 kernel: kvm_amd: TSC scaling supported
Dec 13 13:19:32.784157 kernel: kvm_amd: Nested Virtualization enabled
Dec 13 13:19:32.784173 kernel: kvm_amd: Nested Paging enabled
Dec 13 13:19:32.785153 kernel: kvm_amd: LBR virtualization supported
Dec 13 13:19:32.785173 kernel: kvm_amd: Virtual VMLOAD VMSAVE supported
Dec 13 13:19:32.785830 kernel: kvm_amd: Virtual GIF supported
Dec 13 13:19:32.809637 kernel: EDAC MC: Ver: 3.0.0
Dec 13 13:19:32.823203 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Dec 13 13:19:32.849119 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Dec 13 13:19:32.873286 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Dec 13 13:19:32.883763 lvm[1465]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Dec 13 13:19:32.917251 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Dec 13 13:19:32.919755 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Dec 13 13:19:32.920953 systemd[1]: Reached target sysinit.target - System Initialization.
Dec 13 13:19:32.922213 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Dec 13 13:19:32.923591 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Dec 13 13:19:32.925180 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Dec 13 13:19:32.926421 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Dec 13 13:19:32.927710 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Dec 13 13:19:32.928993 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Dec 13 13:19:32.929031 systemd[1]: Reached target paths.target - Path Units.
Dec 13 13:19:32.929966 systemd[1]: Reached target timers.target - Timer Units.
Dec 13 13:19:32.932282 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Dec 13 13:19:32.935427 systemd[1]: Starting docker.socket - Docker Socket for the API...
Dec 13 13:19:32.952067 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Dec 13 13:19:32.954987 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Dec 13 13:19:32.956892 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Dec 13 13:19:32.958102 systemd[1]: Reached target sockets.target - Socket Units.
Dec 13 13:19:32.959097 systemd[1]: Reached target basic.target - Basic System.
Dec 13 13:19:32.960126 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Dec 13 13:19:32.960168 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Dec 13 13:19:32.961449 systemd[1]: Starting containerd.service - containerd container runtime...
Dec 13 13:19:32.963799 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Dec 13 13:19:32.967676 lvm[1469]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Dec 13 13:19:32.968684 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Dec 13 13:19:32.971812 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Dec 13 13:19:32.973081 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Dec 13 13:19:32.976331 jq[1472]: false
Dec 13 13:19:32.977777 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Dec 13 13:19:32.981836 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Dec 13 13:19:32.984929 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Dec 13 13:19:32.990909 systemd[1]: Starting systemd-logind.service - User Login Management...
Dec 13 13:19:32.993794 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Dec 13 13:19:32.994403 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Dec 13 13:19:32.996735 extend-filesystems[1473]: Found loop3
Dec 13 13:19:32.996735 extend-filesystems[1473]: Found loop4
Dec 13 13:19:32.996735 extend-filesystems[1473]: Found loop5
Dec 13 13:19:32.996735 extend-filesystems[1473]: Found sr0
Dec 13 13:19:32.996735 extend-filesystems[1473]: Found vda
Dec 13 13:19:32.996735 extend-filesystems[1473]: Found vda1
Dec 13 13:19:32.996735 extend-filesystems[1473]: Found vda2
Dec 13 13:19:32.996735 extend-filesystems[1473]: Found vda3
Dec 13 13:19:32.996735 extend-filesystems[1473]: Found usr
Dec 13 13:19:32.996735 extend-filesystems[1473]: Found vda4
Dec 13 13:19:32.996735 extend-filesystems[1473]: Found vda6
Dec 13 13:19:32.996735 extend-filesystems[1473]: Found vda7
Dec 13 13:19:32.996735 extend-filesystems[1473]: Found vda9
Dec 13 13:19:32.996735 extend-filesystems[1473]: Checking size of /dev/vda9
Dec 13 13:19:33.027478 extend-filesystems[1473]: Resized partition /dev/vda9
Dec 13 13:19:33.016454 dbus-daemon[1471]: [system] SELinux support is enabled
Dec 13 13:19:32.998730 systemd[1]: Starting update-engine.service - Update Engine...
Dec 13 13:19:33.005343 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Dec 13 13:19:33.013086 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Dec 13 13:19:33.029039 jq[1487]: true
Dec 13 13:19:33.013332 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Dec 13 13:19:33.013733 systemd[1]: motdgen.service: Deactivated successfully.
Dec 13 13:19:33.013939 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Dec 13 13:19:33.021600 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Dec 13 13:19:33.028675 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Dec 13 13:19:33.031222 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Dec 13 13:19:33.033646 extend-filesystems[1492]: resize2fs 1.47.1 (20-May-2024)
Dec 13 13:19:33.031437 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Dec 13 13:19:33.036490 update_engine[1484]: I20241213 13:19:33.035615 1484 main.cc:92] Flatcar Update Engine starting
Dec 13 13:19:33.046673 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 1864699 blocks
Dec 13 13:19:33.046728 update_engine[1484]: I20241213 13:19:33.038674 1484 update_check_scheduler.cc:74] Next update check in 3m55s
Dec 13 13:19:33.046881 jq[1494]: true
Dec 13 13:19:33.051817 (ntainerd)[1497]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Dec 13 13:19:33.060598 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (1385)
Dec 13 13:19:33.064046 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Dec 13 13:19:33.064093 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Dec 13 13:19:33.066078 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Dec 13 13:19:33.066115 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Dec 13 13:19:33.073219 systemd[1]: Started update-engine.service - Update Engine.
Dec 13 13:19:33.075934 kernel: EXT4-fs (vda9): resized filesystem to 1864699
Dec 13 13:19:33.098627 systemd-logind[1479]: Watching system buttons on /dev/input/event2 (Power Button)
Dec 13 13:19:33.098662 systemd-logind[1479]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Dec 13 13:19:33.099087 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Dec 13 13:19:33.100453 extend-filesystems[1492]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Dec 13 13:19:33.100453 extend-filesystems[1492]: old_desc_blocks = 1, new_desc_blocks = 1
Dec 13 13:19:33.100453 extend-filesystems[1492]: The filesystem on /dev/vda9 is now 1864699 (4k) blocks long.
Dec 13 13:19:33.116973 extend-filesystems[1473]: Resized filesystem in /dev/vda9
Dec 13 13:19:33.118342 bash[1520]: Updated "/home/core/.ssh/authorized_keys"
Dec 13 13:19:33.102294 systemd-logind[1479]: New seat seat0.
Dec 13 13:19:33.107059 systemd[1]: Started systemd-logind.service - User Login Management.
Dec 13 13:19:33.108862 systemd[1]: extend-filesystems.service: Deactivated successfully.
Dec 13 13:19:33.109189 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Dec 13 13:19:33.114230 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Dec 13 13:19:33.121231 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met.
Dec 13 13:19:33.207767 locksmithd[1521]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Dec 13 13:19:33.259242 sshd_keygen[1498]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Dec 13 13:19:33.361064 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Dec 13 13:19:33.371811 systemd[1]: Starting issuegen.service - Generate /run/issue...
Dec 13 13:19:33.380487 systemd[1]: issuegen.service: Deactivated successfully.
Dec 13 13:19:33.380738 systemd[1]: Finished issuegen.service - Generate /run/issue.
Dec 13 13:19:33.383693 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Dec 13 13:19:33.408124 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Dec 13 13:19:33.415878 systemd[1]: Started getty@tty1.service - Getty on tty1.
Dec 13 13:19:33.418100 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Dec 13 13:19:33.419522 systemd[1]: Reached target getty.target - Login Prompts.
Dec 13 13:19:33.483259 containerd[1497]: time="2024-12-13T13:19:33.483143550Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23
Dec 13 13:19:33.509478 containerd[1497]: time="2024-12-13T13:19:33.509379041Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Dec 13 13:19:33.511589 containerd[1497]: time="2024-12-13T13:19:33.511527891Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.65-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Dec 13 13:19:33.511589 containerd[1497]: time="2024-12-13T13:19:33.511559390Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Dec 13 13:19:33.511639 containerd[1497]: time="2024-12-13T13:19:33.511590028Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Dec 13 13:19:33.511826 containerd[1497]: time="2024-12-13T13:19:33.511794511Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Dec 13 13:19:33.511826 containerd[1497]: time="2024-12-13T13:19:33.511817534Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Dec 13 13:19:33.511925 containerd[1497]: time="2024-12-13T13:19:33.511895190Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Dec 13 13:19:33.511925 containerd[1497]: time="2024-12-13T13:19:33.511912743Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Dec 13 13:19:33.512152 containerd[1497]: time="2024-12-13T13:19:33.512119611Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Dec 13 13:19:33.512152 containerd[1497]: time="2024-12-13T13:19:33.512141181Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Dec 13 13:19:33.512194 containerd[1497]: time="2024-12-13T13:19:33.512154737Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Dec 13 13:19:33.512194 containerd[1497]: time="2024-12-13T13:19:33.512165226Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Dec 13 13:19:33.512309 containerd[1497]: time="2024-12-13T13:19:33.512278268Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Dec 13 13:19:33.512621 containerd[1497]: time="2024-12-13T13:19:33.512590424Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Dec 13 13:19:33.512750 containerd[1497]: time="2024-12-13T13:19:33.512719536Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Dec 13 13:19:33.512750 containerd[1497]: time="2024-12-13T13:19:33.512737179Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Dec 13 13:19:33.512884 containerd[1497]: time="2024-12-13T13:19:33.512856493Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Dec 13 13:19:33.512952 containerd[1497]: time="2024-12-13T13:19:33.512925372Z" level=info msg="metadata content store policy set" policy=shared
Dec 13 13:19:33.521299 containerd[1497]: time="2024-12-13T13:19:33.521229976Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Dec 13 13:19:33.521413 containerd[1497]: time="2024-12-13T13:19:33.521317761Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Dec 13 13:19:33.521413 containerd[1497]: time="2024-12-13T13:19:33.521337327Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Dec 13 13:19:33.521413 containerd[1497]: time="2024-12-13T13:19:33.521354259Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Dec 13 13:19:33.521413 containerd[1497]: time="2024-12-13T13:19:33.521370229Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Dec 13 13:19:33.521637 containerd[1497]: time="2024-12-13T13:19:33.521607554Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Dec 13 13:19:33.521947 containerd[1497]: time="2024-12-13T13:19:33.521920501Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Dec 13 13:19:33.741423 containerd[1497]: time="2024-12-13T13:19:33.741324432Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Dec 13 13:19:33.741423 containerd[1497]: time="2024-12-13T13:19:33.741407938Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Dec 13 13:19:33.741423 containerd[1497]: time="2024-12-13T13:19:33.741436652Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Dec 13 13:19:33.741639 containerd[1497]: time="2024-12-13T13:19:33.741451159Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Dec 13 13:19:33.741639 containerd[1497]: time="2024-12-13T13:19:33.741465677Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Dec 13 13:19:33.741639 containerd[1497]: time="2024-12-13T13:19:33.741489742Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Dec 13 13:19:33.741639 containerd[1497]: time="2024-12-13T13:19:33.741509759Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Dec 13 13:19:33.741639 containerd[1497]: time="2024-12-13T13:19:33.741538774Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Dec 13 13:19:33.741639 containerd[1497]: time="2024-12-13T13:19:33.741569371Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Dec 13 13:19:33.741639 containerd[1497]: time="2024-12-13T13:19:33.741607963Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Dec 13 13:19:33.741639 containerd[1497]: time="2024-12-13T13:19:33.741620287Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Dec 13 13:19:33.741812 containerd[1497]: time="2024-12-13T13:19:33.741648640Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Dec 13 13:19:33.741812 containerd[1497]: time="2024-12-13T13:19:33.741672955Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Dec 13 13:19:33.741812 containerd[1497]: time="2024-12-13T13:19:33.741693213Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Dec 13 13:19:33.741812 containerd[1497]: time="2024-12-13T13:19:33.741716036Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Dec 13 13:19:33.741812 containerd[1497]: time="2024-12-13T13:19:33.741729672Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Dec 13 13:19:33.741812 containerd[1497]: time="2024-12-13T13:19:33.741742636Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Dec 13 13:19:33.741812 containerd[1497]: time="2024-12-13T13:19:33.741754949Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Dec 13 13:19:33.741812 containerd[1497]: time="2024-12-13T13:19:33.741778072Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Dec 13 13:19:33.741812 containerd[1497]: time="2024-12-13T13:19:33.741791738Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Dec 13 13:19:33.741812 containerd[1497]: time="2024-12-13T13:19:33.741806937Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..."
type=io.containerd.grpc.v1 Dec 13 13:19:33.741812 containerd[1497]: time="2024-12-13T13:19:33.741818719Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Dec 13 13:19:33.742036 containerd[1497]: time="2024-12-13T13:19:33.741831693Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Dec 13 13:19:33.742036 containerd[1497]: time="2024-12-13T13:19:33.741845088Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Dec 13 13:19:33.742036 containerd[1497]: time="2024-12-13T13:19:33.741861168Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Dec 13 13:19:33.742036 containerd[1497]: time="2024-12-13T13:19:33.741892858Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Dec 13 13:19:33.742036 containerd[1497]: time="2024-12-13T13:19:33.741920369Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Dec 13 13:19:33.742036 containerd[1497]: time="2024-12-13T13:19:33.741933404Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Dec 13 13:19:33.742036 containerd[1497]: time="2024-12-13T13:19:33.742005208Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Dec 13 13:19:33.742167 containerd[1497]: time="2024-12-13T13:19:33.742038862Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Dec 13 13:19:33.742167 containerd[1497]: time="2024-12-13T13:19:33.742051816Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." 
type=io.containerd.internal.v1 Dec 13 13:19:33.742167 containerd[1497]: time="2024-12-13T13:19:33.742063929Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Dec 13 13:19:33.742167 containerd[1497]: time="2024-12-13T13:19:33.742073817Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Dec 13 13:19:33.743130 containerd[1497]: time="2024-12-13T13:19:33.742307135Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Dec 13 13:19:33.743130 containerd[1497]: time="2024-12-13T13:19:33.742439222Z" level=info msg="NRI interface is disabled by configuration." Dec 13 13:19:33.743130 containerd[1497]: time="2024-12-13T13:19:33.742670486Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Dec 13 13:19:33.743314 containerd[1497]: time="2024-12-13T13:19:33.743253009Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 
Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Dec 13 13:19:33.743314 containerd[1497]: time="2024-12-13T13:19:33.743319533Z" level=info msg="Connect containerd service" Dec 13 13:19:33.743743 containerd[1497]: time="2024-12-13T13:19:33.743689106Z" level=info msg="using legacy CRI server" Dec 13 13:19:33.743775 containerd[1497]: time="2024-12-13T13:19:33.743747095Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Dec 13 13:19:33.743994 containerd[1497]: 
time="2024-12-13T13:19:33.743964403Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Dec 13 13:19:33.745026 containerd[1497]: time="2024-12-13T13:19:33.744986910Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 13 13:19:33.745259 containerd[1497]: time="2024-12-13T13:19:33.745196844Z" level=info msg="Start subscribing containerd event" Dec 13 13:19:33.745366 containerd[1497]: time="2024-12-13T13:19:33.745296661Z" level=info msg="Start recovering state" Dec 13 13:19:33.745541 containerd[1497]: time="2024-12-13T13:19:33.745513347Z" level=info msg="Start event monitor" Dec 13 13:19:33.745541 containerd[1497]: time="2024-12-13T13:19:33.745524118Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Dec 13 13:19:33.745598 containerd[1497]: time="2024-12-13T13:19:33.745562800Z" level=info msg="Start snapshots syncer" Dec 13 13:19:33.745598 containerd[1497]: time="2024-12-13T13:19:33.745590021Z" level=info msg="Start cni network conf syncer for default" Dec 13 13:19:33.745636 containerd[1497]: time="2024-12-13T13:19:33.745600521Z" level=info msg="Start streaming server" Dec 13 13:19:33.745636 containerd[1497]: time="2024-12-13T13:19:33.745617723Z" level=info msg=serving... address=/run/containerd/containerd.sock Dec 13 13:19:33.746193 containerd[1497]: time="2024-12-13T13:19:33.745704115Z" level=info msg="containerd successfully booted in 0.264905s" Dec 13 13:19:33.745840 systemd[1]: Started containerd.service - containerd container runtime. Dec 13 13:19:33.849987 systemd-networkd[1427]: eth0: Gained IPv6LL Dec 13 13:19:33.854277 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. 
Dec 13 13:19:33.856502 systemd[1]: Reached target network-online.target - Network is Online. Dec 13 13:19:33.869889 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Dec 13 13:19:33.873167 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 13:19:33.876509 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Dec 13 13:19:33.915000 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Dec 13 13:19:33.917226 systemd[1]: coreos-metadata.service: Deactivated successfully. Dec 13 13:19:33.917502 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Dec 13 13:19:33.920418 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Dec 13 13:19:35.996547 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 13:19:35.998327 systemd[1]: Reached target multi-user.target - Multi-User System. Dec 13 13:19:35.999697 systemd[1]: Startup finished in 1.350s (kernel) + 5.378s (initrd) + 6.195s (userspace) = 12.924s. Dec 13 13:19:36.002393 (kubelet)[1576]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 13:19:36.061527 agetty[1549]: failed to open credentials directory Dec 13 13:19:36.061984 agetty[1550]: failed to open credentials directory Dec 13 13:19:36.852739 kubelet[1576]: E1213 13:19:36.852612 1576 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 13:19:36.857740 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 13:19:36.858003 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Dec 13 13:19:36.858459 systemd[1]: kubelet.service: Consumed 2.833s CPU time. Dec 13 13:19:38.687741 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Dec 13 13:19:38.689069 systemd[1]: Started sshd@0-10.0.0.30:22-10.0.0.1:53324.service - OpenSSH per-connection server daemon (10.0.0.1:53324). Dec 13 13:19:38.746324 sshd[1591]: Accepted publickey for core from 10.0.0.1 port 53324 ssh2: RSA SHA256:yf+4O3zwFQcbHDj3qU3Xkqd6O3VKExr7ZIjl7U8lXx4 Dec 13 13:19:38.748471 sshd-session[1591]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 13:19:38.758133 systemd-logind[1479]: New session 1 of user core. Dec 13 13:19:38.759486 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Dec 13 13:19:38.768801 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Dec 13 13:19:38.782058 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Dec 13 13:19:38.797895 systemd[1]: Starting user@500.service - User Manager for UID 500... Dec 13 13:19:38.800727 (systemd)[1595]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Dec 13 13:19:38.913865 systemd[1595]: Queued start job for default target default.target. Dec 13 13:19:38.926925 systemd[1595]: Created slice app.slice - User Application Slice. Dec 13 13:19:38.926953 systemd[1595]: Reached target paths.target - Paths. Dec 13 13:19:38.926968 systemd[1595]: Reached target timers.target - Timers. Dec 13 13:19:38.928537 systemd[1595]: Starting dbus.socket - D-Bus User Message Bus Socket... Dec 13 13:19:38.940476 systemd[1595]: Listening on dbus.socket - D-Bus User Message Bus Socket. Dec 13 13:19:38.940625 systemd[1595]: Reached target sockets.target - Sockets. Dec 13 13:19:38.940646 systemd[1595]: Reached target basic.target - Basic System. Dec 13 13:19:38.940687 systemd[1595]: Reached target default.target - Main User Target. 
Dec 13 13:19:38.940722 systemd[1595]: Startup finished in 132ms. Dec 13 13:19:38.941025 systemd[1]: Started user@500.service - User Manager for UID 500. Dec 13 13:19:38.942646 systemd[1]: Started session-1.scope - Session 1 of User core. Dec 13 13:19:39.006769 systemd[1]: Started sshd@1-10.0.0.30:22-10.0.0.1:53338.service - OpenSSH per-connection server daemon (10.0.0.1:53338). Dec 13 13:19:39.055808 sshd[1606]: Accepted publickey for core from 10.0.0.1 port 53338 ssh2: RSA SHA256:yf+4O3zwFQcbHDj3qU3Xkqd6O3VKExr7ZIjl7U8lXx4 Dec 13 13:19:39.057195 sshd-session[1606]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 13:19:39.061777 systemd-logind[1479]: New session 2 of user core. Dec 13 13:19:39.072714 systemd[1]: Started session-2.scope - Session 2 of User core. Dec 13 13:19:39.128340 sshd[1608]: Connection closed by 10.0.0.1 port 53338 Dec 13 13:19:39.128988 sshd-session[1606]: pam_unix(sshd:session): session closed for user core Dec 13 13:19:39.146753 systemd[1]: sshd@1-10.0.0.30:22-10.0.0.1:53338.service: Deactivated successfully. Dec 13 13:19:39.148837 systemd[1]: session-2.scope: Deactivated successfully. Dec 13 13:19:39.150499 systemd-logind[1479]: Session 2 logged out. Waiting for processes to exit. Dec 13 13:19:39.157827 systemd[1]: Started sshd@2-10.0.0.30:22-10.0.0.1:53340.service - OpenSSH per-connection server daemon (10.0.0.1:53340). Dec 13 13:19:39.158953 systemd-logind[1479]: Removed session 2. Dec 13 13:19:39.197491 sshd[1613]: Accepted publickey for core from 10.0.0.1 port 53340 ssh2: RSA SHA256:yf+4O3zwFQcbHDj3qU3Xkqd6O3VKExr7ZIjl7U8lXx4 Dec 13 13:19:39.199271 sshd-session[1613]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 13:19:39.203688 systemd-logind[1479]: New session 3 of user core. Dec 13 13:19:39.213708 systemd[1]: Started session-3.scope - Session 3 of User core. 
Dec 13 13:19:39.263827 sshd[1615]: Connection closed by 10.0.0.1 port 53340 Dec 13 13:19:39.264246 sshd-session[1613]: pam_unix(sshd:session): session closed for user core Dec 13 13:19:39.280435 systemd[1]: sshd@2-10.0.0.30:22-10.0.0.1:53340.service: Deactivated successfully. Dec 13 13:19:39.282386 systemd[1]: session-3.scope: Deactivated successfully. Dec 13 13:19:39.283936 systemd-logind[1479]: Session 3 logged out. Waiting for processes to exit. Dec 13 13:19:39.285284 systemd[1]: Started sshd@3-10.0.0.30:22-10.0.0.1:53348.service - OpenSSH per-connection server daemon (10.0.0.1:53348). Dec 13 13:19:39.286105 systemd-logind[1479]: Removed session 3. Dec 13 13:19:39.328999 sshd[1620]: Accepted publickey for core from 10.0.0.1 port 53348 ssh2: RSA SHA256:yf+4O3zwFQcbHDj3qU3Xkqd6O3VKExr7ZIjl7U8lXx4 Dec 13 13:19:39.330641 sshd-session[1620]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 13:19:39.334732 systemd-logind[1479]: New session 4 of user core. Dec 13 13:19:39.344687 systemd[1]: Started session-4.scope - Session 4 of User core. Dec 13 13:19:39.401649 sshd[1622]: Connection closed by 10.0.0.1 port 53348 Dec 13 13:19:39.402053 sshd-session[1620]: pam_unix(sshd:session): session closed for user core Dec 13 13:19:39.420601 systemd[1]: sshd@3-10.0.0.30:22-10.0.0.1:53348.service: Deactivated successfully. Dec 13 13:19:39.422733 systemd[1]: session-4.scope: Deactivated successfully. Dec 13 13:19:39.424377 systemd-logind[1479]: Session 4 logged out. Waiting for processes to exit. Dec 13 13:19:39.434840 systemd[1]: Started sshd@4-10.0.0.30:22-10.0.0.1:53362.service - OpenSSH per-connection server daemon (10.0.0.1:53362). Dec 13 13:19:39.435906 systemd-logind[1479]: Removed session 4. 
Dec 13 13:19:39.473982 sshd[1627]: Accepted publickey for core from 10.0.0.1 port 53362 ssh2: RSA SHA256:yf+4O3zwFQcbHDj3qU3Xkqd6O3VKExr7ZIjl7U8lXx4 Dec 13 13:19:39.475416 sshd-session[1627]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 13:19:39.479645 systemd-logind[1479]: New session 5 of user core. Dec 13 13:19:39.494702 systemd[1]: Started session-5.scope - Session 5 of User core. Dec 13 13:19:39.553777 sudo[1630]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Dec 13 13:19:39.554137 sudo[1630]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 13 13:19:39.576533 sudo[1630]: pam_unix(sudo:session): session closed for user root Dec 13 13:19:39.578301 sshd[1629]: Connection closed by 10.0.0.1 port 53362 Dec 13 13:19:39.578757 sshd-session[1627]: pam_unix(sshd:session): session closed for user core Dec 13 13:19:39.586514 systemd[1]: sshd@4-10.0.0.30:22-10.0.0.1:53362.service: Deactivated successfully. Dec 13 13:19:39.588497 systemd[1]: session-5.scope: Deactivated successfully. Dec 13 13:19:39.590096 systemd-logind[1479]: Session 5 logged out. Waiting for processes to exit. Dec 13 13:19:39.606874 systemd[1]: Started sshd@5-10.0.0.30:22-10.0.0.1:53364.service - OpenSSH per-connection server daemon (10.0.0.1:53364). Dec 13 13:19:39.607822 systemd-logind[1479]: Removed session 5. Dec 13 13:19:39.646340 sshd[1635]: Accepted publickey for core from 10.0.0.1 port 53364 ssh2: RSA SHA256:yf+4O3zwFQcbHDj3qU3Xkqd6O3VKExr7ZIjl7U8lXx4 Dec 13 13:19:39.648020 sshd-session[1635]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 13:19:39.652238 systemd-logind[1479]: New session 6 of user core. Dec 13 13:19:39.661708 systemd[1]: Started session-6.scope - Session 6 of User core. 
Dec 13 13:19:39.716316 sudo[1639]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Dec 13 13:19:39.716785 sudo[1639]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 13 13:19:39.720706 sudo[1639]: pam_unix(sudo:session): session closed for user root Dec 13 13:19:39.727269 sudo[1638]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Dec 13 13:19:39.727626 sudo[1638]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 13 13:19:39.746862 systemd[1]: Starting audit-rules.service - Load Audit Rules... Dec 13 13:19:39.792654 augenrules[1661]: No rules Dec 13 13:19:39.795840 systemd[1]: audit-rules.service: Deactivated successfully. Dec 13 13:19:39.796116 systemd[1]: Finished audit-rules.service - Load Audit Rules. Dec 13 13:19:39.797551 sudo[1638]: pam_unix(sudo:session): session closed for user root Dec 13 13:19:39.799205 sshd[1637]: Connection closed by 10.0.0.1 port 53364 Dec 13 13:19:39.799601 sshd-session[1635]: pam_unix(sshd:session): session closed for user core Dec 13 13:19:39.809493 systemd[1]: sshd@5-10.0.0.30:22-10.0.0.1:53364.service: Deactivated successfully. Dec 13 13:19:39.811782 systemd[1]: session-6.scope: Deactivated successfully. Dec 13 13:19:39.813264 systemd-logind[1479]: Session 6 logged out. Waiting for processes to exit. Dec 13 13:19:39.828941 systemd[1]: Started sshd@6-10.0.0.30:22-10.0.0.1:53368.service - OpenSSH per-connection server daemon (10.0.0.1:53368). Dec 13 13:19:39.829944 systemd-logind[1479]: Removed session 6. Dec 13 13:19:39.866648 sshd[1669]: Accepted publickey for core from 10.0.0.1 port 53368 ssh2: RSA SHA256:yf+4O3zwFQcbHDj3qU3Xkqd6O3VKExr7ZIjl7U8lXx4 Dec 13 13:19:39.868064 sshd-session[1669]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 13:19:39.871802 systemd-logind[1479]: New session 7 of user core. 
Dec 13 13:19:39.882694 systemd[1]: Started session-7.scope - Session 7 of User core. Dec 13 13:19:39.935150 sudo[1672]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Dec 13 13:19:39.935605 sudo[1672]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 13 13:19:39.958970 systemd[1]: Starting coreos-metadata.service - QEMU metadata agent... Dec 13 13:19:39.978461 systemd[1]: coreos-metadata.service: Deactivated successfully. Dec 13 13:19:39.978732 systemd[1]: Finished coreos-metadata.service - QEMU metadata agent. Dec 13 13:19:40.720386 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 13:19:40.720644 systemd[1]: kubelet.service: Consumed 2.833s CPU time. Dec 13 13:19:40.735080 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 13:19:40.755101 systemd[1]: Reloading requested from client PID 1719 ('systemctl') (unit session-7.scope)... Dec 13 13:19:40.755122 systemd[1]: Reloading... Dec 13 13:19:40.837684 zram_generator::config[1757]: No configuration found. Dec 13 13:19:41.173812 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 13:19:41.264304 systemd[1]: Reloading finished in 508 ms. Dec 13 13:19:41.327359 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 13:19:41.329781 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 13:19:41.334200 systemd[1]: kubelet.service: Deactivated successfully. Dec 13 13:19:41.334486 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 13:19:41.348034 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 13:19:41.502653 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Dec 13 13:19:41.508164 (kubelet)[1807]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Dec 13 13:19:41.564508 kubelet[1807]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 13:19:41.564508 kubelet[1807]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Dec 13 13:19:41.564508 kubelet[1807]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 13:19:41.565557 kubelet[1807]: I1213 13:19:41.565485 1807 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 13 13:19:41.793266 kubelet[1807]: I1213 13:19:41.791552 1807 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Dec 13 13:19:41.793266 kubelet[1807]: I1213 13:19:41.791634 1807 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 13 13:19:41.793266 kubelet[1807]: I1213 13:19:41.792122 1807 server.go:919] "Client rotation is on, will bootstrap in background" Dec 13 13:19:41.814982 kubelet[1807]: I1213 13:19:41.814938 1807 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 13 13:19:41.826626 kubelet[1807]: I1213 13:19:41.826561 1807 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Dec 13 13:19:41.827678 kubelet[1807]: I1213 13:19:41.827646 1807 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 13 13:19:41.827864 kubelet[1807]: I1213 13:19:41.827835 1807 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Dec 13 13:19:41.828466 kubelet[1807]: I1213 13:19:41.828335 1807 topology_manager.go:138] "Creating topology manager with none policy" Dec 13 13:19:41.828466 kubelet[1807]: I1213 13:19:41.828356 1807 container_manager_linux.go:301] "Creating device plugin manager" Dec 13 13:19:41.828538 kubelet[1807]: I1213 
13:19:41.828505 1807 state_mem.go:36] "Initialized new in-memory state store" Dec 13 13:19:41.828646 kubelet[1807]: I1213 13:19:41.828629 1807 kubelet.go:396] "Attempting to sync node with API server" Dec 13 13:19:41.828700 kubelet[1807]: I1213 13:19:41.828651 1807 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 13 13:19:41.828700 kubelet[1807]: I1213 13:19:41.828683 1807 kubelet.go:312] "Adding apiserver pod source" Dec 13 13:19:41.828755 kubelet[1807]: I1213 13:19:41.828702 1807 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 13 13:19:41.828940 kubelet[1807]: E1213 13:19:41.828907 1807 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:19:41.828995 kubelet[1807]: E1213 13:19:41.828964 1807 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:19:41.830181 kubelet[1807]: I1213 13:19:41.830138 1807 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Dec 13 13:19:41.832824 kubelet[1807]: I1213 13:19:41.832792 1807 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Dec 13 13:19:41.833489 kubelet[1807]: W1213 13:19:41.833464 1807 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: nodes "10.0.0.30" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Dec 13 13:19:41.833560 kubelet[1807]: E1213 13:19:41.833504 1807 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: nodes "10.0.0.30" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Dec 13 13:19:41.833658 kubelet[1807]: W1213 13:19:41.833561 1807 reflector.go:539] 
vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Dec 13 13:19:41.833658 kubelet[1807]: E1213 13:19:41.833613 1807 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Dec 13 13:19:41.833869 kubelet[1807]: W1213 13:19:41.833852 1807 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Dec 13 13:19:41.834536 kubelet[1807]: I1213 13:19:41.834514 1807 server.go:1256] "Started kubelet" Dec 13 13:19:41.834611 kubelet[1807]: I1213 13:19:41.834602 1807 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Dec 13 13:19:41.835599 kubelet[1807]: I1213 13:19:41.834714 1807 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 13 13:19:41.835599 kubelet[1807]: I1213 13:19:41.835147 1807 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 13 13:19:41.835599 kubelet[1807]: I1213 13:19:41.835454 1807 server.go:461] "Adding debug handlers to kubelet server" Dec 13 13:19:41.839084 kubelet[1807]: I1213 13:19:41.839052 1807 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 13 13:19:41.845170 kubelet[1807]: E1213 13:19:41.844055 1807 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.0.0.30\" not found" Dec 13 13:19:41.845170 kubelet[1807]: I1213 13:19:41.844098 1807 volume_manager.go:291] "Starting Kubelet Volume Manager" Dec 13 13:19:41.845170 kubelet[1807]: I1213 13:19:41.844217 1807 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Dec 13 13:19:41.845170 kubelet[1807]: I1213 
13:19:41.844278 1807 reconciler_new.go:29] "Reconciler: start to sync state" Dec 13 13:19:41.846436 kubelet[1807]: E1213 13:19:41.846405 1807 kubelet.go:1462] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 13 13:19:41.847078 kubelet[1807]: I1213 13:19:41.847047 1807 factory.go:221] Registration of the systemd container factory successfully Dec 13 13:19:41.847225 kubelet[1807]: I1213 13:19:41.847196 1807 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 13 13:19:41.848783 kubelet[1807]: E1213 13:19:41.848758 1807 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"10.0.0.30\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="200ms" Dec 13 13:19:41.848928 kubelet[1807]: W1213 13:19:41.848910 1807 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Dec 13 13:19:41.849021 kubelet[1807]: E1213 13:19:41.849004 1807 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Dec 13 13:19:41.849674 kubelet[1807]: I1213 13:19:41.849646 1807 factory.go:221] Registration of the containerd container factory successfully Dec 13 13:19:41.850487 kubelet[1807]: E1213 13:19:41.850451 1807 event.go:346] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot 
create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{10.0.0.30.1810bf1b9bad408a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:10.0.0.30,UID:10.0.0.30,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:10.0.0.30,},FirstTimestamp:2024-12-13 13:19:41.834485898 +0000 UTC m=+0.321896051,LastTimestamp:2024-12-13 13:19:41.834485898 +0000 UTC m=+0.321896051,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:10.0.0.30,}" Dec 13 13:19:41.856599 kubelet[1807]: E1213 13:19:41.855772 1807 event.go:346] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{10.0.0.30.1810bf1b9c62da58 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:10.0.0.30,UID:10.0.0.30,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:10.0.0.30,},FirstTimestamp:2024-12-13 13:19:41.846387288 +0000 UTC m=+0.333797441,LastTimestamp:2024-12-13 13:19:41.846387288 +0000 UTC m=+0.333797441,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:10.0.0.30,}" Dec 13 13:19:41.860272 kubelet[1807]: I1213 13:19:41.860187 1807 cpu_manager.go:214] "Starting CPU manager" policy="none" Dec 13 13:19:41.860272 kubelet[1807]: I1213 13:19:41.860217 1807 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Dec 13 13:19:41.860272 kubelet[1807]: I1213 13:19:41.860247 1807 state_mem.go:36] "Initialized new in-memory state store" Dec 13 13:19:41.863090 
kubelet[1807]: E1213 13:19:41.863039 1807 event.go:346] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{10.0.0.30.1810bf1b9d2732cd default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:10.0.0.30,UID:10.0.0.30,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node 10.0.0.30 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:10.0.0.30,},FirstTimestamp:2024-12-13 13:19:41.859254989 +0000 UTC m=+0.346665142,LastTimestamp:2024-12-13 13:19:41.859254989 +0000 UTC m=+0.346665142,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:10.0.0.30,}" Dec 13 13:19:41.866389 kubelet[1807]: E1213 13:19:41.866347 1807 event.go:346] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{10.0.0.30.1810bf1b9d274777 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:10.0.0.30,UID:10.0.0.30,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node 10.0.0.30 status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:10.0.0.30,},FirstTimestamp:2024-12-13 13:19:41.859260279 +0000 UTC m=+0.346670432,LastTimestamp:2024-12-13 13:19:41.859260279 +0000 UTC m=+0.346670432,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:10.0.0.30,}" Dec 13 13:19:42.045521 kubelet[1807]: E1213 13:19:42.045363 1807 event.go:346] "Server rejected event (will not retry!)" err="events is forbidden: User 
\"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{10.0.0.30.1810bf1b9d27552a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:10.0.0.30,UID:10.0.0.30,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node 10.0.0.30 status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:10.0.0.30,},FirstTimestamp:2024-12-13 13:19:41.859263786 +0000 UTC m=+0.346673939,LastTimestamp:2024-12-13 13:19:41.859263786 +0000 UTC m=+0.346673939,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:10.0.0.30,}" Dec 13 13:19:42.046483 kubelet[1807]: I1213 13:19:42.046123 1807 kubelet_node_status.go:73] "Attempting to register node" node="10.0.0.30" Dec 13 13:19:42.202670 kubelet[1807]: I1213 13:19:42.202611 1807 policy_none.go:49] "None policy: Start" Dec 13 13:19:42.203118 kubelet[1807]: I1213 13:19:42.203064 1807 kubelet_node_status.go:76] "Successfully registered node" node="10.0.0.30" Dec 13 13:19:42.203771 kubelet[1807]: I1213 13:19:42.203539 1807 memory_manager.go:170] "Starting memorymanager" policy="None" Dec 13 13:19:42.203771 kubelet[1807]: I1213 13:19:42.203637 1807 state_mem.go:35] "Initializing new in-memory state store" Dec 13 13:19:42.206171 kubelet[1807]: I1213 13:19:42.206120 1807 kuberuntime_manager.go:1529] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24" Dec 13 13:19:42.206556 containerd[1497]: time="2024-12-13T13:19:42.206510728Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Dec 13 13:19:42.207662 kubelet[1807]: I1213 13:19:42.207191 1807 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24" Dec 13 13:19:42.213137 kubelet[1807]: E1213 13:19:42.212988 1807 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.0.0.30\" not found" Dec 13 13:19:42.214248 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Dec 13 13:19:42.225093 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Dec 13 13:19:42.228320 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Dec 13 13:19:42.234758 kubelet[1807]: I1213 13:19:42.234719 1807 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Dec 13 13:19:42.236195 kubelet[1807]: I1213 13:19:42.236167 1807 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Dec 13 13:19:42.236239 kubelet[1807]: I1213 13:19:42.236219 1807 status_manager.go:217] "Starting to sync pod status with apiserver" Dec 13 13:19:42.236275 kubelet[1807]: I1213 13:19:42.236242 1807 kubelet.go:2329] "Starting kubelet main sync loop" Dec 13 13:19:42.236431 kubelet[1807]: E1213 13:19:42.236372 1807 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Dec 13 13:19:42.237179 kubelet[1807]: I1213 13:19:42.237123 1807 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 13 13:19:42.237435 kubelet[1807]: I1213 13:19:42.237409 1807 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 13 13:19:42.240000 kubelet[1807]: E1213 13:19:42.239974 1807 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"10.0.0.30\" not found" Dec 13 13:19:42.313737 kubelet[1807]: E1213 
13:19:42.313561 1807 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.0.0.30\" not found" Dec 13 13:19:42.413912 kubelet[1807]: E1213 13:19:42.413813 1807 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.0.0.30\" not found" Dec 13 13:19:42.514501 kubelet[1807]: E1213 13:19:42.514445 1807 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.0.0.30\" not found" Dec 13 13:19:42.615131 kubelet[1807]: E1213 13:19:42.615004 1807 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.0.0.30\" not found" Dec 13 13:19:42.715719 kubelet[1807]: E1213 13:19:42.715656 1807 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.0.0.30\" not found" Dec 13 13:19:42.795338 kubelet[1807]: I1213 13:19:42.795283 1807 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Dec 13 13:19:42.795503 kubelet[1807]: W1213 13:19:42.795473 1807 reflector.go:462] vendor/k8s.io/client-go/informers/factory.go:159: watch of *v1.RuntimeClass ended with: very short watch: vendor/k8s.io/client-go/informers/factory.go:159: Unexpected watch close - watch lasted less than a second and no items received Dec 13 13:19:42.816688 kubelet[1807]: E1213 13:19:42.816658 1807 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.0.0.30\" not found" Dec 13 13:19:42.829920 kubelet[1807]: E1213 13:19:42.829867 1807 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:19:42.917005 kubelet[1807]: E1213 13:19:42.916897 1807 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.0.0.30\" not found" Dec 13 13:19:43.017507 kubelet[1807]: E1213 13:19:43.017459 1807 kubelet_node_status.go:462] "Error getting the current node from lister" err="node 
\"10.0.0.30\" not found" Dec 13 13:19:43.020708 sudo[1672]: pam_unix(sudo:session): session closed for user root Dec 13 13:19:43.022160 sshd[1671]: Connection closed by 10.0.0.1 port 53368 Dec 13 13:19:43.022497 sshd-session[1669]: pam_unix(sshd:session): session closed for user core Dec 13 13:19:43.026882 systemd[1]: sshd@6-10.0.0.30:22-10.0.0.1:53368.service: Deactivated successfully. Dec 13 13:19:43.029232 systemd[1]: session-7.scope: Deactivated successfully. Dec 13 13:19:43.029985 systemd-logind[1479]: Session 7 logged out. Waiting for processes to exit. Dec 13 13:19:43.030968 systemd-logind[1479]: Removed session 7. Dec 13 13:19:43.118419 kubelet[1807]: E1213 13:19:43.118360 1807 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.0.0.30\" not found" Dec 13 13:19:43.219311 kubelet[1807]: E1213 13:19:43.219050 1807 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"10.0.0.30\" not found" Dec 13 13:19:43.830055 kubelet[1807]: E1213 13:19:43.829997 1807 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:19:43.830055 kubelet[1807]: I1213 13:19:43.830056 1807 apiserver.go:52] "Watching apiserver" Dec 13 13:19:43.833228 kubelet[1807]: I1213 13:19:43.833195 1807 topology_manager.go:215] "Topology Admit Handler" podUID="990c8958-957f-45ce-a6e4-cb90db0bef4b" podNamespace="calico-system" podName="calico-node-gldxz" Dec 13 13:19:43.833352 kubelet[1807]: I1213 13:19:43.833328 1807 topology_manager.go:215] "Topology Admit Handler" podUID="aa5ad557-1e48-4494-9e6c-1e1c1985b57b" podNamespace="calico-system" podName="csi-node-driver-7c2dz" Dec 13 13:19:43.833383 kubelet[1807]: I1213 13:19:43.833376 1807 topology_manager.go:215] "Topology Admit Handler" podUID="a279af69-ebbd-44fd-9589-0b9cf70b3e06" podNamespace="kube-system" podName="kube-proxy-l2cp2" Dec 13 13:19:43.833565 kubelet[1807]: E1213 13:19:43.833546 1807 
pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7c2dz" podUID="aa5ad557-1e48-4494-9e6c-1e1c1985b57b" Dec 13 13:19:43.844251 systemd[1]: Created slice kubepods-besteffort-pod990c8958_957f_45ce_a6e4_cb90db0bef4b.slice - libcontainer container kubepods-besteffort-pod990c8958_957f_45ce_a6e4_cb90db0bef4b.slice. Dec 13 13:19:43.844987 kubelet[1807]: I1213 13:19:43.844750 1807 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Dec 13 13:19:43.857434 kubelet[1807]: I1213 13:19:43.857277 1807 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/990c8958-957f-45ce-a6e4-cb90db0bef4b-lib-modules\") pod \"calico-node-gldxz\" (UID: \"990c8958-957f-45ce-a6e4-cb90db0bef4b\") " pod="calico-system/calico-node-gldxz" Dec 13 13:19:43.857434 kubelet[1807]: I1213 13:19:43.857320 1807 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/990c8958-957f-45ce-a6e4-cb90db0bef4b-cni-bin-dir\") pod \"calico-node-gldxz\" (UID: \"990c8958-957f-45ce-a6e4-cb90db0bef4b\") " pod="calico-system/calico-node-gldxz" Dec 13 13:19:43.857434 kubelet[1807]: I1213 13:19:43.857350 1807 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8q74t\" (UniqueName: \"kubernetes.io/projected/990c8958-957f-45ce-a6e4-cb90db0bef4b-kube-api-access-8q74t\") pod \"calico-node-gldxz\" (UID: \"990c8958-957f-45ce-a6e4-cb90db0bef4b\") " pod="calico-system/calico-node-gldxz" Dec 13 13:19:43.857434 kubelet[1807]: I1213 13:19:43.857368 1807 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/aa5ad557-1e48-4494-9e6c-1e1c1985b57b-registration-dir\") pod \"csi-node-driver-7c2dz\" (UID: \"aa5ad557-1e48-4494-9e6c-1e1c1985b57b\") " pod="calico-system/csi-node-driver-7c2dz" Dec 13 13:19:43.857434 kubelet[1807]: I1213 13:19:43.857385 1807 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/990c8958-957f-45ce-a6e4-cb90db0bef4b-xtables-lock\") pod \"calico-node-gldxz\" (UID: \"990c8958-957f-45ce-a6e4-cb90db0bef4b\") " pod="calico-system/calico-node-gldxz" Dec 13 13:19:43.857747 kubelet[1807]: I1213 13:19:43.857401 1807 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/990c8958-957f-45ce-a6e4-cb90db0bef4b-policysync\") pod \"calico-node-gldxz\" (UID: \"990c8958-957f-45ce-a6e4-cb90db0bef4b\") " pod="calico-system/calico-node-gldxz" Dec 13 13:19:43.857747 kubelet[1807]: I1213 13:19:43.857418 1807 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/990c8958-957f-45ce-a6e4-cb90db0bef4b-tigera-ca-bundle\") pod \"calico-node-gldxz\" (UID: \"990c8958-957f-45ce-a6e4-cb90db0bef4b\") " pod="calico-system/calico-node-gldxz" Dec 13 13:19:43.857747 kubelet[1807]: I1213 13:19:43.857450 1807 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/a279af69-ebbd-44fd-9589-0b9cf70b3e06-kube-proxy\") pod \"kube-proxy-l2cp2\" (UID: \"a279af69-ebbd-44fd-9589-0b9cf70b3e06\") " pod="kube-system/kube-proxy-l2cp2" Dec 13 13:19:43.857747 kubelet[1807]: I1213 13:19:43.857470 1807 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: 
\"kubernetes.io/secret/990c8958-957f-45ce-a6e4-cb90db0bef4b-node-certs\") pod \"calico-node-gldxz\" (UID: \"990c8958-957f-45ce-a6e4-cb90db0bef4b\") " pod="calico-system/calico-node-gldxz" Dec 13 13:19:43.857747 kubelet[1807]: I1213 13:19:43.857489 1807 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/990c8958-957f-45ce-a6e4-cb90db0bef4b-var-run-calico\") pod \"calico-node-gldxz\" (UID: \"990c8958-957f-45ce-a6e4-cb90db0bef4b\") " pod="calico-system/calico-node-gldxz" Dec 13 13:19:43.857884 kubelet[1807]: I1213 13:19:43.857506 1807 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/990c8958-957f-45ce-a6e4-cb90db0bef4b-flexvol-driver-host\") pod \"calico-node-gldxz\" (UID: \"990c8958-957f-45ce-a6e4-cb90db0bef4b\") " pod="calico-system/calico-node-gldxz" Dec 13 13:19:43.857884 kubelet[1807]: I1213 13:19:43.857525 1807 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/aa5ad557-1e48-4494-9e6c-1e1c1985b57b-varrun\") pod \"csi-node-driver-7c2dz\" (UID: \"aa5ad557-1e48-4494-9e6c-1e1c1985b57b\") " pod="calico-system/csi-node-driver-7c2dz" Dec 13 13:19:43.857884 kubelet[1807]: I1213 13:19:43.857542 1807 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wwch7\" (UniqueName: \"kubernetes.io/projected/a279af69-ebbd-44fd-9589-0b9cf70b3e06-kube-api-access-wwch7\") pod \"kube-proxy-l2cp2\" (UID: \"a279af69-ebbd-44fd-9589-0b9cf70b3e06\") " pod="kube-system/kube-proxy-l2cp2" Dec 13 13:19:43.857884 kubelet[1807]: I1213 13:19:43.857562 1807 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gbq7x\" (UniqueName: 
\"kubernetes.io/projected/aa5ad557-1e48-4494-9e6c-1e1c1985b57b-kube-api-access-gbq7x\") pod \"csi-node-driver-7c2dz\" (UID: \"aa5ad557-1e48-4494-9e6c-1e1c1985b57b\") " pod="calico-system/csi-node-driver-7c2dz" Dec 13 13:19:43.857884 kubelet[1807]: I1213 13:19:43.857593 1807 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a279af69-ebbd-44fd-9589-0b9cf70b3e06-xtables-lock\") pod \"kube-proxy-l2cp2\" (UID: \"a279af69-ebbd-44fd-9589-0b9cf70b3e06\") " pod="kube-system/kube-proxy-l2cp2" Dec 13 13:19:43.858026 kubelet[1807]: I1213 13:19:43.857610 1807 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a279af69-ebbd-44fd-9589-0b9cf70b3e06-lib-modules\") pod \"kube-proxy-l2cp2\" (UID: \"a279af69-ebbd-44fd-9589-0b9cf70b3e06\") " pod="kube-system/kube-proxy-l2cp2" Dec 13 13:19:43.858026 kubelet[1807]: I1213 13:19:43.857667 1807 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/990c8958-957f-45ce-a6e4-cb90db0bef4b-var-lib-calico\") pod \"calico-node-gldxz\" (UID: \"990c8958-957f-45ce-a6e4-cb90db0bef4b\") " pod="calico-system/calico-node-gldxz" Dec 13 13:19:43.858026 kubelet[1807]: I1213 13:19:43.857779 1807 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/990c8958-957f-45ce-a6e4-cb90db0bef4b-cni-net-dir\") pod \"calico-node-gldxz\" (UID: \"990c8958-957f-45ce-a6e4-cb90db0bef4b\") " pod="calico-system/calico-node-gldxz" Dec 13 13:19:43.858026 kubelet[1807]: I1213 13:19:43.857838 1807 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: 
\"kubernetes.io/host-path/990c8958-957f-45ce-a6e4-cb90db0bef4b-cni-log-dir\") pod \"calico-node-gldxz\" (UID: \"990c8958-957f-45ce-a6e4-cb90db0bef4b\") " pod="calico-system/calico-node-gldxz" Dec 13 13:19:43.858026 kubelet[1807]: I1213 13:19:43.857868 1807 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/aa5ad557-1e48-4494-9e6c-1e1c1985b57b-kubelet-dir\") pod \"csi-node-driver-7c2dz\" (UID: \"aa5ad557-1e48-4494-9e6c-1e1c1985b57b\") " pod="calico-system/csi-node-driver-7c2dz" Dec 13 13:19:43.858152 kubelet[1807]: I1213 13:19:43.857900 1807 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/aa5ad557-1e48-4494-9e6c-1e1c1985b57b-socket-dir\") pod \"csi-node-driver-7c2dz\" (UID: \"aa5ad557-1e48-4494-9e6c-1e1c1985b57b\") " pod="calico-system/csi-node-driver-7c2dz" Dec 13 13:19:43.858240 systemd[1]: Created slice kubepods-besteffort-poda279af69_ebbd_44fd_9589_0b9cf70b3e06.slice - libcontainer container kubepods-besteffort-poda279af69_ebbd_44fd_9589_0b9cf70b3e06.slice. Dec 13 13:19:43.960283 kubelet[1807]: E1213 13:19:43.960247 1807 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 13:19:43.960283 kubelet[1807]: W1213 13:19:43.960271 1807 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 13:19:43.960442 kubelet[1807]: E1213 13:19:43.960309 1807 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 13:19:43.960590 kubelet[1807]: E1213 13:19:43.960549 1807 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 13:19:43.960590 kubelet[1807]: W1213 13:19:43.960564 1807 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 13:19:43.960667 kubelet[1807]: E1213 13:19:43.960597 1807 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 13:19:43.960975 kubelet[1807]: E1213 13:19:43.960939 1807 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 13:19:43.960975 kubelet[1807]: W1213 13:19:43.960965 1807 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 13:19:43.961092 kubelet[1807]: E1213 13:19:43.960996 1807 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 13:19:43.961246 kubelet[1807]: E1213 13:19:43.961231 1807 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 13:19:43.961246 kubelet[1807]: W1213 13:19:43.961244 1807 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 13:19:43.961322 kubelet[1807]: E1213 13:19:43.961286 1807 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 13:19:43.961479 kubelet[1807]: E1213 13:19:43.961467 1807 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 13:19:43.961479 kubelet[1807]: W1213 13:19:43.961476 1807 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 13:19:43.961558 kubelet[1807]: E1213 13:19:43.961514 1807 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 13:19:43.961746 kubelet[1807]: E1213 13:19:43.961723 1807 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 13:19:43.961746 kubelet[1807]: W1213 13:19:43.961743 1807 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 13:19:43.961965 kubelet[1807]: E1213 13:19:43.961787 1807 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 13:19:43.962112 kubelet[1807]: E1213 13:19:43.962093 1807 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 13:19:43.962112 kubelet[1807]: W1213 13:19:43.962107 1807 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 13:19:43.962202 kubelet[1807]: E1213 13:19:43.962189 1807 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 13:19:43.962685 kubelet[1807]: E1213 13:19:43.962336 1807 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 13:19:43.962685 kubelet[1807]: W1213 13:19:43.962346 1807 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 13:19:43.962685 kubelet[1807]: E1213 13:19:43.962396 1807 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 13:19:43.962685 kubelet[1807]: E1213 13:19:43.962543 1807 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 13:19:43.962685 kubelet[1807]: W1213 13:19:43.962549 1807 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 13:19:43.962685 kubelet[1807]: E1213 13:19:43.962644 1807 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 13:19:43.962846 kubelet[1807]: E1213 13:19:43.962785 1807 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 13:19:43.962846 kubelet[1807]: W1213 13:19:43.962792 1807 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 13:19:43.962846 kubelet[1807]: E1213 13:19:43.962833 1807 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 13:19:43.963025 kubelet[1807]: E1213 13:19:43.963010 1807 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 13:19:43.963025 kubelet[1807]: W1213 13:19:43.963022 1807 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 13:19:43.963141 kubelet[1807]: E1213 13:19:43.963108 1807 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 13:19:43.963267 kubelet[1807]: E1213 13:19:43.963244 1807 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 13:19:43.963267 kubelet[1807]: W1213 13:19:43.963255 1807 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 13:19:43.963364 kubelet[1807]: E1213 13:19:43.963353 1807 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 13:19:43.963598 kubelet[1807]: E1213 13:19:43.963481 1807 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 13:19:43.963598 kubelet[1807]: W1213 13:19:43.963504 1807 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 13:19:43.963598 kubelet[1807]: E1213 13:19:43.963528 1807 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 13:19:43.963765 kubelet[1807]: E1213 13:19:43.963743 1807 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 13:19:43.963765 kubelet[1807]: W1213 13:19:43.963754 1807 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 13:19:43.963894 kubelet[1807]: E1213 13:19:43.963852 1807 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 13:19:43.964045 kubelet[1807]: E1213 13:19:43.964030 1807 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 13:19:43.964045 kubelet[1807]: W1213 13:19:43.964042 1807 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 13:19:43.964163 kubelet[1807]: E1213 13:19:43.964150 1807 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 13:19:43.964334 kubelet[1807]: E1213 13:19:43.964321 1807 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 13:19:43.964465 kubelet[1807]: W1213 13:19:43.964394 1807 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 13:19:43.964638 kubelet[1807]: E1213 13:19:43.964619 1807 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 13:19:43.964783 kubelet[1807]: E1213 13:19:43.964697 1807 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 13:19:43.964783 kubelet[1807]: W1213 13:19:43.964705 1807 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 13:19:43.964783 kubelet[1807]: E1213 13:19:43.964745 1807 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 13:19:43.965250 kubelet[1807]: E1213 13:19:43.965205 1807 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 13:19:43.965361 kubelet[1807]: W1213 13:19:43.965327 1807 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 13:19:43.965508 kubelet[1807]: E1213 13:19:43.965496 1807 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 13:19:43.966100 kubelet[1807]: E1213 13:19:43.966085 1807 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 13:19:43.968483 kubelet[1807]: W1213 13:19:43.967389 1807 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 13:19:43.968483 kubelet[1807]: E1213 13:19:43.968253 1807 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 13:19:43.968483 kubelet[1807]: E1213 13:19:43.968405 1807 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 13:19:43.968483 kubelet[1807]: W1213 13:19:43.968412 1807 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 13:19:43.968616 kubelet[1807]: E1213 13:19:43.968493 1807 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 13:19:43.968679 kubelet[1807]: E1213 13:19:43.968663 1807 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 13:19:43.968679 kubelet[1807]: W1213 13:19:43.968677 1807 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 13:19:43.968773 kubelet[1807]: E1213 13:19:43.968756 1807 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 13:19:43.969310 kubelet[1807]: E1213 13:19:43.969293 1807 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 13:19:43.969398 kubelet[1807]: W1213 13:19:43.969384 1807 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 13:19:43.969538 kubelet[1807]: E1213 13:19:43.969525 1807 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 13:19:43.969783 kubelet[1807]: E1213 13:19:43.969768 1807 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 13:19:43.969783 kubelet[1807]: W1213 13:19:43.969779 1807 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 13:19:43.969896 kubelet[1807]: E1213 13:19:43.969862 1807 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 13:19:43.970127 kubelet[1807]: E1213 13:19:43.970007 1807 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 13:19:43.970127 kubelet[1807]: W1213 13:19:43.970029 1807 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 13:19:43.970127 kubelet[1807]: E1213 13:19:43.970053 1807 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 13:19:43.970262 kubelet[1807]: E1213 13:19:43.970248 1807 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 13:19:43.970262 kubelet[1807]: W1213 13:19:43.970259 1807 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 13:19:43.970360 kubelet[1807]: E1213 13:19:43.970347 1807 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 13:19:43.971320 kubelet[1807]: E1213 13:19:43.970566 1807 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 13:19:43.971320 kubelet[1807]: W1213 13:19:43.970610 1807 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 13:19:43.971320 kubelet[1807]: E1213 13:19:43.970684 1807 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 13:19:43.971320 kubelet[1807]: E1213 13:19:43.970874 1807 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 13:19:43.971320 kubelet[1807]: W1213 13:19:43.970881 1807 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 13:19:43.971320 kubelet[1807]: E1213 13:19:43.971084 1807 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 13:19:43.971320 kubelet[1807]: W1213 13:19:43.971091 1807 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 13:19:43.971320 kubelet[1807]: E1213 13:19:43.971268 1807 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 13:19:43.971320 kubelet[1807]: W1213 13:19:43.971275 1807 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: 
[init], error: executable file not found in $PATH, output: "" Dec 13 13:19:43.971516 kubelet[1807]: E1213 13:19:43.971499 1807 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 13:19:43.971516 kubelet[1807]: W1213 13:19:43.971507 1807 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 13:19:43.971830 kubelet[1807]: E1213 13:19:43.971720 1807 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 13:19:43.971830 kubelet[1807]: W1213 13:19:43.971737 1807 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 13:19:43.971830 kubelet[1807]: E1213 13:19:43.971749 1807 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 13:19:43.971943 kubelet[1807]: E1213 13:19:43.971928 1807 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 13:19:43.972002 kubelet[1807]: E1213 13:19:43.971992 1807 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 13:19:43.972070 kubelet[1807]: E1213 13:19:43.972059 1807 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 13:19:43.972135 kubelet[1807]: E1213 13:19:43.972120 1807 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 13:19:43.972710 kubelet[1807]: E1213 13:19:43.972696 1807 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 13:19:43.972710 kubelet[1807]: W1213 13:19:43.972707 1807 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 13:19:43.972776 kubelet[1807]: E1213 13:19:43.972718 1807 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 13:19:44.156553 kubelet[1807]: E1213 13:19:44.156403 1807 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:19:44.157332 containerd[1497]: time="2024-12-13T13:19:44.157260991Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-gldxz,Uid:990c8958-957f-45ce-a6e4-cb90db0bef4b,Namespace:calico-system,Attempt:0,}" Dec 13 13:19:44.160359 kubelet[1807]: E1213 13:19:44.160332 1807 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:19:44.160731 containerd[1497]: time="2024-12-13T13:19:44.160694360Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-l2cp2,Uid:a279af69-ebbd-44fd-9589-0b9cf70b3e06,Namespace:kube-system,Attempt:0,}" Dec 13 13:19:44.790684 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount851611354.mount: Deactivated 
successfully. Dec 13 13:19:44.799017 containerd[1497]: time="2024-12-13T13:19:44.798955377Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 13:19:44.800851 containerd[1497]: time="2024-12-13T13:19:44.800798594Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Dec 13 13:19:44.802012 containerd[1497]: time="2024-12-13T13:19:44.801973176Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 13:19:44.802937 containerd[1497]: time="2024-12-13T13:19:44.802906547Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 13:19:44.803661 containerd[1497]: time="2024-12-13T13:19:44.803631396Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Dec 13 13:19:44.805297 containerd[1497]: time="2024-12-13T13:19:44.805262264Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Dec 13 13:19:44.807195 containerd[1497]: time="2024-12-13T13:19:44.807167177Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 646.363602ms" Dec 13 13:19:44.807865 
containerd[1497]: time="2024-12-13T13:19:44.807827826Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 650.398099ms" Dec 13 13:19:44.830791 kubelet[1807]: E1213 13:19:44.830751 1807 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:19:44.964422 containerd[1497]: time="2024-12-13T13:19:44.964166136Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 13:19:44.964422 containerd[1497]: time="2024-12-13T13:19:44.964225066Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 13:19:44.964422 containerd[1497]: time="2024-12-13T13:19:44.964239343Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 13:19:44.964422 containerd[1497]: time="2024-12-13T13:19:44.964328440Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 13:19:44.969920 containerd[1497]: time="2024-12-13T13:19:44.969804981Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 13:19:44.969920 containerd[1497]: time="2024-12-13T13:19:44.969890050Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 13:19:44.970107 containerd[1497]: time="2024-12-13T13:19:44.969907964Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 13:19:44.970243 containerd[1497]: time="2024-12-13T13:19:44.970195122Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 13:19:45.063826 systemd[1]: Started cri-containerd-5f8463a16db1433d198ec07aa68fc90882160c909113d2c3fd26e093014238d7.scope - libcontainer container 5f8463a16db1433d198ec07aa68fc90882160c909113d2c3fd26e093014238d7. Dec 13 13:19:45.069299 systemd[1]: Started cri-containerd-f25702e00ae68e8ce30917fce549c53e908340ca13d885ac00b2e071033766d0.scope - libcontainer container f25702e00ae68e8ce30917fce549c53e908340ca13d885ac00b2e071033766d0. Dec 13 13:19:45.094703 containerd[1497]: time="2024-12-13T13:19:45.094648337Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-l2cp2,Uid:a279af69-ebbd-44fd-9589-0b9cf70b3e06,Namespace:kube-system,Attempt:0,} returns sandbox id \"5f8463a16db1433d198ec07aa68fc90882160c909113d2c3fd26e093014238d7\"" Dec 13 13:19:45.096288 kubelet[1807]: E1213 13:19:45.096255 1807 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:19:45.098847 containerd[1497]: time="2024-12-13T13:19:45.098792018Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.12\"" Dec 13 13:19:45.105947 containerd[1497]: time="2024-12-13T13:19:45.105902973Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-gldxz,Uid:990c8958-957f-45ce-a6e4-cb90db0bef4b,Namespace:calico-system,Attempt:0,} returns sandbox id \"f25702e00ae68e8ce30917fce549c53e908340ca13d885ac00b2e071033766d0\"" Dec 13 13:19:45.107055 kubelet[1807]: E1213 13:19:45.107024 1807 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:19:45.237514 
kubelet[1807]: E1213 13:19:45.237443 1807 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7c2dz" podUID="aa5ad557-1e48-4494-9e6c-1e1c1985b57b" Dec 13 13:19:45.831761 kubelet[1807]: E1213 13:19:45.831693 1807 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:19:46.404360 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3412384803.mount: Deactivated successfully. Dec 13 13:19:46.773958 containerd[1497]: time="2024-12-13T13:19:46.773811176Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.29.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:19:46.774735 containerd[1497]: time="2024-12-13T13:19:46.774701847Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.29.12: active requests=0, bytes read=28619958" Dec 13 13:19:46.775971 containerd[1497]: time="2024-12-13T13:19:46.775933817Z" level=info msg="ImageCreate event name:\"sha256:d699d5830022f9e67c3271d1c2af58eaede81e3567df82728b7d2a8bf12ed153\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:19:46.780085 containerd[1497]: time="2024-12-13T13:19:46.780030520Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:bc761494b78fa152a759457f42bc9b86ee9d18f5929bb127bd5f72f8e2112c39\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:19:46.781055 containerd[1497]: time="2024-12-13T13:19:46.781017902Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.29.12\" with image id \"sha256:d699d5830022f9e67c3271d1c2af58eaede81e3567df82728b7d2a8bf12ed153\", repo tag \"registry.k8s.io/kube-proxy:v1.29.12\", repo digest 
\"registry.k8s.io/kube-proxy@sha256:bc761494b78fa152a759457f42bc9b86ee9d18f5929bb127bd5f72f8e2112c39\", size \"28618977\" in 1.682184666s" Dec 13 13:19:46.781055 containerd[1497]: time="2024-12-13T13:19:46.781052567Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.12\" returns image reference \"sha256:d699d5830022f9e67c3271d1c2af58eaede81e3567df82728b7d2a8bf12ed153\"" Dec 13 13:19:46.781769 containerd[1497]: time="2024-12-13T13:19:46.781741509Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\"" Dec 13 13:19:46.783255 containerd[1497]: time="2024-12-13T13:19:46.783227365Z" level=info msg="CreateContainer within sandbox \"5f8463a16db1433d198ec07aa68fc90882160c909113d2c3fd26e093014238d7\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Dec 13 13:19:46.800252 containerd[1497]: time="2024-12-13T13:19:46.800204383Z" level=info msg="CreateContainer within sandbox \"5f8463a16db1433d198ec07aa68fc90882160c909113d2c3fd26e093014238d7\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"e22ad1be76802790fc24ccfe8295fba41122dee6093fc34f590ac16dbaeb82ae\"" Dec 13 13:19:46.800882 containerd[1497]: time="2024-12-13T13:19:46.800848862Z" level=info msg="StartContainer for \"e22ad1be76802790fc24ccfe8295fba41122dee6093fc34f590ac16dbaeb82ae\"" Dec 13 13:19:46.832764 kubelet[1807]: E1213 13:19:46.832697 1807 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:19:46.959740 systemd[1]: Started cri-containerd-e22ad1be76802790fc24ccfe8295fba41122dee6093fc34f590ac16dbaeb82ae.scope - libcontainer container e22ad1be76802790fc24ccfe8295fba41122dee6093fc34f590ac16dbaeb82ae. 
Dec 13 13:19:46.995512 containerd[1497]: time="2024-12-13T13:19:46.995451481Z" level=info msg="StartContainer for \"e22ad1be76802790fc24ccfe8295fba41122dee6093fc34f590ac16dbaeb82ae\" returns successfully" Dec 13 13:19:47.236750 kubelet[1807]: E1213 13:19:47.236542 1807 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7c2dz" podUID="aa5ad557-1e48-4494-9e6c-1e1c1985b57b" Dec 13 13:19:47.248389 kubelet[1807]: E1213 13:19:47.248361 1807 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:19:47.259141 kubelet[1807]: I1213 13:19:47.259093 1807 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-l2cp2" podStartSLOduration=3.575553316 podStartE2EDuration="5.259002323s" podCreationTimestamp="2024-12-13 13:19:42 +0000 UTC" firstStartedPulling="2024-12-13 13:19:45.098044926 +0000 UTC m=+3.585455080" lastFinishedPulling="2024-12-13 13:19:46.781493924 +0000 UTC m=+5.268904087" observedRunningTime="2024-12-13 13:19:47.258926591 +0000 UTC m=+5.746336744" watchObservedRunningTime="2024-12-13 13:19:47.259002323 +0000 UTC m=+5.746412476" Dec 13 13:19:47.269425 kubelet[1807]: E1213 13:19:47.269380 1807 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 13:19:47.269425 kubelet[1807]: W1213 13:19:47.269408 1807 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 13:19:47.269540 kubelet[1807]: E1213 13:19:47.269433 1807 plugins.go:730] "Error dynamically probing plugins" err="error 
creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 13:19:47.269677 kubelet[1807]: E1213 13:19:47.269655 1807 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 13:19:47.269677 kubelet[1807]: W1213 13:19:47.269669 1807 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 13:19:47.269731 kubelet[1807]: E1213 13:19:47.269685 1807 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 13:19:47.269920 kubelet[1807]: E1213 13:19:47.269899 1807 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 13:19:47.269920 kubelet[1807]: W1213 13:19:47.269912 1807 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 13:19:47.269987 kubelet[1807]: E1213 13:19:47.269925 1807 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 13:19:47.270155 kubelet[1807]: E1213 13:19:47.270142 1807 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 13:19:47.270155 kubelet[1807]: W1213 13:19:47.270154 1807 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 13:19:47.270212 kubelet[1807]: E1213 13:19:47.270166 1807 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 13:19:47.270388 kubelet[1807]: E1213 13:19:47.270373 1807 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 13:19:47.270388 kubelet[1807]: W1213 13:19:47.270385 1807 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 13:19:47.270437 kubelet[1807]: E1213 13:19:47.270401 1807 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 13:19:47.270621 kubelet[1807]: E1213 13:19:47.270608 1807 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 13:19:47.270655 kubelet[1807]: W1213 13:19:47.270619 1807 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 13:19:47.270655 kubelet[1807]: E1213 13:19:47.270631 1807 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 13:19:47.270832 kubelet[1807]: E1213 13:19:47.270819 1807 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 13:19:47.270855 kubelet[1807]: W1213 13:19:47.270830 1807 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 13:19:47.270855 kubelet[1807]: E1213 13:19:47.270844 1807 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 13:19:47.271026 kubelet[1807]: E1213 13:19:47.271013 1807 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 13:19:47.271048 kubelet[1807]: W1213 13:19:47.271026 1807 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 13:19:47.271048 kubelet[1807]: E1213 13:19:47.271038 1807 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 13:19:47.271244 kubelet[1807]: E1213 13:19:47.271232 1807 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 13:19:47.271267 kubelet[1807]: W1213 13:19:47.271244 1807 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 13:19:47.271267 kubelet[1807]: E1213 13:19:47.271256 1807 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 13:19:47.271455 kubelet[1807]: E1213 13:19:47.271443 1807 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 13:19:47.271477 kubelet[1807]: W1213 13:19:47.271454 1807 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 13:19:47.271477 kubelet[1807]: E1213 13:19:47.271465 1807 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 13:19:47.271734 kubelet[1807]: E1213 13:19:47.271716 1807 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 13:19:47.271734 kubelet[1807]: W1213 13:19:47.271729 1807 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 13:19:47.271815 kubelet[1807]: E1213 13:19:47.271742 1807 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
[... the three kubelet messages above (driver-call.go:262, driver-call.go:149, plugins.go:730) repeat many more times; identical repetitions omitted ...]
Dec 13 13:19:47.833090 kubelet[1807]: E1213 13:19:47.833033 1807 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 13:19:48.249627 kubelet[1807]: E1213 13:19:48.249460 1807 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Dec 13 13:19:48.280592 kubelet[1807]: E1213 13:19:48.280536 1807 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 13:19:48.280592 kubelet[1807]: W1213 13:19:48.280556 1807 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 13:19:48.280758 kubelet[1807]: E1213 13:19:48.280619 1807 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
[... the same three kubelet messages repeat many more times; identical repetitions omitted ...]
Dec 13 13:19:48.395522 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3610745627.mount: Deactivated successfully.
Dec 13 13:19:48.472415 containerd[1497]: time="2024-12-13T13:19:48.472360953Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 13:19:48.473466 containerd[1497]: time="2024-12-13T13:19:48.473402627Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1: active requests=0, bytes read=6855343"
Dec 13 13:19:48.474446 containerd[1497]: time="2024-12-13T13:19:48.474416068Z" level=info msg="ImageCreate event name:\"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 13:19:48.476580 containerd[1497]: time="2024-12-13T13:19:48.476544860Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 13:19:48.477183 containerd[1497]: time="2024-12-13T13:19:48.477145727Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" with 
image id \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\", size \"6855165\" in 1.695373791s" Dec 13 13:19:48.477231 containerd[1497]: time="2024-12-13T13:19:48.477181243Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" returns image reference \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\"" Dec 13 13:19:48.478488 containerd[1497]: time="2024-12-13T13:19:48.478453740Z" level=info msg="CreateContainer within sandbox \"f25702e00ae68e8ce30917fce549c53e908340ca13d885ac00b2e071033766d0\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Dec 13 13:19:48.499073 containerd[1497]: time="2024-12-13T13:19:48.499008387Z" level=info msg="CreateContainer within sandbox \"f25702e00ae68e8ce30917fce549c53e908340ca13d885ac00b2e071033766d0\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"683a1599faff88c1bcf06bdefcb8662dbdeff769721d11992dedfb4c0d0e82db\"" Dec 13 13:19:48.499607 containerd[1497]: time="2024-12-13T13:19:48.499548770Z" level=info msg="StartContainer for \"683a1599faff88c1bcf06bdefcb8662dbdeff769721d11992dedfb4c0d0e82db\"" Dec 13 13:19:48.633726 systemd[1]: Started cri-containerd-683a1599faff88c1bcf06bdefcb8662dbdeff769721d11992dedfb4c0d0e82db.scope - libcontainer container 683a1599faff88c1bcf06bdefcb8662dbdeff769721d11992dedfb4c0d0e82db. Dec 13 13:19:48.685957 containerd[1497]: time="2024-12-13T13:19:48.685904865Z" level=info msg="StartContainer for \"683a1599faff88c1bcf06bdefcb8662dbdeff769721d11992dedfb4c0d0e82db\" returns successfully" Dec 13 13:19:48.698611 systemd[1]: cri-containerd-683a1599faff88c1bcf06bdefcb8662dbdeff769721d11992dedfb4c0d0e82db.scope: Deactivated successfully. 
Dec 13 13:19:48.833669 kubelet[1807]: E1213 13:19:48.833545 1807 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:19:49.237252 kubelet[1807]: E1213 13:19:49.237211 1807 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7c2dz" podUID="aa5ad557-1e48-4494-9e6c-1e1c1985b57b" Dec 13 13:19:49.252375 kubelet[1807]: E1213 13:19:49.252337 1807 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:19:49.332379 containerd[1497]: time="2024-12-13T13:19:49.332299605Z" level=info msg="shim disconnected" id=683a1599faff88c1bcf06bdefcb8662dbdeff769721d11992dedfb4c0d0e82db namespace=k8s.io Dec 13 13:19:49.332379 containerd[1497]: time="2024-12-13T13:19:49.332367312Z" level=warning msg="cleaning up after shim disconnected" id=683a1599faff88c1bcf06bdefcb8662dbdeff769721d11992dedfb4c0d0e82db namespace=k8s.io Dec 13 13:19:49.332379 containerd[1497]: time="2024-12-13T13:19:49.332376540Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 13:19:49.373268 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-683a1599faff88c1bcf06bdefcb8662dbdeff769721d11992dedfb4c0d0e82db-rootfs.mount: Deactivated successfully. 
Dec 13 13:19:49.834729 kubelet[1807]: E1213 13:19:49.834656 1807 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:19:50.254896 kubelet[1807]: E1213 13:19:50.254751 1807 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:19:50.255418 containerd[1497]: time="2024-12-13T13:19:50.255376484Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\"" Dec 13 13:19:50.835282 kubelet[1807]: E1213 13:19:50.835219 1807 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:19:51.237092 kubelet[1807]: E1213 13:19:51.236931 1807 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7c2dz" podUID="aa5ad557-1e48-4494-9e6c-1e1c1985b57b" Dec 13 13:19:51.836481 kubelet[1807]: E1213 13:19:51.836421 1807 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:19:52.837444 kubelet[1807]: E1213 13:19:52.837387 1807 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:19:53.237822 kubelet[1807]: E1213 13:19:53.237417 1807 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7c2dz" podUID="aa5ad557-1e48-4494-9e6c-1e1c1985b57b" Dec 13 13:19:53.838631 kubelet[1807]: E1213 13:19:53.838553 1807 file_linux.go:61] "Unable to read config path" err="path does 
not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:19:53.952565 containerd[1497]: time="2024-12-13T13:19:53.952505140Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:19:53.953221 containerd[1497]: time="2024-12-13T13:19:53.953151522Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.1: active requests=0, bytes read=96154154" Dec 13 13:19:53.954254 containerd[1497]: time="2024-12-13T13:19:53.954222581Z" level=info msg="ImageCreate event name:\"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:19:53.956279 containerd[1497]: time="2024-12-13T13:19:53.956243050Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:19:53.957075 containerd[1497]: time="2024-12-13T13:19:53.957037390Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.1\" with image id \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\", size \"97647238\" in 3.70162629s" Dec 13 13:19:53.957123 containerd[1497]: time="2024-12-13T13:19:53.957074780Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\" returns image reference \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\"" Dec 13 13:19:53.958582 containerd[1497]: time="2024-12-13T13:19:53.958538194Z" level=info msg="CreateContainer within sandbox \"f25702e00ae68e8ce30917fce549c53e908340ca13d885ac00b2e071033766d0\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Dec 13 13:19:53.973811 containerd[1497]: 
time="2024-12-13T13:19:53.973777313Z" level=info msg="CreateContainer within sandbox \"f25702e00ae68e8ce30917fce549c53e908340ca13d885ac00b2e071033766d0\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"24cf0dedd1bec6fce199ceb380b5a1eaa793654f87605fd22df4506774f56105\"" Dec 13 13:19:53.974253 containerd[1497]: time="2024-12-13T13:19:53.974209033Z" level=info msg="StartContainer for \"24cf0dedd1bec6fce199ceb380b5a1eaa793654f87605fd22df4506774f56105\"" Dec 13 13:19:54.060702 systemd[1]: Started cri-containerd-24cf0dedd1bec6fce199ceb380b5a1eaa793654f87605fd22df4506774f56105.scope - libcontainer container 24cf0dedd1bec6fce199ceb380b5a1eaa793654f87605fd22df4506774f56105. Dec 13 13:19:54.102890 containerd[1497]: time="2024-12-13T13:19:54.102531242Z" level=info msg="StartContainer for \"24cf0dedd1bec6fce199ceb380b5a1eaa793654f87605fd22df4506774f56105\" returns successfully" Dec 13 13:19:54.262425 kubelet[1807]: E1213 13:19:54.262372 1807 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:19:54.838768 kubelet[1807]: E1213 13:19:54.838714 1807 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:19:55.237143 kubelet[1807]: E1213 13:19:55.236982 1807 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7c2dz" podUID="aa5ad557-1e48-4494-9e6c-1e1c1985b57b" Dec 13 13:19:55.263881 kubelet[1807]: E1213 13:19:55.263842 1807 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:19:55.839193 kubelet[1807]: E1213 
13:19:55.839131 1807 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:19:56.050434 systemd[1]: cri-containerd-24cf0dedd1bec6fce199ceb380b5a1eaa793654f87605fd22df4506774f56105.scope: Deactivated successfully. Dec 13 13:19:56.050788 systemd[1]: cri-containerd-24cf0dedd1bec6fce199ceb380b5a1eaa793654f87605fd22df4506774f56105.scope: Consumed 1.102s CPU time. Dec 13 13:19:56.063227 kubelet[1807]: I1213 13:19:56.063192 1807 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Dec 13 13:19:56.073342 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-24cf0dedd1bec6fce199ceb380b5a1eaa793654f87605fd22df4506774f56105-rootfs.mount: Deactivated successfully. Dec 13 13:19:56.416758 containerd[1497]: time="2024-12-13T13:19:56.416677663Z" level=info msg="shim disconnected" id=24cf0dedd1bec6fce199ceb380b5a1eaa793654f87605fd22df4506774f56105 namespace=k8s.io Dec 13 13:19:56.416758 containerd[1497]: time="2024-12-13T13:19:56.416747554Z" level=warning msg="cleaning up after shim disconnected" id=24cf0dedd1bec6fce199ceb380b5a1eaa793654f87605fd22df4506774f56105 namespace=k8s.io Dec 13 13:19:56.416758 containerd[1497]: time="2024-12-13T13:19:56.416758785Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 13:19:56.839978 kubelet[1807]: E1213 13:19:56.839816 1807 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:19:57.243221 systemd[1]: Created slice kubepods-besteffort-podaa5ad557_1e48_4494_9e6c_1e1c1985b57b.slice - libcontainer container kubepods-besteffort-podaa5ad557_1e48_4494_9e6c_1e1c1985b57b.slice. 
Dec 13 13:19:57.245421 containerd[1497]: time="2024-12-13T13:19:57.245368815Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-7c2dz,Uid:aa5ad557-1e48-4494-9e6c-1e1c1985b57b,Namespace:calico-system,Attempt:0,}" Dec 13 13:19:57.269042 kubelet[1807]: E1213 13:19:57.268954 1807 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:19:57.269732 containerd[1497]: time="2024-12-13T13:19:57.269691649Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\"" Dec 13 13:19:57.317399 containerd[1497]: time="2024-12-13T13:19:57.317332252Z" level=error msg="Failed to destroy network for sandbox \"5181670da0d9588baf456216d436e00f505f60d7901fcf26fab125662ac1bef9\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:19:57.317858 containerd[1497]: time="2024-12-13T13:19:57.317823072Z" level=error msg="encountered an error cleaning up failed sandbox \"5181670da0d9588baf456216d436e00f505f60d7901fcf26fab125662ac1bef9\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:19:57.317942 containerd[1497]: time="2024-12-13T13:19:57.317904685Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-7c2dz,Uid:aa5ad557-1e48-4494-9e6c-1e1c1985b57b,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"5181670da0d9588baf456216d436e00f505f60d7901fcf26fab125662ac1bef9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
Dec 13 13:19:57.318290 kubelet[1807]: E1213 13:19:57.318227 1807 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5181670da0d9588baf456216d436e00f505f60d7901fcf26fab125662ac1bef9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:19:57.318521 kubelet[1807]: E1213 13:19:57.318316 1807 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5181670da0d9588baf456216d436e00f505f60d7901fcf26fab125662ac1bef9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-7c2dz" Dec 13 13:19:57.318521 kubelet[1807]: E1213 13:19:57.318351 1807 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5181670da0d9588baf456216d436e00f505f60d7901fcf26fab125662ac1bef9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-7c2dz" Dec 13 13:19:57.318521 kubelet[1807]: E1213 13:19:57.318443 1807 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-7c2dz_calico-system(aa5ad557-1e48-4494-9e6c-1e1c1985b57b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-7c2dz_calico-system(aa5ad557-1e48-4494-9e6c-1e1c1985b57b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5181670da0d9588baf456216d436e00f505f60d7901fcf26fab125662ac1bef9\\\": plugin type=\\\"calico\\\" failed (add): 
stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-7c2dz" podUID="aa5ad557-1e48-4494-9e6c-1e1c1985b57b" Dec 13 13:19:57.319484 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-5181670da0d9588baf456216d436e00f505f60d7901fcf26fab125662ac1bef9-shm.mount: Deactivated successfully. Dec 13 13:19:57.840377 kubelet[1807]: E1213 13:19:57.840277 1807 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:19:58.270501 kubelet[1807]: I1213 13:19:58.270350 1807 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5181670da0d9588baf456216d436e00f505f60d7901fcf26fab125662ac1bef9" Dec 13 13:19:58.271068 containerd[1497]: time="2024-12-13T13:19:58.271030116Z" level=info msg="StopPodSandbox for \"5181670da0d9588baf456216d436e00f505f60d7901fcf26fab125662ac1bef9\"" Dec 13 13:19:58.271545 containerd[1497]: time="2024-12-13T13:19:58.271318647Z" level=info msg="Ensure that sandbox 5181670da0d9588baf456216d436e00f505f60d7901fcf26fab125662ac1bef9 in task-service has been cleanup successfully" Dec 13 13:19:58.271626 containerd[1497]: time="2024-12-13T13:19:58.271553267Z" level=info msg="TearDown network for sandbox \"5181670da0d9588baf456216d436e00f505f60d7901fcf26fab125662ac1bef9\" successfully" Dec 13 13:19:58.271626 containerd[1497]: time="2024-12-13T13:19:58.271594965Z" level=info msg="StopPodSandbox for \"5181670da0d9588baf456216d436e00f505f60d7901fcf26fab125662ac1bef9\" returns successfully" Dec 13 13:19:58.272303 containerd[1497]: time="2024-12-13T13:19:58.272050539Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-7c2dz,Uid:aa5ad557-1e48-4494-9e6c-1e1c1985b57b,Namespace:calico-system,Attempt:1,}" Dec 13 13:19:58.273210 systemd[1]: run-netns-cni\x2d1110fe62\x2d5138\x2dfb1c\x2dceac\x2d6f3b512bd3e8.mount: Deactivated 
successfully. Dec 13 13:19:58.545815 containerd[1497]: time="2024-12-13T13:19:58.545685052Z" level=error msg="Failed to destroy network for sandbox \"b1979a837e2d1c66e44ee30b4493e4f3d29af9ae24912b9c53756fdca6ed1f58\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:19:58.546401 containerd[1497]: time="2024-12-13T13:19:58.546093658Z" level=error msg="encountered an error cleaning up failed sandbox \"b1979a837e2d1c66e44ee30b4493e4f3d29af9ae24912b9c53756fdca6ed1f58\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:19:58.546401 containerd[1497]: time="2024-12-13T13:19:58.546158380Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-7c2dz,Uid:aa5ad557-1e48-4494-9e6c-1e1c1985b57b,Namespace:calico-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"b1979a837e2d1c66e44ee30b4493e4f3d29af9ae24912b9c53756fdca6ed1f58\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:19:58.546480 kubelet[1807]: E1213 13:19:58.546459 1807 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b1979a837e2d1c66e44ee30b4493e4f3d29af9ae24912b9c53756fdca6ed1f58\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:19:58.546598 kubelet[1807]: E1213 13:19:58.546524 1807 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = 
Unknown desc = failed to setup network for sandbox \"b1979a837e2d1c66e44ee30b4493e4f3d29af9ae24912b9c53756fdca6ed1f58\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-7c2dz" Dec 13 13:19:58.546710 kubelet[1807]: E1213 13:19:58.546603 1807 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b1979a837e2d1c66e44ee30b4493e4f3d29af9ae24912b9c53756fdca6ed1f58\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-7c2dz" Dec 13 13:19:58.546710 kubelet[1807]: E1213 13:19:58.546703 1807 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-7c2dz_calico-system(aa5ad557-1e48-4494-9e6c-1e1c1985b57b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-7c2dz_calico-system(aa5ad557-1e48-4494-9e6c-1e1c1985b57b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b1979a837e2d1c66e44ee30b4493e4f3d29af9ae24912b9c53756fdca6ed1f58\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-7c2dz" podUID="aa5ad557-1e48-4494-9e6c-1e1c1985b57b" Dec 13 13:19:58.547760 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-b1979a837e2d1c66e44ee30b4493e4f3d29af9ae24912b9c53756fdca6ed1f58-shm.mount: Deactivated successfully. 
Dec 13 13:19:58.841103 kubelet[1807]: E1213 13:19:58.840925 1807 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:19:59.274453 kubelet[1807]: I1213 13:19:59.274241 1807 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b1979a837e2d1c66e44ee30b4493e4f3d29af9ae24912b9c53756fdca6ed1f58" Dec 13 13:19:59.275471 containerd[1497]: time="2024-12-13T13:19:59.274938613Z" level=info msg="StopPodSandbox for \"b1979a837e2d1c66e44ee30b4493e4f3d29af9ae24912b9c53756fdca6ed1f58\"" Dec 13 13:19:59.275471 containerd[1497]: time="2024-12-13T13:19:59.275211975Z" level=info msg="Ensure that sandbox b1979a837e2d1c66e44ee30b4493e4f3d29af9ae24912b9c53756fdca6ed1f58 in task-service has been cleanup successfully" Dec 13 13:19:59.275997 containerd[1497]: time="2024-12-13T13:19:59.275914463Z" level=info msg="TearDown network for sandbox \"b1979a837e2d1c66e44ee30b4493e4f3d29af9ae24912b9c53756fdca6ed1f58\" successfully" Dec 13 13:19:59.275997 containerd[1497]: time="2024-12-13T13:19:59.275932396Z" level=info msg="StopPodSandbox for \"b1979a837e2d1c66e44ee30b4493e4f3d29af9ae24912b9c53756fdca6ed1f58\" returns successfully" Dec 13 13:19:59.276915 containerd[1497]: time="2024-12-13T13:19:59.276890162Z" level=info msg="StopPodSandbox for \"5181670da0d9588baf456216d436e00f505f60d7901fcf26fab125662ac1bef9\"" Dec 13 13:19:59.277183 containerd[1497]: time="2024-12-13T13:19:59.277113511Z" level=info msg="TearDown network for sandbox \"5181670da0d9588baf456216d436e00f505f60d7901fcf26fab125662ac1bef9\" successfully" Dec 13 13:19:59.277183 containerd[1497]: time="2024-12-13T13:19:59.277127247Z" level=info msg="StopPodSandbox for \"5181670da0d9588baf456216d436e00f505f60d7901fcf26fab125662ac1bef9\" returns successfully" Dec 13 13:19:59.277230 systemd[1]: run-netns-cni\x2d90782e1d\x2d4217\x2d00f1\x2db61b\x2d6c03839f1f60.mount: Deactivated successfully. 
Dec 13 13:19:59.279881 containerd[1497]: time="2024-12-13T13:19:59.279806221Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-7c2dz,Uid:aa5ad557-1e48-4494-9e6c-1e1c1985b57b,Namespace:calico-system,Attempt:2,}" Dec 13 13:19:59.328677 kubelet[1807]: I1213 13:19:59.328610 1807 topology_manager.go:215] "Topology Admit Handler" podUID="7c3df8b3-1d55-49bf-8874-f308a80be555" podNamespace="default" podName="nginx-deployment-6d5f899847-vxqjs" Dec 13 13:19:59.337088 systemd[1]: Created slice kubepods-besteffort-pod7c3df8b3_1d55_49bf_8874_f308a80be555.slice - libcontainer container kubepods-besteffort-pod7c3df8b3_1d55_49bf_8874_f308a80be555.slice. Dec 13 13:19:59.403171 kubelet[1807]: I1213 13:19:59.402996 1807 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-njrss\" (UniqueName: \"kubernetes.io/projected/7c3df8b3-1d55-49bf-8874-f308a80be555-kube-api-access-njrss\") pod \"nginx-deployment-6d5f899847-vxqjs\" (UID: \"7c3df8b3-1d55-49bf-8874-f308a80be555\") " pod="default/nginx-deployment-6d5f899847-vxqjs" Dec 13 13:19:59.549420 containerd[1497]: time="2024-12-13T13:19:59.547765871Z" level=error msg="Failed to destroy network for sandbox \"fd97b474445f1b4171451ca75c3249b2aa6622cab7ed3dd3e92378aac2adc1b2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:19:59.549420 containerd[1497]: time="2024-12-13T13:19:59.548206317Z" level=error msg="encountered an error cleaning up failed sandbox \"fd97b474445f1b4171451ca75c3249b2aa6622cab7ed3dd3e92378aac2adc1b2\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:19:59.549420 containerd[1497]: 
time="2024-12-13T13:19:59.548264386Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-7c2dz,Uid:aa5ad557-1e48-4494-9e6c-1e1c1985b57b,Namespace:calico-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"fd97b474445f1b4171451ca75c3249b2aa6622cab7ed3dd3e92378aac2adc1b2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:19:59.549731 kubelet[1807]: E1213 13:19:59.548537 1807 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fd97b474445f1b4171451ca75c3249b2aa6622cab7ed3dd3e92378aac2adc1b2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:19:59.549731 kubelet[1807]: E1213 13:19:59.548611 1807 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fd97b474445f1b4171451ca75c3249b2aa6622cab7ed3dd3e92378aac2adc1b2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-7c2dz" Dec 13 13:19:59.549731 kubelet[1807]: E1213 13:19:59.548632 1807 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fd97b474445f1b4171451ca75c3249b2aa6622cab7ed3dd3e92378aac2adc1b2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-7c2dz" Dec 13 13:19:59.549901 kubelet[1807]: E1213 13:19:59.548685 
1807 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-7c2dz_calico-system(aa5ad557-1e48-4494-9e6c-1e1c1985b57b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-7c2dz_calico-system(aa5ad557-1e48-4494-9e6c-1e1c1985b57b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"fd97b474445f1b4171451ca75c3249b2aa6622cab7ed3dd3e92378aac2adc1b2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-7c2dz" podUID="aa5ad557-1e48-4494-9e6c-1e1c1985b57b" Dec 13 13:19:59.552232 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-fd97b474445f1b4171451ca75c3249b2aa6622cab7ed3dd3e92378aac2adc1b2-shm.mount: Deactivated successfully. Dec 13 13:19:59.641969 containerd[1497]: time="2024-12-13T13:19:59.641909334Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-6d5f899847-vxqjs,Uid:7c3df8b3-1d55-49bf-8874-f308a80be555,Namespace:default,Attempt:0,}" Dec 13 13:19:59.709362 containerd[1497]: time="2024-12-13T13:19:59.709297951Z" level=error msg="Failed to destroy network for sandbox \"f39a95b5ef18611eb8ace76ef304e5eccaddefdff13632e5002fe9dedaaf8223\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:19:59.709751 containerd[1497]: time="2024-12-13T13:19:59.709718430Z" level=error msg="encountered an error cleaning up failed sandbox \"f39a95b5ef18611eb8ace76ef304e5eccaddefdff13632e5002fe9dedaaf8223\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
Dec 13 13:19:59.709810 containerd[1497]: time="2024-12-13T13:19:59.709786297Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-6d5f899847-vxqjs,Uid:7c3df8b3-1d55-49bf-8874-f308a80be555,Namespace:default,Attempt:0,} failed, error" error="failed to setup network for sandbox \"f39a95b5ef18611eb8ace76ef304e5eccaddefdff13632e5002fe9dedaaf8223\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:19:59.710085 kubelet[1807]: E1213 13:19:59.710027 1807 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f39a95b5ef18611eb8ace76ef304e5eccaddefdff13632e5002fe9dedaaf8223\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:19:59.710085 kubelet[1807]: E1213 13:19:59.710103 1807 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f39a95b5ef18611eb8ace76ef304e5eccaddefdff13632e5002fe9dedaaf8223\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-6d5f899847-vxqjs" Dec 13 13:19:59.710296 kubelet[1807]: E1213 13:19:59.710132 1807 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f39a95b5ef18611eb8ace76ef304e5eccaddefdff13632e5002fe9dedaaf8223\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-6d5f899847-vxqjs" Dec 
13 13:19:59.710296 kubelet[1807]: E1213 13:19:59.710202 1807 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-6d5f899847-vxqjs_default(7c3df8b3-1d55-49bf-8874-f308a80be555)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-6d5f899847-vxqjs_default(7c3df8b3-1d55-49bf-8874-f308a80be555)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f39a95b5ef18611eb8ace76ef304e5eccaddefdff13632e5002fe9dedaaf8223\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-6d5f899847-vxqjs" podUID="7c3df8b3-1d55-49bf-8874-f308a80be555" Dec 13 13:19:59.842016 kubelet[1807]: E1213 13:19:59.841839 1807 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:20:00.279521 kubelet[1807]: I1213 13:20:00.279410 1807 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fd97b474445f1b4171451ca75c3249b2aa6622cab7ed3dd3e92378aac2adc1b2" Dec 13 13:20:00.280489 containerd[1497]: time="2024-12-13T13:20:00.280167596Z" level=info msg="StopPodSandbox for \"fd97b474445f1b4171451ca75c3249b2aa6622cab7ed3dd3e92378aac2adc1b2\"" Dec 13 13:20:00.280489 containerd[1497]: time="2024-12-13T13:20:00.280411263Z" level=info msg="Ensure that sandbox fd97b474445f1b4171451ca75c3249b2aa6622cab7ed3dd3e92378aac2adc1b2 in task-service has been cleanup successfully" Dec 13 13:20:00.281267 containerd[1497]: time="2024-12-13T13:20:00.281156150Z" level=info msg="TearDown network for sandbox \"fd97b474445f1b4171451ca75c3249b2aa6622cab7ed3dd3e92378aac2adc1b2\" successfully" Dec 13 13:20:00.281267 containerd[1497]: time="2024-12-13T13:20:00.281184012Z" level=info msg="StopPodSandbox for 
\"fd97b474445f1b4171451ca75c3249b2aa6622cab7ed3dd3e92378aac2adc1b2\" returns successfully" Dec 13 13:20:00.281454 kubelet[1807]: I1213 13:20:00.281428 1807 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f39a95b5ef18611eb8ace76ef304e5eccaddefdff13632e5002fe9dedaaf8223" Dec 13 13:20:00.281928 containerd[1497]: time="2024-12-13T13:20:00.281864558Z" level=info msg="StopPodSandbox for \"f39a95b5ef18611eb8ace76ef304e5eccaddefdff13632e5002fe9dedaaf8223\"" Dec 13 13:20:00.282030 containerd[1497]: time="2024-12-13T13:20:00.281905886Z" level=info msg="StopPodSandbox for \"b1979a837e2d1c66e44ee30b4493e4f3d29af9ae24912b9c53756fdca6ed1f58\"" Dec 13 13:20:00.282147 containerd[1497]: time="2024-12-13T13:20:00.282098988Z" level=info msg="TearDown network for sandbox \"b1979a837e2d1c66e44ee30b4493e4f3d29af9ae24912b9c53756fdca6ed1f58\" successfully" Dec 13 13:20:00.282147 containerd[1497]: time="2024-12-13T13:20:00.282110109Z" level=info msg="StopPodSandbox for \"b1979a837e2d1c66e44ee30b4493e4f3d29af9ae24912b9c53756fdca6ed1f58\" returns successfully" Dec 13 13:20:00.282217 containerd[1497]: time="2024-12-13T13:20:00.282177665Z" level=info msg="Ensure that sandbox f39a95b5ef18611eb8ace76ef304e5eccaddefdff13632e5002fe9dedaaf8223 in task-service has been cleanup successfully" Dec 13 13:20:00.282190 systemd[1]: run-netns-cni\x2d2526248b\x2d75d0\x2da7d7\x2d34a6\x2d8ef968e0987b.mount: Deactivated successfully. 
Dec 13 13:20:00.282629 containerd[1497]: time="2024-12-13T13:20:00.282518775Z" level=info msg="StopPodSandbox for \"5181670da0d9588baf456216d436e00f505f60d7901fcf26fab125662ac1bef9\"" Dec 13 13:20:00.282665 containerd[1497]: time="2024-12-13T13:20:00.282626948Z" level=info msg="TearDown network for sandbox \"5181670da0d9588baf456216d436e00f505f60d7901fcf26fab125662ac1bef9\" successfully" Dec 13 13:20:00.282665 containerd[1497]: time="2024-12-13T13:20:00.282638810Z" level=info msg="StopPodSandbox for \"5181670da0d9588baf456216d436e00f505f60d7901fcf26fab125662ac1bef9\" returns successfully" Dec 13 13:20:00.283175 containerd[1497]: time="2024-12-13T13:20:00.283152053Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-7c2dz,Uid:aa5ad557-1e48-4494-9e6c-1e1c1985b57b,Namespace:calico-system,Attempt:3,}" Dec 13 13:20:00.284696 containerd[1497]: time="2024-12-13T13:20:00.284662054Z" level=info msg="TearDown network for sandbox \"f39a95b5ef18611eb8ace76ef304e5eccaddefdff13632e5002fe9dedaaf8223\" successfully" Dec 13 13:20:00.284696 containerd[1497]: time="2024-12-13T13:20:00.284691550Z" level=info msg="StopPodSandbox for \"f39a95b5ef18611eb8ace76ef304e5eccaddefdff13632e5002fe9dedaaf8223\" returns successfully" Dec 13 13:20:00.284804 systemd[1]: run-netns-cni\x2d00c0543c\x2dc83a\x2d4dfd\x2de1a3\x2de43668aad082.mount: Deactivated successfully. 
Dec 13 13:20:00.285622 containerd[1497]: time="2024-12-13T13:20:00.285194253Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-6d5f899847-vxqjs,Uid:7c3df8b3-1d55-49bf-8874-f308a80be555,Namespace:default,Attempt:1,}" Dec 13 13:20:00.521437 containerd[1497]: time="2024-12-13T13:20:00.521365288Z" level=error msg="Failed to destroy network for sandbox \"490c138960660889a13ffa5a5998670d1d357e3fdfae0ed3816b507aab994bd4\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:20:00.521919 containerd[1497]: time="2024-12-13T13:20:00.521844867Z" level=error msg="encountered an error cleaning up failed sandbox \"490c138960660889a13ffa5a5998670d1d357e3fdfae0ed3816b507aab994bd4\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:20:00.521919 containerd[1497]: time="2024-12-13T13:20:00.521907314Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-7c2dz,Uid:aa5ad557-1e48-4494-9e6c-1e1c1985b57b,Namespace:calico-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"490c138960660889a13ffa5a5998670d1d357e3fdfae0ed3816b507aab994bd4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:20:00.522220 kubelet[1807]: E1213 13:20:00.522187 1807 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"490c138960660889a13ffa5a5998670d1d357e3fdfae0ed3816b507aab994bd4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check 
that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:20:00.522337 kubelet[1807]: E1213 13:20:00.522255 1807 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"490c138960660889a13ffa5a5998670d1d357e3fdfae0ed3816b507aab994bd4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-7c2dz" Dec 13 13:20:00.522337 kubelet[1807]: E1213 13:20:00.522286 1807 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"490c138960660889a13ffa5a5998670d1d357e3fdfae0ed3816b507aab994bd4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-7c2dz" Dec 13 13:20:00.522414 kubelet[1807]: E1213 13:20:00.522342 1807 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-7c2dz_calico-system(aa5ad557-1e48-4494-9e6c-1e1c1985b57b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-7c2dz_calico-system(aa5ad557-1e48-4494-9e6c-1e1c1985b57b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"490c138960660889a13ffa5a5998670d1d357e3fdfae0ed3816b507aab994bd4\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-7c2dz" podUID="aa5ad557-1e48-4494-9e6c-1e1c1985b57b" Dec 13 13:20:00.537399 containerd[1497]: time="2024-12-13T13:20:00.537207467Z" level=error msg="Failed to destroy network for sandbox 
\"245be961e7e4bfe9213d4e6e31f2aa6d4b1457c41f76b09b4f6d1f02eff388d8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:20:00.537763 containerd[1497]: time="2024-12-13T13:20:00.537734025Z" level=error msg="encountered an error cleaning up failed sandbox \"245be961e7e4bfe9213d4e6e31f2aa6d4b1457c41f76b09b4f6d1f02eff388d8\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:20:00.537912 containerd[1497]: time="2024-12-13T13:20:00.537881952Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-6d5f899847-vxqjs,Uid:7c3df8b3-1d55-49bf-8874-f308a80be555,Namespace:default,Attempt:1,} failed, error" error="failed to setup network for sandbox \"245be961e7e4bfe9213d4e6e31f2aa6d4b1457c41f76b09b4f6d1f02eff388d8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:20:00.538164 kubelet[1807]: E1213 13:20:00.538131 1807 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"245be961e7e4bfe9213d4e6e31f2aa6d4b1457c41f76b09b4f6d1f02eff388d8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:20:00.538239 kubelet[1807]: E1213 13:20:00.538194 1807 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"245be961e7e4bfe9213d4e6e31f2aa6d4b1457c41f76b09b4f6d1f02eff388d8\": plugin type=\"calico\" 
failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-6d5f899847-vxqjs" Dec 13 13:20:00.538239 kubelet[1807]: E1213 13:20:00.538217 1807 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"245be961e7e4bfe9213d4e6e31f2aa6d4b1457c41f76b09b4f6d1f02eff388d8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-6d5f899847-vxqjs" Dec 13 13:20:00.538340 kubelet[1807]: E1213 13:20:00.538290 1807 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-6d5f899847-vxqjs_default(7c3df8b3-1d55-49bf-8874-f308a80be555)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-6d5f899847-vxqjs_default(7c3df8b3-1d55-49bf-8874-f308a80be555)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"245be961e7e4bfe9213d4e6e31f2aa6d4b1457c41f76b09b4f6d1f02eff388d8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-6d5f899847-vxqjs" podUID="7c3df8b3-1d55-49bf-8874-f308a80be555" Dec 13 13:20:00.843375 kubelet[1807]: E1213 13:20:00.842978 1807 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:20:01.295745 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-490c138960660889a13ffa5a5998670d1d357e3fdfae0ed3816b507aab994bd4-shm.mount: Deactivated successfully. 
Dec 13 13:20:01.298478 kubelet[1807]: I1213 13:20:01.298435 1807 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="490c138960660889a13ffa5a5998670d1d357e3fdfae0ed3816b507aab994bd4" Dec 13 13:20:01.299157 containerd[1497]: time="2024-12-13T13:20:01.299012614Z" level=info msg="StopPodSandbox for \"490c138960660889a13ffa5a5998670d1d357e3fdfae0ed3816b507aab994bd4\"" Dec 13 13:20:01.300009 containerd[1497]: time="2024-12-13T13:20:01.299303640Z" level=info msg="Ensure that sandbox 490c138960660889a13ffa5a5998670d1d357e3fdfae0ed3816b507aab994bd4 in task-service has been cleanup successfully" Dec 13 13:20:01.300009 containerd[1497]: time="2024-12-13T13:20:01.299662032Z" level=info msg="TearDown network for sandbox \"490c138960660889a13ffa5a5998670d1d357e3fdfae0ed3816b507aab994bd4\" successfully" Dec 13 13:20:01.300009 containerd[1497]: time="2024-12-13T13:20:01.299681438Z" level=info msg="StopPodSandbox for \"490c138960660889a13ffa5a5998670d1d357e3fdfae0ed3816b507aab994bd4\" returns successfully" Dec 13 13:20:01.301089 systemd[1]: run-netns-cni\x2d7b92b7c5\x2d2004\x2db02e\x2d47fe\x2dac1bb53161d4.mount: Deactivated successfully. 
Dec 13 13:20:01.301740 containerd[1497]: time="2024-12-13T13:20:01.301701827Z" level=info msg="StopPodSandbox for \"fd97b474445f1b4171451ca75c3249b2aa6622cab7ed3dd3e92378aac2adc1b2\"" Dec 13 13:20:01.301830 containerd[1497]: time="2024-12-13T13:20:01.301806353Z" level=info msg="TearDown network for sandbox \"fd97b474445f1b4171451ca75c3249b2aa6622cab7ed3dd3e92378aac2adc1b2\" successfully" Dec 13 13:20:01.301830 containerd[1497]: time="2024-12-13T13:20:01.301825679Z" level=info msg="StopPodSandbox for \"fd97b474445f1b4171451ca75c3249b2aa6622cab7ed3dd3e92378aac2adc1b2\" returns successfully" Dec 13 13:20:01.302208 containerd[1497]: time="2024-12-13T13:20:01.302181827Z" level=info msg="StopPodSandbox for \"b1979a837e2d1c66e44ee30b4493e4f3d29af9ae24912b9c53756fdca6ed1f58\"" Dec 13 13:20:01.302312 containerd[1497]: time="2024-12-13T13:20:01.302291383Z" level=info msg="TearDown network for sandbox \"b1979a837e2d1c66e44ee30b4493e4f3d29af9ae24912b9c53756fdca6ed1f58\" successfully" Dec 13 13:20:01.302342 containerd[1497]: time="2024-12-13T13:20:01.302311611Z" level=info msg="StopPodSandbox for \"b1979a837e2d1c66e44ee30b4493e4f3d29af9ae24912b9c53756fdca6ed1f58\" returns successfully" Dec 13 13:20:01.302548 kubelet[1807]: I1213 13:20:01.302525 1807 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="245be961e7e4bfe9213d4e6e31f2aa6d4b1457c41f76b09b4f6d1f02eff388d8" Dec 13 13:20:01.303289 containerd[1497]: time="2024-12-13T13:20:01.303252155Z" level=info msg="StopPodSandbox for \"245be961e7e4bfe9213d4e6e31f2aa6d4b1457c41f76b09b4f6d1f02eff388d8\"" Dec 13 13:20:01.303597 containerd[1497]: time="2024-12-13T13:20:01.303253697Z" level=info msg="StopPodSandbox for \"5181670da0d9588baf456216d436e00f505f60d7901fcf26fab125662ac1bef9\"" Dec 13 13:20:01.303597 containerd[1497]: time="2024-12-13T13:20:01.303463160Z" level=info msg="Ensure that sandbox 245be961e7e4bfe9213d4e6e31f2aa6d4b1457c41f76b09b4f6d1f02eff388d8 in task-service has been cleanup 
successfully" Dec 13 13:20:01.303597 containerd[1497]: time="2024-12-13T13:20:01.303554812Z" level=info msg="TearDown network for sandbox \"5181670da0d9588baf456216d436e00f505f60d7901fcf26fab125662ac1bef9\" successfully" Dec 13 13:20:01.303830 containerd[1497]: time="2024-12-13T13:20:01.303788029Z" level=info msg="TearDown network for sandbox \"245be961e7e4bfe9213d4e6e31f2aa6d4b1457c41f76b09b4f6d1f02eff388d8\" successfully" Dec 13 13:20:01.303830 containerd[1497]: time="2024-12-13T13:20:01.303813307Z" level=info msg="StopPodSandbox for \"245be961e7e4bfe9213d4e6e31f2aa6d4b1457c41f76b09b4f6d1f02eff388d8\" returns successfully" Dec 13 13:20:01.304017 containerd[1497]: time="2024-12-13T13:20:01.303955604Z" level=info msg="StopPodSandbox for \"5181670da0d9588baf456216d436e00f505f60d7901fcf26fab125662ac1bef9\" returns successfully" Dec 13 13:20:01.304461 containerd[1497]: time="2024-12-13T13:20:01.304300661Z" level=info msg="StopPodSandbox for \"f39a95b5ef18611eb8ace76ef304e5eccaddefdff13632e5002fe9dedaaf8223\"" Dec 13 13:20:01.304461 containerd[1497]: time="2024-12-13T13:20:01.304403624Z" level=info msg="TearDown network for sandbox \"f39a95b5ef18611eb8ace76ef304e5eccaddefdff13632e5002fe9dedaaf8223\" successfully" Dec 13 13:20:01.304461 containerd[1497]: time="2024-12-13T13:20:01.304419113Z" level=info msg="StopPodSandbox for \"f39a95b5ef18611eb8ace76ef304e5eccaddefdff13632e5002fe9dedaaf8223\" returns successfully" Dec 13 13:20:01.306092 containerd[1497]: time="2024-12-13T13:20:01.305010242Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-6d5f899847-vxqjs,Uid:7c3df8b3-1d55-49bf-8874-f308a80be555,Namespace:default,Attempt:2,}" Dec 13 13:20:01.306092 containerd[1497]: time="2024-12-13T13:20:01.305692962Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-7c2dz,Uid:aa5ad557-1e48-4494-9e6c-1e1c1985b57b,Namespace:calico-system,Attempt:4,}" Dec 13 13:20:01.305860 systemd[1]: 
run-netns-cni\x2d0d22107c\x2d4f38\x2d5dcf\x2d1df5\x2d6a83c6417db7.mount: Deactivated successfully. Dec 13 13:20:01.658993 containerd[1497]: time="2024-12-13T13:20:01.657753742Z" level=error msg="Failed to destroy network for sandbox \"c9b83eaef8a89a15a1cab135efaa53e3589caa4ea69b73dba41459e7be7db33d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:20:01.658993 containerd[1497]: time="2024-12-13T13:20:01.658893609Z" level=error msg="encountered an error cleaning up failed sandbox \"c9b83eaef8a89a15a1cab135efaa53e3589caa4ea69b73dba41459e7be7db33d\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:20:01.658993 containerd[1497]: time="2024-12-13T13:20:01.658971105Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-6d5f899847-vxqjs,Uid:7c3df8b3-1d55-49bf-8874-f308a80be555,Namespace:default,Attempt:2,} failed, error" error="failed to setup network for sandbox \"c9b83eaef8a89a15a1cab135efaa53e3589caa4ea69b73dba41459e7be7db33d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:20:01.659326 kubelet[1807]: E1213 13:20:01.659283 1807 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c9b83eaef8a89a15a1cab135efaa53e3589caa4ea69b73dba41459e7be7db33d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:20:01.659467 kubelet[1807]: E1213 13:20:01.659369 1807 
kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c9b83eaef8a89a15a1cab135efaa53e3589caa4ea69b73dba41459e7be7db33d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-6d5f899847-vxqjs" Dec 13 13:20:01.659467 kubelet[1807]: E1213 13:20:01.659392 1807 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c9b83eaef8a89a15a1cab135efaa53e3589caa4ea69b73dba41459e7be7db33d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-6d5f899847-vxqjs" Dec 13 13:20:01.659467 kubelet[1807]: E1213 13:20:01.659458 1807 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-6d5f899847-vxqjs_default(7c3df8b3-1d55-49bf-8874-f308a80be555)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-6d5f899847-vxqjs_default(7c3df8b3-1d55-49bf-8874-f308a80be555)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c9b83eaef8a89a15a1cab135efaa53e3589caa4ea69b73dba41459e7be7db33d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-6d5f899847-vxqjs" podUID="7c3df8b3-1d55-49bf-8874-f308a80be555" Dec 13 13:20:01.674393 containerd[1497]: time="2024-12-13T13:20:01.674342732Z" level=error msg="Failed to destroy network for sandbox \"aed468efa48fc229082956c20361999e7c649ede4a192f76c8733c0d24d405b9\"" error="plugin type=\"calico\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:20:01.674899 containerd[1497]: time="2024-12-13T13:20:01.674807554Z" level=error msg="encountered an error cleaning up failed sandbox \"aed468efa48fc229082956c20361999e7c649ede4a192f76c8733c0d24d405b9\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:20:01.674899 containerd[1497]: time="2024-12-13T13:20:01.674872185Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-7c2dz,Uid:aa5ad557-1e48-4494-9e6c-1e1c1985b57b,Namespace:calico-system,Attempt:4,} failed, error" error="failed to setup network for sandbox \"aed468efa48fc229082956c20361999e7c649ede4a192f76c8733c0d24d405b9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:20:01.675180 kubelet[1807]: E1213 13:20:01.675156 1807 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"aed468efa48fc229082956c20361999e7c649ede4a192f76c8733c0d24d405b9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:20:01.675303 kubelet[1807]: E1213 13:20:01.675287 1807 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"aed468efa48fc229082956c20361999e7c649ede4a192f76c8733c0d24d405b9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has 
mounted /var/lib/calico/" pod="calico-system/csi-node-driver-7c2dz" Dec 13 13:20:01.675379 kubelet[1807]: E1213 13:20:01.675315 1807 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"aed468efa48fc229082956c20361999e7c649ede4a192f76c8733c0d24d405b9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-7c2dz" Dec 13 13:20:01.675379 kubelet[1807]: E1213 13:20:01.675375 1807 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-7c2dz_calico-system(aa5ad557-1e48-4494-9e6c-1e1c1985b57b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-7c2dz_calico-system(aa5ad557-1e48-4494-9e6c-1e1c1985b57b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"aed468efa48fc229082956c20361999e7c649ede4a192f76c8733c0d24d405b9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-7c2dz" podUID="aa5ad557-1e48-4494-9e6c-1e1c1985b57b" Dec 13 13:20:01.828984 kubelet[1807]: E1213 13:20:01.828918 1807 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:20:01.843269 kubelet[1807]: E1213 13:20:01.843196 1807 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:20:02.295329 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-c9b83eaef8a89a15a1cab135efaa53e3589caa4ea69b73dba41459e7be7db33d-shm.mount: Deactivated successfully. 
Dec 13 13:20:02.306625 kubelet[1807]: I1213 13:20:02.306563 1807 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="aed468efa48fc229082956c20361999e7c649ede4a192f76c8733c0d24d405b9" Dec 13 13:20:02.307458 containerd[1497]: time="2024-12-13T13:20:02.307422422Z" level=info msg="StopPodSandbox for \"aed468efa48fc229082956c20361999e7c649ede4a192f76c8733c0d24d405b9\"" Dec 13 13:20:02.307837 containerd[1497]: time="2024-12-13T13:20:02.307668213Z" level=info msg="Ensure that sandbox aed468efa48fc229082956c20361999e7c649ede4a192f76c8733c0d24d405b9 in task-service has been cleanup successfully" Dec 13 13:20:02.309708 systemd[1]: run-netns-cni\x2dbf29115e\x2d50fe\x2d60c9\x2de796\x2d560052f08349.mount: Deactivated successfully. Dec 13 13:20:02.310840 containerd[1497]: time="2024-12-13T13:20:02.310479205Z" level=info msg="TearDown network for sandbox \"aed468efa48fc229082956c20361999e7c649ede4a192f76c8733c0d24d405b9\" successfully" Dec 13 13:20:02.310840 containerd[1497]: time="2024-12-13T13:20:02.310501126Z" level=info msg="StopPodSandbox for \"aed468efa48fc229082956c20361999e7c649ede4a192f76c8733c0d24d405b9\" returns successfully" Dec 13 13:20:02.310840 containerd[1497]: time="2024-12-13T13:20:02.310789056Z" level=info msg="StopPodSandbox for \"490c138960660889a13ffa5a5998670d1d357e3fdfae0ed3816b507aab994bd4\"" Dec 13 13:20:02.310965 containerd[1497]: time="2024-12-13T13:20:02.310901958Z" level=info msg="TearDown network for sandbox \"490c138960660889a13ffa5a5998670d1d357e3fdfae0ed3816b507aab994bd4\" successfully" Dec 13 13:20:02.310965 containerd[1497]: time="2024-12-13T13:20:02.310944267Z" level=info msg="StopPodSandbox for \"490c138960660889a13ffa5a5998670d1d357e3fdfae0ed3816b507aab994bd4\" returns successfully" Dec 13 13:20:02.311651 kubelet[1807]: I1213 13:20:02.311619 1807 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c9b83eaef8a89a15a1cab135efaa53e3589caa4ea69b73dba41459e7be7db33d" Dec 13 
13:20:02.312502 containerd[1497]: time="2024-12-13T13:20:02.312469127Z" level=info msg="StopPodSandbox for \"fd97b474445f1b4171451ca75c3249b2aa6622cab7ed3dd3e92378aac2adc1b2\"" Dec 13 13:20:02.312657 containerd[1497]: time="2024-12-13T13:20:02.312637192Z" level=info msg="StopPodSandbox for \"c9b83eaef8a89a15a1cab135efaa53e3589caa4ea69b73dba41459e7be7db33d\"" Dec 13 13:20:02.313074 containerd[1497]: time="2024-12-13T13:20:02.312699158Z" level=info msg="TearDown network for sandbox \"fd97b474445f1b4171451ca75c3249b2aa6622cab7ed3dd3e92378aac2adc1b2\" successfully" Dec 13 13:20:02.313074 containerd[1497]: time="2024-12-13T13:20:02.312838379Z" level=info msg="StopPodSandbox for \"fd97b474445f1b4171451ca75c3249b2aa6622cab7ed3dd3e92378aac2adc1b2\" returns successfully" Dec 13 13:20:02.313074 containerd[1497]: time="2024-12-13T13:20:02.312917507Z" level=info msg="Ensure that sandbox c9b83eaef8a89a15a1cab135efaa53e3589caa4ea69b73dba41459e7be7db33d in task-service has been cleanup successfully" Dec 13 13:20:02.313167 containerd[1497]: time="2024-12-13T13:20:02.313107474Z" level=info msg="StopPodSandbox for \"b1979a837e2d1c66e44ee30b4493e4f3d29af9ae24912b9c53756fdca6ed1f58\"" Dec 13 13:20:02.313231 containerd[1497]: time="2024-12-13T13:20:02.313195248Z" level=info msg="TearDown network for sandbox \"b1979a837e2d1c66e44ee30b4493e4f3d29af9ae24912b9c53756fdca6ed1f58\" successfully" Dec 13 13:20:02.313231 containerd[1497]: time="2024-12-13T13:20:02.313226317Z" level=info msg="StopPodSandbox for \"b1979a837e2d1c66e44ee30b4493e4f3d29af9ae24912b9c53756fdca6ed1f58\" returns successfully" Dec 13 13:20:02.313917 containerd[1497]: time="2024-12-13T13:20:02.313890362Z" level=info msg="StopPodSandbox for \"5181670da0d9588baf456216d436e00f505f60d7901fcf26fab125662ac1bef9\"" Dec 13 13:20:02.314021 containerd[1497]: time="2024-12-13T13:20:02.314001731Z" level=info msg="TearDown network for sandbox \"c9b83eaef8a89a15a1cab135efaa53e3589caa4ea69b73dba41459e7be7db33d\" successfully" Dec 13 
13:20:02.314167 containerd[1497]: time="2024-12-13T13:20:02.314152363Z" level=info msg="StopPodSandbox for \"c9b83eaef8a89a15a1cab135efaa53e3589caa4ea69b73dba41459e7be7db33d\" returns successfully" Dec 13 13:20:02.314370 containerd[1497]: time="2024-12-13T13:20:02.314012701Z" level=info msg="TearDown network for sandbox \"5181670da0d9588baf456216d436e00f505f60d7901fcf26fab125662ac1bef9\" successfully" Dec 13 13:20:02.314370 containerd[1497]: time="2024-12-13T13:20:02.314340296Z" level=info msg="StopPodSandbox for \"5181670da0d9588baf456216d436e00f505f60d7901fcf26fab125662ac1bef9\" returns successfully" Dec 13 13:20:02.315090 containerd[1497]: time="2024-12-13T13:20:02.315032133Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-7c2dz,Uid:aa5ad557-1e48-4494-9e6c-1e1c1985b57b,Namespace:calico-system,Attempt:5,}" Dec 13 13:20:02.315090 containerd[1497]: time="2024-12-13T13:20:02.315063582Z" level=info msg="StopPodSandbox for \"245be961e7e4bfe9213d4e6e31f2aa6d4b1457c41f76b09b4f6d1f02eff388d8\"" Dec 13 13:20:02.315170 containerd[1497]: time="2024-12-13T13:20:02.315150195Z" level=info msg="TearDown network for sandbox \"245be961e7e4bfe9213d4e6e31f2aa6d4b1457c41f76b09b4f6d1f02eff388d8\" successfully" Dec 13 13:20:02.315170 containerd[1497]: time="2024-12-13T13:20:02.315161335Z" level=info msg="StopPodSandbox for \"245be961e7e4bfe9213d4e6e31f2aa6d4b1457c41f76b09b4f6d1f02eff388d8\" returns successfully" Dec 13 13:20:02.315655 systemd[1]: run-netns-cni\x2d0157e582\x2d4227\x2d85d0\x2d4b8e\x2dbf0b025ae187.mount: Deactivated successfully. 
Dec 13 13:20:02.315836 containerd[1497]: time="2024-12-13T13:20:02.315656063Z" level=info msg="StopPodSandbox for \"f39a95b5ef18611eb8ace76ef304e5eccaddefdff13632e5002fe9dedaaf8223\"" Dec 13 13:20:02.315836 containerd[1497]: time="2024-12-13T13:20:02.315730042Z" level=info msg="TearDown network for sandbox \"f39a95b5ef18611eb8ace76ef304e5eccaddefdff13632e5002fe9dedaaf8223\" successfully" Dec 13 13:20:02.315836 containerd[1497]: time="2024-12-13T13:20:02.315766891Z" level=info msg="StopPodSandbox for \"f39a95b5ef18611eb8ace76ef304e5eccaddefdff13632e5002fe9dedaaf8223\" returns successfully" Dec 13 13:20:02.316174 containerd[1497]: time="2024-12-13T13:20:02.316146713Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-6d5f899847-vxqjs,Uid:7c3df8b3-1d55-49bf-8874-f308a80be555,Namespace:default,Attempt:3,}" Dec 13 13:20:02.838562 containerd[1497]: time="2024-12-13T13:20:02.838488250Z" level=error msg="Failed to destroy network for sandbox \"c10e7185138d8e2075b85d37988ce7cbb462b86980ae0482189f81e46c483ff9\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:20:02.839747 containerd[1497]: time="2024-12-13T13:20:02.839703149Z" level=error msg="encountered an error cleaning up failed sandbox \"c10e7185138d8e2075b85d37988ce7cbb462b86980ae0482189f81e46c483ff9\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:20:02.839897 containerd[1497]: time="2024-12-13T13:20:02.839801413Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-7c2dz,Uid:aa5ad557-1e48-4494-9e6c-1e1c1985b57b,Namespace:calico-system,Attempt:5,} failed, error" error="failed to setup network for sandbox 
\"c10e7185138d8e2075b85d37988ce7cbb462b86980ae0482189f81e46c483ff9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:20:02.840165 kubelet[1807]: E1213 13:20:02.840114 1807 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c10e7185138d8e2075b85d37988ce7cbb462b86980ae0482189f81e46c483ff9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:20:02.840165 kubelet[1807]: E1213 13:20:02.840183 1807 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c10e7185138d8e2075b85d37988ce7cbb462b86980ae0482189f81e46c483ff9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-7c2dz" Dec 13 13:20:02.840378 kubelet[1807]: E1213 13:20:02.840233 1807 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c10e7185138d8e2075b85d37988ce7cbb462b86980ae0482189f81e46c483ff9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-7c2dz" Dec 13 13:20:02.840378 kubelet[1807]: E1213 13:20:02.840290 1807 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-7c2dz_calico-system(aa5ad557-1e48-4494-9e6c-1e1c1985b57b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"csi-node-driver-7c2dz_calico-system(aa5ad557-1e48-4494-9e6c-1e1c1985b57b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c10e7185138d8e2075b85d37988ce7cbb462b86980ae0482189f81e46c483ff9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-7c2dz" podUID="aa5ad557-1e48-4494-9e6c-1e1c1985b57b" Dec 13 13:20:02.843779 kubelet[1807]: E1213 13:20:02.843735 1807 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:20:02.853955 containerd[1497]: time="2024-12-13T13:20:02.853893480Z" level=error msg="Failed to destroy network for sandbox \"0685eff49198f41de56e18476f31c876f26b4cf1cf6bc05de716df980b71c88a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:20:02.854334 containerd[1497]: time="2024-12-13T13:20:02.854297889Z" level=error msg="encountered an error cleaning up failed sandbox \"0685eff49198f41de56e18476f31c876f26b4cf1cf6bc05de716df980b71c88a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:20:02.854399 containerd[1497]: time="2024-12-13T13:20:02.854370235Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-6d5f899847-vxqjs,Uid:7c3df8b3-1d55-49bf-8874-f308a80be555,Namespace:default,Attempt:3,} failed, error" error="failed to setup network for sandbox \"0685eff49198f41de56e18476f31c876f26b4cf1cf6bc05de716df980b71c88a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:20:02.854793 kubelet[1807]: E1213 13:20:02.854651 1807 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0685eff49198f41de56e18476f31c876f26b4cf1cf6bc05de716df980b71c88a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:20:02.854793 kubelet[1807]: E1213 13:20:02.854716 1807 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0685eff49198f41de56e18476f31c876f26b4cf1cf6bc05de716df980b71c88a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-6d5f899847-vxqjs" Dec 13 13:20:02.854793 kubelet[1807]: E1213 13:20:02.854747 1807 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0685eff49198f41de56e18476f31c876f26b4cf1cf6bc05de716df980b71c88a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-6d5f899847-vxqjs" Dec 13 13:20:02.854916 kubelet[1807]: E1213 13:20:02.854807 1807 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-6d5f899847-vxqjs_default(7c3df8b3-1d55-49bf-8874-f308a80be555)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-6d5f899847-vxqjs_default(7c3df8b3-1d55-49bf-8874-f308a80be555)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"0685eff49198f41de56e18476f31c876f26b4cf1cf6bc05de716df980b71c88a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-6d5f899847-vxqjs" podUID="7c3df8b3-1d55-49bf-8874-f308a80be555" Dec 13 13:20:03.295256 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-c10e7185138d8e2075b85d37988ce7cbb462b86980ae0482189f81e46c483ff9-shm.mount: Deactivated successfully. Dec 13 13:20:03.317546 kubelet[1807]: I1213 13:20:03.317511 1807 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0685eff49198f41de56e18476f31c876f26b4cf1cf6bc05de716df980b71c88a" Dec 13 13:20:03.318329 containerd[1497]: time="2024-12-13T13:20:03.318280302Z" level=info msg="StopPodSandbox for \"0685eff49198f41de56e18476f31c876f26b4cf1cf6bc05de716df980b71c88a\"" Dec 13 13:20:03.318647 containerd[1497]: time="2024-12-13T13:20:03.318516454Z" level=info msg="Ensure that sandbox 0685eff49198f41de56e18476f31c876f26b4cf1cf6bc05de716df980b71c88a in task-service has been cleanup successfully" Dec 13 13:20:03.318996 containerd[1497]: time="2024-12-13T13:20:03.318969324Z" level=info msg="TearDown network for sandbox \"0685eff49198f41de56e18476f31c876f26b4cf1cf6bc05de716df980b71c88a\" successfully" Dec 13 13:20:03.318996 containerd[1497]: time="2024-12-13T13:20:03.318992367Z" level=info msg="StopPodSandbox for \"0685eff49198f41de56e18476f31c876f26b4cf1cf6bc05de716df980b71c88a\" returns successfully" Dec 13 13:20:03.319484 containerd[1497]: time="2024-12-13T13:20:03.319432452Z" level=info msg="StopPodSandbox for \"c9b83eaef8a89a15a1cab135efaa53e3589caa4ea69b73dba41459e7be7db33d\"" Dec 13 13:20:03.319585 containerd[1497]: time="2024-12-13T13:20:03.319550263Z" level=info msg="TearDown network for sandbox \"c9b83eaef8a89a15a1cab135efaa53e3589caa4ea69b73dba41459e7be7db33d\" successfully" Dec 13 13:20:03.319643 
containerd[1497]: time="2024-12-13T13:20:03.319586671Z" level=info msg="StopPodSandbox for \"c9b83eaef8a89a15a1cab135efaa53e3589caa4ea69b73dba41459e7be7db33d\" returns successfully" Dec 13 13:20:03.319970 containerd[1497]: time="2024-12-13T13:20:03.319948109Z" level=info msg="StopPodSandbox for \"245be961e7e4bfe9213d4e6e31f2aa6d4b1457c41f76b09b4f6d1f02eff388d8\"" Dec 13 13:20:03.320041 containerd[1497]: time="2024-12-13T13:20:03.320024432Z" level=info msg="TearDown network for sandbox \"245be961e7e4bfe9213d4e6e31f2aa6d4b1457c41f76b09b4f6d1f02eff388d8\" successfully" Dec 13 13:20:03.320041 containerd[1497]: time="2024-12-13T13:20:03.320037968Z" level=info msg="StopPodSandbox for \"245be961e7e4bfe9213d4e6e31f2aa6d4b1457c41f76b09b4f6d1f02eff388d8\" returns successfully" Dec 13 13:20:03.320455 containerd[1497]: time="2024-12-13T13:20:03.320433439Z" level=info msg="StopPodSandbox for \"f39a95b5ef18611eb8ace76ef304e5eccaddefdff13632e5002fe9dedaaf8223\"" Dec 13 13:20:03.320525 containerd[1497]: time="2024-12-13T13:20:03.320507488Z" level=info msg="TearDown network for sandbox \"f39a95b5ef18611eb8ace76ef304e5eccaddefdff13632e5002fe9dedaaf8223\" successfully" Dec 13 13:20:03.320525 containerd[1497]: time="2024-12-13T13:20:03.320521955Z" level=info msg="StopPodSandbox for \"f39a95b5ef18611eb8ace76ef304e5eccaddefdff13632e5002fe9dedaaf8223\" returns successfully" Dec 13 13:20:03.321237 systemd[1]: run-netns-cni\x2d8174c0f0\x2d3478\x2d38ce\x2d29be\x2de9dcbaf9df2d.mount: Deactivated successfully. 
Dec 13 13:20:03.322711 containerd[1497]: time="2024-12-13T13:20:03.322415356Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-6d5f899847-vxqjs,Uid:7c3df8b3-1d55-49bf-8874-f308a80be555,Namespace:default,Attempt:4,}" Dec 13 13:20:03.323403 kubelet[1807]: I1213 13:20:03.323366 1807 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c10e7185138d8e2075b85d37988ce7cbb462b86980ae0482189f81e46c483ff9" Dec 13 13:20:03.324171 containerd[1497]: time="2024-12-13T13:20:03.324000749Z" level=info msg="StopPodSandbox for \"c10e7185138d8e2075b85d37988ce7cbb462b86980ae0482189f81e46c483ff9\"" Dec 13 13:20:03.324171 containerd[1497]: time="2024-12-13T13:20:03.324238626Z" level=info msg="Ensure that sandbox c10e7185138d8e2075b85d37988ce7cbb462b86980ae0482189f81e46c483ff9 in task-service has been cleanup successfully" Dec 13 13:20:03.324629 containerd[1497]: time="2024-12-13T13:20:03.324447147Z" level=info msg="TearDown network for sandbox \"c10e7185138d8e2075b85d37988ce7cbb462b86980ae0482189f81e46c483ff9\" successfully" Dec 13 13:20:03.324629 containerd[1497]: time="2024-12-13T13:20:03.324460151Z" level=info msg="StopPodSandbox for \"c10e7185138d8e2075b85d37988ce7cbb462b86980ae0482189f81e46c483ff9\" returns successfully" Dec 13 13:20:03.326311 systemd[1]: run-netns-cni\x2d454d488f\x2d4119\x2df20f\x2dade2\x2d29a6da2b3ec8.mount: Deactivated successfully. 
Dec 13 13:20:03.327015 containerd[1497]: time="2024-12-13T13:20:03.326741239Z" level=info msg="StopPodSandbox for \"aed468efa48fc229082956c20361999e7c649ede4a192f76c8733c0d24d405b9\"" Dec 13 13:20:03.327015 containerd[1497]: time="2024-12-13T13:20:03.326837860Z" level=info msg="TearDown network for sandbox \"aed468efa48fc229082956c20361999e7c649ede4a192f76c8733c0d24d405b9\" successfully" Dec 13 13:20:03.327015 containerd[1497]: time="2024-12-13T13:20:03.326849853Z" level=info msg="StopPodSandbox for \"aed468efa48fc229082956c20361999e7c649ede4a192f76c8733c0d24d405b9\" returns successfully" Dec 13 13:20:03.327153 containerd[1497]: time="2024-12-13T13:20:03.327104440Z" level=info msg="StopPodSandbox for \"490c138960660889a13ffa5a5998670d1d357e3fdfae0ed3816b507aab994bd4\"" Dec 13 13:20:03.327239 containerd[1497]: time="2024-12-13T13:20:03.327207934Z" level=info msg="TearDown network for sandbox \"490c138960660889a13ffa5a5998670d1d357e3fdfae0ed3816b507aab994bd4\" successfully" Dec 13 13:20:03.327239 containerd[1497]: time="2024-12-13T13:20:03.327234634Z" level=info msg="StopPodSandbox for \"490c138960660889a13ffa5a5998670d1d357e3fdfae0ed3816b507aab994bd4\" returns successfully" Dec 13 13:20:03.327601 containerd[1497]: time="2024-12-13T13:20:03.327529357Z" level=info msg="StopPodSandbox for \"fd97b474445f1b4171451ca75c3249b2aa6622cab7ed3dd3e92378aac2adc1b2\"" Dec 13 13:20:03.327694 containerd[1497]: time="2024-12-13T13:20:03.327649953Z" level=info msg="TearDown network for sandbox \"fd97b474445f1b4171451ca75c3249b2aa6622cab7ed3dd3e92378aac2adc1b2\" successfully" Dec 13 13:20:03.327694 containerd[1497]: time="2024-12-13T13:20:03.327666344Z" level=info msg="StopPodSandbox for \"fd97b474445f1b4171451ca75c3249b2aa6622cab7ed3dd3e92378aac2adc1b2\" returns successfully" Dec 13 13:20:03.328364 containerd[1497]: time="2024-12-13T13:20:03.327939015Z" level=info msg="StopPodSandbox for \"b1979a837e2d1c66e44ee30b4493e4f3d29af9ae24912b9c53756fdca6ed1f58\"" Dec 13 13:20:03.328364 
containerd[1497]: time="2024-12-13T13:20:03.328029605Z" level=info msg="TearDown network for sandbox \"b1979a837e2d1c66e44ee30b4493e4f3d29af9ae24912b9c53756fdca6ed1f58\" successfully" Dec 13 13:20:03.328364 containerd[1497]: time="2024-12-13T13:20:03.328042629Z" level=info msg="StopPodSandbox for \"b1979a837e2d1c66e44ee30b4493e4f3d29af9ae24912b9c53756fdca6ed1f58\" returns successfully" Dec 13 13:20:03.329049 containerd[1497]: time="2024-12-13T13:20:03.328831158Z" level=info msg="StopPodSandbox for \"5181670da0d9588baf456216d436e00f505f60d7901fcf26fab125662ac1bef9\"" Dec 13 13:20:03.329049 containerd[1497]: time="2024-12-13T13:20:03.328918973Z" level=info msg="TearDown network for sandbox \"5181670da0d9588baf456216d436e00f505f60d7901fcf26fab125662ac1bef9\" successfully" Dec 13 13:20:03.329049 containerd[1497]: time="2024-12-13T13:20:03.328932989Z" level=info msg="StopPodSandbox for \"5181670da0d9588baf456216d436e00f505f60d7901fcf26fab125662ac1bef9\" returns successfully" Dec 13 13:20:03.329376 containerd[1497]: time="2024-12-13T13:20:03.329352656Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-7c2dz,Uid:aa5ad557-1e48-4494-9e6c-1e1c1985b57b,Namespace:calico-system,Attempt:6,}" Dec 13 13:20:03.689816 containerd[1497]: time="2024-12-13T13:20:03.689673296Z" level=error msg="Failed to destroy network for sandbox \"01031fc928e5c23b49fdb474a14d1c353344ab1b3d6ec1195d02e8bd0443427e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:20:03.690470 containerd[1497]: time="2024-12-13T13:20:03.690434814Z" level=error msg="encountered an error cleaning up failed sandbox \"01031fc928e5c23b49fdb474a14d1c353344ab1b3d6ec1195d02e8bd0443427e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:20:03.691463 containerd[1497]: time="2024-12-13T13:20:03.690613609Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-7c2dz,Uid:aa5ad557-1e48-4494-9e6c-1e1c1985b57b,Namespace:calico-system,Attempt:6,} failed, error" error="failed to setup network for sandbox \"01031fc928e5c23b49fdb474a14d1c353344ab1b3d6ec1195d02e8bd0443427e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:20:03.691636 kubelet[1807]: E1213 13:20:03.691431 1807 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"01031fc928e5c23b49fdb474a14d1c353344ab1b3d6ec1195d02e8bd0443427e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:20:03.691636 kubelet[1807]: E1213 13:20:03.691494 1807 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"01031fc928e5c23b49fdb474a14d1c353344ab1b3d6ec1195d02e8bd0443427e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-7c2dz" Dec 13 13:20:03.691636 kubelet[1807]: E1213 13:20:03.691519 1807 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"01031fc928e5c23b49fdb474a14d1c353344ab1b3d6ec1195d02e8bd0443427e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" pod="calico-system/csi-node-driver-7c2dz" Dec 13 13:20:03.691766 kubelet[1807]: E1213 13:20:03.691625 1807 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-7c2dz_calico-system(aa5ad557-1e48-4494-9e6c-1e1c1985b57b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-7c2dz_calico-system(aa5ad557-1e48-4494-9e6c-1e1c1985b57b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"01031fc928e5c23b49fdb474a14d1c353344ab1b3d6ec1195d02e8bd0443427e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-7c2dz" podUID="aa5ad557-1e48-4494-9e6c-1e1c1985b57b" Dec 13 13:20:03.692311 containerd[1497]: time="2024-12-13T13:20:03.692243586Z" level=error msg="Failed to destroy network for sandbox \"06786ad6aaa6a841bee57a4af8e6b8dc277124d5f6c65bea1ef57e10828fee05\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:20:03.692850 containerd[1497]: time="2024-12-13T13:20:03.692816811Z" level=error msg="encountered an error cleaning up failed sandbox \"06786ad6aaa6a841bee57a4af8e6b8dc277124d5f6c65bea1ef57e10828fee05\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:20:03.692890 containerd[1497]: time="2024-12-13T13:20:03.692875401Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-6d5f899847-vxqjs,Uid:7c3df8b3-1d55-49bf-8874-f308a80be555,Namespace:default,Attempt:4,} failed, error" error="failed to setup network for sandbox 
\"06786ad6aaa6a841bee57a4af8e6b8dc277124d5f6c65bea1ef57e10828fee05\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:20:03.693146 kubelet[1807]: E1213 13:20:03.693102 1807 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"06786ad6aaa6a841bee57a4af8e6b8dc277124d5f6c65bea1ef57e10828fee05\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:20:03.693234 kubelet[1807]: E1213 13:20:03.693193 1807 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"06786ad6aaa6a841bee57a4af8e6b8dc277124d5f6c65bea1ef57e10828fee05\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-6d5f899847-vxqjs" Dec 13 13:20:03.693234 kubelet[1807]: E1213 13:20:03.693222 1807 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"06786ad6aaa6a841bee57a4af8e6b8dc277124d5f6c65bea1ef57e10828fee05\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-6d5f899847-vxqjs" Dec 13 13:20:03.693331 kubelet[1807]: E1213 13:20:03.693288 1807 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-6d5f899847-vxqjs_default(7c3df8b3-1d55-49bf-8874-f308a80be555)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"nginx-deployment-6d5f899847-vxqjs_default(7c3df8b3-1d55-49bf-8874-f308a80be555)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"06786ad6aaa6a841bee57a4af8e6b8dc277124d5f6c65bea1ef57e10828fee05\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-6d5f899847-vxqjs" podUID="7c3df8b3-1d55-49bf-8874-f308a80be555" Dec 13 13:20:03.693905 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-01031fc928e5c23b49fdb474a14d1c353344ab1b3d6ec1195d02e8bd0443427e-shm.mount: Deactivated successfully. Dec 13 13:20:03.844109 kubelet[1807]: E1213 13:20:03.844052 1807 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:20:03.882974 containerd[1497]: time="2024-12-13T13:20:03.882892180Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:20:03.883685 containerd[1497]: time="2024-12-13T13:20:03.883611069Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.1: active requests=0, bytes read=142742010" Dec 13 13:20:03.884924 containerd[1497]: time="2024-12-13T13:20:03.884879577Z" level=info msg="ImageCreate event name:\"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:20:03.886915 containerd[1497]: time="2024-12-13T13:20:03.886887172Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:20:03.887545 containerd[1497]: time="2024-12-13T13:20:03.887501675Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.1\" with image id 
\"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\", size \"142741872\" in 6.617765142s" Dec 13 13:20:03.887545 containerd[1497]: time="2024-12-13T13:20:03.887537051Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\" returns image reference \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\"" Dec 13 13:20:03.899084 containerd[1497]: time="2024-12-13T13:20:03.899033170Z" level=info msg="CreateContainer within sandbox \"f25702e00ae68e8ce30917fce549c53e908340ca13d885ac00b2e071033766d0\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Dec 13 13:20:03.918855 containerd[1497]: time="2024-12-13T13:20:03.918775975Z" level=info msg="CreateContainer within sandbox \"f25702e00ae68e8ce30917fce549c53e908340ca13d885ac00b2e071033766d0\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"bbde766895b389d98a1110b62c2d95e07a3408b9b269af0e5ecd0bb166671f61\"" Dec 13 13:20:03.919673 containerd[1497]: time="2024-12-13T13:20:03.919620058Z" level=info msg="StartContainer for \"bbde766895b389d98a1110b62c2d95e07a3408b9b269af0e5ecd0bb166671f61\"" Dec 13 13:20:04.031771 systemd[1]: Started cri-containerd-bbde766895b389d98a1110b62c2d95e07a3408b9b269af0e5ecd0bb166671f61.scope - libcontainer container bbde766895b389d98a1110b62c2d95e07a3408b9b269af0e5ecd0bb166671f61. Dec 13 13:20:04.097382 containerd[1497]: time="2024-12-13T13:20:04.097336740Z" level=info msg="StartContainer for \"bbde766895b389d98a1110b62c2d95e07a3408b9b269af0e5ecd0bb166671f61\" returns successfully" Dec 13 13:20:04.175700 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Dec 13 13:20:04.175859 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
Dec 13 13:20:04.297133 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-06786ad6aaa6a841bee57a4af8e6b8dc277124d5f6c65bea1ef57e10828fee05-shm.mount: Deactivated successfully. Dec 13 13:20:04.297262 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4214374574.mount: Deactivated successfully. Dec 13 13:20:04.327562 kubelet[1807]: I1213 13:20:04.327516 1807 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="06786ad6aaa6a841bee57a4af8e6b8dc277124d5f6c65bea1ef57e10828fee05" Dec 13 13:20:04.328739 containerd[1497]: time="2024-12-13T13:20:04.328111646Z" level=info msg="StopPodSandbox for \"06786ad6aaa6a841bee57a4af8e6b8dc277124d5f6c65bea1ef57e10828fee05\"" Dec 13 13:20:04.328739 containerd[1497]: time="2024-12-13T13:20:04.328329314Z" level=info msg="Ensure that sandbox 06786ad6aaa6a841bee57a4af8e6b8dc277124d5f6c65bea1ef57e10828fee05 in task-service has been cleanup successfully" Dec 13 13:20:04.328739 containerd[1497]: time="2024-12-13T13:20:04.328730226Z" level=info msg="TearDown network for sandbox \"06786ad6aaa6a841bee57a4af8e6b8dc277124d5f6c65bea1ef57e10828fee05\" successfully" Dec 13 13:20:04.329078 containerd[1497]: time="2024-12-13T13:20:04.328745635Z" level=info msg="StopPodSandbox for \"06786ad6aaa6a841bee57a4af8e6b8dc277124d5f6c65bea1ef57e10828fee05\" returns successfully" Dec 13 13:20:04.329078 containerd[1497]: time="2024-12-13T13:20:04.328976387Z" level=info msg="StopPodSandbox for \"0685eff49198f41de56e18476f31c876f26b4cf1cf6bc05de716df980b71c88a\"" Dec 13 13:20:04.329172 containerd[1497]: time="2024-12-13T13:20:04.329077116Z" level=info msg="TearDown network for sandbox \"0685eff49198f41de56e18476f31c876f26b4cf1cf6bc05de716df980b71c88a\" successfully" Dec 13 13:20:04.329172 containerd[1497]: time="2024-12-13T13:20:04.329090021Z" level=info msg="StopPodSandbox for \"0685eff49198f41de56e18476f31c876f26b4cf1cf6bc05de716df980b71c88a\" returns successfully" Dec 13 13:20:04.329904 containerd[1497]: 
time="2024-12-13T13:20:04.329518244Z" level=info msg="StopPodSandbox for \"c9b83eaef8a89a15a1cab135efaa53e3589caa4ea69b73dba41459e7be7db33d\"" Dec 13 13:20:04.329904 containerd[1497]: time="2024-12-13T13:20:04.329626176Z" level=info msg="TearDown network for sandbox \"c9b83eaef8a89a15a1cab135efaa53e3589caa4ea69b73dba41459e7be7db33d\" successfully" Dec 13 13:20:04.329904 containerd[1497]: time="2024-12-13T13:20:04.329643629Z" level=info msg="StopPodSandbox for \"c9b83eaef8a89a15a1cab135efaa53e3589caa4ea69b73dba41459e7be7db33d\" returns successfully" Dec 13 13:20:04.330272 containerd[1497]: time="2024-12-13T13:20:04.330243674Z" level=info msg="StopPodSandbox for \"245be961e7e4bfe9213d4e6e31f2aa6d4b1457c41f76b09b4f6d1f02eff388d8\"" Dec 13 13:20:04.330379 containerd[1497]: time="2024-12-13T13:20:04.330323935Z" level=info msg="TearDown network for sandbox \"245be961e7e4bfe9213d4e6e31f2aa6d4b1457c41f76b09b4f6d1f02eff388d8\" successfully" Dec 13 13:20:04.330379 containerd[1497]: time="2024-12-13T13:20:04.330334034Z" level=info msg="StopPodSandbox for \"245be961e7e4bfe9213d4e6e31f2aa6d4b1457c41f76b09b4f6d1f02eff388d8\" returns successfully" Dec 13 13:20:04.330764 systemd[1]: run-netns-cni\x2dc6f17a10\x2d09af\x2d6faf\x2d6d02\x2dc7f1cf0dd2a6.mount: Deactivated successfully. 
Dec 13 13:20:04.331628 kubelet[1807]: E1213 13:20:04.331597 1807 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:20:04.332220 containerd[1497]: time="2024-12-13T13:20:04.332086891Z" level=info msg="StopPodSandbox for \"f39a95b5ef18611eb8ace76ef304e5eccaddefdff13632e5002fe9dedaaf8223\"" Dec 13 13:20:04.332220 containerd[1497]: time="2024-12-13T13:20:04.332186177Z" level=info msg="TearDown network for sandbox \"f39a95b5ef18611eb8ace76ef304e5eccaddefdff13632e5002fe9dedaaf8223\" successfully" Dec 13 13:20:04.332220 containerd[1497]: time="2024-12-13T13:20:04.332196306Z" level=info msg="StopPodSandbox for \"f39a95b5ef18611eb8ace76ef304e5eccaddefdff13632e5002fe9dedaaf8223\" returns successfully" Dec 13 13:20:04.333121 containerd[1497]: time="2024-12-13T13:20:04.332695452Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-6d5f899847-vxqjs,Uid:7c3df8b3-1d55-49bf-8874-f308a80be555,Namespace:default,Attempt:5,}" Dec 13 13:20:04.335649 kubelet[1807]: I1213 13:20:04.335617 1807 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="01031fc928e5c23b49fdb474a14d1c353344ab1b3d6ec1195d02e8bd0443427e" Dec 13 13:20:04.342379 containerd[1497]: time="2024-12-13T13:20:04.342332144Z" level=info msg="StopPodSandbox for \"01031fc928e5c23b49fdb474a14d1c353344ab1b3d6ec1195d02e8bd0443427e\"" Dec 13 13:20:04.342583 containerd[1497]: time="2024-12-13T13:20:04.342547418Z" level=info msg="Ensure that sandbox 01031fc928e5c23b49fdb474a14d1c353344ab1b3d6ec1195d02e8bd0443427e in task-service has been cleanup successfully" Dec 13 13:20:04.342836 containerd[1497]: time="2024-12-13T13:20:04.342797778Z" level=info msg="TearDown network for sandbox \"01031fc928e5c23b49fdb474a14d1c353344ab1b3d6ec1195d02e8bd0443427e\" successfully" Dec 13 13:20:04.342836 containerd[1497]: time="2024-12-13T13:20:04.342815411Z" level=info 
msg="StopPodSandbox for \"01031fc928e5c23b49fdb474a14d1c353344ab1b3d6ec1195d02e8bd0443427e\" returns successfully" Dec 13 13:20:04.343338 containerd[1497]: time="2024-12-13T13:20:04.343283068Z" level=info msg="StopPodSandbox for \"c10e7185138d8e2075b85d37988ce7cbb462b86980ae0482189f81e46c483ff9\"" Dec 13 13:20:04.343419 containerd[1497]: time="2024-12-13T13:20:04.343371373Z" level=info msg="TearDown network for sandbox \"c10e7185138d8e2075b85d37988ce7cbb462b86980ae0482189f81e46c483ff9\" successfully" Dec 13 13:20:04.343419 containerd[1497]: time="2024-12-13T13:20:04.343384808Z" level=info msg="StopPodSandbox for \"c10e7185138d8e2075b85d37988ce7cbb462b86980ae0482189f81e46c483ff9\" returns successfully" Dec 13 13:20:04.344312 containerd[1497]: time="2024-12-13T13:20:04.344154392Z" level=info msg="StopPodSandbox for \"aed468efa48fc229082956c20361999e7c649ede4a192f76c8733c0d24d405b9\"" Dec 13 13:20:04.344339 containerd[1497]: time="2024-12-13T13:20:04.344309733Z" level=info msg="TearDown network for sandbox \"aed468efa48fc229082956c20361999e7c649ede4a192f76c8733c0d24d405b9\" successfully" Dec 13 13:20:04.344339 containerd[1497]: time="2024-12-13T13:20:04.344323369Z" level=info msg="StopPodSandbox for \"aed468efa48fc229082956c20361999e7c649ede4a192f76c8733c0d24d405b9\" returns successfully" Dec 13 13:20:04.345047 kubelet[1807]: I1213 13:20:04.344786 1807 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/calico-node-gldxz" podStartSLOduration=3.564304088 podStartE2EDuration="22.344696959s" podCreationTimestamp="2024-12-13 13:19:42 +0000 UTC" firstStartedPulling="2024-12-13 13:19:45.10747989 +0000 UTC m=+3.594890043" lastFinishedPulling="2024-12-13 13:20:03.887872761 +0000 UTC m=+22.375282914" observedRunningTime="2024-12-13 13:20:04.343824062 +0000 UTC m=+22.831234215" watchObservedRunningTime="2024-12-13 13:20:04.344696959 +0000 UTC m=+22.832107112" Dec 13 13:20:04.346191 containerd[1497]: time="2024-12-13T13:20:04.345103381Z" 
level=info msg="StopPodSandbox for \"490c138960660889a13ffa5a5998670d1d357e3fdfae0ed3816b507aab994bd4\"" Dec 13 13:20:04.346260 containerd[1497]: time="2024-12-13T13:20:04.346248920Z" level=info msg="TearDown network for sandbox \"490c138960660889a13ffa5a5998670d1d357e3fdfae0ed3816b507aab994bd4\" successfully" Dec 13 13:20:04.346289 containerd[1497]: time="2024-12-13T13:20:04.346260612Z" level=info msg="StopPodSandbox for \"490c138960660889a13ffa5a5998670d1d357e3fdfae0ed3816b507aab994bd4\" returns successfully" Dec 13 13:20:04.346310 systemd[1]: run-netns-cni\x2d5669fe5b\x2df369\x2d08dc\x2d51c8\x2d964ba8df6d85.mount: Deactivated successfully. Dec 13 13:20:04.346619 containerd[1497]: time="2024-12-13T13:20:04.346593606Z" level=info msg="StopPodSandbox for \"fd97b474445f1b4171451ca75c3249b2aa6622cab7ed3dd3e92378aac2adc1b2\"" Dec 13 13:20:04.346709 containerd[1497]: time="2024-12-13T13:20:04.346673867Z" level=info msg="TearDown network for sandbox \"fd97b474445f1b4171451ca75c3249b2aa6622cab7ed3dd3e92378aac2adc1b2\" successfully" Dec 13 13:20:04.346709 containerd[1497]: time="2024-12-13T13:20:04.346685188Z" level=info msg="StopPodSandbox for \"fd97b474445f1b4171451ca75c3249b2aa6622cab7ed3dd3e92378aac2adc1b2\" returns successfully" Dec 13 13:20:04.347551 containerd[1497]: time="2024-12-13T13:20:04.347508712Z" level=info msg="StopPodSandbox for \"b1979a837e2d1c66e44ee30b4493e4f3d29af9ae24912b9c53756fdca6ed1f58\"" Dec 13 13:20:04.347670 containerd[1497]: time="2024-12-13T13:20:04.347646891Z" level=info msg="TearDown network for sandbox \"b1979a837e2d1c66e44ee30b4493e4f3d29af9ae24912b9c53756fdca6ed1f58\" successfully" Dec 13 13:20:04.347695 containerd[1497]: time="2024-12-13T13:20:04.347667871Z" level=info msg="StopPodSandbox for \"b1979a837e2d1c66e44ee30b4493e4f3d29af9ae24912b9c53756fdca6ed1f58\" returns successfully" Dec 13 13:20:04.348005 containerd[1497]: time="2024-12-13T13:20:04.347970478Z" level=info msg="StopPodSandbox for 
\"5181670da0d9588baf456216d436e00f505f60d7901fcf26fab125662ac1bef9\"" Dec 13 13:20:04.348132 containerd[1497]: time="2024-12-13T13:20:04.348101514Z" level=info msg="TearDown network for sandbox \"5181670da0d9588baf456216d436e00f505f60d7901fcf26fab125662ac1bef9\" successfully" Dec 13 13:20:04.348132 containerd[1497]: time="2024-12-13T13:20:04.348126020Z" level=info msg="StopPodSandbox for \"5181670da0d9588baf456216d436e00f505f60d7901fcf26fab125662ac1bef9\" returns successfully" Dec 13 13:20:04.348743 containerd[1497]: time="2024-12-13T13:20:04.348699385Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-7c2dz,Uid:aa5ad557-1e48-4494-9e6c-1e1c1985b57b,Namespace:calico-system,Attempt:7,}" Dec 13 13:20:04.488361 systemd-networkd[1427]: calida4c375a85b: Link UP Dec 13 13:20:04.489784 systemd-networkd[1427]: calida4c375a85b: Gained carrier Dec 13 13:20:04.500417 containerd[1497]: 2024-12-13 13:20:04.387 [INFO][2870] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Dec 13 13:20:04.500417 containerd[1497]: 2024-12-13 13:20:04.401 [INFO][2870] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.0.0.30-k8s-nginx--deployment--6d5f899847--vxqjs-eth0 nginx-deployment-6d5f899847- default 7c3df8b3-1d55-49bf-8874-f308a80be555 1043 0 2024-12-13 13:19:59 +0000 UTC map[app:nginx pod-template-hash:6d5f899847 projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s 10.0.0.30 nginx-deployment-6d5f899847-vxqjs eth0 default [] [] [kns.default ksa.default.default] calida4c375a85b [] []}} ContainerID="f298b7dc02c73f9f84ac058802a94c0565d791609ffd3c655626ec06c6474998" Namespace="default" Pod="nginx-deployment-6d5f899847-vxqjs" WorkloadEndpoint="10.0.0.30-k8s-nginx--deployment--6d5f899847--vxqjs-" Dec 13 13:20:04.500417 containerd[1497]: 2024-12-13 13:20:04.402 [INFO][2870] cni-plugin/k8s.go 77: Extracted identifiers for 
CmdAddK8s ContainerID="f298b7dc02c73f9f84ac058802a94c0565d791609ffd3c655626ec06c6474998" Namespace="default" Pod="nginx-deployment-6d5f899847-vxqjs" WorkloadEndpoint="10.0.0.30-k8s-nginx--deployment--6d5f899847--vxqjs-eth0" Dec 13 13:20:04.500417 containerd[1497]: 2024-12-13 13:20:04.440 [INFO][2905] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f298b7dc02c73f9f84ac058802a94c0565d791609ffd3c655626ec06c6474998" HandleID="k8s-pod-network.f298b7dc02c73f9f84ac058802a94c0565d791609ffd3c655626ec06c6474998" Workload="10.0.0.30-k8s-nginx--deployment--6d5f899847--vxqjs-eth0" Dec 13 13:20:04.500417 containerd[1497]: 2024-12-13 13:20:04.448 [INFO][2905] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="f298b7dc02c73f9f84ac058802a94c0565d791609ffd3c655626ec06c6474998" HandleID="k8s-pod-network.f298b7dc02c73f9f84ac058802a94c0565d791609ffd3c655626ec06c6474998" Workload="10.0.0.30-k8s-nginx--deployment--6d5f899847--vxqjs-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002dd090), Attrs:map[string]string{"namespace":"default", "node":"10.0.0.30", "pod":"nginx-deployment-6d5f899847-vxqjs", "timestamp":"2024-12-13 13:20:04.440563704 +0000 UTC"}, Hostname:"10.0.0.30", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 13:20:04.500417 containerd[1497]: 2024-12-13 13:20:04.448 [INFO][2905] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 13:20:04.500417 containerd[1497]: 2024-12-13 13:20:04.449 [INFO][2905] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Dec 13 13:20:04.500417 containerd[1497]: 2024-12-13 13:20:04.449 [INFO][2905] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.0.0.30' Dec 13 13:20:04.500417 containerd[1497]: 2024-12-13 13:20:04.450 [INFO][2905] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.f298b7dc02c73f9f84ac058802a94c0565d791609ffd3c655626ec06c6474998" host="10.0.0.30" Dec 13 13:20:04.500417 containerd[1497]: 2024-12-13 13:20:04.454 [INFO][2905] ipam/ipam.go 372: Looking up existing affinities for host host="10.0.0.30" Dec 13 13:20:04.500417 containerd[1497]: 2024-12-13 13:20:04.459 [INFO][2905] ipam/ipam.go 489: Trying affinity for 192.168.125.0/26 host="10.0.0.30" Dec 13 13:20:04.500417 containerd[1497]: 2024-12-13 13:20:04.461 [INFO][2905] ipam/ipam.go 155: Attempting to load block cidr=192.168.125.0/26 host="10.0.0.30" Dec 13 13:20:04.500417 containerd[1497]: 2024-12-13 13:20:04.463 [INFO][2905] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.125.0/26 host="10.0.0.30" Dec 13 13:20:04.500417 containerd[1497]: 2024-12-13 13:20:04.463 [INFO][2905] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.125.0/26 handle="k8s-pod-network.f298b7dc02c73f9f84ac058802a94c0565d791609ffd3c655626ec06c6474998" host="10.0.0.30" Dec 13 13:20:04.500417 containerd[1497]: 2024-12-13 13:20:04.464 [INFO][2905] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.f298b7dc02c73f9f84ac058802a94c0565d791609ffd3c655626ec06c6474998 Dec 13 13:20:04.500417 containerd[1497]: 2024-12-13 13:20:04.468 [INFO][2905] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.125.0/26 handle="k8s-pod-network.f298b7dc02c73f9f84ac058802a94c0565d791609ffd3c655626ec06c6474998" host="10.0.0.30" Dec 13 13:20:04.500417 containerd[1497]: 2024-12-13 13:20:04.473 [INFO][2905] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.125.1/26] block=192.168.125.0/26 
handle="k8s-pod-network.f298b7dc02c73f9f84ac058802a94c0565d791609ffd3c655626ec06c6474998" host="10.0.0.30" Dec 13 13:20:04.500417 containerd[1497]: 2024-12-13 13:20:04.473 [INFO][2905] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.125.1/26] handle="k8s-pod-network.f298b7dc02c73f9f84ac058802a94c0565d791609ffd3c655626ec06c6474998" host="10.0.0.30" Dec 13 13:20:04.500417 containerd[1497]: 2024-12-13 13:20:04.473 [INFO][2905] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 13:20:04.500417 containerd[1497]: 2024-12-13 13:20:04.473 [INFO][2905] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.125.1/26] IPv6=[] ContainerID="f298b7dc02c73f9f84ac058802a94c0565d791609ffd3c655626ec06c6474998" HandleID="k8s-pod-network.f298b7dc02c73f9f84ac058802a94c0565d791609ffd3c655626ec06c6474998" Workload="10.0.0.30-k8s-nginx--deployment--6d5f899847--vxqjs-eth0" Dec 13 13:20:04.501285 containerd[1497]: 2024-12-13 13:20:04.478 [INFO][2870] cni-plugin/k8s.go 386: Populated endpoint ContainerID="f298b7dc02c73f9f84ac058802a94c0565d791609ffd3c655626ec06c6474998" Namespace="default" Pod="nginx-deployment-6d5f899847-vxqjs" WorkloadEndpoint="10.0.0.30-k8s-nginx--deployment--6d5f899847--vxqjs-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.30-k8s-nginx--deployment--6d5f899847--vxqjs-eth0", GenerateName:"nginx-deployment-6d5f899847-", Namespace:"default", SelfLink:"", UID:"7c3df8b3-1d55-49bf-8874-f308a80be555", ResourceVersion:"1043", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 13, 19, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"6d5f899847", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.30", ContainerID:"", Pod:"nginx-deployment-6d5f899847-vxqjs", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.125.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"calida4c375a85b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 13:20:04.501285 containerd[1497]: 2024-12-13 13:20:04.478 [INFO][2870] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.125.1/32] ContainerID="f298b7dc02c73f9f84ac058802a94c0565d791609ffd3c655626ec06c6474998" Namespace="default" Pod="nginx-deployment-6d5f899847-vxqjs" WorkloadEndpoint="10.0.0.30-k8s-nginx--deployment--6d5f899847--vxqjs-eth0" Dec 13 13:20:04.501285 containerd[1497]: 2024-12-13 13:20:04.478 [INFO][2870] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calida4c375a85b ContainerID="f298b7dc02c73f9f84ac058802a94c0565d791609ffd3c655626ec06c6474998" Namespace="default" Pod="nginx-deployment-6d5f899847-vxqjs" WorkloadEndpoint="10.0.0.30-k8s-nginx--deployment--6d5f899847--vxqjs-eth0" Dec 13 13:20:04.501285 containerd[1497]: 2024-12-13 13:20:04.490 [INFO][2870] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f298b7dc02c73f9f84ac058802a94c0565d791609ffd3c655626ec06c6474998" Namespace="default" Pod="nginx-deployment-6d5f899847-vxqjs" WorkloadEndpoint="10.0.0.30-k8s-nginx--deployment--6d5f899847--vxqjs-eth0" Dec 13 13:20:04.501285 containerd[1497]: 2024-12-13 13:20:04.491 [INFO][2870] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="f298b7dc02c73f9f84ac058802a94c0565d791609ffd3c655626ec06c6474998" Namespace="default" Pod="nginx-deployment-6d5f899847-vxqjs" 
WorkloadEndpoint="10.0.0.30-k8s-nginx--deployment--6d5f899847--vxqjs-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.30-k8s-nginx--deployment--6d5f899847--vxqjs-eth0", GenerateName:"nginx-deployment-6d5f899847-", Namespace:"default", SelfLink:"", UID:"7c3df8b3-1d55-49bf-8874-f308a80be555", ResourceVersion:"1043", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 13, 19, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"6d5f899847", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.30", ContainerID:"f298b7dc02c73f9f84ac058802a94c0565d791609ffd3c655626ec06c6474998", Pod:"nginx-deployment-6d5f899847-vxqjs", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.125.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"calida4c375a85b", MAC:"1e:9f:bd:ce:53:3e", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 13:20:04.501285 containerd[1497]: 2024-12-13 13:20:04.497 [INFO][2870] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="f298b7dc02c73f9f84ac058802a94c0565d791609ffd3c655626ec06c6474998" Namespace="default" Pod="nginx-deployment-6d5f899847-vxqjs" WorkloadEndpoint="10.0.0.30-k8s-nginx--deployment--6d5f899847--vxqjs-eth0" Dec 13 13:20:04.527359 systemd-networkd[1427]: calicbd0fa88a54: Link UP Dec 13 13:20:04.527603 systemd-networkd[1427]: calicbd0fa88a54: Gained carrier Dec 13 13:20:04.535461 
containerd[1497]: time="2024-12-13T13:20:04.535294678Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 13:20:04.535461 containerd[1497]: time="2024-12-13T13:20:04.535392652Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 13:20:04.535461 containerd[1497]: time="2024-12-13T13:20:04.535409403Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 13:20:04.535822 containerd[1497]: time="2024-12-13T13:20:04.535527404Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 13:20:04.540910 containerd[1497]: 2024-12-13 13:20:04.435 [INFO][2884] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Dec 13 13:20:04.540910 containerd[1497]: 2024-12-13 13:20:04.448 [INFO][2884] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.0.0.30-k8s-csi--node--driver--7c2dz-eth0 csi-node-driver- calico-system aa5ad557-1e48-4494-9e6c-1e1c1985b57b 762 0 2024-12-13 13:19:42 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:55b695c467 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s 10.0.0.30 csi-node-driver-7c2dz eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calicbd0fa88a54 [] []}} ContainerID="eeedb75208ad085ebd9a0182a70cac276c8efe476416cd73a70c65bec0b7811a" Namespace="calico-system" Pod="csi-node-driver-7c2dz" WorkloadEndpoint="10.0.0.30-k8s-csi--node--driver--7c2dz-" Dec 13 13:20:04.540910 containerd[1497]: 2024-12-13 13:20:04.448 [INFO][2884] 
cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="eeedb75208ad085ebd9a0182a70cac276c8efe476416cd73a70c65bec0b7811a" Namespace="calico-system" Pod="csi-node-driver-7c2dz" WorkloadEndpoint="10.0.0.30-k8s-csi--node--driver--7c2dz-eth0" Dec 13 13:20:04.540910 containerd[1497]: 2024-12-13 13:20:04.480 [INFO][2916] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="eeedb75208ad085ebd9a0182a70cac276c8efe476416cd73a70c65bec0b7811a" HandleID="k8s-pod-network.eeedb75208ad085ebd9a0182a70cac276c8efe476416cd73a70c65bec0b7811a" Workload="10.0.0.30-k8s-csi--node--driver--7c2dz-eth0" Dec 13 13:20:04.540910 containerd[1497]: 2024-12-13 13:20:04.488 [INFO][2916] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="eeedb75208ad085ebd9a0182a70cac276c8efe476416cd73a70c65bec0b7811a" HandleID="k8s-pod-network.eeedb75208ad085ebd9a0182a70cac276c8efe476416cd73a70c65bec0b7811a" Workload="10.0.0.30-k8s-csi--node--driver--7c2dz-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003aa4c0), Attrs:map[string]string{"namespace":"calico-system", "node":"10.0.0.30", "pod":"csi-node-driver-7c2dz", "timestamp":"2024-12-13 13:20:04.480131457 +0000 UTC"}, Hostname:"10.0.0.30", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 13:20:04.540910 containerd[1497]: 2024-12-13 13:20:04.489 [INFO][2916] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 13:20:04.540910 containerd[1497]: 2024-12-13 13:20:04.489 [INFO][2916] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Dec 13 13:20:04.540910 containerd[1497]: 2024-12-13 13:20:04.489 [INFO][2916] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.0.0.30' Dec 13 13:20:04.540910 containerd[1497]: 2024-12-13 13:20:04.490 [INFO][2916] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.eeedb75208ad085ebd9a0182a70cac276c8efe476416cd73a70c65bec0b7811a" host="10.0.0.30" Dec 13 13:20:04.540910 containerd[1497]: 2024-12-13 13:20:04.495 [INFO][2916] ipam/ipam.go 372: Looking up existing affinities for host host="10.0.0.30" Dec 13 13:20:04.540910 containerd[1497]: 2024-12-13 13:20:04.502 [INFO][2916] ipam/ipam.go 489: Trying affinity for 192.168.125.0/26 host="10.0.0.30" Dec 13 13:20:04.540910 containerd[1497]: 2024-12-13 13:20:04.504 [INFO][2916] ipam/ipam.go 155: Attempting to load block cidr=192.168.125.0/26 host="10.0.0.30" Dec 13 13:20:04.540910 containerd[1497]: 2024-12-13 13:20:04.506 [INFO][2916] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.125.0/26 host="10.0.0.30" Dec 13 13:20:04.540910 containerd[1497]: 2024-12-13 13:20:04.506 [INFO][2916] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.125.0/26 handle="k8s-pod-network.eeedb75208ad085ebd9a0182a70cac276c8efe476416cd73a70c65bec0b7811a" host="10.0.0.30" Dec 13 13:20:04.540910 containerd[1497]: 2024-12-13 13:20:04.508 [INFO][2916] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.eeedb75208ad085ebd9a0182a70cac276c8efe476416cd73a70c65bec0b7811a Dec 13 13:20:04.540910 containerd[1497]: 2024-12-13 13:20:04.512 [INFO][2916] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.125.0/26 handle="k8s-pod-network.eeedb75208ad085ebd9a0182a70cac276c8efe476416cd73a70c65bec0b7811a" host="10.0.0.30" Dec 13 13:20:04.540910 containerd[1497]: 2024-12-13 13:20:04.517 [INFO][2916] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.125.2/26] block=192.168.125.0/26 
handle="k8s-pod-network.eeedb75208ad085ebd9a0182a70cac276c8efe476416cd73a70c65bec0b7811a" host="10.0.0.30" Dec 13 13:20:04.540910 containerd[1497]: 2024-12-13 13:20:04.517 [INFO][2916] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.125.2/26] handle="k8s-pod-network.eeedb75208ad085ebd9a0182a70cac276c8efe476416cd73a70c65bec0b7811a" host="10.0.0.30" Dec 13 13:20:04.540910 containerd[1497]: 2024-12-13 13:20:04.517 [INFO][2916] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 13:20:04.540910 containerd[1497]: 2024-12-13 13:20:04.517 [INFO][2916] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.125.2/26] IPv6=[] ContainerID="eeedb75208ad085ebd9a0182a70cac276c8efe476416cd73a70c65bec0b7811a" HandleID="k8s-pod-network.eeedb75208ad085ebd9a0182a70cac276c8efe476416cd73a70c65bec0b7811a" Workload="10.0.0.30-k8s-csi--node--driver--7c2dz-eth0" Dec 13 13:20:04.541563 containerd[1497]: 2024-12-13 13:20:04.520 [INFO][2884] cni-plugin/k8s.go 386: Populated endpoint ContainerID="eeedb75208ad085ebd9a0182a70cac276c8efe476416cd73a70c65bec0b7811a" Namespace="calico-system" Pod="csi-node-driver-7c2dz" WorkloadEndpoint="10.0.0.30-k8s-csi--node--driver--7c2dz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.30-k8s-csi--node--driver--7c2dz-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"aa5ad557-1e48-4494-9e6c-1e1c1985b57b", ResourceVersion:"762", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 13, 19, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b695c467", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.30", ContainerID:"", Pod:"csi-node-driver-7c2dz", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.125.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calicbd0fa88a54", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 13:20:04.541563 containerd[1497]: 2024-12-13 13:20:04.521 [INFO][2884] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.125.2/32] ContainerID="eeedb75208ad085ebd9a0182a70cac276c8efe476416cd73a70c65bec0b7811a" Namespace="calico-system" Pod="csi-node-driver-7c2dz" WorkloadEndpoint="10.0.0.30-k8s-csi--node--driver--7c2dz-eth0" Dec 13 13:20:04.541563 containerd[1497]: 2024-12-13 13:20:04.521 [INFO][2884] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calicbd0fa88a54 ContainerID="eeedb75208ad085ebd9a0182a70cac276c8efe476416cd73a70c65bec0b7811a" Namespace="calico-system" Pod="csi-node-driver-7c2dz" WorkloadEndpoint="10.0.0.30-k8s-csi--node--driver--7c2dz-eth0" Dec 13 13:20:04.541563 containerd[1497]: 2024-12-13 13:20:04.525 [INFO][2884] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="eeedb75208ad085ebd9a0182a70cac276c8efe476416cd73a70c65bec0b7811a" Namespace="calico-system" Pod="csi-node-driver-7c2dz" WorkloadEndpoint="10.0.0.30-k8s-csi--node--driver--7c2dz-eth0" Dec 13 13:20:04.541563 containerd[1497]: 2024-12-13 13:20:04.525 [INFO][2884] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="eeedb75208ad085ebd9a0182a70cac276c8efe476416cd73a70c65bec0b7811a" Namespace="calico-system" 
Pod="csi-node-driver-7c2dz" WorkloadEndpoint="10.0.0.30-k8s-csi--node--driver--7c2dz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.30-k8s-csi--node--driver--7c2dz-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"aa5ad557-1e48-4494-9e6c-1e1c1985b57b", ResourceVersion:"762", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 13, 19, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"55b695c467", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.30", ContainerID:"eeedb75208ad085ebd9a0182a70cac276c8efe476416cd73a70c65bec0b7811a", Pod:"csi-node-driver-7c2dz", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.125.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calicbd0fa88a54", MAC:"52:96:12:55:3c:23", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 13:20:04.541563 containerd[1497]: 2024-12-13 13:20:04.537 [INFO][2884] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="eeedb75208ad085ebd9a0182a70cac276c8efe476416cd73a70c65bec0b7811a" Namespace="calico-system" Pod="csi-node-driver-7c2dz" WorkloadEndpoint="10.0.0.30-k8s-csi--node--driver--7c2dz-eth0" Dec 13 13:20:04.559867 systemd[1]: Started 
cri-containerd-f298b7dc02c73f9f84ac058802a94c0565d791609ffd3c655626ec06c6474998.scope - libcontainer container f298b7dc02c73f9f84ac058802a94c0565d791609ffd3c655626ec06c6474998. Dec 13 13:20:04.568725 containerd[1497]: time="2024-12-13T13:20:04.568422114Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 13:20:04.568725 containerd[1497]: time="2024-12-13T13:20:04.568489360Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 13:20:04.568725 containerd[1497]: time="2024-12-13T13:20:04.568504298Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 13:20:04.568954 containerd[1497]: time="2024-12-13T13:20:04.568756581Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 13:20:04.577181 systemd-resolved[1337]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 13 13:20:04.592752 systemd[1]: Started cri-containerd-eeedb75208ad085ebd9a0182a70cac276c8efe476416cd73a70c65bec0b7811a.scope - libcontainer container eeedb75208ad085ebd9a0182a70cac276c8efe476416cd73a70c65bec0b7811a. 
Dec 13 13:20:04.607119 systemd-resolved[1337]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 13 13:20:04.609389 containerd[1497]: time="2024-12-13T13:20:04.609344768Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-6d5f899847-vxqjs,Uid:7c3df8b3-1d55-49bf-8874-f308a80be555,Namespace:default,Attempt:5,} returns sandbox id \"f298b7dc02c73f9f84ac058802a94c0565d791609ffd3c655626ec06c6474998\"" Dec 13 13:20:04.614066 containerd[1497]: time="2024-12-13T13:20:04.614042939Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Dec 13 13:20:04.621513 containerd[1497]: time="2024-12-13T13:20:04.621478704Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-7c2dz,Uid:aa5ad557-1e48-4494-9e6c-1e1c1985b57b,Namespace:calico-system,Attempt:7,} returns sandbox id \"eeedb75208ad085ebd9a0182a70cac276c8efe476416cd73a70c65bec0b7811a\"" Dec 13 13:20:04.844838 kubelet[1807]: E1213 13:20:04.844706 1807 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:20:05.777875 kernel: bpftool[3153]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Dec 13 13:20:05.845667 kubelet[1807]: E1213 13:20:05.845602 1807 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:20:06.047428 systemd-networkd[1427]: vxlan.calico: Link UP Dec 13 13:20:06.047441 systemd-networkd[1427]: vxlan.calico: Gained carrier Dec 13 13:20:06.170216 systemd-networkd[1427]: calicbd0fa88a54: Gained IPv6LL Dec 13 13:20:06.426722 systemd-networkd[1427]: calida4c375a85b: Gained IPv6LL Dec 13 13:20:06.845820 kubelet[1807]: E1213 13:20:06.845766 1807 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:20:07.513748 systemd-networkd[1427]: vxlan.calico: Gained IPv6LL Dec 13 13:20:07.846301 
kubelet[1807]: E1213 13:20:07.846140 1807 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:20:08.745251 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3400424264.mount: Deactivated successfully. Dec 13 13:20:08.752588 kubelet[1807]: I1213 13:20:08.752521 1807 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Dec 13 13:20:08.753539 kubelet[1807]: E1213 13:20:08.753452 1807 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:20:08.847110 kubelet[1807]: E1213 13:20:08.847051 1807 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:20:09.847838 kubelet[1807]: E1213 13:20:09.847769 1807 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:20:09.931987 containerd[1497]: time="2024-12-13T13:20:09.931911088Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:20:09.932752 containerd[1497]: time="2024-12-13T13:20:09.932680591Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=71036027" Dec 13 13:20:09.934344 containerd[1497]: time="2024-12-13T13:20:09.934303027Z" level=info msg="ImageCreate event name:\"sha256:fa0a8cea5e76ad962111c39c85bb312edaf5b89eccd8f404eeea66c9759641e3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:20:09.940029 containerd[1497]: time="2024-12-13T13:20:09.939982320Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx@sha256:e04edf30a4ea4c5a4107110797c72d3ee8a654415f00acd4019be17218afd9a1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:20:09.941079 containerd[1497]: 
time="2024-12-13T13:20:09.941039835Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:fa0a8cea5e76ad962111c39c85bb312edaf5b89eccd8f404eeea66c9759641e3\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:e04edf30a4ea4c5a4107110797c72d3ee8a654415f00acd4019be17218afd9a1\", size \"71035905\" in 5.326965726s" Dec 13 13:20:09.941126 containerd[1497]: time="2024-12-13T13:20:09.941080923Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:fa0a8cea5e76ad962111c39c85bb312edaf5b89eccd8f404eeea66c9759641e3\"" Dec 13 13:20:09.942104 containerd[1497]: time="2024-12-13T13:20:09.941841138Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\"" Dec 13 13:20:09.942912 containerd[1497]: time="2024-12-13T13:20:09.942882683Z" level=info msg="CreateContainer within sandbox \"f298b7dc02c73f9f84ac058802a94c0565d791609ffd3c655626ec06c6474998\" for container &ContainerMetadata{Name:nginx,Attempt:0,}" Dec 13 13:20:09.959451 containerd[1497]: time="2024-12-13T13:20:09.959379775Z" level=info msg="CreateContainer within sandbox \"f298b7dc02c73f9f84ac058802a94c0565d791609ffd3c655626ec06c6474998\" for &ContainerMetadata{Name:nginx,Attempt:0,} returns container id \"b54c99cedc2b1784c8a9cbed080e28db9a6d31d79a33176fe7a7e66415f2f614\"" Dec 13 13:20:09.960033 containerd[1497]: time="2024-12-13T13:20:09.959926763Z" level=info msg="StartContainer for \"b54c99cedc2b1784c8a9cbed080e28db9a6d31d79a33176fe7a7e66415f2f614\"" Dec 13 13:20:10.018058 systemd[1]: run-containerd-runc-k8s.io-b54c99cedc2b1784c8a9cbed080e28db9a6d31d79a33176fe7a7e66415f2f614-runc.RgK1yo.mount: Deactivated successfully. Dec 13 13:20:10.033754 systemd[1]: Started cri-containerd-b54c99cedc2b1784c8a9cbed080e28db9a6d31d79a33176fe7a7e66415f2f614.scope - libcontainer container b54c99cedc2b1784c8a9cbed080e28db9a6d31d79a33176fe7a7e66415f2f614. 
Dec 13 13:20:10.200129 containerd[1497]: time="2024-12-13T13:20:10.199982789Z" level=info msg="StartContainer for \"b54c99cedc2b1784c8a9cbed080e28db9a6d31d79a33176fe7a7e66415f2f614\" returns successfully" Dec 13 13:20:10.848642 kubelet[1807]: E1213 13:20:10.848551 1807 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:20:11.714655 containerd[1497]: time="2024-12-13T13:20:11.714546869Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:20:11.715382 containerd[1497]: time="2024-12-13T13:20:11.715325836Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.1: active requests=0, bytes read=7902632" Dec 13 13:20:11.716660 containerd[1497]: time="2024-12-13T13:20:11.716618565Z" level=info msg="ImageCreate event name:\"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:20:11.719861 containerd[1497]: time="2024-12-13T13:20:11.719801424Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:20:11.720500 containerd[1497]: time="2024-12-13T13:20:11.720456885Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.1\" with image id \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\", size \"9395716\" in 1.77857549s" Dec 13 13:20:11.720500 containerd[1497]: time="2024-12-13T13:20:11.720499547Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\" returns image reference 
\"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\"" Dec 13 13:20:11.722434 containerd[1497]: time="2024-12-13T13:20:11.722392402Z" level=info msg="CreateContainer within sandbox \"eeedb75208ad085ebd9a0182a70cac276c8efe476416cd73a70c65bec0b7811a\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Dec 13 13:20:11.744104 containerd[1497]: time="2024-12-13T13:20:11.744042588Z" level=info msg="CreateContainer within sandbox \"eeedb75208ad085ebd9a0182a70cac276c8efe476416cd73a70c65bec0b7811a\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"ed636f6fee73324a4e0dc51a24aae3e3d145bafc284f19e54a2ba2d497912f7f\"" Dec 13 13:20:11.744699 containerd[1497]: time="2024-12-13T13:20:11.744670316Z" level=info msg="StartContainer for \"ed636f6fee73324a4e0dc51a24aae3e3d145bafc284f19e54a2ba2d497912f7f\"" Dec 13 13:20:11.783806 systemd[1]: Started cri-containerd-ed636f6fee73324a4e0dc51a24aae3e3d145bafc284f19e54a2ba2d497912f7f.scope - libcontainer container ed636f6fee73324a4e0dc51a24aae3e3d145bafc284f19e54a2ba2d497912f7f. 
Dec 13 13:20:11.848886 kubelet[1807]: E1213 13:20:11.848815 1807 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:20:11.853225 containerd[1497]: time="2024-12-13T13:20:11.853152068Z" level=info msg="StartContainer for \"ed636f6fee73324a4e0dc51a24aae3e3d145bafc284f19e54a2ba2d497912f7f\" returns successfully" Dec 13 13:20:11.854772 containerd[1497]: time="2024-12-13T13:20:11.854726064Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\"" Dec 13 13:20:12.849065 kubelet[1807]: E1213 13:20:12.848970 1807 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:20:13.551081 containerd[1497]: time="2024-12-13T13:20:13.551022329Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:20:13.551878 containerd[1497]: time="2024-12-13T13:20:13.551838794Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1: active requests=0, bytes read=10501081" Dec 13 13:20:13.552995 containerd[1497]: time="2024-12-13T13:20:13.552951504Z" level=info msg="ImageCreate event name:\"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:20:13.555008 containerd[1497]: time="2024-12-13T13:20:13.554968328Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:20:13.555525 containerd[1497]: time="2024-12-13T13:20:13.555493949Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" with image id \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\", repo tag 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\", size \"11994117\" in 1.700728731s" Dec 13 13:20:13.555567 containerd[1497]: time="2024-12-13T13:20:13.555523816Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" returns image reference \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\"" Dec 13 13:20:13.557150 containerd[1497]: time="2024-12-13T13:20:13.557125930Z" level=info msg="CreateContainer within sandbox \"eeedb75208ad085ebd9a0182a70cac276c8efe476416cd73a70c65bec0b7811a\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Dec 13 13:20:13.572779 containerd[1497]: time="2024-12-13T13:20:13.572728601Z" level=info msg="CreateContainer within sandbox \"eeedb75208ad085ebd9a0182a70cac276c8efe476416cd73a70c65bec0b7811a\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"1a5b95406ec60d7f2f872d0d75847d76e88efa32a459ab23f079b55d6b124093\"" Dec 13 13:20:13.573274 containerd[1497]: time="2024-12-13T13:20:13.573236609Z" level=info msg="StartContainer for \"1a5b95406ec60d7f2f872d0d75847d76e88efa32a459ab23f079b55d6b124093\"" Dec 13 13:20:13.606770 systemd[1]: Started cri-containerd-1a5b95406ec60d7f2f872d0d75847d76e88efa32a459ab23f079b55d6b124093.scope - libcontainer container 1a5b95406ec60d7f2f872d0d75847d76e88efa32a459ab23f079b55d6b124093. 
Dec 13 13:20:13.641268 containerd[1497]: time="2024-12-13T13:20:13.641201542Z" level=info msg="StartContainer for \"1a5b95406ec60d7f2f872d0d75847d76e88efa32a459ab23f079b55d6b124093\" returns successfully" Dec 13 13:20:13.849908 kubelet[1807]: E1213 13:20:13.849719 1807 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:20:14.256663 kubelet[1807]: I1213 13:20:14.256627 1807 csi_plugin.go:99] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Dec 13 13:20:14.256801 kubelet[1807]: I1213 13:20:14.256679 1807 csi_plugin.go:112] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Dec 13 13:20:14.382357 kubelet[1807]: I1213 13:20:14.382317 1807 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/nginx-deployment-6d5f899847-vxqjs" podStartSLOduration=10.054412347 podStartE2EDuration="15.382277088s" podCreationTimestamp="2024-12-13 13:19:59 +0000 UTC" firstStartedPulling="2024-12-13 13:20:04.613614405 +0000 UTC m=+23.101024558" lastFinishedPulling="2024-12-13 13:20:09.941479145 +0000 UTC m=+28.428889299" observedRunningTime="2024-12-13 13:20:10.365560769 +0000 UTC m=+28.852970932" watchObservedRunningTime="2024-12-13 13:20:14.382277088 +0000 UTC m=+32.869687241" Dec 13 13:20:14.382623 kubelet[1807]: I1213 13:20:14.382456 1807 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="calico-system/csi-node-driver-7c2dz" podStartSLOduration=23.449241405 podStartE2EDuration="32.382438846s" podCreationTimestamp="2024-12-13 13:19:42 +0000 UTC" firstStartedPulling="2024-12-13 13:20:04.62260193 +0000 UTC m=+23.110012083" lastFinishedPulling="2024-12-13 13:20:13.555799371 +0000 UTC m=+32.043209524" observedRunningTime="2024-12-13 13:20:14.382434217 +0000 UTC m=+32.869844370" 
watchObservedRunningTime="2024-12-13 13:20:14.382438846 +0000 UTC m=+32.869848999" Dec 13 13:20:14.850319 kubelet[1807]: E1213 13:20:14.850272 1807 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:20:15.851297 kubelet[1807]: E1213 13:20:15.851207 1807 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:20:16.851647 kubelet[1807]: E1213 13:20:16.851554 1807 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:20:17.528006 kubelet[1807]: I1213 13:20:17.527958 1807 topology_manager.go:215] "Topology Admit Handler" podUID="0679dda0-2ec9-478c-baac-17e3f98a3d74" podNamespace="default" podName="nfs-server-provisioner-0" Dec 13 13:20:17.534821 systemd[1]: Created slice kubepods-besteffort-pod0679dda0_2ec9_478c_baac_17e3f98a3d74.slice - libcontainer container kubepods-besteffort-pod0679dda0_2ec9_478c_baac_17e3f98a3d74.slice. 
Dec 13 13:20:17.701332 kubelet[1807]: I1213 13:20:17.701267 1807 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/0679dda0-2ec9-478c-baac-17e3f98a3d74-data\") pod \"nfs-server-provisioner-0\" (UID: \"0679dda0-2ec9-478c-baac-17e3f98a3d74\") " pod="default/nfs-server-provisioner-0" Dec 13 13:20:17.701332 kubelet[1807]: I1213 13:20:17.701359 1807 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g2gds\" (UniqueName: \"kubernetes.io/projected/0679dda0-2ec9-478c-baac-17e3f98a3d74-kube-api-access-g2gds\") pod \"nfs-server-provisioner-0\" (UID: \"0679dda0-2ec9-478c-baac-17e3f98a3d74\") " pod="default/nfs-server-provisioner-0" Dec 13 13:20:17.839051 containerd[1497]: time="2024-12-13T13:20:17.838871125Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:0679dda0-2ec9-478c-baac-17e3f98a3d74,Namespace:default,Attempt:0,}" Dec 13 13:20:17.852192 kubelet[1807]: E1213 13:20:17.852113 1807 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:20:17.951460 systemd-networkd[1427]: cali60e51b789ff: Link UP Dec 13 13:20:17.952000 systemd-networkd[1427]: cali60e51b789ff: Gained carrier Dec 13 13:20:17.963192 containerd[1497]: 2024-12-13 13:20:17.886 [INFO][3463] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.0.0.30-k8s-nfs--server--provisioner--0-eth0 nfs-server-provisioner- default 0679dda0-2ec9-478c-baac-17e3f98a3d74 1199 0 2024-12-13 13:20:17 +0000 UTC map[app:nfs-server-provisioner apps.kubernetes.io/pod-index:0 chart:nfs-server-provisioner-1.8.0 controller-revision-hash:nfs-server-provisioner-d5cbb7f57 heritage:Helm projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:nfs-server-provisioner 
release:nfs-server-provisioner statefulset.kubernetes.io/pod-name:nfs-server-provisioner-0] map[] [] [] []} {k8s 10.0.0.30 nfs-server-provisioner-0 eth0 nfs-server-provisioner [] [] [kns.default ksa.default.nfs-server-provisioner] cali60e51b789ff [{nfs TCP 2049 0 } {nfs-udp UDP 2049 0 } {nlockmgr TCP 32803 0 } {nlockmgr-udp UDP 32803 0 } {mountd TCP 20048 0 } {mountd-udp UDP 20048 0 } {rquotad TCP 875 0 } {rquotad-udp UDP 875 0 } {rpcbind TCP 111 0 } {rpcbind-udp UDP 111 0 } {statd TCP 662 0 } {statd-udp UDP 662 0 }] []}} ContainerID="f3718857e337073c4cc08c2d7987f2bb61f02dc2fadf11acda7d0274bf5fc756" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.30-k8s-nfs--server--provisioner--0-" Dec 13 13:20:17.963192 containerd[1497]: 2024-12-13 13:20:17.886 [INFO][3463] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="f3718857e337073c4cc08c2d7987f2bb61f02dc2fadf11acda7d0274bf5fc756" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.30-k8s-nfs--server--provisioner--0-eth0" Dec 13 13:20:17.963192 containerd[1497]: 2024-12-13 13:20:17.914 [INFO][3476] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f3718857e337073c4cc08c2d7987f2bb61f02dc2fadf11acda7d0274bf5fc756" HandleID="k8s-pod-network.f3718857e337073c4cc08c2d7987f2bb61f02dc2fadf11acda7d0274bf5fc756" Workload="10.0.0.30-k8s-nfs--server--provisioner--0-eth0" Dec 13 13:20:17.963192 containerd[1497]: 2024-12-13 13:20:17.921 [INFO][3476] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="f3718857e337073c4cc08c2d7987f2bb61f02dc2fadf11acda7d0274bf5fc756" HandleID="k8s-pod-network.f3718857e337073c4cc08c2d7987f2bb61f02dc2fadf11acda7d0274bf5fc756" Workload="10.0.0.30-k8s-nfs--server--provisioner--0-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00030b8e0), Attrs:map[string]string{"namespace":"default", "node":"10.0.0.30", "pod":"nfs-server-provisioner-0", "timestamp":"2024-12-13 
13:20:17.914530108 +0000 UTC"}, Hostname:"10.0.0.30", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 13:20:17.963192 containerd[1497]: 2024-12-13 13:20:17.921 [INFO][3476] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 13:20:17.963192 containerd[1497]: 2024-12-13 13:20:17.921 [INFO][3476] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 13:20:17.963192 containerd[1497]: 2024-12-13 13:20:17.921 [INFO][3476] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.0.0.30' Dec 13 13:20:17.963192 containerd[1497]: 2024-12-13 13:20:17.923 [INFO][3476] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.f3718857e337073c4cc08c2d7987f2bb61f02dc2fadf11acda7d0274bf5fc756" host="10.0.0.30" Dec 13 13:20:17.963192 containerd[1497]: 2024-12-13 13:20:17.927 [INFO][3476] ipam/ipam.go 372: Looking up existing affinities for host host="10.0.0.30" Dec 13 13:20:17.963192 containerd[1497]: 2024-12-13 13:20:17.933 [INFO][3476] ipam/ipam.go 489: Trying affinity for 192.168.125.0/26 host="10.0.0.30" Dec 13 13:20:17.963192 containerd[1497]: 2024-12-13 13:20:17.935 [INFO][3476] ipam/ipam.go 155: Attempting to load block cidr=192.168.125.0/26 host="10.0.0.30" Dec 13 13:20:17.963192 containerd[1497]: 2024-12-13 13:20:17.937 [INFO][3476] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.125.0/26 host="10.0.0.30" Dec 13 13:20:17.963192 containerd[1497]: 2024-12-13 13:20:17.937 [INFO][3476] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.125.0/26 handle="k8s-pod-network.f3718857e337073c4cc08c2d7987f2bb61f02dc2fadf11acda7d0274bf5fc756" host="10.0.0.30" Dec 13 13:20:17.963192 containerd[1497]: 2024-12-13 13:20:17.939 [INFO][3476] ipam/ipam.go 1685: Creating new handle: 
k8s-pod-network.f3718857e337073c4cc08c2d7987f2bb61f02dc2fadf11acda7d0274bf5fc756 Dec 13 13:20:17.963192 containerd[1497]: 2024-12-13 13:20:17.942 [INFO][3476] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.125.0/26 handle="k8s-pod-network.f3718857e337073c4cc08c2d7987f2bb61f02dc2fadf11acda7d0274bf5fc756" host="10.0.0.30" Dec 13 13:20:17.963192 containerd[1497]: 2024-12-13 13:20:17.946 [INFO][3476] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.125.3/26] block=192.168.125.0/26 handle="k8s-pod-network.f3718857e337073c4cc08c2d7987f2bb61f02dc2fadf11acda7d0274bf5fc756" host="10.0.0.30" Dec 13 13:20:17.963192 containerd[1497]: 2024-12-13 13:20:17.946 [INFO][3476] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.125.3/26] handle="k8s-pod-network.f3718857e337073c4cc08c2d7987f2bb61f02dc2fadf11acda7d0274bf5fc756" host="10.0.0.30" Dec 13 13:20:17.963192 containerd[1497]: 2024-12-13 13:20:17.946 [INFO][3476] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Dec 13 13:20:17.963192 containerd[1497]: 2024-12-13 13:20:17.946 [INFO][3476] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.125.3/26] IPv6=[] ContainerID="f3718857e337073c4cc08c2d7987f2bb61f02dc2fadf11acda7d0274bf5fc756" HandleID="k8s-pod-network.f3718857e337073c4cc08c2d7987f2bb61f02dc2fadf11acda7d0274bf5fc756" Workload="10.0.0.30-k8s-nfs--server--provisioner--0-eth0" Dec 13 13:20:17.964149 containerd[1497]: 2024-12-13 13:20:17.948 [INFO][3463] cni-plugin/k8s.go 386: Populated endpoint ContainerID="f3718857e337073c4cc08c2d7987f2bb61f02dc2fadf11acda7d0274bf5fc756" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.30-k8s-nfs--server--provisioner--0-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.30-k8s-nfs--server--provisioner--0-eth0", GenerateName:"nfs-server-provisioner-", Namespace:"default", SelfLink:"", UID:"0679dda0-2ec9-478c-baac-17e3f98a3d74", ResourceVersion:"1199", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 13, 20, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nfs-server-provisioner", "apps.kubernetes.io/pod-index":"0", "chart":"nfs-server-provisioner-1.8.0", "controller-revision-hash":"nfs-server-provisioner-d5cbb7f57", "heritage":"Helm", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"nfs-server-provisioner", "release":"nfs-server-provisioner", "statefulset.kubernetes.io/pod-name":"nfs-server-provisioner-0"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.30", ContainerID:"", Pod:"nfs-server-provisioner-0", Endpoint:"eth0", 
ServiceAccountName:"nfs-server-provisioner", IPNetworks:[]string{"192.168.125.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.nfs-server-provisioner"}, InterfaceName:"cali60e51b789ff", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"nfs", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nfs-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x296, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x296, 
HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 13:20:17.964149 containerd[1497]: 2024-12-13 13:20:17.949 [INFO][3463] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.125.3/32] ContainerID="f3718857e337073c4cc08c2d7987f2bb61f02dc2fadf11acda7d0274bf5fc756" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.30-k8s-nfs--server--provisioner--0-eth0" Dec 13 13:20:17.964149 containerd[1497]: 2024-12-13 13:20:17.949 [INFO][3463] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali60e51b789ff ContainerID="f3718857e337073c4cc08c2d7987f2bb61f02dc2fadf11acda7d0274bf5fc756" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.30-k8s-nfs--server--provisioner--0-eth0" Dec 13 13:20:17.964149 containerd[1497]: 2024-12-13 13:20:17.952 [INFO][3463] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f3718857e337073c4cc08c2d7987f2bb61f02dc2fadf11acda7d0274bf5fc756" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.30-k8s-nfs--server--provisioner--0-eth0" Dec 13 13:20:17.964320 containerd[1497]: 2024-12-13 13:20:17.952 [INFO][3463] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="f3718857e337073c4cc08c2d7987f2bb61f02dc2fadf11acda7d0274bf5fc756" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.30-k8s-nfs--server--provisioner--0-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.30-k8s-nfs--server--provisioner--0-eth0", GenerateName:"nfs-server-provisioner-", Namespace:"default", SelfLink:"", UID:"0679dda0-2ec9-478c-baac-17e3f98a3d74", ResourceVersion:"1199", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 13, 20, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), 
Labels:map[string]string{"app":"nfs-server-provisioner", "apps.kubernetes.io/pod-index":"0", "chart":"nfs-server-provisioner-1.8.0", "controller-revision-hash":"nfs-server-provisioner-d5cbb7f57", "heritage":"Helm", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"nfs-server-provisioner", "release":"nfs-server-provisioner", "statefulset.kubernetes.io/pod-name":"nfs-server-provisioner-0"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.30", ContainerID:"f3718857e337073c4cc08c2d7987f2bb61f02dc2fadf11acda7d0274bf5fc756", Pod:"nfs-server-provisioner-0", Endpoint:"eth0", ServiceAccountName:"nfs-server-provisioner", IPNetworks:[]string{"192.168.125.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.nfs-server-provisioner"}, InterfaceName:"cali60e51b789ff", MAC:"ee:f0:be:9d:47:0f", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"nfs", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nfs-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, 
v3.WorkloadEndpointPort{Name:"rquotad", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x296, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x296, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 13:20:17.964320 containerd[1497]: 2024-12-13 13:20:17.960 [INFO][3463] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="f3718857e337073c4cc08c2d7987f2bb61f02dc2fadf11acda7d0274bf5fc756" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.0.0.30-k8s-nfs--server--provisioner--0-eth0" Dec 13 13:20:18.080947 update_engine[1484]: I20241213 13:20:18.080855 1484 update_attempter.cc:509] Updating boot flags... Dec 13 13:20:18.099186 containerd[1497]: time="2024-12-13T13:20:18.098992303Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 13:20:18.099186 containerd[1497]: time="2024-12-13T13:20:18.099064280Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 13:20:18.100180 containerd[1497]: time="2024-12-13T13:20:18.099076864Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 13:20:18.100306 containerd[1497]: time="2024-12-13T13:20:18.100135483Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 13:20:18.125978 systemd[1]: Started cri-containerd-f3718857e337073c4cc08c2d7987f2bb61f02dc2fadf11acda7d0274bf5fc756.scope - libcontainer container f3718857e337073c4cc08c2d7987f2bb61f02dc2fadf11acda7d0274bf5fc756. Dec 13 13:20:18.131629 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (3529) Dec 13 13:20:18.161558 systemd-resolved[1337]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 13 13:20:18.177610 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (3529) Dec 13 13:20:18.201004 containerd[1497]: time="2024-12-13T13:20:18.200876213Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:0679dda0-2ec9-478c-baac-17e3f98a3d74,Namespace:default,Attempt:0,} returns sandbox id \"f3718857e337073c4cc08c2d7987f2bb61f02dc2fadf11acda7d0274bf5fc756\"" Dec 13 13:20:18.205251 containerd[1497]: time="2024-12-13T13:20:18.204999996Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\"" Dec 13 13:20:18.219611 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (3529) Dec 13 13:20:18.852487 kubelet[1807]: E1213 13:20:18.852438 1807 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:20:19.853528 kubelet[1807]: E1213 13:20:19.853299 1807 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:20:19.865780 systemd-networkd[1427]: cali60e51b789ff: Gained IPv6LL Dec 13 13:20:20.380040 systemd[1]: 
var-lib-containerd-tmpmounts-containerd\x2dmount1366162449.mount: Deactivated successfully. Dec 13 13:20:20.854369 kubelet[1807]: E1213 13:20:20.854310 1807 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:20:21.829437 kubelet[1807]: E1213 13:20:21.829344 1807 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:20:21.854874 kubelet[1807]: E1213 13:20:21.854803 1807 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:20:22.855217 kubelet[1807]: E1213 13:20:22.855145 1807 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:20:23.048705 containerd[1497]: time="2024-12-13T13:20:23.048625572Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:20:23.056760 containerd[1497]: time="2024-12-13T13:20:23.056705824Z" level=info msg="stop pulling image registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8: active requests=0, bytes read=91039406" Dec 13 13:20:23.058210 containerd[1497]: time="2024-12-13T13:20:23.058158661Z" level=info msg="ImageCreate event name:\"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:20:23.061451 containerd[1497]: time="2024-12-13T13:20:23.061405040Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:20:23.062369 containerd[1497]: time="2024-12-13T13:20:23.062327706Z" level=info msg="Pulled image \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" with image id 
\"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\", repo tag \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\", repo digest \"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\", size \"91036984\" in 4.857292162s" Dec 13 13:20:23.062439 containerd[1497]: time="2024-12-13T13:20:23.062366318Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" returns image reference \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\"" Dec 13 13:20:23.064222 containerd[1497]: time="2024-12-13T13:20:23.064181872Z" level=info msg="CreateContainer within sandbox \"f3718857e337073c4cc08c2d7987f2bb61f02dc2fadf11acda7d0274bf5fc756\" for container &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,}" Dec 13 13:20:23.076793 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3960916940.mount: Deactivated successfully. Dec 13 13:20:23.080751 containerd[1497]: time="2024-12-13T13:20:23.080689261Z" level=info msg="CreateContainer within sandbox \"f3718857e337073c4cc08c2d7987f2bb61f02dc2fadf11acda7d0274bf5fc756\" for &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,} returns container id \"6d15e13c60ebf1b131fe43a28ea95dba75674174af429874cd356de79928b5f4\"" Dec 13 13:20:23.081519 containerd[1497]: time="2024-12-13T13:20:23.081469918Z" level=info msg="StartContainer for \"6d15e13c60ebf1b131fe43a28ea95dba75674174af429874cd356de79928b5f4\"" Dec 13 13:20:23.116759 systemd[1]: Started cri-containerd-6d15e13c60ebf1b131fe43a28ea95dba75674174af429874cd356de79928b5f4.scope - libcontainer container 6d15e13c60ebf1b131fe43a28ea95dba75674174af429874cd356de79928b5f4. 
Dec 13 13:20:23.185212 containerd[1497]: time="2024-12-13T13:20:23.185157021Z" level=info msg="StartContainer for \"6d15e13c60ebf1b131fe43a28ea95dba75674174af429874cd356de79928b5f4\" returns successfully" Dec 13 13:20:23.515361 kubelet[1807]: I1213 13:20:23.515313 1807 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/nfs-server-provisioner-0" podStartSLOduration=1.657425776 podStartE2EDuration="6.515267278s" podCreationTimestamp="2024-12-13 13:20:17 +0000 UTC" firstStartedPulling="2024-12-13 13:20:18.204815136 +0000 UTC m=+36.692225289" lastFinishedPulling="2024-12-13 13:20:23.062656638 +0000 UTC m=+41.550066791" observedRunningTime="2024-12-13 13:20:23.515048084 +0000 UTC m=+42.002458247" watchObservedRunningTime="2024-12-13 13:20:23.515267278 +0000 UTC m=+42.002677431" Dec 13 13:20:23.855968 kubelet[1807]: E1213 13:20:23.855786 1807 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:20:24.856139 kubelet[1807]: E1213 13:20:24.856070 1807 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:20:25.859029 kubelet[1807]: E1213 13:20:25.857143 1807 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:20:26.861661 kubelet[1807]: E1213 13:20:26.858612 1807 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:20:27.859605 kubelet[1807]: E1213 13:20:27.859377 1807 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:20:28.859769 kubelet[1807]: E1213 13:20:28.859671 1807 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:20:29.860963 kubelet[1807]: E1213 13:20:29.860829 1807 file_linux.go:61] "Unable to read 
config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:20:30.862661 kubelet[1807]: E1213 13:20:30.862482 1807 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:20:31.864748 kubelet[1807]: E1213 13:20:31.863510 1807 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:20:32.865181 kubelet[1807]: E1213 13:20:32.865100 1807 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:20:33.140412 kubelet[1807]: I1213 13:20:33.140203 1807 topology_manager.go:215] "Topology Admit Handler" podUID="1a906390-c18d-4803-a2f8-8b40fbae7225" podNamespace="default" podName="test-pod-1" Dec 13 13:20:33.165857 systemd[1]: Created slice kubepods-besteffort-pod1a906390_c18d_4803_a2f8_8b40fbae7225.slice - libcontainer container kubepods-besteffort-pod1a906390_c18d_4803_a2f8_8b40fbae7225.slice. Dec 13 13:20:33.320299 kubelet[1807]: I1213 13:20:33.319827 1807 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rfrqx\" (UniqueName: \"kubernetes.io/projected/1a906390-c18d-4803-a2f8-8b40fbae7225-kube-api-access-rfrqx\") pod \"test-pod-1\" (UID: \"1a906390-c18d-4803-a2f8-8b40fbae7225\") " pod="default/test-pod-1" Dec 13 13:20:33.320299 kubelet[1807]: I1213 13:20:33.319914 1807 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-799919ad-b00c-4f4a-8c8b-9e86b1632a10\" (UniqueName: \"kubernetes.io/nfs/1a906390-c18d-4803-a2f8-8b40fbae7225-pvc-799919ad-b00c-4f4a-8c8b-9e86b1632a10\") pod \"test-pod-1\" (UID: \"1a906390-c18d-4803-a2f8-8b40fbae7225\") " pod="default/test-pod-1" Dec 13 13:20:33.532890 kernel: FS-Cache: Loaded Dec 13 13:20:33.694717 kernel: RPC: Registered named UNIX socket transport module. 
Dec 13 13:20:33.694882 kernel: RPC: Registered udp transport module. Dec 13 13:20:33.694913 kernel: RPC: Registered tcp transport module. Dec 13 13:20:33.695326 kernel: RPC: Registered tcp-with-tls transport module. Dec 13 13:20:33.696731 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module. Dec 13 13:20:33.867831 kubelet[1807]: E1213 13:20:33.867203 1807 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:20:34.175323 kernel: NFS: Registering the id_resolver key type Dec 13 13:20:34.175512 kernel: Key type id_resolver registered Dec 13 13:20:34.175538 kernel: Key type id_legacy registered Dec 13 13:20:34.286394 nfsidmap[3679]: nss_getpwnam: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'localdomain' Dec 13 13:20:34.309239 nfsidmap[3682]: nss_name_to_gid: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'localdomain' Dec 13 13:20:34.381776 containerd[1497]: time="2024-12-13T13:20:34.381639478Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:1a906390-c18d-4803-a2f8-8b40fbae7225,Namespace:default,Attempt:0,}" Dec 13 13:20:34.745668 systemd-networkd[1427]: cali5ec59c6bf6e: Link UP Dec 13 13:20:34.746925 systemd-networkd[1427]: cali5ec59c6bf6e: Gained carrier Dec 13 13:20:34.779179 containerd[1497]: 2024-12-13 13:20:34.511 [INFO][3686] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.0.0.30-k8s-test--pod--1-eth0 default 1a906390-c18d-4803-a2f8-8b40fbae7225 1262 0 2024-12-13 13:20:17 +0000 UTC map[projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s 10.0.0.30 test-pod-1 eth0 default [] [] [kns.default ksa.default.default] cali5ec59c6bf6e [] []}} ContainerID="699afa8951312013afb24047705e15d3760fc6f1da044753ff91a6de64f2a31d" Namespace="default" 
Pod="test-pod-1" WorkloadEndpoint="10.0.0.30-k8s-test--pod--1-" Dec 13 13:20:34.779179 containerd[1497]: 2024-12-13 13:20:34.512 [INFO][3686] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="699afa8951312013afb24047705e15d3760fc6f1da044753ff91a6de64f2a31d" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.30-k8s-test--pod--1-eth0" Dec 13 13:20:34.779179 containerd[1497]: 2024-12-13 13:20:34.600 [INFO][3700] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="699afa8951312013afb24047705e15d3760fc6f1da044753ff91a6de64f2a31d" HandleID="k8s-pod-network.699afa8951312013afb24047705e15d3760fc6f1da044753ff91a6de64f2a31d" Workload="10.0.0.30-k8s-test--pod--1-eth0" Dec 13 13:20:34.779179 containerd[1497]: 2024-12-13 13:20:34.624 [INFO][3700] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="699afa8951312013afb24047705e15d3760fc6f1da044753ff91a6de64f2a31d" HandleID="k8s-pod-network.699afa8951312013afb24047705e15d3760fc6f1da044753ff91a6de64f2a31d" Workload="10.0.0.30-k8s-test--pod--1-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000390d50), Attrs:map[string]string{"namespace":"default", "node":"10.0.0.30", "pod":"test-pod-1", "timestamp":"2024-12-13 13:20:34.600968177 +0000 UTC"}, Hostname:"10.0.0.30", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 13:20:34.779179 containerd[1497]: 2024-12-13 13:20:34.624 [INFO][3700] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 13:20:34.779179 containerd[1497]: 2024-12-13 13:20:34.625 [INFO][3700] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Dec 13 13:20:34.779179 containerd[1497]: 2024-12-13 13:20:34.625 [INFO][3700] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.0.0.30' Dec 13 13:20:34.779179 containerd[1497]: 2024-12-13 13:20:34.628 [INFO][3700] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.699afa8951312013afb24047705e15d3760fc6f1da044753ff91a6de64f2a31d" host="10.0.0.30" Dec 13 13:20:34.779179 containerd[1497]: 2024-12-13 13:20:34.642 [INFO][3700] ipam/ipam.go 372: Looking up existing affinities for host host="10.0.0.30" Dec 13 13:20:34.779179 containerd[1497]: 2024-12-13 13:20:34.669 [INFO][3700] ipam/ipam.go 489: Trying affinity for 192.168.125.0/26 host="10.0.0.30" Dec 13 13:20:34.779179 containerd[1497]: 2024-12-13 13:20:34.676 [INFO][3700] ipam/ipam.go 155: Attempting to load block cidr=192.168.125.0/26 host="10.0.0.30" Dec 13 13:20:34.779179 containerd[1497]: 2024-12-13 13:20:34.681 [INFO][3700] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.125.0/26 host="10.0.0.30" Dec 13 13:20:34.779179 containerd[1497]: 2024-12-13 13:20:34.681 [INFO][3700] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.125.0/26 handle="k8s-pod-network.699afa8951312013afb24047705e15d3760fc6f1da044753ff91a6de64f2a31d" host="10.0.0.30" Dec 13 13:20:34.779179 containerd[1497]: 2024-12-13 13:20:34.693 [INFO][3700] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.699afa8951312013afb24047705e15d3760fc6f1da044753ff91a6de64f2a31d Dec 13 13:20:34.779179 containerd[1497]: 2024-12-13 13:20:34.703 [INFO][3700] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.125.0/26 handle="k8s-pod-network.699afa8951312013afb24047705e15d3760fc6f1da044753ff91a6de64f2a31d" host="10.0.0.30" Dec 13 13:20:34.779179 containerd[1497]: 2024-12-13 13:20:34.733 [INFO][3700] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.125.4/26] block=192.168.125.0/26 
handle="k8s-pod-network.699afa8951312013afb24047705e15d3760fc6f1da044753ff91a6de64f2a31d" host="10.0.0.30" Dec 13 13:20:34.779179 containerd[1497]: 2024-12-13 13:20:34.733 [INFO][3700] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.125.4/26] handle="k8s-pod-network.699afa8951312013afb24047705e15d3760fc6f1da044753ff91a6de64f2a31d" host="10.0.0.30" Dec 13 13:20:34.779179 containerd[1497]: 2024-12-13 13:20:34.733 [INFO][3700] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 13:20:34.779179 containerd[1497]: 2024-12-13 13:20:34.733 [INFO][3700] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.125.4/26] IPv6=[] ContainerID="699afa8951312013afb24047705e15d3760fc6f1da044753ff91a6de64f2a31d" HandleID="k8s-pod-network.699afa8951312013afb24047705e15d3760fc6f1da044753ff91a6de64f2a31d" Workload="10.0.0.30-k8s-test--pod--1-eth0" Dec 13 13:20:34.779179 containerd[1497]: 2024-12-13 13:20:34.739 [INFO][3686] cni-plugin/k8s.go 386: Populated endpoint ContainerID="699afa8951312013afb24047705e15d3760fc6f1da044753ff91a6de64f2a31d" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.30-k8s-test--pod--1-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.30-k8s-test--pod--1-eth0", GenerateName:"", Namespace:"default", SelfLink:"", UID:"1a906390-c18d-4803-a2f8-8b40fbae7225", ResourceVersion:"1262", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 13, 20, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.30", 
ContainerID:"", Pod:"test-pod-1", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.125.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali5ec59c6bf6e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 13:20:34.780282 containerd[1497]: 2024-12-13 13:20:34.740 [INFO][3686] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.125.4/32] ContainerID="699afa8951312013afb24047705e15d3760fc6f1da044753ff91a6de64f2a31d" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.30-k8s-test--pod--1-eth0" Dec 13 13:20:34.780282 containerd[1497]: 2024-12-13 13:20:34.740 [INFO][3686] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5ec59c6bf6e ContainerID="699afa8951312013afb24047705e15d3760fc6f1da044753ff91a6de64f2a31d" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.30-k8s-test--pod--1-eth0" Dec 13 13:20:34.780282 containerd[1497]: 2024-12-13 13:20:34.746 [INFO][3686] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="699afa8951312013afb24047705e15d3760fc6f1da044753ff91a6de64f2a31d" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.30-k8s-test--pod--1-eth0" Dec 13 13:20:34.780282 containerd[1497]: 2024-12-13 13:20:34.748 [INFO][3686] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="699afa8951312013afb24047705e15d3760fc6f1da044753ff91a6de64f2a31d" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.30-k8s-test--pod--1-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.30-k8s-test--pod--1-eth0", GenerateName:"", Namespace:"default", SelfLink:"", UID:"1a906390-c18d-4803-a2f8-8b40fbae7225", ResourceVersion:"1262", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 13, 
20, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.0.0.30", ContainerID:"699afa8951312013afb24047705e15d3760fc6f1da044753ff91a6de64f2a31d", Pod:"test-pod-1", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.125.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali5ec59c6bf6e", MAC:"0e:9e:1f:09:fd:03", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 13:20:34.780282 containerd[1497]: 2024-12-13 13:20:34.774 [INFO][3686] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="699afa8951312013afb24047705e15d3760fc6f1da044753ff91a6de64f2a31d" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.0.0.30-k8s-test--pod--1-eth0" Dec 13 13:20:34.842947 containerd[1497]: time="2024-12-13T13:20:34.839297506Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 13:20:34.842947 containerd[1497]: time="2024-12-13T13:20:34.839525155Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 13:20:34.842947 containerd[1497]: time="2024-12-13T13:20:34.839612841Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 13:20:34.842947 containerd[1497]: time="2024-12-13T13:20:34.839762983Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 13:20:34.871621 kubelet[1807]: E1213 13:20:34.871301 1807 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:20:34.935963 systemd[1]: Started cri-containerd-699afa8951312013afb24047705e15d3760fc6f1da044753ff91a6de64f2a31d.scope - libcontainer container 699afa8951312013afb24047705e15d3760fc6f1da044753ff91a6de64f2a31d. Dec 13 13:20:34.960457 systemd-resolved[1337]: Failed to determine the local hostname and LLMNR/mDNS names, ignoring: No such device or address Dec 13 13:20:35.028531 containerd[1497]: time="2024-12-13T13:20:35.028314661Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:1a906390-c18d-4803-a2f8-8b40fbae7225,Namespace:default,Attempt:0,} returns sandbox id \"699afa8951312013afb24047705e15d3760fc6f1da044753ff91a6de64f2a31d\"" Dec 13 13:20:35.032423 containerd[1497]: time="2024-12-13T13:20:35.032365858Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Dec 13 13:20:35.514346 containerd[1497]: time="2024-12-13T13:20:35.512541126Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:20:35.527096 containerd[1497]: time="2024-12-13T13:20:35.523274352Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=61" Dec 13 13:20:35.527542 containerd[1497]: time="2024-12-13T13:20:35.527485631Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:fa0a8cea5e76ad962111c39c85bb312edaf5b89eccd8f404eeea66c9759641e3\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:e04edf30a4ea4c5a4107110797c72d3ee8a654415f00acd4019be17218afd9a1\", size \"71035905\" in 495.066063ms" Dec 13 13:20:35.527542 containerd[1497]: time="2024-12-13T13:20:35.527542078Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" 
returns image reference \"sha256:fa0a8cea5e76ad962111c39c85bb312edaf5b89eccd8f404eeea66c9759641e3\"" Dec 13 13:20:35.530619 containerd[1497]: time="2024-12-13T13:20:35.530554289Z" level=info msg="CreateContainer within sandbox \"699afa8951312013afb24047705e15d3760fc6f1da044753ff91a6de64f2a31d\" for container &ContainerMetadata{Name:test,Attempt:0,}" Dec 13 13:20:35.583289 containerd[1497]: time="2024-12-13T13:20:35.583183221Z" level=info msg="CreateContainer within sandbox \"699afa8951312013afb24047705e15d3760fc6f1da044753ff91a6de64f2a31d\" for &ContainerMetadata{Name:test,Attempt:0,} returns container id \"184fe387ec8704766324d5b9a3d1053fd1b3af76bc69b83c6af3cd6d37334564\"" Dec 13 13:20:35.585651 containerd[1497]: time="2024-12-13T13:20:35.585557901Z" level=info msg="StartContainer for \"184fe387ec8704766324d5b9a3d1053fd1b3af76bc69b83c6af3cd6d37334564\"" Dec 13 13:20:35.659782 systemd[1]: Started cri-containerd-184fe387ec8704766324d5b9a3d1053fd1b3af76bc69b83c6af3cd6d37334564.scope - libcontainer container 184fe387ec8704766324d5b9a3d1053fd1b3af76bc69b83c6af3cd6d37334564. 
Dec 13 13:20:35.738550 containerd[1497]: time="2024-12-13T13:20:35.738364145Z" level=info msg="StartContainer for \"184fe387ec8704766324d5b9a3d1053fd1b3af76bc69b83c6af3cd6d37334564\" returns successfully" Dec 13 13:20:35.872807 kubelet[1807]: E1213 13:20:35.872280 1807 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:20:36.507065 systemd-networkd[1427]: cali5ec59c6bf6e: Gained IPv6LL Dec 13 13:20:36.557533 kubelet[1807]: I1213 13:20:36.557462 1807 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/test-pod-1" podStartSLOduration=19.061443411 podStartE2EDuration="19.557409347s" podCreationTimestamp="2024-12-13 13:20:17 +0000 UTC" firstStartedPulling="2024-12-13 13:20:35.031949644 +0000 UTC m=+53.519359797" lastFinishedPulling="2024-12-13 13:20:35.52791558 +0000 UTC m=+54.015325733" observedRunningTime="2024-12-13 13:20:36.556776646 +0000 UTC m=+55.044186820" watchObservedRunningTime="2024-12-13 13:20:36.557409347 +0000 UTC m=+55.044819510" Dec 13 13:20:36.872747 kubelet[1807]: E1213 13:20:36.872528 1807 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:20:37.873182 kubelet[1807]: E1213 13:20:37.872980 1807 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:20:38.873974 kubelet[1807]: E1213 13:20:38.873857 1807 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:20:38.950636 kubelet[1807]: E1213 13:20:38.950242 1807 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Dec 13 13:20:39.874416 kubelet[1807]: E1213 13:20:39.874322 1807 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" 
path="/etc/kubernetes/manifests" Dec 13 13:20:40.874718 kubelet[1807]: E1213 13:20:40.874623 1807 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:20:41.834902 kubelet[1807]: E1213 13:20:41.829267 1807 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:20:41.849870 containerd[1497]: time="2024-12-13T13:20:41.849790773Z" level=info msg="StopPodSandbox for \"5181670da0d9588baf456216d436e00f505f60d7901fcf26fab125662ac1bef9\"" Dec 13 13:20:41.850426 containerd[1497]: time="2024-12-13T13:20:41.849962154Z" level=info msg="TearDown network for sandbox \"5181670da0d9588baf456216d436e00f505f60d7901fcf26fab125662ac1bef9\" successfully" Dec 13 13:20:41.850426 containerd[1497]: time="2024-12-13T13:20:41.849978546Z" level=info msg="StopPodSandbox for \"5181670da0d9588baf456216d436e00f505f60d7901fcf26fab125662ac1bef9\" returns successfully" Dec 13 13:20:41.852129 containerd[1497]: time="2024-12-13T13:20:41.852092721Z" level=info msg="RemovePodSandbox for \"5181670da0d9588baf456216d436e00f505f60d7901fcf26fab125662ac1bef9\"" Dec 13 13:20:41.853467 containerd[1497]: time="2024-12-13T13:20:41.852256268Z" level=info msg="Forcibly stopping sandbox \"5181670da0d9588baf456216d436e00f505f60d7901fcf26fab125662ac1bef9\"" Dec 13 13:20:41.853467 containerd[1497]: time="2024-12-13T13:20:41.852371404Z" level=info msg="TearDown network for sandbox \"5181670da0d9588baf456216d436e00f505f60d7901fcf26fab125662ac1bef9\" successfully" Dec 13 13:20:41.864924 containerd[1497]: time="2024-12-13T13:20:41.864731566Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"5181670da0d9588baf456216d436e00f505f60d7901fcf26fab125662ac1bef9\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Dec 13 13:20:41.864924 containerd[1497]: time="2024-12-13T13:20:41.864861350Z" level=info msg="RemovePodSandbox \"5181670da0d9588baf456216d436e00f505f60d7901fcf26fab125662ac1bef9\" returns successfully" Dec 13 13:20:41.869480 containerd[1497]: time="2024-12-13T13:20:41.869076916Z" level=info msg="StopPodSandbox for \"b1979a837e2d1c66e44ee30b4493e4f3d29af9ae24912b9c53756fdca6ed1f58\"" Dec 13 13:20:41.869480 containerd[1497]: time="2024-12-13T13:20:41.869260852Z" level=info msg="TearDown network for sandbox \"b1979a837e2d1c66e44ee30b4493e4f3d29af9ae24912b9c53756fdca6ed1f58\" successfully" Dec 13 13:20:41.869480 containerd[1497]: time="2024-12-13T13:20:41.869279737Z" level=info msg="StopPodSandbox for \"b1979a837e2d1c66e44ee30b4493e4f3d29af9ae24912b9c53756fdca6ed1f58\" returns successfully" Dec 13 13:20:41.880901 kubelet[1807]: E1213 13:20:41.880793 1807 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:20:41.888055 containerd[1497]: time="2024-12-13T13:20:41.885833034Z" level=info msg="RemovePodSandbox for \"b1979a837e2d1c66e44ee30b4493e4f3d29af9ae24912b9c53756fdca6ed1f58\"" Dec 13 13:20:41.888055 containerd[1497]: time="2024-12-13T13:20:41.885900550Z" level=info msg="Forcibly stopping sandbox \"b1979a837e2d1c66e44ee30b4493e4f3d29af9ae24912b9c53756fdca6ed1f58\"" Dec 13 13:20:41.888055 containerd[1497]: time="2024-12-13T13:20:41.886037558Z" level=info msg="TearDown network for sandbox \"b1979a837e2d1c66e44ee30b4493e4f3d29af9ae24912b9c53756fdca6ed1f58\" successfully" Dec 13 13:20:41.891434 containerd[1497]: time="2024-12-13T13:20:41.891347993Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b1979a837e2d1c66e44ee30b4493e4f3d29af9ae24912b9c53756fdca6ed1f58\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Dec 13 13:20:41.891434 containerd[1497]: time="2024-12-13T13:20:41.891435298Z" level=info msg="RemovePodSandbox \"b1979a837e2d1c66e44ee30b4493e4f3d29af9ae24912b9c53756fdca6ed1f58\" returns successfully" Dec 13 13:20:41.903516 containerd[1497]: time="2024-12-13T13:20:41.900224964Z" level=info msg="StopPodSandbox for \"fd97b474445f1b4171451ca75c3249b2aa6622cab7ed3dd3e92378aac2adc1b2\"" Dec 13 13:20:41.903516 containerd[1497]: time="2024-12-13T13:20:41.900437264Z" level=info msg="TearDown network for sandbox \"fd97b474445f1b4171451ca75c3249b2aa6622cab7ed3dd3e92378aac2adc1b2\" successfully" Dec 13 13:20:41.903516 containerd[1497]: time="2024-12-13T13:20:41.900457011Z" level=info msg="StopPodSandbox for \"fd97b474445f1b4171451ca75c3249b2aa6622cab7ed3dd3e92378aac2adc1b2\" returns successfully" Dec 13 13:20:41.903516 containerd[1497]: time="2024-12-13T13:20:41.901230295Z" level=info msg="RemovePodSandbox for \"fd97b474445f1b4171451ca75c3249b2aa6622cab7ed3dd3e92378aac2adc1b2\"" Dec 13 13:20:41.903516 containerd[1497]: time="2024-12-13T13:20:41.901258638Z" level=info msg="Forcibly stopping sandbox \"fd97b474445f1b4171451ca75c3249b2aa6622cab7ed3dd3e92378aac2adc1b2\"" Dec 13 13:20:41.903516 containerd[1497]: time="2024-12-13T13:20:41.901352946Z" level=info msg="TearDown network for sandbox \"fd97b474445f1b4171451ca75c3249b2aa6622cab7ed3dd3e92378aac2adc1b2\" successfully" Dec 13 13:20:41.917423 containerd[1497]: time="2024-12-13T13:20:41.917242082Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"fd97b474445f1b4171451ca75c3249b2aa6622cab7ed3dd3e92378aac2adc1b2\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Dec 13 13:20:41.917423 containerd[1497]: time="2024-12-13T13:20:41.917352930Z" level=info msg="RemovePodSandbox \"fd97b474445f1b4171451ca75c3249b2aa6622cab7ed3dd3e92378aac2adc1b2\" returns successfully" Dec 13 13:20:41.918927 containerd[1497]: time="2024-12-13T13:20:41.918867929Z" level=info msg="StopPodSandbox for \"490c138960660889a13ffa5a5998670d1d357e3fdfae0ed3816b507aab994bd4\"" Dec 13 13:20:41.919029 containerd[1497]: time="2024-12-13T13:20:41.919015036Z" level=info msg="TearDown network for sandbox \"490c138960660889a13ffa5a5998670d1d357e3fdfae0ed3816b507aab994bd4\" successfully" Dec 13 13:20:41.919062 containerd[1497]: time="2024-12-13T13:20:41.919030205Z" level=info msg="StopPodSandbox for \"490c138960660889a13ffa5a5998670d1d357e3fdfae0ed3816b507aab994bd4\" returns successfully" Dec 13 13:20:41.919482 containerd[1497]: time="2024-12-13T13:20:41.919448311Z" level=info msg="RemovePodSandbox for \"490c138960660889a13ffa5a5998670d1d357e3fdfae0ed3816b507aab994bd4\"" Dec 13 13:20:41.919539 containerd[1497]: time="2024-12-13T13:20:41.919488647Z" level=info msg="Forcibly stopping sandbox \"490c138960660889a13ffa5a5998670d1d357e3fdfae0ed3816b507aab994bd4\"" Dec 13 13:20:41.919721 containerd[1497]: time="2024-12-13T13:20:41.919606669Z" level=info msg="TearDown network for sandbox \"490c138960660889a13ffa5a5998670d1d357e3fdfae0ed3816b507aab994bd4\" successfully" Dec 13 13:20:41.926565 containerd[1497]: time="2024-12-13T13:20:41.926444896Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"490c138960660889a13ffa5a5998670d1d357e3fdfae0ed3816b507aab994bd4\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Dec 13 13:20:41.926565 containerd[1497]: time="2024-12-13T13:20:41.926553831Z" level=info msg="RemovePodSandbox \"490c138960660889a13ffa5a5998670d1d357e3fdfae0ed3816b507aab994bd4\" returns successfully"
Dec 13 13:20:41.935720 containerd[1497]: time="2024-12-13T13:20:41.935636369Z" level=info msg="StopPodSandbox for \"aed468efa48fc229082956c20361999e7c649ede4a192f76c8733c0d24d405b9\""
Dec 13 13:20:41.936041 containerd[1497]: time="2024-12-13T13:20:41.935827659Z" level=info msg="TearDown network for sandbox \"aed468efa48fc229082956c20361999e7c649ede4a192f76c8733c0d24d405b9\" successfully"
Dec 13 13:20:41.936041 containerd[1497]: time="2024-12-13T13:20:41.935859719Z" level=info msg="StopPodSandbox for \"aed468efa48fc229082956c20361999e7c649ede4a192f76c8733c0d24d405b9\" returns successfully"
Dec 13 13:20:41.938626 containerd[1497]: time="2024-12-13T13:20:41.937489944Z" level=info msg="RemovePodSandbox for \"aed468efa48fc229082956c20361999e7c649ede4a192f76c8733c0d24d405b9\""
Dec 13 13:20:41.938626 containerd[1497]: time="2024-12-13T13:20:41.937541041Z" level=info msg="Forcibly stopping sandbox \"aed468efa48fc229082956c20361999e7c649ede4a192f76c8733c0d24d405b9\""
Dec 13 13:20:41.938626 containerd[1497]: time="2024-12-13T13:20:41.937697354Z" level=info msg="TearDown network for sandbox \"aed468efa48fc229082956c20361999e7c649ede4a192f76c8733c0d24d405b9\" successfully"
Dec 13 13:20:41.943722 containerd[1497]: time="2024-12-13T13:20:41.943567954Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"aed468efa48fc229082956c20361999e7c649ede4a192f76c8733c0d24d405b9\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Dec 13 13:20:41.943722 containerd[1497]: time="2024-12-13T13:20:41.943696755Z" level=info msg="RemovePodSandbox \"aed468efa48fc229082956c20361999e7c649ede4a192f76c8733c0d24d405b9\" returns successfully"
Dec 13 13:20:41.949528 containerd[1497]: time="2024-12-13T13:20:41.949182341Z" level=info msg="StopPodSandbox for \"c10e7185138d8e2075b85d37988ce7cbb462b86980ae0482189f81e46c483ff9\""
Dec 13 13:20:41.949528 containerd[1497]: time="2024-12-13T13:20:41.949391614Z" level=info msg="TearDown network for sandbox \"c10e7185138d8e2075b85d37988ce7cbb462b86980ae0482189f81e46c483ff9\" successfully"
Dec 13 13:20:41.949528 containerd[1497]: time="2024-12-13T13:20:41.949412132Z" level=info msg="StopPodSandbox for \"c10e7185138d8e2075b85d37988ce7cbb462b86980ae0482189f81e46c483ff9\" returns successfully"
Dec 13 13:20:41.950169 containerd[1497]: time="2024-12-13T13:20:41.950055201Z" level=info msg="RemovePodSandbox for \"c10e7185138d8e2075b85d37988ce7cbb462b86980ae0482189f81e46c483ff9\""
Dec 13 13:20:41.950169 containerd[1497]: time="2024-12-13T13:20:41.950098102Z" level=info msg="Forcibly stopping sandbox \"c10e7185138d8e2075b85d37988ce7cbb462b86980ae0482189f81e46c483ff9\""
Dec 13 13:20:41.950421 containerd[1497]: time="2024-12-13T13:20:41.950190025Z" level=info msg="TearDown network for sandbox \"c10e7185138d8e2075b85d37988ce7cbb462b86980ae0482189f81e46c483ff9\" successfully"
Dec 13 13:20:41.964617 containerd[1497]: time="2024-12-13T13:20:41.964513257Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c10e7185138d8e2075b85d37988ce7cbb462b86980ae0482189f81e46c483ff9\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Dec 13 13:20:41.964854 containerd[1497]: time="2024-12-13T13:20:41.964649663Z" level=info msg="RemovePodSandbox \"c10e7185138d8e2075b85d37988ce7cbb462b86980ae0482189f81e46c483ff9\" returns successfully"
Dec 13 13:20:41.965604 containerd[1497]: time="2024-12-13T13:20:41.965538245Z" level=info msg="StopPodSandbox for \"01031fc928e5c23b49fdb474a14d1c353344ab1b3d6ec1195d02e8bd0443427e\""
Dec 13 13:20:41.965909 containerd[1497]: time="2024-12-13T13:20:41.965739122Z" level=info msg="TearDown network for sandbox \"01031fc928e5c23b49fdb474a14d1c353344ab1b3d6ec1195d02e8bd0443427e\" successfully"
Dec 13 13:20:41.965909 containerd[1497]: time="2024-12-13T13:20:41.965797612Z" level=info msg="StopPodSandbox for \"01031fc928e5c23b49fdb474a14d1c353344ab1b3d6ec1195d02e8bd0443427e\" returns successfully"
Dec 13 13:20:41.966633 containerd[1497]: time="2024-12-13T13:20:41.966482199Z" level=info msg="RemovePodSandbox for \"01031fc928e5c23b49fdb474a14d1c353344ab1b3d6ec1195d02e8bd0443427e\""
Dec 13 13:20:41.966633 containerd[1497]: time="2024-12-13T13:20:41.966529048Z" level=info msg="Forcibly stopping sandbox \"01031fc928e5c23b49fdb474a14d1c353344ab1b3d6ec1195d02e8bd0443427e\""
Dec 13 13:20:41.966753 containerd[1497]: time="2024-12-13T13:20:41.966711751Z" level=info msg="TearDown network for sandbox \"01031fc928e5c23b49fdb474a14d1c353344ab1b3d6ec1195d02e8bd0443427e\" successfully"
Dec 13 13:20:41.972443 containerd[1497]: time="2024-12-13T13:20:41.972361074Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"01031fc928e5c23b49fdb474a14d1c353344ab1b3d6ec1195d02e8bd0443427e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Dec 13 13:20:41.972443 containerd[1497]: time="2024-12-13T13:20:41.972449720Z" level=info msg="RemovePodSandbox \"01031fc928e5c23b49fdb474a14d1c353344ab1b3d6ec1195d02e8bd0443427e\" returns successfully"
Dec 13 13:20:41.973235 containerd[1497]: time="2024-12-13T13:20:41.973124259Z" level=info msg="StopPodSandbox for \"f39a95b5ef18611eb8ace76ef304e5eccaddefdff13632e5002fe9dedaaf8223\""
Dec 13 13:20:41.973296 containerd[1497]: time="2024-12-13T13:20:41.973273038Z" level=info msg="TearDown network for sandbox \"f39a95b5ef18611eb8ace76ef304e5eccaddefdff13632e5002fe9dedaaf8223\" successfully"
Dec 13 13:20:41.973296 containerd[1497]: time="2024-12-13T13:20:41.973287024Z" level=info msg="StopPodSandbox for \"f39a95b5ef18611eb8ace76ef304e5eccaddefdff13632e5002fe9dedaaf8223\" returns successfully"
Dec 13 13:20:41.973801 containerd[1497]: time="2024-12-13T13:20:41.973737251Z" level=info msg="RemovePodSandbox for \"f39a95b5ef18611eb8ace76ef304e5eccaddefdff13632e5002fe9dedaaf8223\""
Dec 13 13:20:41.973801 containerd[1497]: time="2024-12-13T13:20:41.973802474Z" level=info msg="Forcibly stopping sandbox \"f39a95b5ef18611eb8ace76ef304e5eccaddefdff13632e5002fe9dedaaf8223\""
Dec 13 13:20:41.974048 containerd[1497]: time="2024-12-13T13:20:41.973957075Z" level=info msg="TearDown network for sandbox \"f39a95b5ef18611eb8ace76ef304e5eccaddefdff13632e5002fe9dedaaf8223\" successfully"
Dec 13 13:20:41.980359 containerd[1497]: time="2024-12-13T13:20:41.980262150Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"f39a95b5ef18611eb8ace76ef304e5eccaddefdff13632e5002fe9dedaaf8223\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Dec 13 13:20:41.980359 containerd[1497]: time="2024-12-13T13:20:41.980363902Z" level=info msg="RemovePodSandbox \"f39a95b5ef18611eb8ace76ef304e5eccaddefdff13632e5002fe9dedaaf8223\" returns successfully"
Dec 13 13:20:41.985218 containerd[1497]: time="2024-12-13T13:20:41.985128610Z" level=info msg="StopPodSandbox for \"245be961e7e4bfe9213d4e6e31f2aa6d4b1457c41f76b09b4f6d1f02eff388d8\""
Dec 13 13:20:41.985379 containerd[1497]: time="2024-12-13T13:20:41.985317706Z" level=info msg="TearDown network for sandbox \"245be961e7e4bfe9213d4e6e31f2aa6d4b1457c41f76b09b4f6d1f02eff388d8\" successfully"
Dec 13 13:20:41.985379 containerd[1497]: time="2024-12-13T13:20:41.985336181Z" level=info msg="StopPodSandbox for \"245be961e7e4bfe9213d4e6e31f2aa6d4b1457c41f76b09b4f6d1f02eff388d8\" returns successfully"
Dec 13 13:20:42.005669 containerd[1497]: time="2024-12-13T13:20:41.998156257Z" level=info msg="RemovePodSandbox for \"245be961e7e4bfe9213d4e6e31f2aa6d4b1457c41f76b09b4f6d1f02eff388d8\""
Dec 13 13:20:42.005669 containerd[1497]: time="2024-12-13T13:20:41.998212532Z" level=info msg="Forcibly stopping sandbox \"245be961e7e4bfe9213d4e6e31f2aa6d4b1457c41f76b09b4f6d1f02eff388d8\""
Dec 13 13:20:42.005669 containerd[1497]: time="2024-12-13T13:20:41.998347236Z" level=info msg="TearDown network for sandbox \"245be961e7e4bfe9213d4e6e31f2aa6d4b1457c41f76b09b4f6d1f02eff388d8\" successfully"
Dec 13 13:20:42.013495 containerd[1497]: time="2024-12-13T13:20:42.013337579Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"245be961e7e4bfe9213d4e6e31f2aa6d4b1457c41f76b09b4f6d1f02eff388d8\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Dec 13 13:20:42.013495 containerd[1497]: time="2024-12-13T13:20:42.013450180Z" level=info msg="RemovePodSandbox \"245be961e7e4bfe9213d4e6e31f2aa6d4b1457c41f76b09b4f6d1f02eff388d8\" returns successfully"
Dec 13 13:20:42.014129 containerd[1497]: time="2024-12-13T13:20:42.014087017Z" level=info msg="StopPodSandbox for \"c9b83eaef8a89a15a1cab135efaa53e3589caa4ea69b73dba41459e7be7db33d\""
Dec 13 13:20:42.014265 containerd[1497]: time="2024-12-13T13:20:42.014233233Z" level=info msg="TearDown network for sandbox \"c9b83eaef8a89a15a1cab135efaa53e3589caa4ea69b73dba41459e7be7db33d\" successfully"
Dec 13 13:20:42.014265 containerd[1497]: time="2024-12-13T13:20:42.014254512Z" level=info msg="StopPodSandbox for \"c9b83eaef8a89a15a1cab135efaa53e3589caa4ea69b73dba41459e7be7db33d\" returns successfully"
Dec 13 13:20:42.014605 containerd[1497]: time="2024-12-13T13:20:42.014558574Z" level=info msg="RemovePodSandbox for \"c9b83eaef8a89a15a1cab135efaa53e3589caa4ea69b73dba41459e7be7db33d\""
Dec 13 13:20:42.014605 containerd[1497]: time="2024-12-13T13:20:42.014602216Z" level=info msg="Forcibly stopping sandbox \"c9b83eaef8a89a15a1cab135efaa53e3589caa4ea69b73dba41459e7be7db33d\""
Dec 13 13:20:42.014748 containerd[1497]: time="2024-12-13T13:20:42.014686314Z" level=info msg="TearDown network for sandbox \"c9b83eaef8a89a15a1cab135efaa53e3589caa4ea69b73dba41459e7be7db33d\" successfully"
Dec 13 13:20:42.022180 containerd[1497]: time="2024-12-13T13:20:42.022086176Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c9b83eaef8a89a15a1cab135efaa53e3589caa4ea69b73dba41459e7be7db33d\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Dec 13 13:20:42.022180 containerd[1497]: time="2024-12-13T13:20:42.022174372Z" level=info msg="RemovePodSandbox \"c9b83eaef8a89a15a1cab135efaa53e3589caa4ea69b73dba41459e7be7db33d\" returns successfully"
Dec 13 13:20:42.022919 containerd[1497]: time="2024-12-13T13:20:42.022873937Z" level=info msg="StopPodSandbox for \"0685eff49198f41de56e18476f31c876f26b4cf1cf6bc05de716df980b71c88a\""
Dec 13 13:20:42.023059 containerd[1497]: time="2024-12-13T13:20:42.023020342Z" level=info msg="TearDown network for sandbox \"0685eff49198f41de56e18476f31c876f26b4cf1cf6bc05de716df980b71c88a\" successfully"
Dec 13 13:20:42.023059 containerd[1497]: time="2024-12-13T13:20:42.023041903Z" level=info msg="StopPodSandbox for \"0685eff49198f41de56e18476f31c876f26b4cf1cf6bc05de716df980b71c88a\" returns successfully"
Dec 13 13:20:42.023413 containerd[1497]: time="2024-12-13T13:20:42.023387492Z" level=info msg="RemovePodSandbox for \"0685eff49198f41de56e18476f31c876f26b4cf1cf6bc05de716df980b71c88a\""
Dec 13 13:20:42.023468 containerd[1497]: time="2024-12-13T13:20:42.023416978Z" level=info msg="Forcibly stopping sandbox \"0685eff49198f41de56e18476f31c876f26b4cf1cf6bc05de716df980b71c88a\""
Dec 13 13:20:42.023594 containerd[1497]: time="2024-12-13T13:20:42.023525632Z" level=info msg="TearDown network for sandbox \"0685eff49198f41de56e18476f31c876f26b4cf1cf6bc05de716df980b71c88a\" successfully"
Dec 13 13:20:42.027986 containerd[1497]: time="2024-12-13T13:20:42.027911548Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"0685eff49198f41de56e18476f31c876f26b4cf1cf6bc05de716df980b71c88a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Dec 13 13:20:42.027986 containerd[1497]: time="2024-12-13T13:20:42.027991047Z" level=info msg="RemovePodSandbox \"0685eff49198f41de56e18476f31c876f26b4cf1cf6bc05de716df980b71c88a\" returns successfully"
Dec 13 13:20:42.028565 containerd[1497]: time="2024-12-13T13:20:42.028520332Z" level=info msg="StopPodSandbox for \"06786ad6aaa6a841bee57a4af8e6b8dc277124d5f6c65bea1ef57e10828fee05\""
Dec 13 13:20:42.028713 containerd[1497]: time="2024-12-13T13:20:42.028677287Z" level=info msg="TearDown network for sandbox \"06786ad6aaa6a841bee57a4af8e6b8dc277124d5f6c65bea1ef57e10828fee05\" successfully"
Dec 13 13:20:42.028713 containerd[1497]: time="2024-12-13T13:20:42.028705650Z" level=info msg="StopPodSandbox for \"06786ad6aaa6a841bee57a4af8e6b8dc277124d5f6c65bea1ef57e10828fee05\" returns successfully"
Dec 13 13:20:42.029040 containerd[1497]: time="2024-12-13T13:20:42.028998091Z" level=info msg="RemovePodSandbox for \"06786ad6aaa6a841bee57a4af8e6b8dc277124d5f6c65bea1ef57e10828fee05\""
Dec 13 13:20:42.029040 containerd[1497]: time="2024-12-13T13:20:42.029037805Z" level=info msg="Forcibly stopping sandbox \"06786ad6aaa6a841bee57a4af8e6b8dc277124d5f6c65bea1ef57e10828fee05\""
Dec 13 13:20:42.029186 containerd[1497]: time="2024-12-13T13:20:42.029128465Z" level=info msg="TearDown network for sandbox \"06786ad6aaa6a841bee57a4af8e6b8dc277124d5f6c65bea1ef57e10828fee05\" successfully"
Dec 13 13:20:42.034173 containerd[1497]: time="2024-12-13T13:20:42.034027756Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"06786ad6aaa6a841bee57a4af8e6b8dc277124d5f6c65bea1ef57e10828fee05\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Dec 13 13:20:42.034355 containerd[1497]: time="2024-12-13T13:20:42.034197425Z" level=info msg="RemovePodSandbox \"06786ad6aaa6a841bee57a4af8e6b8dc277124d5f6c65bea1ef57e10828fee05\" returns successfully"