Dec 13 13:27:39.045559 kernel: Linux version 6.6.65-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241116 p3) 14.2.1 20241116, GNU ld (Gentoo 2.42 p6) 2.42.0) #1 SMP PREEMPT_DYNAMIC Fri Dec 13 11:52:04 -00 2024
Dec 13 13:27:39.045627 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=7e85177266c631d417c820ba09a3204c451316d6fcf9e4e21017322aee9df3f4
Dec 13 13:27:39.045643 kernel: BIOS-provided physical RAM map:
Dec 13 13:27:39.045661 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Dec 13 13:27:39.045671 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Dec 13 13:27:39.045682 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Dec 13 13:27:39.045694 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdbfff] usable
Dec 13 13:27:39.045705 kernel: BIOS-e820: [mem 0x000000007ffdc000-0x000000007fffffff] reserved
Dec 13 13:27:39.045716 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Dec 13 13:27:39.045727 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Dec 13 13:27:39.045738 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Dec 13 13:27:39.045749 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Dec 13 13:27:39.045764 kernel: NX (Execute Disable) protection: active
Dec 13 13:27:39.045776 kernel: APIC: Static calls initialized
Dec 13 13:27:39.045789 kernel: SMBIOS 2.8 present.
Dec 13 13:27:39.045801 kernel: DMI: Red Hat KVM/RHEL-AV, BIOS 1.13.0-2.module_el8.5.0+2608+72063365 04/01/2014
Dec 13 13:27:39.045813 kernel: Hypervisor detected: KVM
Dec 13 13:27:39.045829 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Dec 13 13:27:39.045841 kernel: kvm-clock: using sched offset of 4510161794 cycles
Dec 13 13:27:39.045857 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Dec 13 13:27:39.045869 kernel: tsc: Detected 2499.998 MHz processor
Dec 13 13:27:39.045881 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Dec 13 13:27:39.045893 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Dec 13 13:27:39.045905 kernel: last_pfn = 0x7ffdc max_arch_pfn = 0x400000000
Dec 13 13:27:39.045919 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Dec 13 13:27:39.045931 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Dec 13 13:27:39.045948 kernel: Using GB pages for direct mapping
Dec 13 13:27:39.045960 kernel: ACPI: Early table checksum verification disabled
Dec 13 13:27:39.045972 kernel: ACPI: RSDP 0x00000000000F5AA0 000014 (v00 BOCHS )
Dec 13 13:27:39.045984 kernel: ACPI: RSDT 0x000000007FFE47A5 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 13:27:39.045996 kernel: ACPI: FACP 0x000000007FFE438D 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 13:27:39.046008 kernel: ACPI: DSDT 0x000000007FFDFD80 00460D (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 13:27:39.046019 kernel: ACPI: FACS 0x000000007FFDFD40 000040
Dec 13 13:27:39.046031 kernel: ACPI: APIC 0x000000007FFE4481 0000F0 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 13:27:39.046043 kernel: ACPI: SRAT 0x000000007FFE4571 0001D0 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 13:27:39.046059 kernel: ACPI: MCFG 0x000000007FFE4741 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 13:27:39.046072 kernel: ACPI: WAET 0x000000007FFE477D 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Dec 13 13:27:39.046084 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe438d-0x7ffe4480]
Dec 13 13:27:39.046096 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffdfd80-0x7ffe438c]
Dec 13 13:27:39.046108 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffdfd40-0x7ffdfd7f]
Dec 13 13:27:39.046126 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe4481-0x7ffe4570]
Dec 13 13:27:39.046138 kernel: ACPI: Reserving SRAT table memory at [mem 0x7ffe4571-0x7ffe4740]
Dec 13 13:27:39.046155 kernel: ACPI: Reserving MCFG table memory at [mem 0x7ffe4741-0x7ffe477c]
Dec 13 13:27:39.046168 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe477d-0x7ffe47a4]
Dec 13 13:27:39.046180 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Dec 13 13:27:39.046193 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Dec 13 13:27:39.046205 kernel: SRAT: PXM 0 -> APIC 0x02 -> Node 0
Dec 13 13:27:39.046217 kernel: SRAT: PXM 0 -> APIC 0x03 -> Node 0
Dec 13 13:27:39.046229 kernel: SRAT: PXM 0 -> APIC 0x04 -> Node 0
Dec 13 13:27:39.046242 kernel: SRAT: PXM 0 -> APIC 0x05 -> Node 0
Dec 13 13:27:39.046259 kernel: SRAT: PXM 0 -> APIC 0x06 -> Node 0
Dec 13 13:27:39.046271 kernel: SRAT: PXM 0 -> APIC 0x07 -> Node 0
Dec 13 13:27:39.046283 kernel: SRAT: PXM 0 -> APIC 0x08 -> Node 0
Dec 13 13:27:39.046295 kernel: SRAT: PXM 0 -> APIC 0x09 -> Node 0
Dec 13 13:27:39.046307 kernel: SRAT: PXM 0 -> APIC 0x0a -> Node 0
Dec 13 13:27:39.046320 kernel: SRAT: PXM 0 -> APIC 0x0b -> Node 0
Dec 13 13:27:39.046332 kernel: SRAT: PXM 0 -> APIC 0x0c -> Node 0
Dec 13 13:27:39.046344 kernel: SRAT: PXM 0 -> APIC 0x0d -> Node 0
Dec 13 13:27:39.046356 kernel: SRAT: PXM 0 -> APIC 0x0e -> Node 0
Dec 13 13:27:39.046373 kernel: SRAT: PXM 0 -> APIC 0x0f -> Node 0
Dec 13 13:27:39.046386 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff]
Dec 13 13:27:39.046398 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff]
Dec 13 13:27:39.046411 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x20800fffff] hotplug
Dec 13 13:27:39.046423 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0x7ffdbfff] -> [mem 0x00000000-0x7ffdbfff]
Dec 13 13:27:39.046448 kernel: NODE_DATA(0) allocated [mem 0x7ffd6000-0x7ffdbfff]
Dec 13 13:27:39.046460 kernel: Zone ranges:
Dec 13 13:27:39.046473 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Dec 13 13:27:39.046485 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffdbfff]
Dec 13 13:27:39.046496 kernel: Normal empty
Dec 13 13:27:39.046516 kernel: Movable zone start for each node
Dec 13 13:27:39.046528 kernel: Early memory node ranges
Dec 13 13:27:39.046540 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Dec 13 13:27:39.046663 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdbfff]
Dec 13 13:27:39.046680 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffdbfff]
Dec 13 13:27:39.046693 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Dec 13 13:27:39.046706 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Dec 13 13:27:39.046718 kernel: On node 0, zone DMA32: 36 pages in unavailable ranges
Dec 13 13:27:39.046730 kernel: ACPI: PM-Timer IO Port: 0x608
Dec 13 13:27:39.046750 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Dec 13 13:27:39.046763 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Dec 13 13:27:39.046776 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Dec 13 13:27:39.046788 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Dec 13 13:27:39.046801 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Dec 13 13:27:39.046813 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Dec 13 13:27:39.046826 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Dec 13 13:27:39.046838 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Dec 13 13:27:39.046851 kernel: TSC deadline timer available
Dec 13 13:27:39.046869 kernel: smpboot: Allowing 16 CPUs, 14 hotplug CPUs
Dec 13 13:27:39.046882 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Dec 13 13:27:39.046894 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Dec 13 13:27:39.046912 kernel: Booting paravirtualized kernel on KVM
Dec 13 13:27:39.046924 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Dec 13 13:27:39.046937 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:16 nr_cpu_ids:16 nr_node_ids:1
Dec 13 13:27:39.046950 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u262144
Dec 13 13:27:39.046962 kernel: pcpu-alloc: s197032 r8192 d32344 u262144 alloc=1*2097152
Dec 13 13:27:39.046975 kernel: pcpu-alloc: [0] 00 01 02 03 04 05 06 07 [0] 08 09 10 11 12 13 14 15
Dec 13 13:27:39.046992 kernel: kvm-guest: PV spinlocks enabled
Dec 13 13:27:39.047005 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Dec 13 13:27:39.047019 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=7e85177266c631d417c820ba09a3204c451316d6fcf9e4e21017322aee9df3f4
Dec 13 13:27:39.047032 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Dec 13 13:27:39.047045 kernel: random: crng init done
Dec 13 13:27:39.047057 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Dec 13 13:27:39.047080 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Dec 13 13:27:39.047093 kernel: Fallback order for Node 0: 0
Dec 13 13:27:39.047110 kernel: Built 1 zonelists, mobility grouping on. Total pages: 515804
Dec 13 13:27:39.047122 kernel: Policy zone: DMA32
Dec 13 13:27:39.047135 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Dec 13 13:27:39.047147 kernel: software IO TLB: area num 16.
Dec 13 13:27:39.047160 kernel: Memory: 1899480K/2096616K available (14336K kernel code, 2299K rwdata, 22800K rodata, 43328K init, 1748K bss, 196876K reserved, 0K cma-reserved)
Dec 13 13:27:39.047173 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=16, Nodes=1
Dec 13 13:27:39.047186 kernel: Kernel/User page tables isolation: enabled
Dec 13 13:27:39.047198 kernel: ftrace: allocating 37874 entries in 148 pages
Dec 13 13:27:39.047223 kernel: ftrace: allocated 148 pages with 3 groups
Dec 13 13:27:39.047240 kernel: Dynamic Preempt: voluntary
Dec 13 13:27:39.047252 kernel: rcu: Preemptible hierarchical RCU implementation.
Dec 13 13:27:39.047265 kernel: rcu: RCU event tracing is enabled.
Dec 13 13:27:39.047290 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=16.
Dec 13 13:27:39.047302 kernel: Trampoline variant of Tasks RCU enabled.
Dec 13 13:27:39.047326 kernel: Rude variant of Tasks RCU enabled.
Dec 13 13:27:39.047342 kernel: Tracing variant of Tasks RCU enabled.
Dec 13 13:27:39.047354 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Dec 13 13:27:39.047367 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=16
Dec 13 13:27:39.047379 kernel: NR_IRQS: 33024, nr_irqs: 552, preallocated irqs: 16
Dec 13 13:27:39.047391 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Dec 13 13:27:39.047403 kernel: Console: colour VGA+ 80x25
Dec 13 13:27:39.047421 kernel: printk: console [tty0] enabled
Dec 13 13:27:39.047433 kernel: printk: console [ttyS0] enabled
Dec 13 13:27:39.047445 kernel: ACPI: Core revision 20230628
Dec 13 13:27:39.047458 kernel: APIC: Switch to symmetric I/O mode setup
Dec 13 13:27:39.047470 kernel: x2apic enabled
Dec 13 13:27:39.047486 kernel: APIC: Switched APIC routing to: physical x2apic
Dec 13 13:27:39.047499 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x240937b9988, max_idle_ns: 440795218083 ns
Dec 13 13:27:39.047515 kernel: Calibrating delay loop (skipped) preset value.. 4999.99 BogoMIPS (lpj=2499998)
Dec 13 13:27:39.047528 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Dec 13 13:27:39.047540 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Dec 13 13:27:39.047564 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Dec 13 13:27:39.047636 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Dec 13 13:27:39.047654 kernel: Spectre V2 : Mitigation: Retpolines
Dec 13 13:27:39.047667 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Dec 13 13:27:39.047680 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Dec 13 13:27:39.047700 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls
Dec 13 13:27:39.047713 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Dec 13 13:27:39.047726 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Dec 13 13:27:39.047739 kernel: MDS: Mitigation: Clear CPU buffers
Dec 13 13:27:39.047752 kernel: MMIO Stale Data: Unknown: No mitigations
Dec 13 13:27:39.047765 kernel: SRBDS: Unknown: Dependent on hypervisor status
Dec 13 13:27:39.047778 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Dec 13 13:27:39.047791 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Dec 13 13:27:39.047804 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Dec 13 13:27:39.047817 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Dec 13 13:27:39.047830 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
Dec 13 13:27:39.047849 kernel: Freeing SMP alternatives memory: 32K
Dec 13 13:27:39.047862 kernel: pid_max: default: 32768 minimum: 301
Dec 13 13:27:39.047875 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Dec 13 13:27:39.047888 kernel: landlock: Up and running.
Dec 13 13:27:39.047901 kernel: SELinux: Initializing.
Dec 13 13:27:39.047914 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Dec 13 13:27:39.047927 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Dec 13 13:27:39.047940 kernel: smpboot: CPU0: Intel Xeon E3-12xx v2 (Ivy Bridge, IBRS) (family: 0x6, model: 0x3a, stepping: 0x9)
Dec 13 13:27:39.047953 kernel: RCU Tasks: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16.
Dec 13 13:27:39.047966 kernel: RCU Tasks Rude: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16.
Dec 13 13:27:39.047984 kernel: RCU Tasks Trace: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16.
Dec 13 13:27:39.047998 kernel: Performance Events: unsupported p6 CPU model 58 no PMU driver, software events only.
Dec 13 13:27:39.048011 kernel: signal: max sigframe size: 1776
Dec 13 13:27:39.048024 kernel: rcu: Hierarchical SRCU implementation.
Dec 13 13:27:39.048038 kernel: rcu: Max phase no-delay instances is 400.
Dec 13 13:27:39.048051 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Dec 13 13:27:39.048064 kernel: smp: Bringing up secondary CPUs ...
Dec 13 13:27:39.048077 kernel: smpboot: x86: Booting SMP configuration:
Dec 13 13:27:39.048090 kernel: .... node #0, CPUs: #1
Dec 13 13:27:39.048108 kernel: smpboot: CPU 1 Converting physical 0 to logical die 1
Dec 13 13:27:39.048121 kernel: smp: Brought up 1 node, 2 CPUs
Dec 13 13:27:39.048134 kernel: smpboot: Max logical packages: 16
Dec 13 13:27:39.048147 kernel: smpboot: Total of 2 processors activated (9999.99 BogoMIPS)
Dec 13 13:27:39.048160 kernel: devtmpfs: initialized
Dec 13 13:27:39.048173 kernel: x86/mm: Memory block size: 128MB
Dec 13 13:27:39.048186 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Dec 13 13:27:39.048199 kernel: futex hash table entries: 4096 (order: 6, 262144 bytes, linear)
Dec 13 13:27:39.048212 kernel: pinctrl core: initialized pinctrl subsystem
Dec 13 13:27:39.048230 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Dec 13 13:27:39.048243 kernel: audit: initializing netlink subsys (disabled)
Dec 13 13:27:39.048269 kernel: audit: type=2000 audit(1734096457.829:1): state=initialized audit_enabled=0 res=1
Dec 13 13:27:39.048281 kernel: thermal_sys: Registered thermal governor 'step_wise'
Dec 13 13:27:39.048293 kernel: thermal_sys: Registered thermal governor 'user_space'
Dec 13 13:27:39.048306 kernel: cpuidle: using governor menu
Dec 13 13:27:39.048319 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Dec 13 13:27:39.048331 kernel: dca service started, version 1.12.1
Dec 13 13:27:39.048344 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Dec 13 13:27:39.048361 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Dec 13 13:27:39.048374 kernel: PCI: Using configuration type 1 for base access
Dec 13 13:27:39.048387 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Dec 13 13:27:39.048399 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Dec 13 13:27:39.048424 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Dec 13 13:27:39.048436 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Dec 13 13:27:39.048448 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Dec 13 13:27:39.048460 kernel: ACPI: Added _OSI(Module Device)
Dec 13 13:27:39.048472 kernel: ACPI: Added _OSI(Processor Device)
Dec 13 13:27:39.048489 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Dec 13 13:27:39.048501 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Dec 13 13:27:39.048513 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Dec 13 13:27:39.048525 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Dec 13 13:27:39.048545 kernel: ACPI: Interpreter enabled
Dec 13 13:27:39.048568 kernel: ACPI: PM: (supports S0 S5)
Dec 13 13:27:39.052485 kernel: ACPI: Using IOAPIC for interrupt routing
Dec 13 13:27:39.052504 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Dec 13 13:27:39.052530 kernel: PCI: Using E820 reservations for host bridge windows
Dec 13 13:27:39.052564 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Dec 13 13:27:39.052603 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Dec 13 13:27:39.052901 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Dec 13 13:27:39.053109 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Dec 13 13:27:39.053291 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Dec 13 13:27:39.053318 kernel: PCI host bridge to bus 0000:00
Dec 13 13:27:39.053519 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Dec 13 13:27:39.053732 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Dec 13 13:27:39.053911 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Dec 13 13:27:39.054087 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xafffffff window]
Dec 13 13:27:39.054259 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Dec 13 13:27:39.054420 kernel: pci_bus 0000:00: root bus resource [mem 0x20c0000000-0x28bfffffff window]
Dec 13 13:27:39.055688 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Dec 13 13:27:39.055896 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Dec 13 13:27:39.056129 kernel: pci 0000:00:01.0: [1013:00b8] type 00 class 0x030000
Dec 13 13:27:39.056316 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfa000000-0xfbffffff pref]
Dec 13 13:27:39.056490 kernel: pci 0000:00:01.0: reg 0x14: [mem 0xfea50000-0xfea50fff]
Dec 13 13:27:39.058496 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfea40000-0xfea4ffff pref]
Dec 13 13:27:39.058749 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Dec 13 13:27:39.058949 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400
Dec 13 13:27:39.059153 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfea51000-0xfea51fff]
Dec 13 13:27:39.059359 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400
Dec 13 13:27:39.059530 kernel: pci 0000:00:02.1: reg 0x10: [mem 0xfea52000-0xfea52fff]
Dec 13 13:27:39.061140 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400
Dec 13 13:27:39.061356 kernel: pci 0000:00:02.2: reg 0x10: [mem 0xfea53000-0xfea53fff]
Dec 13 13:27:39.061556 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400
Dec 13 13:27:39.061767 kernel: pci 0000:00:02.3: reg 0x10: [mem 0xfea54000-0xfea54fff]
Dec 13 13:27:39.061977 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400
Dec 13 13:27:39.062159 kernel: pci 0000:00:02.4: reg 0x10: [mem 0xfea55000-0xfea55fff]
Dec 13 13:27:39.062384 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400
Dec 13 13:27:39.063694 kernel: pci 0000:00:02.5: reg 0x10: [mem 0xfea56000-0xfea56fff]
Dec 13 13:27:39.063908 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400
Dec 13 13:27:39.064097 kernel: pci 0000:00:02.6: reg 0x10: [mem 0xfea57000-0xfea57fff]
Dec 13 13:27:39.064328 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400
Dec 13 13:27:39.064508 kernel: pci 0000:00:02.7: reg 0x10: [mem 0xfea58000-0xfea58fff]
Dec 13 13:27:39.065772 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000
Dec 13 13:27:39.065965 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc0c0-0xc0df]
Dec 13 13:27:39.066133 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfea59000-0xfea59fff]
Dec 13 13:27:39.066299 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfd000000-0xfd003fff 64bit pref]
Dec 13 13:27:39.066494 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfea00000-0xfea3ffff pref]
Dec 13 13:27:39.068117 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000
Dec 13 13:27:39.068305 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc07f]
Dec 13 13:27:39.068487 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfea5a000-0xfea5afff]
Dec 13 13:27:39.069737 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfd004000-0xfd007fff 64bit pref]
Dec 13 13:27:39.069940 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Dec 13 13:27:39.070104 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Dec 13 13:27:39.070286 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Dec 13 13:27:39.070466 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc0e0-0xc0ff]
Dec 13 13:27:39.071721 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfea5b000-0xfea5bfff]
Dec 13 13:27:39.071915 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Dec 13 13:27:39.072089 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f]
Dec 13 13:27:39.072304 kernel: pci 0000:01:00.0: [1b36:000e] type 01 class 0x060400
Dec 13 13:27:39.072507 kernel: pci 0000:01:00.0: reg 0x10: [mem 0xfda00000-0xfda000ff 64bit]
Dec 13 13:27:39.072738 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02]
Dec 13 13:27:39.072913 kernel: pci 0000:00:02.0: bridge window [mem 0xfd800000-0xfdbfffff]
Dec 13 13:27:39.073093 kernel: pci 0000:00:02.0: bridge window [mem 0xfce00000-0xfcffffff 64bit pref]
Dec 13 13:27:39.073277 kernel: pci_bus 0000:02: extended config space not accessible
Dec 13 13:27:39.073471 kernel: pci 0000:02:01.0: [8086:25ab] type 00 class 0x088000
Dec 13 13:27:39.075732 kernel: pci 0000:02:01.0: reg 0x10: [mem 0xfd800000-0xfd80000f]
Dec 13 13:27:39.075931 kernel: pci 0000:01:00.0: PCI bridge to [bus 02]
Dec 13 13:27:39.076126 kernel: pci 0000:01:00.0: bridge window [mem 0xfd800000-0xfd9fffff]
Dec 13 13:27:39.076323 kernel: pci 0000:03:00.0: [1b36:000d] type 00 class 0x0c0330
Dec 13 13:27:39.076501 kernel: pci 0000:03:00.0: reg 0x10: [mem 0xfe800000-0xfe803fff 64bit]
Dec 13 13:27:39.077735 kernel: pci 0000:00:02.1: PCI bridge to [bus 03]
Dec 13 13:27:39.077929 kernel: pci 0000:00:02.1: bridge window [mem 0xfe800000-0xfe9fffff]
Dec 13 13:27:39.078144 kernel: pci 0000:00:02.1: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref]
Dec 13 13:27:39.078394 kernel: pci 0000:04:00.0: [1af4:1044] type 00 class 0x00ff00
Dec 13 13:27:39.080623 kernel: pci 0000:04:00.0: reg 0x20: [mem 0xfca00000-0xfca03fff 64bit pref]
Dec 13 13:27:39.080809 kernel: pci 0000:00:02.2: PCI bridge to [bus 04]
Dec 13 13:27:39.080982 kernel: pci 0000:00:02.2: bridge window [mem 0xfe600000-0xfe7fffff]
Dec 13 13:27:39.081171 kernel: pci 0000:00:02.2: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref]
Dec 13 13:27:39.081366 kernel: pci 0000:00:02.3: PCI bridge to [bus 05]
Dec 13 13:27:39.081535 kernel: pci 0000:00:02.3: bridge window [mem 0xfe400000-0xfe5fffff]
Dec 13 13:27:39.081741 kernel: pci 0000:00:02.3: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref]
Dec 13 13:27:39.081926 kernel: pci 0000:00:02.4: PCI bridge to [bus 06]
Dec 13 13:27:39.082098 kernel: pci 0000:00:02.4: bridge window [mem 0xfe200000-0xfe3fffff]
Dec 13 13:27:39.082293 kernel: pci 0000:00:02.4: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref]
Dec 13 13:27:39.082503 kernel: pci 0000:00:02.5: PCI bridge to [bus 07]
Dec 13 13:27:39.084715 kernel: pci 0000:00:02.5: bridge window [mem 0xfe000000-0xfe1fffff]
Dec 13 13:27:39.084889 kernel: pci 0000:00:02.5: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref]
Dec 13 13:27:39.085073 kernel: pci 0000:00:02.6: PCI bridge to [bus 08]
Dec 13 13:27:39.085260 kernel: pci 0000:00:02.6: bridge window [mem 0xfde00000-0xfdffffff]
Dec 13 13:27:39.085449 kernel: pci 0000:00:02.6: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref]
Dec 13 13:27:39.085700 kernel: pci 0000:00:02.7: PCI bridge to [bus 09]
Dec 13 13:27:39.085870 kernel: pci 0000:00:02.7: bridge window [mem 0xfdc00000-0xfddfffff]
Dec 13 13:27:39.086047 kernel: pci 0000:00:02.7: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref]
Dec 13 13:27:39.086067 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Dec 13 13:27:39.086082 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Dec 13 13:27:39.086095 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Dec 13 13:27:39.086109 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Dec 13 13:27:39.086130 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Dec 13 13:27:39.086149 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Dec 13 13:27:39.086163 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Dec 13 13:27:39.086176 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Dec 13 13:27:39.086189 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Dec 13 13:27:39.086202 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Dec 13 13:27:39.086215 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Dec 13 13:27:39.086229 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Dec 13 13:27:39.086254 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Dec 13 13:27:39.086271 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Dec 13 13:27:39.086283 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Dec 13 13:27:39.086295 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Dec 13 13:27:39.086307 kernel: iommu: Default domain type: Translated
Dec 13 13:27:39.086319 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Dec 13 13:27:39.086331 kernel: PCI: Using ACPI for IRQ routing
Dec 13 13:27:39.086343 kernel: PCI: pci_cache_line_size set to 64 bytes
Dec 13 13:27:39.086355 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Dec 13 13:27:39.086367 kernel: e820: reserve RAM buffer [mem 0x7ffdc000-0x7fffffff]
Dec 13 13:27:39.086523 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Dec 13 13:27:39.088743 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Dec 13 13:27:39.088937 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Dec 13 13:27:39.088957 kernel: vgaarb: loaded
Dec 13 13:27:39.088970 kernel: clocksource: Switched to clocksource kvm-clock
Dec 13 13:27:39.089001 kernel: VFS: Disk quotas dquot_6.6.0
Dec 13 13:27:39.089015 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Dec 13 13:27:39.089027 kernel: pnp: PnP ACPI init
Dec 13 13:27:39.089234 kernel: system 00:04: [mem 0xb0000000-0xbfffffff window] has been reserved
Dec 13 13:27:39.089256 kernel: pnp: PnP ACPI: found 5 devices
Dec 13 13:27:39.089270 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Dec 13 13:27:39.089292 kernel: NET: Registered PF_INET protocol family
Dec 13 13:27:39.089305 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Dec 13 13:27:39.089319 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Dec 13 13:27:39.089332 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Dec 13 13:27:39.089345 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Dec 13 13:27:39.089378 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
Dec 13 13:27:39.089391 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Dec 13 13:27:39.089403 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Dec 13 13:27:39.089416 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Dec 13 13:27:39.089428 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Dec 13 13:27:39.089453 kernel: NET: Registered PF_XDP protocol family
Dec 13 13:27:39.089690 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 01-02] add_size 1000
Dec 13 13:27:39.089875 kernel: pci 0000:00:02.1: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000
Dec 13 13:27:39.090057 kernel: pci 0000:00:02.2: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000
Dec 13 13:27:39.090247 kernel: pci 0000:00:02.3: bridge window [io 0x1000-0x0fff] to [bus 05] add_size 1000
Dec 13 13:27:39.090418 kernel: pci 0000:00:02.4: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000
Dec 13 13:27:39.091660 kernel: pci 0000:00:02.5: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000
Dec 13 13:27:39.091837 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000
Dec 13 13:27:39.092022 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000
Dec 13 13:27:39.092233 kernel: pci 0000:00:02.0: BAR 13: assigned [io 0x1000-0x1fff]
Dec 13 13:27:39.092409 kernel: pci 0000:00:02.1: BAR 13: assigned [io 0x2000-0x2fff]
Dec 13 13:27:39.092609 kernel: pci 0000:00:02.2: BAR 13: assigned [io 0x3000-0x3fff]
Dec 13 13:27:39.092786 kernel: pci 0000:00:02.3: BAR 13: assigned [io 0x4000-0x4fff]
Dec 13 13:27:39.092956 kernel: pci 0000:00:02.4: BAR 13: assigned [io 0x5000-0x5fff]
Dec 13 13:27:39.093145 kernel: pci 0000:00:02.5: BAR 13: assigned [io 0x6000-0x6fff]
Dec 13 13:27:39.093330 kernel: pci 0000:00:02.6: BAR 13: assigned [io 0x7000-0x7fff]
Dec 13 13:27:39.093513 kernel: pci 0000:00:02.7: BAR 13: assigned [io 0x8000-0x8fff]
Dec 13 13:27:39.096732 kernel: pci 0000:01:00.0: PCI bridge to [bus 02]
Dec 13 13:27:39.096923 kernel: pci 0000:01:00.0: bridge window [mem 0xfd800000-0xfd9fffff]
Dec 13 13:27:39.097107 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02]
Dec 13 13:27:39.097309 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x1fff]
Dec 13 13:27:39.097479 kernel: pci 0000:00:02.0: bridge window [mem 0xfd800000-0xfdbfffff]
Dec 13 13:27:39.097692 kernel: pci 0000:00:02.0: bridge window [mem 0xfce00000-0xfcffffff 64bit pref]
Dec 13 13:27:39.097887 kernel: pci 0000:00:02.1: PCI bridge to [bus 03]
Dec 13 13:27:39.098059 kernel: pci 0000:00:02.1: bridge window [io 0x2000-0x2fff]
Dec 13 13:27:39.098267 kernel: pci 0000:00:02.1: bridge window [mem 0xfe800000-0xfe9fffff]
Dec 13 13:27:39.098453 kernel: pci 0000:00:02.1: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref]
Dec 13 13:27:39.100756 kernel: pci 0000:00:02.2: PCI bridge to [bus 04]
Dec 13 13:27:39.100968 kernel: pci 0000:00:02.2: bridge window [io 0x3000-0x3fff]
Dec 13 13:27:39.101167 kernel: pci 0000:00:02.2: bridge window [mem 0xfe600000-0xfe7fffff]
Dec 13 13:27:39.101368 kernel: pci 0000:00:02.2: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref]
Dec 13 13:27:39.101639 kernel: pci 0000:00:02.3: PCI bridge to [bus 05]
Dec 13 13:27:39.101815 kernel: pci 0000:00:02.3: bridge window [io 0x4000-0x4fff]
Dec 13 13:27:39.101992 kernel: pci 0000:00:02.3: bridge window [mem 0xfe400000-0xfe5fffff]
Dec 13 13:27:39.102173 kernel: pci 0000:00:02.3: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref]
Dec 13 13:27:39.102355 kernel: pci 0000:00:02.4: PCI bridge to [bus 06]
Dec 13 13:27:39.102535 kernel: pci 0000:00:02.4: bridge window [io 0x5000-0x5fff]
Dec 13 13:27:39.104156 kernel: pci 0000:00:02.4: bridge window [mem 0xfe200000-0xfe3fffff]
Dec 13 13:27:39.104361 kernel: pci 0000:00:02.4: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref]
Dec 13 13:27:39.104548 kernel: pci 0000:00:02.5: PCI bridge to [bus 07]
Dec 13 13:27:39.104771 kernel: pci 0000:00:02.5: bridge window [io 0x6000-0x6fff]
Dec 13 13:27:39.104973 kernel: pci 0000:00:02.5: bridge window [mem 0xfe000000-0xfe1fffff]
Dec 13 13:27:39.105175 kernel: pci 0000:00:02.5: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref]
Dec 13 13:27:39.105368 kernel: pci 0000:00:02.6: PCI bridge to [bus 08]
Dec 13 13:27:39.105547 kernel: pci 0000:00:02.6: bridge window [io 0x7000-0x7fff]
Dec 13 13:27:39.105796 kernel: pci 0000:00:02.6: bridge window [mem 0xfde00000-0xfdffffff]
Dec 13 13:27:39.105971 kernel: pci 0000:00:02.6: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref]
Dec 13 13:27:39.106163 kernel: pci 0000:00:02.7: PCI bridge to [bus 09]
Dec 13 13:27:39.106333 kernel: pci 0000:00:02.7: bridge window [io 0x8000-0x8fff]
Dec 13 13:27:39.106504 kernel: pci 0000:00:02.7: bridge window [mem 0xfdc00000-0xfddfffff]
Dec 13 13:27:39.106709 kernel: pci 0000:00:02.7: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref]
Dec 13 13:27:39.106889 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Dec 13 13:27:39.107059 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Dec 13 13:27:39.107228 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Dec 13 13:27:39.107413 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xafffffff window]
Dec 13 13:27:39.107659 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Dec 13 13:27:39.107818 kernel: pci_bus 0000:00: resource 9 [mem 0x20c0000000-0x28bfffffff window]
Dec 13 13:27:39.107996 kernel: pci_bus 0000:01: resource 0 [io 0x1000-0x1fff]
Dec 13 13:27:39.108160 kernel: pci_bus 0000:01: resource 1 [mem 0xfd800000-0xfdbfffff]
Dec 13 13:27:39.108331 kernel: pci_bus 0000:01: resource 2 [mem 0xfce00000-0xfcffffff 64bit pref]
Dec 13 13:27:39.108509 kernel: pci_bus 0000:02: resource 1 [mem 0xfd800000-0xfd9fffff]
Dec 13 13:27:39.108819 kernel: pci_bus 0000:03: resource 0 [io 0x2000-0x2fff]
Dec 13 13:27:39.108984 kernel: pci_bus 0000:03: resource 1 [mem 0xfe800000-0xfe9fffff]
Dec 13 13:27:39.109169 kernel: pci_bus 0000:03: resource 2 [mem 0xfcc00000-0xfcdfffff 64bit pref]
Dec 13 13:27:39.109344 kernel: pci_bus 0000:04: resource 0 [io 0x3000-0x3fff]
Dec 13 13:27:39.109506 kernel: pci_bus 0000:04: resource 1 [mem 0xfe600000-0xfe7fffff]
Dec 13 13:27:39.109714 kernel: pci_bus 0000:04: resource 2 [mem 0xfca00000-0xfcbfffff 64bit pref]
Dec 13 13:27:39.109901 kernel: pci_bus 0000:05: resource 0 [io 0x4000-0x4fff]
Dec 13 13:27:39.110076 kernel: pci_bus 0000:05: resource 1 [mem 0xfe400000-0xfe5fffff]
Dec 13 13:27:39.110246 kernel: pci_bus 0000:05: resource 2 [mem 0xfc800000-0xfc9fffff 64bit pref]
Dec 13 13:27:39.110431 kernel: pci_bus 0000:06: resource 0 [io 0x5000-0x5fff]
Dec 13 13:27:39.110621 kernel: pci_bus 0000:06: resource 1 [mem 0xfe200000-0xfe3fffff]
Dec 13 13:27:39.110789 kernel: pci_bus 0000:06: resource 2 [mem 0xfc600000-0xfc7fffff 64bit pref]
Dec 13 13:27:39.110979 kernel: pci_bus 0000:07: resource 0 [io 0x6000-0x6fff]
Dec 13 13:27:39.111177 kernel: pci_bus 0000:07: resource 1 [mem 0xfe000000-0xfe1fffff]
Dec 13 13:27:39.111341 kernel: pci_bus 0000:07: resource 2 [mem 0xfc400000-0xfc5fffff 64bit pref]
Dec 13 13:27:39.111550 kernel: pci_bus 0000:08: resource 0 [io 0x7000-0x7fff]
Dec 13 13:27:39.111766 kernel: pci_bus 0000:08: resource 1 [mem 0xfde00000-0xfdffffff]
Dec 13 13:27:39.111932 kernel: pci_bus 0000:08: resource 2 [mem 0xfc200000-0xfc3fffff 64bit pref]
Dec 13 13:27:39.112108 kernel: pci_bus 0000:09: resource 0 [io 0x8000-0x8fff]
Dec 13 13:27:39.112273 kernel: pci_bus 0000:09: resource 1 [mem 0xfdc00000-0xfddfffff]
Dec 13 13:27:39.112445 kernel: pci_bus 0000:09: resource 2 [mem 0xfc000000-0xfc1fffff 64bit pref]
Dec 13 13:27:39.112468 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Dec 13 13:27:39.112483 kernel: PCI: CLS 0 bytes, default 64
Dec 13 13:27:39.112506 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Dec
13 13:27:39.112520 kernel: software IO TLB: mapped [mem 0x0000000073e00000-0x0000000077e00000] (64MB) Dec 13 13:27:39.112534 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Dec 13 13:27:39.112583 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x240937b9988, max_idle_ns: 440795218083 ns Dec 13 13:27:39.112599 kernel: Initialise system trusted keyrings Dec 13 13:27:39.112620 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Dec 13 13:27:39.112640 kernel: Key type asymmetric registered Dec 13 13:27:39.112653 kernel: Asymmetric key parser 'x509' registered Dec 13 13:27:39.112667 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Dec 13 13:27:39.112681 kernel: io scheduler mq-deadline registered Dec 13 13:27:39.112695 kernel: io scheduler kyber registered Dec 13 13:27:39.112709 kernel: io scheduler bfq registered Dec 13 13:27:39.112882 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 24 Dec 13 13:27:39.113068 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 24 Dec 13 13:27:39.113254 kernel: pcieport 0000:00:02.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Dec 13 13:27:39.113438 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 25 Dec 13 13:27:39.113662 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 25 Dec 13 13:27:39.113837 kernel: pcieport 0000:00:02.1: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Dec 13 13:27:39.114012 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 26 Dec 13 13:27:39.114195 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 26 Dec 13 13:27:39.114409 kernel: pcieport 0000:00:02.2: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Dec 13 13:27:39.114663 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 27 Dec 13 
13:27:39.114838 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 27 Dec 13 13:27:39.115010 kernel: pcieport 0000:00:02.3: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Dec 13 13:27:39.115191 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 28 Dec 13 13:27:39.115367 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 28 Dec 13 13:27:39.115566 kernel: pcieport 0000:00:02.4: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Dec 13 13:27:39.115797 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 29 Dec 13 13:27:39.115968 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 29 Dec 13 13:27:39.116150 kernel: pcieport 0000:00:02.5: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Dec 13 13:27:39.116334 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 30 Dec 13 13:27:39.116503 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 30 Dec 13 13:27:39.116719 kernel: pcieport 0000:00:02.6: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Dec 13 13:27:39.116896 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 31 Dec 13 13:27:39.117067 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 31 Dec 13 13:27:39.117238 kernel: pcieport 0000:00:02.7: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Dec 13 13:27:39.117260 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Dec 13 13:27:39.117275 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Dec 13 13:27:39.117297 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 Dec 13 13:27:39.117312 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Dec 13 13:27:39.117326 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Dec 13 13:27:39.117340 kernel: 
i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Dec 13 13:27:39.117353 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Dec 13 13:27:39.117368 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Dec 13 13:27:39.117382 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Dec 13 13:27:39.117598 kernel: rtc_cmos 00:03: RTC can wake from S4 Dec 13 13:27:39.117766 kernel: rtc_cmos 00:03: registered as rtc0 Dec 13 13:27:39.117936 kernel: rtc_cmos 00:03: setting system clock to 2024-12-13T13:27:38 UTC (1734096458) Dec 13 13:27:39.118106 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram Dec 13 13:27:39.118127 kernel: intel_pstate: CPU model not supported Dec 13 13:27:39.118141 kernel: NET: Registered PF_INET6 protocol family Dec 13 13:27:39.118155 kernel: Segment Routing with IPv6 Dec 13 13:27:39.118169 kernel: In-situ OAM (IOAM) with IPv6 Dec 13 13:27:39.118183 kernel: NET: Registered PF_PACKET protocol family Dec 13 13:27:39.118197 kernel: Key type dns_resolver registered Dec 13 13:27:39.118217 kernel: IPI shorthand broadcast: enabled Dec 13 13:27:39.118232 kernel: sched_clock: Marking stable (1303037704, 235825038)->(1667780967, -128918225) Dec 13 13:27:39.118246 kernel: registered taskstats version 1 Dec 13 13:27:39.118259 kernel: Loading compiled-in X.509 certificates Dec 13 13:27:39.118273 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.65-flatcar: 87a680e70013684f1bdd04e047addefc714bd162' Dec 13 13:27:39.118287 kernel: Key type .fscrypt registered Dec 13 13:27:39.118300 kernel: Key type fscrypt-provisioning registered Dec 13 13:27:39.118314 kernel: ima: No TPM chip found, activating TPM-bypass! 
Dec 13 13:27:39.118328 kernel: ima: Allocated hash algorithm: sha1
Dec 13 13:27:39.118347 kernel: ima: No architecture policies found
Dec 13 13:27:39.118365 kernel: clk: Disabling unused clocks
Dec 13 13:27:39.118379 kernel: Freeing unused kernel image (initmem) memory: 43328K
Dec 13 13:27:39.118393 kernel: Write protecting the kernel read-only data: 38912k
Dec 13 13:27:39.118407 kernel: Freeing unused kernel image (rodata/data gap) memory: 1776K
Dec 13 13:27:39.118421 kernel: Run /init as init process
Dec 13 13:27:39.118446 kernel: with arguments:
Dec 13 13:27:39.118466 kernel: /init
Dec 13 13:27:39.118479 kernel: with environment:
Dec 13 13:27:39.118509 kernel: HOME=/
Dec 13 13:27:39.118530 kernel: TERM=linux
Dec 13 13:27:39.118543 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Dec 13 13:27:39.118593 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Dec 13 13:27:39.118615 systemd[1]: Detected virtualization kvm.
Dec 13 13:27:39.118631 systemd[1]: Detected architecture x86-64.
Dec 13 13:27:39.118645 systemd[1]: Running in initrd.
Dec 13 13:27:39.118660 systemd[1]: No hostname configured, using default hostname.
Dec 13 13:27:39.118682 systemd[1]: Hostname set to .
Dec 13 13:27:39.118697 systemd[1]: Initializing machine ID from VM UUID.
Dec 13 13:27:39.118712 systemd[1]: Queued start job for default target initrd.target.
Dec 13 13:27:39.118727 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Dec 13 13:27:39.118741 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Dec 13 13:27:39.118757 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Dec 13 13:27:39.118773 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Dec 13 13:27:39.118788 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Dec 13 13:27:39.118808 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Dec 13 13:27:39.118825 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Dec 13 13:27:39.118840 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Dec 13 13:27:39.118862 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Dec 13 13:27:39.118877 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Dec 13 13:27:39.118892 systemd[1]: Reached target paths.target - Path Units.
Dec 13 13:27:39.118907 systemd[1]: Reached target slices.target - Slice Units.
Dec 13 13:27:39.118935 systemd[1]: Reached target swap.target - Swaps.
Dec 13 13:27:39.118963 systemd[1]: Reached target timers.target - Timer Units.
Dec 13 13:27:39.118979 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Dec 13 13:27:39.118994 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Dec 13 13:27:39.119009 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Dec 13 13:27:39.119024 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Dec 13 13:27:39.119039 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Dec 13 13:27:39.119054 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Dec 13 13:27:39.119075 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Dec 13 13:27:39.119090 systemd[1]: Reached target sockets.target - Socket Units.
Dec 13 13:27:39.119105 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Dec 13 13:27:39.119120 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Dec 13 13:27:39.119135 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Dec 13 13:27:39.119150 systemd[1]: Starting systemd-fsck-usr.service...
Dec 13 13:27:39.119165 systemd[1]: Starting systemd-journald.service - Journal Service...
Dec 13 13:27:39.119180 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Dec 13 13:27:39.119195 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 13 13:27:39.119216 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Dec 13 13:27:39.119275 systemd-journald[202]: Collecting audit messages is disabled.
Dec 13 13:27:39.119310 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Dec 13 13:27:39.119325 systemd[1]: Finished systemd-fsck-usr.service.
Dec 13 13:27:39.119347 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Dec 13 13:27:39.119362 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Dec 13 13:27:39.119377 kernel: Bridge firewalling registered
Dec 13 13:27:39.119399 systemd-journald[202]: Journal started
Dec 13 13:27:39.119434 systemd-journald[202]: Runtime Journal (/run/log/journal/6fce2a4ef47e4048b364dcdb48b7ef9a) is 4.7M, max 37.9M, 33.2M free.
Dec 13 13:27:39.063196 systemd-modules-load[203]: Inserted module 'overlay'
Dec 13 13:27:39.161342 systemd[1]: Started systemd-journald.service - Journal Service.
Dec 13 13:27:39.103298 systemd-modules-load[203]: Inserted module 'br_netfilter'
Dec 13 13:27:39.165266 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Dec 13 13:27:39.166236 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Dec 13 13:27:39.174777 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Dec 13 13:27:39.189845 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Dec 13 13:27:39.194751 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Dec 13 13:27:39.195832 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Dec 13 13:27:39.208831 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Dec 13 13:27:39.211230 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Dec 13 13:27:39.219909 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Dec 13 13:27:39.225776 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Dec 13 13:27:39.226815 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Dec 13 13:27:39.242759 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Dec 13 13:27:39.244887 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Dec 13 13:27:39.254018 dracut-cmdline[234]: dracut-dracut-053
Dec 13 13:27:39.261276 dracut-cmdline[234]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=7e85177266c631d417c820ba09a3204c451316d6fcf9e4e21017322aee9df3f4
Dec 13 13:27:39.296488 systemd-resolved[237]: Positive Trust Anchors:
Dec 13 13:27:39.296519 systemd-resolved[237]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Dec 13 13:27:39.296602 systemd-resolved[237]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Dec 13 13:27:39.300900 systemd-resolved[237]: Defaulting to hostname 'linux'.
Dec 13 13:27:39.303101 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Dec 13 13:27:39.305328 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Dec 13 13:27:39.374662 kernel: SCSI subsystem initialized
Dec 13 13:27:39.386583 kernel: Loading iSCSI transport class v2.0-870.
Dec 13 13:27:39.399618 kernel: iscsi: registered transport (tcp)
Dec 13 13:27:39.425801 kernel: iscsi: registered transport (qla4xxx)
Dec 13 13:27:39.425841 kernel: QLogic iSCSI HBA Driver
Dec 13 13:27:39.483748 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Dec 13 13:27:39.492762 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Dec 13 13:27:39.526393 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Dec 13 13:27:39.526434 kernel: device-mapper: uevent: version 1.0.3
Dec 13 13:27:39.527246 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Dec 13 13:27:39.580662 kernel: raid6: sse2x4 gen() 12793 MB/s
Dec 13 13:27:39.598612 kernel: raid6: sse2x2 gen() 8853 MB/s
Dec 13 13:27:39.617300 kernel: raid6: sse2x1 gen() 8992 MB/s
Dec 13 13:27:39.617349 kernel: raid6: using algorithm sse2x4 gen() 12793 MB/s
Dec 13 13:27:39.636298 kernel: raid6: .... xor() 7589 MB/s, rmw enabled
Dec 13 13:27:39.636339 kernel: raid6: using ssse3x2 recovery algorithm
Dec 13 13:27:39.664616 kernel: xor: automatically using best checksumming function avx
Dec 13 13:27:39.837603 kernel: Btrfs loaded, zoned=no, fsverity=no
Dec 13 13:27:39.853377 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Dec 13 13:27:39.860794 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Dec 13 13:27:39.895944 systemd-udevd[421]: Using default interface naming scheme 'v255'.
Dec 13 13:27:39.903717 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Dec 13 13:27:39.912729 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Dec 13 13:27:39.936424 dracut-pre-trigger[427]: rd.md=0: removing MD RAID activation
Dec 13 13:27:39.979727 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Dec 13 13:27:39.986744 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Dec 13 13:27:40.105819 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Dec 13 13:27:40.114942 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Dec 13 13:27:40.149216 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Dec 13 13:27:40.152207 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Dec 13 13:27:40.153064 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Dec 13 13:27:40.155488 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Dec 13 13:27:40.166759 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Dec 13 13:27:40.194787 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Dec 13 13:27:40.234234 kernel: virtio_blk virtio1: 2/0/0 default/read/poll queues
Dec 13 13:27:40.309425 kernel: cryptd: max_cpu_qlen set to 1000
Dec 13 13:27:40.309455 kernel: virtio_blk virtio1: [vda] 125829120 512-byte logical blocks (64.4 GB/60.0 GiB)
Dec 13 13:27:40.309705 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Dec 13 13:27:40.309729 kernel: GPT:17805311 != 125829119
Dec 13 13:27:40.309757 kernel: GPT:Alternate GPT header not at the end of the disk.
Dec 13 13:27:40.309777 kernel: GPT:17805311 != 125829119
Dec 13 13:27:40.309796 kernel: GPT: Use GNU Parted to correct GPT errors.
Dec 13 13:27:40.309826 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Dec 13 13:27:40.309844 kernel: AVX version of gcm_enc/dec engaged.
Dec 13 13:27:40.309862 kernel: AES CTR mode by8 optimization enabled
Dec 13 13:27:40.294583 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Dec 13 13:27:40.294796 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Dec 13 13:27:40.297408 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Dec 13 13:27:40.298163 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Dec 13 13:27:40.300643 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Dec 13 13:27:40.321093 kernel: libata version 3.00 loaded.
Dec 13 13:27:40.301382 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Dec 13 13:27:40.312886 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Dec 13 13:27:40.331617 kernel: ACPI: bus type USB registered
Dec 13 13:27:40.337618 kernel: usbcore: registered new interface driver usbfs
Dec 13 13:27:40.345913 kernel: usbcore: registered new interface driver hub
Dec 13 13:27:40.345970 kernel: usbcore: registered new device driver usb
Dec 13 13:27:40.391581 kernel: ahci 0000:00:1f.2: version 3.0
Dec 13 13:27:40.510172 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Dec 13 13:27:40.510203 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode
Dec 13 13:27:40.510458 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Dec 13 13:27:40.510715 kernel: scsi host0: ahci
Dec 13 13:27:40.511031 kernel: scsi host1: ahci
Dec 13 13:27:40.511246 kernel: BTRFS: device fsid 79c74448-2326-4c98-b9ff-09542b30ea52 devid 1 transid 36 /dev/vda3 scanned by (udev-worker) (468)
Dec 13 13:27:40.511269 kernel: xhci_hcd 0000:03:00.0: xHCI Host Controller
Dec 13 13:27:40.511497 kernel: scsi host2: ahci
Dec 13 13:27:40.511737 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by (udev-worker) (479)
Dec 13 13:27:40.511760 kernel: xhci_hcd 0000:03:00.0: new USB bus registered, assigned bus number 1
Dec 13 13:27:40.511981 kernel: xhci_hcd 0000:03:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010
Dec 13 13:27:40.512196 kernel: scsi host3: ahci
Dec 13 13:27:40.512409 kernel: xhci_hcd 0000:03:00.0: xHCI Host Controller
Dec 13 13:27:40.512762 kernel: xhci_hcd 0000:03:00.0: new USB bus registered, assigned bus number 2
Dec 13 13:27:40.513229 kernel: xhci_hcd 0000:03:00.0: Host supports USB 3.0 SuperSpeed
Dec 13 13:27:40.513452 kernel: scsi host4: ahci
Dec 13 13:27:40.513710 kernel: hub 1-0:1.0: USB hub found
Dec 13 13:27:40.513964 kernel: hub 1-0:1.0: 4 ports detected
Dec 13 13:27:40.514213 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM.
Dec 13 13:27:40.514509 kernel: hub 2-0:1.0: USB hub found
Dec 13 13:27:40.514789 kernel: hub 2-0:1.0: 4 ports detected
Dec 13 13:27:40.515012 kernel: scsi host5: ahci
Dec 13 13:27:40.515223 kernel: ata1: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b100 irq 38
Dec 13 13:27:40.515253 kernel: ata2: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b180 irq 38
Dec 13 13:27:40.515273 kernel: ata3: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b200 irq 38
Dec 13 13:27:40.515291 kernel: ata4: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b280 irq 38
Dec 13 13:27:40.515309 kernel: ata5: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b300 irq 38
Dec 13 13:27:40.515327 kernel: ata6: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b380 irq 38
Dec 13 13:27:40.422945 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Dec 13 13:27:40.435891 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Dec 13 13:27:40.472012 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Dec 13 13:27:40.485216 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Dec 13 13:27:40.507607 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Dec 13 13:27:40.522107 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Dec 13 13:27:40.523923 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Dec 13 13:27:40.532691 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Dec 13 13:27:40.539734 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Dec 13 13:27:40.549206 disk-uuid[571]: Primary Header is updated.
Dec 13 13:27:40.549206 disk-uuid[571]: Secondary Entries is updated.
Dec 13 13:27:40.549206 disk-uuid[571]: Secondary Header is updated.
Dec 13 13:27:40.555596 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Dec 13 13:27:40.562589 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Dec 13 13:27:40.730586 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd
Dec 13 13:27:40.819597 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Dec 13 13:27:40.819712 kernel: ata3: SATA link down (SStatus 0 SControl 300)
Dec 13 13:27:40.822365 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Dec 13 13:27:40.827601 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Dec 13 13:27:40.827647 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Dec 13 13:27:40.829802 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Dec 13 13:27:40.871571 kernel: hid: raw HID events driver (C) Jiri Kosina
Dec 13 13:27:40.877829 kernel: usbcore: registered new interface driver usbhid
Dec 13 13:27:40.877873 kernel: usbhid: USB HID core driver
Dec 13 13:27:40.886359 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:03:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input2
Dec 13 13:27:40.886413 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:03:00.0-1/input0
Dec 13 13:27:41.563586 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Dec 13 13:27:41.564798 disk-uuid[572]: The operation has completed successfully.
Dec 13 13:27:41.626298 systemd[1]: disk-uuid.service: Deactivated successfully.
Dec 13 13:27:41.626472 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Dec 13 13:27:41.644745 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Dec 13 13:27:41.658083 sh[583]: Success Dec 13 13:27:41.677637 kernel: device-mapper: verity: sha256 using implementation "sha256-avx" Dec 13 13:27:41.740237 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Dec 13 13:27:41.748850 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Dec 13 13:27:41.757120 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Dec 13 13:27:41.777581 kernel: BTRFS info (device dm-0): first mount of filesystem 79c74448-2326-4c98-b9ff-09542b30ea52 Dec 13 13:27:41.777672 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Dec 13 13:27:41.777695 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Dec 13 13:27:41.779962 kernel: BTRFS info (device dm-0): disabling log replay at mount time Dec 13 13:27:41.781609 kernel: BTRFS info (device dm-0): using free space tree Dec 13 13:27:41.793333 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Dec 13 13:27:41.794841 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Dec 13 13:27:41.799753 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Dec 13 13:27:41.802111 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Dec 13 13:27:41.822558 kernel: BTRFS info (device vda6): first mount of filesystem 05186a9a-6409-45c2-9e20-2eaf7a0548f0 Dec 13 13:27:41.826238 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Dec 13 13:27:41.826276 kernel: BTRFS info (device vda6): using free space tree Dec 13 13:27:41.833093 kernel: BTRFS info (device vda6): auto enabling async discard Dec 13 13:27:41.854308 kernel: BTRFS info (device vda6): last unmount of filesystem 05186a9a-6409-45c2-9e20-2eaf7a0548f0 Dec 13 13:27:41.853836 systemd[1]: mnt-oem.mount: Deactivated successfully. 
Dec 13 13:27:41.865325 systemd[1]: Finished ignition-setup.service - Ignition (setup). Dec 13 13:27:41.873745 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Dec 13 13:27:41.947273 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Dec 13 13:27:41.961310 systemd[1]: Starting systemd-networkd.service - Network Configuration... Dec 13 13:27:41.991228 systemd-networkd[768]: lo: Link UP Dec 13 13:27:41.991240 systemd-networkd[768]: lo: Gained carrier Dec 13 13:27:41.994847 systemd-networkd[768]: Enumeration completed Dec 13 13:27:41.994984 systemd[1]: Started systemd-networkd.service - Network Configuration. Dec 13 13:27:41.996100 systemd-networkd[768]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 13 13:27:41.996106 systemd-networkd[768]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 13 13:27:41.996763 systemd[1]: Reached target network.target - Network. Dec 13 13:27:41.997807 systemd-networkd[768]: eth0: Link UP Dec 13 13:27:41.997812 systemd-networkd[768]: eth0: Gained carrier Dec 13 13:27:41.997824 systemd-networkd[768]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 13 13:27:42.029688 ignition[696]: Ignition 2.20.0 Dec 13 13:27:42.029717 ignition[696]: Stage: fetch-offline Dec 13 13:27:42.032161 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). 
Dec 13 13:27:42.029801 ignition[696]: no configs at "/usr/lib/ignition/base.d"
Dec 13 13:27:42.029820 ignition[696]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Dec 13 13:27:42.029990 ignition[696]: parsed url from cmdline: ""
Dec 13 13:27:42.029998 ignition[696]: no config URL provided
Dec 13 13:27:42.030007 ignition[696]: reading system config file "/usr/lib/ignition/user.ign"
Dec 13 13:27:42.030022 ignition[696]: no config at "/usr/lib/ignition/user.ign"
Dec 13 13:27:42.030039 ignition[696]: failed to fetch config: resource requires networking
Dec 13 13:27:42.030668 ignition[696]: Ignition finished successfully
Dec 13 13:27:42.045781 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Dec 13 13:27:42.051654 systemd-networkd[768]: eth0: DHCPv4 address 10.230.57.46/30, gateway 10.230.57.45 acquired from 10.230.57.45
Dec 13 13:27:42.065961 ignition[776]: Ignition 2.20.0
Dec 13 13:27:42.065979 ignition[776]: Stage: fetch
Dec 13 13:27:42.066259 ignition[776]: no configs at "/usr/lib/ignition/base.d"
Dec 13 13:27:42.066287 ignition[776]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Dec 13 13:27:42.066437 ignition[776]: parsed url from cmdline: ""
Dec 13 13:27:42.066444 ignition[776]: no config URL provided
Dec 13 13:27:42.066453 ignition[776]: reading system config file "/usr/lib/ignition/user.ign"
Dec 13 13:27:42.066500 ignition[776]: no config at "/usr/lib/ignition/user.ign"
Dec 13 13:27:42.068228 ignition[776]: config drive ("/dev/disk/by-label/config-2") not found. Waiting...
Dec 13 13:27:42.068284 ignition[776]: config drive ("/dev/disk/by-label/CONFIG-2") not found. Waiting...
Dec 13 13:27:42.068426 ignition[776]: GET http://169.254.169.254/openstack/latest/user_data: attempt #1
Dec 13 13:27:42.086100 ignition[776]: GET result: OK
Dec 13 13:27:42.086960 ignition[776]: parsing config with SHA512: 6f554a965053efc34b27a0f517da8e820d57818e27a288eae9341caf6a4cae664b309a1d4d858e366c7b09287a756be7f7730e6985328d824f71e4208f4309db
Dec 13 13:27:42.090995 unknown[776]: fetched base config from "system"
Dec 13 13:27:42.091806 unknown[776]: fetched base config from "system"
Dec 13 13:27:42.092517 unknown[776]: fetched user config from "openstack"
Dec 13 13:27:42.092817 ignition[776]: fetch: fetch complete
Dec 13 13:27:42.092834 ignition[776]: fetch: fetch passed
Dec 13 13:27:42.096323 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Dec 13 13:27:42.092895 ignition[776]: Ignition finished successfully
Dec 13 13:27:42.108753 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Dec 13 13:27:42.129317 ignition[784]: Ignition 2.20.0
Dec 13 13:27:42.129342 ignition[784]: Stage: kargs
Dec 13 13:27:42.129625 ignition[784]: no configs at "/usr/lib/ignition/base.d"
Dec 13 13:27:42.129646 ignition[784]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Dec 13 13:27:42.133637 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Dec 13 13:27:42.130560 ignition[784]: kargs: kargs passed
Dec 13 13:27:42.130630 ignition[784]: Ignition finished successfully
Dec 13 13:27:42.140745 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Dec 13 13:27:42.158942 ignition[791]: Ignition 2.20.0
Dec 13 13:27:42.158964 ignition[791]: Stage: disks
Dec 13 13:27:42.159172 ignition[791]: no configs at "/usr/lib/ignition/base.d"
Dec 13 13:27:42.162418 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Dec 13 13:27:42.159191 ignition[791]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Dec 13 13:27:42.160078 ignition[791]: disks: disks passed
Dec 13 13:27:42.160144 ignition[791]: Ignition finished successfully
Dec 13 13:27:42.165295 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Dec 13 13:27:42.166520 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Dec 13 13:27:42.167954 systemd[1]: Reached target local-fs.target - Local File Systems.
Dec 13 13:27:42.168695 systemd[1]: Reached target sysinit.target - System Initialization.
Dec 13 13:27:42.170213 systemd[1]: Reached target basic.target - Basic System.
Dec 13 13:27:42.182758 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Dec 13 13:27:42.202804 systemd-fsck[800]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks
Dec 13 13:27:42.208293 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Dec 13 13:27:42.213647 systemd[1]: Mounting sysroot.mount - /sysroot...
Dec 13 13:27:42.331594 kernel: EXT4-fs (vda9): mounted filesystem 8801d4fe-2f40-4e12-9140-c192f2e7d668 r/w with ordered data mode. Quota mode: none.
Dec 13 13:27:42.332194 systemd[1]: Mounted sysroot.mount - /sysroot.
Dec 13 13:27:42.333639 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Dec 13 13:27:42.340668 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Dec 13 13:27:42.349906 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Dec 13 13:27:42.351969 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Dec 13 13:27:42.355701 systemd[1]: Starting flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent...
Dec 13 13:27:42.357976 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Dec 13 13:27:42.358688 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Dec 13 13:27:42.374443 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 scanned by mount (808)
Dec 13 13:27:42.374509 kernel: BTRFS info (device vda6): first mount of filesystem 05186a9a-6409-45c2-9e20-2eaf7a0548f0
Dec 13 13:27:42.374531 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Dec 13 13:27:42.374570 kernel: BTRFS info (device vda6): using free space tree
Dec 13 13:27:42.374600 kernel: BTRFS info (device vda6): auto enabling async discard
Dec 13 13:27:42.368732 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Dec 13 13:27:42.393820 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Dec 13 13:27:42.399006 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Dec 13 13:27:42.461569 initrd-setup-root[835]: cut: /sysroot/etc/passwd: No such file or directory
Dec 13 13:27:42.471093 initrd-setup-root[843]: cut: /sysroot/etc/group: No such file or directory
Dec 13 13:27:42.482189 initrd-setup-root[850]: cut: /sysroot/etc/shadow: No such file or directory
Dec 13 13:27:42.491268 initrd-setup-root[857]: cut: /sysroot/etc/gshadow: No such file or directory
Dec 13 13:27:42.602808 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Dec 13 13:27:42.613708 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Dec 13 13:27:42.615763 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Dec 13 13:27:42.628716 kernel: BTRFS info (device vda6): last unmount of filesystem 05186a9a-6409-45c2-9e20-2eaf7a0548f0
Dec 13 13:27:42.658275 ignition[925]: INFO : Ignition 2.20.0
Dec 13 13:27:42.658275 ignition[925]: INFO : Stage: mount
Dec 13 13:27:42.660626 ignition[925]: INFO : no configs at "/usr/lib/ignition/base.d"
Dec 13 13:27:42.660626 ignition[925]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Dec 13 13:27:42.660626 ignition[925]: INFO : mount: mount passed
Dec 13 13:27:42.660626 ignition[925]: INFO : Ignition finished successfully
Dec 13 13:27:42.659929 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Dec 13 13:27:42.662745 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Dec 13 13:27:42.773611 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Dec 13 13:27:43.722824 systemd-networkd[768]: eth0: Gained IPv6LL
Dec 13 13:27:45.231531 systemd-networkd[768]: eth0: Ignoring DHCPv6 address 2a02:1348:179:8e4b:24:19ff:fee6:392e/128 (valid for 59min 59s, preferred for 59min 59s) which conflicts with 2a02:1348:179:8e4b:24:19ff:fee6:392e/64 assigned by NDisc.
Dec 13 13:27:45.231571 systemd-networkd[768]: eth0: Hint: use IPv6Token= setting to change the address generated by NDisc or set UseAutonomousPrefix=no.
Dec 13 13:27:49.540375 coreos-metadata[810]: Dec 13 13:27:49.539 WARN failed to locate config-drive, using the metadata service API instead
Dec 13 13:27:49.567030 coreos-metadata[810]: Dec 13 13:27:49.566 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1
Dec 13 13:27:49.584620 coreos-metadata[810]: Dec 13 13:27:49.584 INFO Fetch successful
Dec 13 13:27:49.585974 coreos-metadata[810]: Dec 13 13:27:49.585 INFO wrote hostname srv-pz1t9.gb1.brightbox.com to /sysroot/etc/hostname
Dec 13 13:27:49.589182 systemd[1]: flatcar-openstack-hostname.service: Deactivated successfully.
Dec 13 13:27:49.589468 systemd[1]: Finished flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent.
Dec 13 13:27:49.599678 systemd[1]: Starting ignition-files.service - Ignition (files)...
Dec 13 13:27:49.626843 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Dec 13 13:27:49.653582 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/vda6 scanned by mount (942)
Dec 13 13:27:49.660593 kernel: BTRFS info (device vda6): first mount of filesystem 05186a9a-6409-45c2-9e20-2eaf7a0548f0
Dec 13 13:27:49.660647 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Dec 13 13:27:49.660668 kernel: BTRFS info (device vda6): using free space tree
Dec 13 13:27:49.666594 kernel: BTRFS info (device vda6): auto enabling async discard
Dec 13 13:27:49.670056 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Dec 13 13:27:49.705745 ignition[960]: INFO : Ignition 2.20.0
Dec 13 13:27:49.705745 ignition[960]: INFO : Stage: files
Dec 13 13:27:49.707558 ignition[960]: INFO : no configs at "/usr/lib/ignition/base.d"
Dec 13 13:27:49.707558 ignition[960]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Dec 13 13:27:49.707558 ignition[960]: DEBUG : files: compiled without relabeling support, skipping
Dec 13 13:27:49.710272 ignition[960]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Dec 13 13:27:49.710272 ignition[960]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Dec 13 13:27:49.712346 ignition[960]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Dec 13 13:27:49.712346 ignition[960]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Dec 13 13:27:49.714238 ignition[960]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Dec 13 13:27:49.712763 unknown[960]: wrote ssh authorized keys file for user: core
Dec 13 13:27:49.716194 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/home/core/install.sh"
Dec 13 13:27:49.716194 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/home/core/install.sh"
Dec 13 13:27:49.716194 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/etc/flatcar/update.conf"
Dec 13 13:27:49.716194 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Dec 13 13:27:49.716194 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Dec 13 13:27:49.716194 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Dec 13 13:27:49.716194 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Dec 13 13:27:49.716194 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw: attempt #1
Dec 13 13:27:50.320065 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET result: OK
Dec 13 13:27:51.835282 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw"
Dec 13 13:27:51.837588 ignition[960]: INFO : files: createResultFile: createFiles: op(7): [started] writing file "/sysroot/etc/.ignition-result.json"
Dec 13 13:27:51.837588 ignition[960]: INFO : files: createResultFile: createFiles: op(7): [finished] writing file "/sysroot/etc/.ignition-result.json"
Dec 13 13:27:51.837588 ignition[960]: INFO : files: files passed
Dec 13 13:27:51.837588 ignition[960]: INFO : Ignition finished successfully
Dec 13 13:27:51.838501 systemd[1]: Finished ignition-files.service - Ignition (files).
Dec 13 13:27:51.853833 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Dec 13 13:27:51.858457 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Dec 13 13:27:51.861252 systemd[1]: ignition-quench.service: Deactivated successfully.
Dec 13 13:27:51.862133 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Dec 13 13:27:51.884154 initrd-setup-root-after-ignition[989]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Dec 13 13:27:51.885486 initrd-setup-root-after-ignition[989]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Dec 13 13:27:51.887095 initrd-setup-root-after-ignition[993]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Dec 13 13:27:51.889413 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Dec 13 13:27:51.891783 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Dec 13 13:27:51.904239 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Dec 13 13:27:51.938524 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Dec 13 13:27:51.938774 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Dec 13 13:27:51.940735 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Dec 13 13:27:51.942021 systemd[1]: Reached target initrd.target - Initrd Default Target.
Dec 13 13:27:51.943602 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Dec 13 13:27:51.954756 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Dec 13 13:27:51.973221 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Dec 13 13:27:51.979742 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Dec 13 13:27:52.001221 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Dec 13 13:27:52.002217 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Dec 13 13:27:52.003937 systemd[1]: Stopped target timers.target - Timer Units.
Dec 13 13:27:52.005360 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Dec 13 13:27:52.005526 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Dec 13 13:27:52.007443 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Dec 13 13:27:52.008403 systemd[1]: Stopped target basic.target - Basic System.
Dec 13 13:27:52.009910 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Dec 13 13:27:52.011214 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Dec 13 13:27:52.012641 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Dec 13 13:27:52.014145 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Dec 13 13:27:52.015743 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Dec 13 13:27:52.017288 systemd[1]: Stopped target sysinit.target - System Initialization.
Dec 13 13:27:52.018788 systemd[1]: Stopped target local-fs.target - Local File Systems.
Dec 13 13:27:52.020286 systemd[1]: Stopped target swap.target - Swaps.
Dec 13 13:27:52.021647 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Dec 13 13:27:52.021824 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Dec 13 13:27:52.023587 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Dec 13 13:27:52.024493 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Dec 13 13:27:52.025897 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Dec 13 13:27:52.026320 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Dec 13 13:27:52.027561 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Dec 13 13:27:52.027765 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Dec 13 13:27:52.029779 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Dec 13 13:27:52.029949 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Dec 13 13:27:52.031597 systemd[1]: ignition-files.service: Deactivated successfully.
Dec 13 13:27:52.031788 systemd[1]: Stopped ignition-files.service - Ignition (files).
Dec 13 13:27:52.040067 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Dec 13 13:27:52.041482 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Dec 13 13:27:52.041810 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Dec 13 13:27:52.044860 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Dec 13 13:27:52.046934 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Dec 13 13:27:52.047218 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Dec 13 13:27:52.051215 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Dec 13 13:27:52.051459 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Dec 13 13:27:52.059822 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Dec 13 13:27:52.059969 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Dec 13 13:27:52.072277 ignition[1013]: INFO : Ignition 2.20.0
Dec 13 13:27:52.073324 ignition[1013]: INFO : Stage: umount
Dec 13 13:27:52.075608 ignition[1013]: INFO : no configs at "/usr/lib/ignition/base.d"
Dec 13 13:27:52.075608 ignition[1013]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Dec 13 13:27:52.077760 ignition[1013]: INFO : umount: umount passed
Dec 13 13:27:52.077760 ignition[1013]: INFO : Ignition finished successfully
Dec 13 13:27:52.080070 systemd[1]: ignition-mount.service: Deactivated successfully.
Dec 13 13:27:52.080260 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Dec 13 13:27:52.082566 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Dec 13 13:27:52.083203 systemd[1]: ignition-disks.service: Deactivated successfully.
Dec 13 13:27:52.083272 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Dec 13 13:27:52.085333 systemd[1]: ignition-kargs.service: Deactivated successfully.
Dec 13 13:27:52.085414 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Dec 13 13:27:52.088387 systemd[1]: ignition-fetch.service: Deactivated successfully.
Dec 13 13:27:52.088490 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Dec 13 13:27:52.089210 systemd[1]: Stopped target network.target - Network.
Dec 13 13:27:52.090637 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Dec 13 13:27:52.090719 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Dec 13 13:27:52.092093 systemd[1]: Stopped target paths.target - Path Units.
Dec 13 13:27:52.093417 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Dec 13 13:27:52.093498 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Dec 13 13:27:52.094944 systemd[1]: Stopped target slices.target - Slice Units.
Dec 13 13:27:52.096329 systemd[1]: Stopped target sockets.target - Socket Units.
Dec 13 13:27:52.097731 systemd[1]: iscsid.socket: Deactivated successfully.
Dec 13 13:27:52.097833 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Dec 13 13:27:52.099234 systemd[1]: iscsiuio.socket: Deactivated successfully.
Dec 13 13:27:52.099298 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Dec 13 13:27:52.106483 systemd[1]: ignition-setup.service: Deactivated successfully.
Dec 13 13:27:52.106582 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Dec 13 13:27:52.107874 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Dec 13 13:27:52.107944 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Dec 13 13:27:52.109671 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Dec 13 13:27:52.111729 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Dec 13 13:27:52.114764 systemd-networkd[768]: eth0: DHCPv6 lease lost
Dec 13 13:27:52.117920 systemd[1]: systemd-networkd.service: Deactivated successfully.
Dec 13 13:27:52.118118 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Dec 13 13:27:52.120451 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Dec 13 13:27:52.120884 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Dec 13 13:27:52.130745 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Dec 13 13:27:52.131520 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Dec 13 13:27:52.132621 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Dec 13 13:27:52.140078 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Dec 13 13:27:52.141381 systemd[1]: systemd-resolved.service: Deactivated successfully.
Dec 13 13:27:52.143395 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Dec 13 13:27:52.151111 systemd[1]: systemd-udevd.service: Deactivated successfully.
Dec 13 13:27:52.151374 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Dec 13 13:27:52.155465 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Dec 13 13:27:52.155806 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Dec 13 13:27:52.157156 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Dec 13 13:27:52.157231 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Dec 13 13:27:52.158692 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Dec 13 13:27:52.158762 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Dec 13 13:27:52.162768 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Dec 13 13:27:52.162850 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Dec 13 13:27:52.164295 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Dec 13 13:27:52.164371 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Dec 13 13:27:52.171813 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Dec 13 13:27:52.173218 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Dec 13 13:27:52.173292 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Dec 13 13:27:52.174042 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Dec 13 13:27:52.174127 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Dec 13 13:27:52.177477 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Dec 13 13:27:52.177566 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Dec 13 13:27:52.179821 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Dec 13 13:27:52.179888 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Dec 13 13:27:52.183948 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Dec 13 13:27:52.184020 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Dec 13 13:27:52.187532 systemd[1]: sysroot-boot.service: Deactivated successfully.
Dec 13 13:27:52.188618 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Dec 13 13:27:52.189786 systemd[1]: network-cleanup.service: Deactivated successfully.
Dec 13 13:27:52.189948 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Dec 13 13:27:52.191835 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Dec 13 13:27:52.191983 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Dec 13 13:27:52.194765 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Dec 13 13:27:52.195508 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Dec 13 13:27:52.195633 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Dec 13 13:27:52.202788 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Dec 13 13:27:52.216034 systemd[1]: Switching root.
Dec 13 13:27:52.247947 systemd-journald[202]: Journal stopped
Dec 13 13:27:53.761315 systemd-journald[202]: Received SIGTERM from PID 1 (systemd).
Dec 13 13:27:53.761496 kernel: SELinux: policy capability network_peer_controls=1
Dec 13 13:27:53.761569 kernel: SELinux: policy capability open_perms=1
Dec 13 13:27:53.761607 kernel: SELinux: policy capability extended_socket_class=1
Dec 13 13:27:53.761639 kernel: SELinux: policy capability always_check_network=0
Dec 13 13:27:53.761670 kernel: SELinux: policy capability cgroup_seclabel=1
Dec 13 13:27:53.761721 kernel: SELinux: policy capability nnp_nosuid_transition=1
Dec 13 13:27:53.761746 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Dec 13 13:27:53.761782 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Dec 13 13:27:53.761812 kernel: audit: type=1403 audit(1734096472.480:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Dec 13 13:27:53.761865 systemd[1]: Successfully loaded SELinux policy in 51.212ms.
Dec 13 13:27:53.761928 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 23.379ms.
Dec 13 13:27:53.761953 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Dec 13 13:27:53.761975 systemd[1]: Detected virtualization kvm.
Dec 13 13:27:53.762002 systemd[1]: Detected architecture x86-64.
Dec 13 13:27:53.762029 systemd[1]: Detected first boot.
Dec 13 13:27:53.762057 systemd[1]: Hostname set to .
Dec 13 13:27:53.762090 systemd[1]: Initializing machine ID from VM UUID.
Dec 13 13:27:53.762124 zram_generator::config[1055]: No configuration found.
Dec 13 13:27:53.762155 systemd[1]: Populated /etc with preset unit settings.
Dec 13 13:27:53.762178 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Dec 13 13:27:53.762199 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Dec 13 13:27:53.762226 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Dec 13 13:27:53.762250 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Dec 13 13:27:53.762288 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Dec 13 13:27:53.762317 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Dec 13 13:27:53.762340 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Dec 13 13:27:53.762368 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Dec 13 13:27:53.762390 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Dec 13 13:27:53.762417 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Dec 13 13:27:53.762439 systemd[1]: Created slice user.slice - User and Session Slice.
Dec 13 13:27:53.762460 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Dec 13 13:27:53.762482 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Dec 13 13:27:53.762516 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Dec 13 13:27:53.762562 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Dec 13 13:27:53.762600 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Dec 13 13:27:53.762635 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Dec 13 13:27:53.762658 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Dec 13 13:27:53.762686 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Dec 13 13:27:53.762706 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Dec 13 13:27:53.762733 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Dec 13 13:27:53.762767 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Dec 13 13:27:53.762787 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Dec 13 13:27:53.762806 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Dec 13 13:27:53.762825 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Dec 13 13:27:53.762856 systemd[1]: Reached target slices.target - Slice Units.
Dec 13 13:27:53.762882 systemd[1]: Reached target swap.target - Swaps.
Dec 13 13:27:53.762916 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Dec 13 13:27:53.762942 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Dec 13 13:27:53.762977 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Dec 13 13:27:53.762998 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Dec 13 13:27:53.763019 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Dec 13 13:27:53.763046 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Dec 13 13:27:53.763074 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Dec 13 13:27:53.763096 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Dec 13 13:27:53.763127 systemd[1]: Mounting media.mount - External Media Directory...
Dec 13 13:27:53.763151 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Dec 13 13:27:53.763179 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Dec 13 13:27:53.763201 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Dec 13 13:27:53.763222 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Dec 13 13:27:53.763244 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Dec 13 13:27:53.763272 systemd[1]: Reached target machines.target - Containers.
Dec 13 13:27:53.763295 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Dec 13 13:27:53.763316 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Dec 13 13:27:53.763337 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Dec 13 13:27:53.763370 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Dec 13 13:27:53.763393 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Dec 13 13:27:53.763415 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Dec 13 13:27:53.763436 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Dec 13 13:27:53.763466 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Dec 13 13:27:53.763492 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Dec 13 13:27:53.763519 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Dec 13 13:27:53.763577 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Dec 13 13:27:53.763605 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Dec 13 13:27:53.763635 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Dec 13 13:27:53.763664 systemd[1]: Stopped systemd-fsck-usr.service.
Dec 13 13:27:53.763686 systemd[1]: Starting systemd-journald.service - Journal Service...
Dec 13 13:27:53.763707 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Dec 13 13:27:53.763727 kernel: loop: module loaded Dec 13 13:27:53.763748 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Dec 13 13:27:53.763777 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Dec 13 13:27:53.763799 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Dec 13 13:27:53.763828 systemd[1]: verity-setup.service: Deactivated successfully. Dec 13 13:27:53.763850 systemd[1]: Stopped verity-setup.service. Dec 13 13:27:53.763871 kernel: fuse: init (API version 7.39) Dec 13 13:27:53.763892 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 13:27:53.763925 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Dec 13 13:27:53.763946 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Dec 13 13:27:53.764000 systemd[1]: Mounted media.mount - External Media Directory. Dec 13 13:27:53.764023 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Dec 13 13:27:53.764044 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Dec 13 13:27:53.764065 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Dec 13 13:27:53.764086 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Dec 13 13:27:53.764122 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Dec 13 13:27:53.764147 systemd[1]: modprobe@configfs.service: Deactivated successfully. Dec 13 13:27:53.764169 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Dec 13 13:27:53.764190 kernel: ACPI: bus type drm_connector registered Dec 13 13:27:53.764211 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 13:27:53.764261 systemd-journald[1151]: Collecting audit messages is disabled. 
Dec 13 13:27:53.764299 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Dec 13 13:27:53.764329 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 13 13:27:53.764362 systemd-journald[1151]: Journal started Dec 13 13:27:53.764418 systemd-journald[1151]: Runtime Journal (/run/log/journal/6fce2a4ef47e4048b364dcdb48b7ef9a) is 4.7M, max 37.9M, 33.2M free. Dec 13 13:27:53.337032 systemd[1]: Queued start job for default target multi-user.target. Dec 13 13:27:53.371054 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Dec 13 13:27:53.371891 systemd[1]: systemd-journald.service: Deactivated successfully. Dec 13 13:27:53.768581 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Dec 13 13:27:53.777587 systemd[1]: Started systemd-journald.service - Journal Service. Dec 13 13:27:53.778796 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 13:27:53.779047 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 13 13:27:53.780239 systemd[1]: modprobe@fuse.service: Deactivated successfully. Dec 13 13:27:53.780464 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Dec 13 13:27:53.781636 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 13:27:53.781859 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 13 13:27:53.782968 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Dec 13 13:27:53.784252 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Dec 13 13:27:53.785377 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Dec 13 13:27:53.800916 systemd[1]: Reached target network-pre.target - Preparation for Network. Dec 13 13:27:53.812649 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... 
Dec 13 13:27:53.822230 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Dec 13 13:27:53.823166 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Dec 13 13:27:53.823320 systemd[1]: Reached target local-fs.target - Local File Systems. Dec 13 13:27:53.825530 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Dec 13 13:27:53.829697 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Dec 13 13:27:53.836727 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Dec 13 13:27:53.838209 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 13 13:27:53.843774 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Dec 13 13:27:53.848005 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Dec 13 13:27:53.850007 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 13:27:53.852756 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Dec 13 13:27:53.853617 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Dec 13 13:27:53.860755 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Dec 13 13:27:53.871721 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Dec 13 13:27:53.881197 systemd[1]: Starting systemd-sysusers.service - Create System Users... Dec 13 13:27:53.887004 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Dec 13 13:27:53.896823 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. 
Dec 13 13:27:53.898227 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Dec 13 13:27:53.937235 kernel: loop0: detected capacity change from 0 to 210664 Dec 13 13:27:53.951896 systemd-journald[1151]: Time spent on flushing to /var/log/journal/6fce2a4ef47e4048b364dcdb48b7ef9a is 55.149ms for 1123 entries. Dec 13 13:27:53.951896 systemd-journald[1151]: System Journal (/var/log/journal/6fce2a4ef47e4048b364dcdb48b7ef9a) is 8.0M, max 584.8M, 576.8M free. Dec 13 13:27:54.035835 systemd-journald[1151]: Received client request to flush runtime journal. Dec 13 13:27:54.035907 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Dec 13 13:27:53.965890 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Dec 13 13:27:53.966954 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Dec 13 13:27:53.983790 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Dec 13 13:27:54.003381 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Dec 13 13:27:54.041910 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Dec 13 13:27:54.044800 kernel: loop1: detected capacity change from 0 to 141000 Dec 13 13:27:54.071284 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Dec 13 13:27:54.074620 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Dec 13 13:27:54.084243 systemd[1]: Finished systemd-sysusers.service - Create System Users. Dec 13 13:27:54.102850 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Dec 13 13:27:54.146599 kernel: loop2: detected capacity change from 0 to 138184 Dec 13 13:27:54.195586 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Dec 13 13:27:54.203736 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... 
Dec 13 13:27:54.218382 systemd-tmpfiles[1208]: ACLs are not supported, ignoring. Dec 13 13:27:54.218419 systemd-tmpfiles[1208]: ACLs are not supported, ignoring. Dec 13 13:27:54.243740 kernel: loop3: detected capacity change from 0 to 8 Dec 13 13:27:54.244726 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Dec 13 13:27:54.260309 udevadm[1212]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Dec 13 13:27:54.278590 kernel: loop4: detected capacity change from 0 to 210664 Dec 13 13:27:54.312593 kernel: loop5: detected capacity change from 0 to 141000 Dec 13 13:27:54.351360 kernel: loop6: detected capacity change from 0 to 138184 Dec 13 13:27:54.379977 kernel: loop7: detected capacity change from 0 to 8 Dec 13 13:27:54.384572 (sd-merge)[1215]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-openstack'. Dec 13 13:27:54.385412 (sd-merge)[1215]: Merged extensions into '/usr'. Dec 13 13:27:54.399131 systemd[1]: Reloading requested from client PID 1188 ('systemd-sysext') (unit systemd-sysext.service)... Dec 13 13:27:54.399155 systemd[1]: Reloading... Dec 13 13:27:54.556910 zram_generator::config[1241]: No configuration found. Dec 13 13:27:54.688793 ldconfig[1183]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Dec 13 13:27:54.762881 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 13:27:54.835933 systemd[1]: Reloading finished in 436 ms. Dec 13 13:27:54.873734 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Dec 13 13:27:54.878413 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. 
Dec 13 13:27:54.889951 systemd[1]: Starting ensure-sysext.service... Dec 13 13:27:54.895681 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Dec 13 13:27:54.907212 systemd[1]: Reloading requested from client PID 1297 ('systemctl') (unit ensure-sysext.service)... Dec 13 13:27:54.907395 systemd[1]: Reloading... Dec 13 13:27:54.987838 systemd-tmpfiles[1298]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Dec 13 13:27:54.988333 systemd-tmpfiles[1298]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Dec 13 13:27:54.989902 systemd-tmpfiles[1298]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Dec 13 13:27:54.990371 systemd-tmpfiles[1298]: ACLs are not supported, ignoring. Dec 13 13:27:54.990487 systemd-tmpfiles[1298]: ACLs are not supported, ignoring. Dec 13 13:27:55.000420 systemd-tmpfiles[1298]: Detected autofs mount point /boot during canonicalization of boot. Dec 13 13:27:55.000450 systemd-tmpfiles[1298]: Skipping /boot Dec 13 13:27:55.006571 zram_generator::config[1324]: No configuration found. Dec 13 13:27:55.044943 systemd-tmpfiles[1298]: Detected autofs mount point /boot during canonicalization of boot. Dec 13 13:27:55.044963 systemd-tmpfiles[1298]: Skipping /boot Dec 13 13:27:55.222802 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 13:27:55.295828 systemd[1]: Reloading finished in 387 ms. Dec 13 13:27:55.324120 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Dec 13 13:27:55.329193 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Dec 13 13:27:55.342789 systemd[1]: Starting audit-rules.service - Load Audit Rules... 
Dec 13 13:27:55.350754 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Dec 13 13:27:55.362545 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Dec 13 13:27:55.370743 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Dec 13 13:27:55.373716 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Dec 13 13:27:55.382793 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Dec 13 13:27:55.393050 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 13:27:55.393344 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 13 13:27:55.404158 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 13 13:27:55.419779 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 13 13:27:55.429453 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 13 13:27:55.430859 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 13 13:27:55.431041 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 13:27:55.455847 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Dec 13 13:27:55.459012 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Dec 13 13:27:55.459418 systemd-udevd[1395]: Using default interface naming scheme 'v255'. Dec 13 13:27:55.461338 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 13:27:55.462635 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. 
Dec 13 13:27:55.464347 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 13:27:55.465445 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Dec 13 13:27:55.486032 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 13:27:55.486303 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 13 13:27:55.490929 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 13:27:55.491367 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 13 13:27:55.502448 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Dec 13 13:27:55.508832 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Dec 13 13:27:55.512787 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 13 13:27:55.522932 systemd[1]: Starting systemd-update-done.service - Update is Completed... Dec 13 13:27:55.524083 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 13:27:55.526416 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Dec 13 13:27:55.528025 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Dec 13 13:27:55.531378 augenrules[1418]: No rules Dec 13 13:27:55.538669 systemd[1]: audit-rules.service: Deactivated successfully. Dec 13 13:27:55.538914 systemd[1]: Finished audit-rules.service - Load Audit Rules. Dec 13 13:27:55.541748 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Dec 13 13:27:55.541992 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. 
Dec 13 13:27:55.548007 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 13:27:55.548356 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Dec 13 13:27:55.553812 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Dec 13 13:27:55.557765 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Dec 13 13:27:55.559766 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Dec 13 13:27:55.565770 systemd[1]: Starting systemd-networkd.service - Network Configuration... Dec 13 13:27:55.566559 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Dec 13 13:27:55.567626 systemd[1]: Finished ensure-sysext.service. Dec 13 13:27:55.569222 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Dec 13 13:27:55.575672 systemd[1]: Finished systemd-update-done.service - Update is Completed. Dec 13 13:27:55.592497 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Dec 13 13:27:55.593307 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Dec 13 13:27:55.599824 systemd[1]: Started systemd-userdbd.service - User Database Manager. Dec 13 13:27:55.603903 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Dec 13 13:27:55.604170 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. 
Dec 13 13:27:55.608284 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Dec 13 13:27:55.632315 systemd[1]: modprobe@drm.service: Deactivated successfully. Dec 13 13:27:55.632649 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Dec 13 13:27:55.635234 systemd[1]: modprobe@loop.service: Deactivated successfully. Dec 13 13:27:55.635458 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Dec 13 13:27:55.638924 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Dec 13 13:27:55.766462 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Dec 13 13:27:55.767526 systemd[1]: Reached target time-set.target - System Time Set. Dec 13 13:27:55.770309 systemd-resolved[1392]: Positive Trust Anchors: Dec 13 13:27:55.772218 systemd-resolved[1392]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Dec 13 13:27:55.772437 systemd-resolved[1392]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Dec 13 13:27:55.780363 systemd-networkd[1432]: lo: Link UP Dec 13 13:27:55.780375 systemd-networkd[1432]: lo: Gained carrier Dec 13 13:27:55.784543 systemd-networkd[1432]: Enumeration completed Dec 13 13:27:55.784578 systemd-timesyncd[1439]: No network connectivity, watching for changes. 
Dec 13 13:27:55.787721 systemd[1]: Started systemd-networkd.service - Network Configuration. Dec 13 13:27:55.790212 systemd-resolved[1392]: Using system hostname 'srv-pz1t9.gb1.brightbox.com'. Dec 13 13:27:55.802604 kernel: BTRFS info: devid 1 device path /dev/mapper/usr changed to /dev/dm-0 scanned by (udev-worker) (1433) Dec 13 13:27:55.800711 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Dec 13 13:27:55.801568 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Dec 13 13:27:55.802512 systemd[1]: Reached target network.target - Network. Dec 13 13:27:55.803228 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Dec 13 13:27:55.811575 kernel: BTRFS info: devid 1 device path /dev/dm-0 changed to /dev/mapper/usr scanned by (udev-worker) (1433) Dec 13 13:27:55.838173 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Dec 13 13:27:55.882572 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (1426) Dec 13 13:27:55.939606 systemd-networkd[1432]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Dec 13 13:27:55.939629 systemd-networkd[1432]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Dec 13 13:27:55.943182 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Dec 13 13:27:55.944177 systemd-networkd[1432]: eth0: Link UP Dec 13 13:27:55.944300 systemd-networkd[1432]: eth0: Gained carrier Dec 13 13:27:55.944413 systemd-networkd[1432]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Dec 13 13:27:55.955564 kernel: ACPI: button: Power Button [PWRF] Dec 13 13:27:55.959626 systemd-networkd[1432]: eth0: DHCPv4 address 10.230.57.46/30, gateway 10.230.57.45 acquired from 10.230.57.45 Dec 13 13:27:55.961718 systemd-timesyncd[1439]: Network configuration changed, trying to establish connection. Dec 13 13:27:55.986621 kernel: mousedev: PS/2 mouse device common for all mice Dec 13 13:27:56.002699 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Dec 13 13:27:56.009830 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Dec 13 13:27:56.045654 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4 Dec 13 13:27:56.049187 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Dec 13 13:27:56.057241 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Dec 13 13:27:56.057595 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Dec 13 13:27:56.047372 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Dec 13 13:27:56.132696 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Dec 13 13:27:57.190989 systemd-timesyncd[1439]: Contacted time server 212.82.85.226:123 (3.flatcar.pool.ntp.org). Dec 13 13:27:57.191184 systemd-timesyncd[1439]: Initial clock synchronization to Fri 2024-12-13 13:27:57.190679 UTC. Dec 13 13:27:57.191405 systemd-resolved[1392]: Clock change detected. Flushing caches. Dec 13 13:27:57.205610 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Dec 13 13:27:57.212396 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Dec 13 13:27:57.219673 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Dec 13 13:27:57.250425 lvm[1480]: WARNING: Failed to connect to lvmetad. 
Falling back to device scanning. Dec 13 13:27:57.281885 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Dec 13 13:27:57.283655 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Dec 13 13:27:57.284535 systemd[1]: Reached target sysinit.target - System Initialization. Dec 13 13:27:57.285563 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Dec 13 13:27:57.286447 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Dec 13 13:27:57.287590 systemd[1]: Started logrotate.timer - Daily rotation of log files. Dec 13 13:27:57.288469 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Dec 13 13:27:57.289291 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Dec 13 13:27:57.290077 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Dec 13 13:27:57.290124 systemd[1]: Reached target paths.target - Path Units. Dec 13 13:27:57.290767 systemd[1]: Reached target timers.target - Timer Units. Dec 13 13:27:57.293247 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Dec 13 13:27:57.296118 systemd[1]: Starting docker.socket - Docker Socket for the API... Dec 13 13:27:57.301716 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Dec 13 13:27:57.304476 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Dec 13 13:27:57.306101 systemd[1]: Listening on docker.socket - Docker Socket for the API. Dec 13 13:27:57.306967 systemd[1]: Reached target sockets.target - Socket Units. Dec 13 13:27:57.307641 systemd[1]: Reached target basic.target - Basic System. 
Dec 13 13:27:57.308332 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Dec 13 13:27:57.308405 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Dec 13 13:27:57.310533 systemd[1]: Starting containerd.service - containerd container runtime... Dec 13 13:27:57.319194 lvm[1484]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Dec 13 13:27:57.319606 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Dec 13 13:27:57.325455 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Dec 13 13:27:57.331550 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Dec 13 13:27:57.335581 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Dec 13 13:27:57.336391 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Dec 13 13:27:57.346602 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Dec 13 13:27:57.350142 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Dec 13 13:27:57.356575 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Dec 13 13:27:57.368599 systemd[1]: Starting systemd-logind.service - User Login Management... Dec 13 13:27:57.370180 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Dec 13 13:27:57.371067 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Dec 13 13:27:57.382634 systemd[1]: Starting update-engine.service - Update Engine... Dec 13 13:27:57.395600 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... 
Dec 13 13:27:57.399441 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Dec 13 13:27:57.417835 systemd[1]: motdgen.service: Deactivated successfully. Dec 13 13:27:57.419575 jq[1488]: false Dec 13 13:27:57.419115 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Dec 13 13:27:57.420466 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Dec 13 13:27:57.420692 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Dec 13 13:27:57.444066 dbus-daemon[1487]: [system] SELinux support is enabled Dec 13 13:27:57.452490 dbus-daemon[1487]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.0' (uid=244 pid=1432 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Dec 13 13:27:57.453503 systemd[1]: Started dbus.service - D-Bus System Message Bus. Dec 13 13:27:57.453795 jq[1497]: true Dec 13 13:27:57.455885 (ntainerd)[1511]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Dec 13 13:27:57.459150 dbus-daemon[1487]: [system] Successfully activated service 'org.freedesktop.systemd1' Dec 13 13:27:57.459446 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Dec 13 13:27:57.459491 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Dec 13 13:27:57.461542 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Dec 13 13:27:57.461587 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. 
Dec 13 13:27:57.463063 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Dec 13 13:27:57.464471 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Dec 13 13:27:57.491589 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Dec 13 13:27:57.510444 update_engine[1496]: I20241213 13:27:57.509191 1496 main.cc:92] Flatcar Update Engine starting Dec 13 13:27:57.510799 extend-filesystems[1489]: Found loop4 Dec 13 13:27:57.510799 extend-filesystems[1489]: Found loop5 Dec 13 13:27:57.510799 extend-filesystems[1489]: Found loop6 Dec 13 13:27:57.510799 extend-filesystems[1489]: Found loop7 Dec 13 13:27:57.510799 extend-filesystems[1489]: Found vda Dec 13 13:27:57.510799 extend-filesystems[1489]: Found vda1 Dec 13 13:27:57.510799 extend-filesystems[1489]: Found vda2 Dec 13 13:27:57.510799 extend-filesystems[1489]: Found vda3 Dec 13 13:27:57.510799 extend-filesystems[1489]: Found usr Dec 13 13:27:57.510799 extend-filesystems[1489]: Found vda4 Dec 13 13:27:57.510799 extend-filesystems[1489]: Found vda6 Dec 13 13:27:57.510799 extend-filesystems[1489]: Found vda7 Dec 13 13:27:57.510799 extend-filesystems[1489]: Found vda9 Dec 13 13:27:57.510799 extend-filesystems[1489]: Checking size of /dev/vda9 Dec 13 13:27:57.587179 jq[1515]: true Dec 13 13:27:57.527486 systemd[1]: Started update-engine.service - Update Engine. Dec 13 13:27:57.587438 update_engine[1496]: I20241213 13:27:57.535358 1496 update_check_scheduler.cc:74] Next update check in 6m30s Dec 13 13:27:57.539600 systemd[1]: Started locksmithd.service - Cluster reboot manager. 
Dec 13 13:27:57.589636 extend-filesystems[1489]: Resized partition /dev/vda9 Dec 13 13:27:57.601531 extend-filesystems[1526]: resize2fs 1.47.1 (20-May-2024) Dec 13 13:27:57.615406 kernel: EXT4-fs (vda9): resizing filesystem from 1617920 to 15121403 blocks Dec 13 13:27:57.628751 systemd-logind[1495]: Watching system buttons on /dev/input/event2 (Power Button) Dec 13 13:27:57.629931 systemd-logind[1495]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Dec 13 13:27:57.630349 systemd-logind[1495]: New seat seat0. Dec 13 13:27:57.632106 systemd[1]: Started systemd-logind.service - User Login Management. Dec 13 13:27:57.743637 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (1457) Dec 13 13:27:57.804260 bash[1541]: Updated "/home/core/.ssh/authorized_keys" Dec 13 13:27:57.807311 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Dec 13 13:27:57.819874 systemd[1]: Starting sshkeys.service... Dec 13 13:27:57.887216 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Dec 13 13:27:57.896168 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Dec 13 13:27:57.930997 locksmithd[1521]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Dec 13 13:27:57.961973 dbus-daemon[1487]: [system] Successfully activated service 'org.freedesktop.hostname1' Dec 13 13:27:57.963483 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Dec 13 13:27:57.963611 dbus-daemon[1487]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.8' (uid=0 pid=1518 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Dec 13 13:27:57.981292 systemd[1]: Starting polkit.service - Authorization Manager... 
Dec 13 13:27:58.003460 kernel: EXT4-fs (vda9): resized filesystem to 15121403 Dec 13 13:27:58.031130 extend-filesystems[1526]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Dec 13 13:27:58.031130 extend-filesystems[1526]: old_desc_blocks = 1, new_desc_blocks = 8 Dec 13 13:27:58.031130 extend-filesystems[1526]: The filesystem on /dev/vda9 is now 15121403 (4k) blocks long. Dec 13 13:27:58.041618 extend-filesystems[1489]: Resized filesystem in /dev/vda9 Dec 13 13:27:58.038951 polkitd[1556]: Started polkitd version 121 Dec 13 13:27:58.034098 systemd[1]: extend-filesystems.service: Deactivated successfully. Dec 13 13:27:58.043949 containerd[1511]: time="2024-12-13T13:27:58.041813689Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 Dec 13 13:27:58.035444 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Dec 13 13:27:58.053687 polkitd[1556]: Loading rules from directory /etc/polkit-1/rules.d Dec 13 13:27:58.053783 polkitd[1556]: Loading rules from directory /usr/share/polkit-1/rules.d Dec 13 13:27:58.054869 polkitd[1556]: Finished loading, compiling and executing 2 rules Dec 13 13:27:58.057809 dbus-daemon[1487]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Dec 13 13:27:58.058026 systemd[1]: Started polkit.service - Authorization Manager. Dec 13 13:27:58.059717 polkitd[1556]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Dec 13 13:27:58.082011 systemd-hostnamed[1518]: Hostname set to (static) Dec 13 13:27:58.083277 containerd[1511]: time="2024-12-13T13:27:58.083225287Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Dec 13 13:27:58.086981 containerd[1511]: time="2024-12-13T13:27:58.086939700Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." 
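Aside: the resize sequence above is an online ext4 grow — resize2fs expands /dev/vda9 from 1617920 to 15121403 blocks while it is mounted at /. A quick sanity check of what those figures mean in bytes (a sketch using only the numbers from the log; the "(4k)" in the resize2fs output gives the 4096-byte block size):

```python
# Sanity-check the ext4 online resize reported above:
# /dev/vda9 grew from 1617920 to 15121403 blocks of 4096 bytes ("4k").
BLOCK_SIZE = 4096

old_blocks = 1_617_920
new_blocks = 15_121_403

old_gib = old_blocks * BLOCK_SIZE / 2**30
new_gib = new_blocks * BLOCK_SIZE / 2**30

print(f"before: {old_gib:.2f} GiB, after: {new_gib:.2f} GiB")
# roughly 6.17 GiB before, 57.68 GiB after -- the image partition being
# grown to fill the cloud volume on first boot
```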
error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.65-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Dec 13 13:27:58.087112 containerd[1511]: time="2024-12-13T13:27:58.087086046Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Dec 13 13:27:58.087223 containerd[1511]: time="2024-12-13T13:27:58.087187096Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Dec 13 13:27:58.087613 containerd[1511]: time="2024-12-13T13:27:58.087584215Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Dec 13 13:27:58.088425 containerd[1511]: time="2024-12-13T13:27:58.087791787Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Dec 13 13:27:58.088425 containerd[1511]: time="2024-12-13T13:27:58.087921408Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 13:27:58.088425 containerd[1511]: time="2024-12-13T13:27:58.087945288Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Dec 13 13:27:58.088425 containerd[1511]: time="2024-12-13T13:27:58.088152857Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 13:27:58.088425 containerd[1511]: time="2024-12-13T13:27:58.088178416Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." 
type=io.containerd.snapshotter.v1 Dec 13 13:27:58.088425 containerd[1511]: time="2024-12-13T13:27:58.088198457Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 13:27:58.088425 containerd[1511]: time="2024-12-13T13:27:58.088214013Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Dec 13 13:27:58.088425 containerd[1511]: time="2024-12-13T13:27:58.088344905Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Dec 13 13:27:58.089347 containerd[1511]: time="2024-12-13T13:27:58.089301211Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Dec 13 13:27:58.089669 containerd[1511]: time="2024-12-13T13:27:58.089623633Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Dec 13 13:27:58.089862 containerd[1511]: time="2024-12-13T13:27:58.089760186Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Dec 13 13:27:58.090172 containerd[1511]: time="2024-12-13T13:27:58.090069701Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Dec 13 13:27:58.090342 containerd[1511]: time="2024-12-13T13:27:58.090275170Z" level=info msg="metadata content store policy set" policy=shared Dec 13 13:27:58.094132 containerd[1511]: time="2024-12-13T13:27:58.094060800Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Dec 13 13:27:58.094648 containerd[1511]: time="2024-12-13T13:27:58.094284190Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." 
type=io.containerd.differ.v1 Dec 13 13:27:58.094648 containerd[1511]: time="2024-12-13T13:27:58.094329066Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Dec 13 13:27:58.094648 containerd[1511]: time="2024-12-13T13:27:58.094351282Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Dec 13 13:27:58.094648 containerd[1511]: time="2024-12-13T13:27:58.094381553Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Dec 13 13:27:58.094648 containerd[1511]: time="2024-12-13T13:27:58.094598353Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Dec 13 13:27:58.095206 containerd[1511]: time="2024-12-13T13:27:58.095179391Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Dec 13 13:27:58.095511 containerd[1511]: time="2024-12-13T13:27:58.095470187Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Dec 13 13:27:58.096265 containerd[1511]: time="2024-12-13T13:27:58.095589855Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Dec 13 13:27:58.096265 containerd[1511]: time="2024-12-13T13:27:58.095619460Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Dec 13 13:27:58.096265 containerd[1511]: time="2024-12-13T13:27:58.095641980Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Dec 13 13:27:58.096265 containerd[1511]: time="2024-12-13T13:27:58.095660357Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." 
type=io.containerd.service.v1 Dec 13 13:27:58.096265 containerd[1511]: time="2024-12-13T13:27:58.095681950Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Dec 13 13:27:58.096265 containerd[1511]: time="2024-12-13T13:27:58.095700477Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Dec 13 13:27:58.096265 containerd[1511]: time="2024-12-13T13:27:58.095739946Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Dec 13 13:27:58.096265 containerd[1511]: time="2024-12-13T13:27:58.095766556Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Dec 13 13:27:58.096265 containerd[1511]: time="2024-12-13T13:27:58.095814411Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Dec 13 13:27:58.096265 containerd[1511]: time="2024-12-13T13:27:58.095861039Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Dec 13 13:27:58.096265 containerd[1511]: time="2024-12-13T13:27:58.095896160Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Dec 13 13:27:58.096265 containerd[1511]: time="2024-12-13T13:27:58.095916382Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Dec 13 13:27:58.096265 containerd[1511]: time="2024-12-13T13:27:58.095933992Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Dec 13 13:27:58.096265 containerd[1511]: time="2024-12-13T13:27:58.095952780Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." 
type=io.containerd.grpc.v1 Dec 13 13:27:58.096800 containerd[1511]: time="2024-12-13T13:27:58.095970948Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Dec 13 13:27:58.096800 containerd[1511]: time="2024-12-13T13:27:58.095997484Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Dec 13 13:27:58.096800 containerd[1511]: time="2024-12-13T13:27:58.096016110Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Dec 13 13:27:58.096800 containerd[1511]: time="2024-12-13T13:27:58.096040504Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Dec 13 13:27:58.096800 containerd[1511]: time="2024-12-13T13:27:58.096078357Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Dec 13 13:27:58.096800 containerd[1511]: time="2024-12-13T13:27:58.096104253Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Dec 13 13:27:58.096800 containerd[1511]: time="2024-12-13T13:27:58.096124365Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Dec 13 13:27:58.096800 containerd[1511]: time="2024-12-13T13:27:58.096142532Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Dec 13 13:27:58.096800 containerd[1511]: time="2024-12-13T13:27:58.096159472Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Dec 13 13:27:58.096800 containerd[1511]: time="2024-12-13T13:27:58.096193087Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Dec 13 13:27:58.096800 containerd[1511]: time="2024-12-13T13:27:58.096221955Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." 
type=io.containerd.grpc.v1 Dec 13 13:27:58.098288 containerd[1511]: time="2024-12-13T13:27:58.097208172Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Dec 13 13:27:58.098288 containerd[1511]: time="2024-12-13T13:27:58.097240641Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Dec 13 13:27:58.098483 containerd[1511]: time="2024-12-13T13:27:58.098458012Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Dec 13 13:27:58.098674 containerd[1511]: time="2024-12-13T13:27:58.098647058Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Dec 13 13:27:58.098757 containerd[1511]: time="2024-12-13T13:27:58.098737050Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Dec 13 13:27:58.098895 containerd[1511]: time="2024-12-13T13:27:58.098868370Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Dec 13 13:27:58.098983 containerd[1511]: time="2024-12-13T13:27:58.098961449Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Dec 13 13:27:58.099101 containerd[1511]: time="2024-12-13T13:27:58.099075529Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Dec 13 13:27:58.099210 containerd[1511]: time="2024-12-13T13:27:58.099187701Z" level=info msg="NRI interface is disabled by configuration." Dec 13 13:27:58.099326 containerd[1511]: time="2024-12-13T13:27:58.099303921Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Dec 13 13:27:58.099942 containerd[1511]: time="2024-12-13T13:27:58.099868814Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false 
UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Dec 13 13:27:58.101319 containerd[1511]: time="2024-12-13T13:27:58.100322609Z" level=info msg="Connect containerd service" Dec 13 13:27:58.101319 containerd[1511]: time="2024-12-13T13:27:58.100412061Z" level=info msg="using legacy CRI server" Dec 13 13:27:58.101319 containerd[1511]: time="2024-12-13T13:27:58.100428221Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Dec 13 13:27:58.101319 containerd[1511]: time="2024-12-13T13:27:58.100655258Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Dec 13 13:27:58.101931 containerd[1511]: time="2024-12-13T13:27:58.101899163Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 13 13:27:58.102180 containerd[1511]: time="2024-12-13T13:27:58.102129752Z" level=info msg="Start subscribing containerd event" Dec 13 13:27:58.102295 containerd[1511]: time="2024-12-13T13:27:58.102271163Z" level=info msg="Start recovering state" Dec 13 13:27:58.102530 containerd[1511]: time="2024-12-13T13:27:58.102507361Z" level=info msg="Start event monitor" Dec 13 13:27:58.102634 containerd[1511]: time="2024-12-13T13:27:58.102612922Z" level=info msg="Start 
snapshots syncer" Dec 13 13:27:58.102724 containerd[1511]: time="2024-12-13T13:27:58.102701663Z" level=info msg="Start cni network conf syncer for default" Dec 13 13:27:58.102855 containerd[1511]: time="2024-12-13T13:27:58.102806327Z" level=info msg="Start streaming server" Dec 13 13:27:58.104998 containerd[1511]: time="2024-12-13T13:27:58.103709672Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Dec 13 13:27:58.104998 containerd[1511]: time="2024-12-13T13:27:58.103797889Z" level=info msg=serving... address=/run/containerd/containerd.sock Dec 13 13:27:58.103990 systemd[1]: Started containerd.service - containerd container runtime. Dec 13 13:27:58.107063 containerd[1511]: time="2024-12-13T13:27:58.106950294Z" level=info msg="containerd successfully booted in 0.070014s" Dec 13 13:27:58.167100 systemd-networkd[1432]: eth0: Gained IPv6LL Dec 13 13:27:58.170269 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Dec 13 13:27:58.173235 systemd[1]: Reached target network-online.target - Network is Online. Dec 13 13:27:58.185688 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 13:27:58.194512 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Dec 13 13:27:58.224945 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Dec 13 13:27:58.364197 sshd_keygen[1508]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Dec 13 13:27:58.395813 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Dec 13 13:27:58.408712 systemd[1]: Starting issuegen.service - Generate /run/issue... Dec 13 13:27:58.417190 systemd[1]: issuegen.service: Deactivated successfully. Dec 13 13:27:58.417545 systemd[1]: Finished issuegen.service - Generate /run/issue. Dec 13 13:27:58.425903 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Dec 13 13:27:58.440597 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. 
Dec 13 13:27:58.451039 systemd[1]: Started getty@tty1.service - Getty on tty1. Dec 13 13:27:58.458912 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Dec 13 13:27:58.461674 systemd[1]: Reached target getty.target - Login Prompts. Dec 13 13:27:58.838611 systemd-networkd[1432]: eth0: Ignoring DHCPv6 address 2a02:1348:179:8e4b:24:19ff:fee6:392e/128 (valid for 59min 59s, preferred for 59min 59s) which conflicts with 2a02:1348:179:8e4b:24:19ff:fee6:392e/64 assigned by NDisc. Dec 13 13:27:58.838873 systemd-networkd[1432]: eth0: Hint: use IPv6Token= setting to change the address generated by NDisc or set UseAutonomousPrefix=no. Dec 13 13:27:59.101971 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 13:27:59.108118 (kubelet)[1604]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 13:27:59.778969 kubelet[1604]: E1213 13:27:59.778719 1604 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 13:27:59.782980 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 13:27:59.783291 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 13:27:59.784049 systemd[1]: kubelet.service: Consumed 1.072s CPU time. Dec 13 13:28:00.680186 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Dec 13 13:28:00.689866 systemd[1]: Started sshd@0-10.230.57.46:22-139.178.68.195:59516.service - OpenSSH per-connection server daemon (139.178.68.195:59516). 
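Aside: the "Ignoring DHCPv6 address" message above is systemd-networkd refusing to add a /128 lease because the very same address was already configured from the router's NDisc /64 prefix. The containment check it is effectively making can be sketched with the standard `ipaddress` module (addresses taken from the log):

```python
import ipaddress

# The DHCPv6 lease hands out this exact address as a /128 ...
addr = ipaddress.ip_address("2a02:1348:179:8e4b:24:19ff:fee6:392e")

# ... but it already falls inside the /64 prefix announced via NDisc,
# so networkd keeps the NDisc-derived address and drops the lease.
ndisc_net = ipaddress.ip_network("2a02:1348:179:8e4b::/64")

print(addr in ndisc_net)
```

As the log hint says, `IPv6Token=` or `UseAutonomousPrefix=no` in the network unit would resolve the overlap.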
Dec 13 13:28:01.607182 sshd[1615]: Accepted publickey for core from 139.178.68.195 port 59516 ssh2: RSA SHA256:gikLJyEmpnCHkoekB3AFhFPt08JJAv/T+84MF6KEB0A Dec 13 13:28:01.610236 sshd-session[1615]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 13:28:01.626566 systemd-logind[1495]: New session 1 of user core. Dec 13 13:28:01.629697 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Dec 13 13:28:01.636851 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Dec 13 13:28:01.666454 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Dec 13 13:28:01.680190 systemd[1]: Starting user@500.service - User Manager for UID 500... Dec 13 13:28:01.686200 (systemd)[1620]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Dec 13 13:28:01.827604 systemd[1620]: Queued start job for default target default.target. Dec 13 13:28:01.836218 systemd[1620]: Created slice app.slice - User Application Slice. Dec 13 13:28:01.836264 systemd[1620]: Reached target paths.target - Paths. Dec 13 13:28:01.836288 systemd[1620]: Reached target timers.target - Timers. Dec 13 13:28:01.838397 systemd[1620]: Starting dbus.socket - D-Bus User Message Bus Socket... Dec 13 13:28:01.869030 systemd[1620]: Listening on dbus.socket - D-Bus User Message Bus Socket. Dec 13 13:28:01.869230 systemd[1620]: Reached target sockets.target - Sockets. Dec 13 13:28:01.869257 systemd[1620]: Reached target basic.target - Basic System. Dec 13 13:28:01.869334 systemd[1620]: Reached target default.target - Main User Target. Dec 13 13:28:01.869422 systemd[1620]: Startup finished in 173ms. Dec 13 13:28:01.869446 systemd[1]: Started user@500.service - User Manager for UID 500. Dec 13 13:28:01.878737 systemd[1]: Started session-1.scope - Session 1 of User core. 
Dec 13 13:28:02.603434 systemd[1]: Started sshd@1-10.230.57.46:22-139.178.68.195:59520.service - OpenSSH per-connection server daemon (139.178.68.195:59520). Dec 13 13:28:03.499067 agetty[1596]: failed to open credentials directory Dec 13 13:28:03.499190 agetty[1597]: failed to open credentials directory Dec 13 13:28:03.511744 sshd[1631]: Accepted publickey for core from 139.178.68.195 port 59520 ssh2: RSA SHA256:gikLJyEmpnCHkoekB3AFhFPt08JJAv/T+84MF6KEB0A Dec 13 13:28:03.516199 sshd-session[1631]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 13:28:03.519147 login[1596]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Dec 13 13:28:03.521757 login[1597]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Dec 13 13:28:03.525601 systemd-logind[1495]: New session 3 of user core. Dec 13 13:28:03.534679 systemd[1]: Started session-3.scope - Session 3 of User core. Dec 13 13:28:03.540109 systemd-logind[1495]: New session 4 of user core. Dec 13 13:28:03.548654 systemd[1]: Started session-4.scope - Session 4 of User core. Dec 13 13:28:03.553035 systemd-logind[1495]: New session 2 of user core. Dec 13 13:28:03.559865 systemd[1]: Started session-2.scope - Session 2 of User core. Dec 13 13:28:04.133465 sshd[1637]: Connection closed by 139.178.68.195 port 59520 Dec 13 13:28:04.134304 sshd-session[1631]: pam_unix(sshd:session): session closed for user core Dec 13 13:28:04.137913 systemd[1]: sshd@1-10.230.57.46:22-139.178.68.195:59520.service: Deactivated successfully. Dec 13 13:28:04.140561 systemd[1]: session-3.scope: Deactivated successfully. Dec 13 13:28:04.142549 systemd-logind[1495]: Session 3 logged out. Waiting for processes to exit. Dec 13 13:28:04.144156 systemd-logind[1495]: Removed session 3. Dec 13 13:28:04.292829 systemd[1]: Started sshd@2-10.230.57.46:22-139.178.68.195:59526.service - OpenSSH per-connection server daemon (139.178.68.195:59526). 
Dec 13 13:28:04.438223 coreos-metadata[1486]: Dec 13 13:28:04.437 WARN failed to locate config-drive, using the metadata service API instead Dec 13 13:28:04.464461 coreos-metadata[1486]: Dec 13 13:28:04.464 INFO Fetching http://169.254.169.254/openstack/2012-08-10/meta_data.json: Attempt #1 Dec 13 13:28:04.470001 coreos-metadata[1486]: Dec 13 13:28:04.469 INFO Fetch failed with 404: resource not found Dec 13 13:28:04.470089 coreos-metadata[1486]: Dec 13 13:28:04.470 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1 Dec 13 13:28:04.471000 coreos-metadata[1486]: Dec 13 13:28:04.470 INFO Fetch successful Dec 13 13:28:04.471145 coreos-metadata[1486]: Dec 13 13:28:04.471 INFO Fetching http://169.254.169.254/latest/meta-data/instance-id: Attempt #1 Dec 13 13:28:04.483967 coreos-metadata[1486]: Dec 13 13:28:04.483 INFO Fetch successful Dec 13 13:28:04.484083 coreos-metadata[1486]: Dec 13 13:28:04.484 INFO Fetching http://169.254.169.254/latest/meta-data/instance-type: Attempt #1 Dec 13 13:28:04.502740 coreos-metadata[1486]: Dec 13 13:28:04.502 INFO Fetch successful Dec 13 13:28:04.502847 coreos-metadata[1486]: Dec 13 13:28:04.502 INFO Fetching http://169.254.169.254/latest/meta-data/local-ipv4: Attempt #1 Dec 13 13:28:04.520406 coreos-metadata[1486]: Dec 13 13:28:04.520 INFO Fetch successful Dec 13 13:28:04.520556 coreos-metadata[1486]: Dec 13 13:28:04.520 INFO Fetching http://169.254.169.254/latest/meta-data/public-ipv4: Attempt #1 Dec 13 13:28:04.544337 coreos-metadata[1486]: Dec 13 13:28:04.544 INFO Fetch successful Dec 13 13:28:04.573652 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Dec 13 13:28:04.574960 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. 
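Aside: the coreos-metadata run above shows the agent's fallback order — no config-drive is found, the OpenStack-native JSON endpoint 404s, and it then walks the EC2-compatible endpoints one key at a time. A sketch of just that endpoint order (paths copied from the log; the actual fetch is omitted since the link-local 169.254.169.254 API only answers from inside the instance):

```python
METADATA_BASE = "http://169.254.169.254"

# Attempt order observed in the log: OpenStack-native first (404 on this
# cloud), then the EC2-compatible per-key endpoints that all succeed.
endpoints = [
    "/openstack/2012-08-10/meta_data.json",
    "/latest/meta-data/hostname",
    "/latest/meta-data/instance-id",
    "/latest/meta-data/instance-type",
    "/latest/meta-data/local-ipv4",
    "/latest/meta-data/public-ipv4",
]

urls = [METADATA_BASE + path for path in endpoints]
print(urls[0])
```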
Dec 13 13:28:05.001024 coreos-metadata[1552]: Dec 13 13:28:05.000 WARN failed to locate config-drive, using the metadata service API instead Dec 13 13:28:05.023525 coreos-metadata[1552]: Dec 13 13:28:05.023 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys: Attempt #1 Dec 13 13:28:05.050939 coreos-metadata[1552]: Dec 13 13:28:05.050 INFO Fetch successful Dec 13 13:28:05.051093 coreos-metadata[1552]: Dec 13 13:28:05.051 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key: Attempt #1 Dec 13 13:28:05.086521 coreos-metadata[1552]: Dec 13 13:28:05.086 INFO Fetch successful Dec 13 13:28:05.088675 unknown[1552]: wrote ssh authorized keys file for user: core Dec 13 13:28:05.108423 update-ssh-keys[1675]: Updated "/home/core/.ssh/authorized_keys" Dec 13 13:28:05.108926 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Dec 13 13:28:05.111152 systemd[1]: Finished sshkeys.service. Dec 13 13:28:05.114544 systemd[1]: Reached target multi-user.target - Multi-User System. Dec 13 13:28:05.117479 systemd[1]: Startup finished in 1.489s (kernel) + 13.717s (initrd) + 11.811s (userspace) = 27.018s. Dec 13 13:28:05.185429 sshd[1664]: Accepted publickey for core from 139.178.68.195 port 59526 ssh2: RSA SHA256:gikLJyEmpnCHkoekB3AFhFPt08JJAv/T+84MF6KEB0A Dec 13 13:28:05.187745 sshd-session[1664]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 13:28:05.194463 systemd-logind[1495]: New session 5 of user core. Dec 13 13:28:05.202587 systemd[1]: Started session-5.scope - Session 5 of User core. Dec 13 13:28:05.807682 sshd[1679]: Connection closed by 139.178.68.195 port 59526 Dec 13 13:28:05.808639 sshd-session[1664]: pam_unix(sshd:session): session closed for user core Dec 13 13:28:05.813182 systemd[1]: sshd@2-10.230.57.46:22-139.178.68.195:59526.service: Deactivated successfully. Dec 13 13:28:05.815170 systemd[1]: session-5.scope: Deactivated successfully. 
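Aside: the "Startup finished" line above reports 1.489s (kernel) + 13.717s (initrd) + 11.811s (userspace) = 27.018s. Summing the printed parts gives 27.017s; the apparent 1 ms discrepancy is only rounding, since systemd totals the raw microsecond counters before rounding each figure for display:

```python
# Figures copied from the "Startup finished" log line above.
kernel, initrd, userspace = 1.489, 13.717, 11.811

total = kernel + initrd + userspace
print(f"sum of printed parts: {total:.3f}s (log total: 27.018s)")
# off by at most ~1 ms, because each printed part was independently
# rounded from a microsecond counter
```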
Dec 13 13:28:05.816065 systemd-logind[1495]: Session 5 logged out. Waiting for processes to exit. Dec 13 13:28:05.817460 systemd-logind[1495]: Removed session 5. Dec 13 13:28:10.033982 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Dec 13 13:28:10.041688 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 13:28:10.233069 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 13:28:10.241844 (kubelet)[1691]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 13:28:10.312234 kubelet[1691]: E1213 13:28:10.312029 1691 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 13:28:10.317162 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 13:28:10.317613 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 13:28:15.963101 systemd[1]: Started sshd@3-10.230.57.46:22-139.178.68.195:45498.service - OpenSSH per-connection server daemon (139.178.68.195:45498). Dec 13 13:28:16.866556 sshd[1700]: Accepted publickey for core from 139.178.68.195 port 45498 ssh2: RSA SHA256:gikLJyEmpnCHkoekB3AFhFPt08JJAv/T+84MF6KEB0A Dec 13 13:28:16.868557 sshd-session[1700]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 13:28:16.875645 systemd-logind[1495]: New session 6 of user core. Dec 13 13:28:16.882587 systemd[1]: Started session-6.scope - Session 6 of User core. 
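Aside: the kubelet failures here are expected on a node that has not yet been joined — /var/lib/kubelet/config.yaml is only written later (e.g. by kubeadm), so each start exits 1 and systemd reschedules it, with the restart counter climbing. The gap between the failure at 13:27:59 and the first scheduled restart at 13:28:10 is about 10 s, consistent with a RestartSec=10 setting in the unit (an assumption; the unit file itself is not shown in the log). A sketch of parsing the journald-style timestamps to confirm the cadence:

```python
from datetime import datetime

def parse_ts(stamp: str) -> datetime:
    # journald short-precise style "Dec 13 13:28:10.033982" -- the year is
    # omitted, so absolute dates default to 1900 but deltas are still valid
    return datetime.strptime(stamp, "%b %d %H:%M:%S.%f")

# Timestamps copied from the log above.
failed  = parse_ts("Dec 13 13:27:59.783291")  # kubelet.service: Failed with result 'exit-code'
restart = parse_ts("Dec 13 13:28:10.033982")  # Scheduled restart job, restart counter is at 1

delta = (restart - failed).total_seconds()
print(f"restart scheduled {delta:.1f}s after failure")
```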
Dec 13 13:28:17.484452 sshd[1702]: Connection closed by 139.178.68.195 port 45498 Dec 13 13:28:17.485314 sshd-session[1700]: pam_unix(sshd:session): session closed for user core Dec 13 13:28:17.489827 systemd[1]: sshd@3-10.230.57.46:22-139.178.68.195:45498.service: Deactivated successfully. Dec 13 13:28:17.491925 systemd[1]: session-6.scope: Deactivated successfully. Dec 13 13:28:17.492849 systemd-logind[1495]: Session 6 logged out. Waiting for processes to exit. Dec 13 13:28:17.494503 systemd-logind[1495]: Removed session 6. Dec 13 13:28:17.650739 systemd[1]: Started sshd@4-10.230.57.46:22-139.178.68.195:43196.service - OpenSSH per-connection server daemon (139.178.68.195:43196). Dec 13 13:28:18.537155 sshd[1707]: Accepted publickey for core from 139.178.68.195 port 43196 ssh2: RSA SHA256:gikLJyEmpnCHkoekB3AFhFPt08JJAv/T+84MF6KEB0A Dec 13 13:28:18.539511 sshd-session[1707]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 13:28:18.547230 systemd-logind[1495]: New session 7 of user core. Dec 13 13:28:18.556605 systemd[1]: Started session-7.scope - Session 7 of User core. Dec 13 13:28:19.150474 sshd[1709]: Connection closed by 139.178.68.195 port 43196 Dec 13 13:28:19.149424 sshd-session[1707]: pam_unix(sshd:session): session closed for user core Dec 13 13:28:19.154492 systemd[1]: sshd@4-10.230.57.46:22-139.178.68.195:43196.service: Deactivated successfully. Dec 13 13:28:19.157315 systemd[1]: session-7.scope: Deactivated successfully. Dec 13 13:28:19.159331 systemd-logind[1495]: Session 7 logged out. Waiting for processes to exit. Dec 13 13:28:19.161071 systemd-logind[1495]: Removed session 7. Dec 13 13:28:19.315875 systemd[1]: Started sshd@5-10.230.57.46:22-139.178.68.195:43210.service - OpenSSH per-connection server daemon (139.178.68.195:43210). 
Dec 13 13:28:20.207235 sshd[1714]: Accepted publickey for core from 139.178.68.195 port 43210 ssh2: RSA SHA256:gikLJyEmpnCHkoekB3AFhFPt08JJAv/T+84MF6KEB0A Dec 13 13:28:20.209087 sshd-session[1714]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 13:28:20.215700 systemd-logind[1495]: New session 8 of user core. Dec 13 13:28:20.224584 systemd[1]: Started session-8.scope - Session 8 of User core. Dec 13 13:28:20.567953 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Dec 13 13:28:20.581705 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 13:28:20.717824 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 13:28:20.732091 (kubelet)[1726]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Dec 13 13:28:20.812857 kubelet[1726]: E1213 13:28:20.812758 1726 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Dec 13 13:28:20.815205 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Dec 13 13:28:20.815479 systemd[1]: kubelet.service: Failed with result 'exit-code'. Dec 13 13:28:20.826466 sshd[1716]: Connection closed by 139.178.68.195 port 43210 Dec 13 13:28:20.827248 sshd-session[1714]: pam_unix(sshd:session): session closed for user core Dec 13 13:28:20.832291 systemd-logind[1495]: Session 8 logged out. Waiting for processes to exit. Dec 13 13:28:20.833476 systemd[1]: sshd@5-10.230.57.46:22-139.178.68.195:43210.service: Deactivated successfully. Dec 13 13:28:20.835967 systemd[1]: session-8.scope: Deactivated successfully. 
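Restart counter 1 fired at 13:28:10.03 and counter 2 at 13:28:20.57, which is consistent with a RestartSec of roughly 10 seconds (an assumption; the unit file itself is not shown in this log). The interval can be checked directly from the two "Scheduled restart job" timestamps:

```python
from datetime import datetime

# Timestamps of the two "Scheduled restart job" entries in the journal above.
t1 = datetime.strptime("13:28:10.033982", "%H:%M:%S.%f")
t2 = datetime.strptime("13:28:20.567953", "%H:%M:%S.%f")
interval = (t2 - t1).total_seconds()  # seconds between restart attempts
```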
Dec 13 13:28:20.837221 systemd-logind[1495]: Removed session 8. Dec 13 13:28:20.988705 systemd[1]: Started sshd@6-10.230.57.46:22-139.178.68.195:43214.service - OpenSSH per-connection server daemon (139.178.68.195:43214). Dec 13 13:28:21.879261 sshd[1738]: Accepted publickey for core from 139.178.68.195 port 43214 ssh2: RSA SHA256:gikLJyEmpnCHkoekB3AFhFPt08JJAv/T+84MF6KEB0A Dec 13 13:28:21.881229 sshd-session[1738]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 13:28:21.888144 systemd-logind[1495]: New session 9 of user core. Dec 13 13:28:21.899645 systemd[1]: Started session-9.scope - Session 9 of User core. Dec 13 13:28:22.368094 sudo[1741]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Dec 13 13:28:22.369200 sudo[1741]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 13 13:28:22.383472 sudo[1741]: pam_unix(sudo:session): session closed for user root Dec 13 13:28:22.526517 sshd[1740]: Connection closed by 139.178.68.195 port 43214 Dec 13 13:28:22.527594 sshd-session[1738]: pam_unix(sshd:session): session closed for user core Dec 13 13:28:22.532017 systemd[1]: sshd@6-10.230.57.46:22-139.178.68.195:43214.service: Deactivated successfully. Dec 13 13:28:22.534209 systemd[1]: session-9.scope: Deactivated successfully. Dec 13 13:28:22.536130 systemd-logind[1495]: Session 9 logged out. Waiting for processes to exit. Dec 13 13:28:22.537722 systemd-logind[1495]: Removed session 9. Dec 13 13:28:22.685782 systemd[1]: Started sshd@7-10.230.57.46:22-139.178.68.195:43216.service - OpenSSH per-connection server daemon (139.178.68.195:43216). Dec 13 13:28:23.572655 sshd[1746]: Accepted publickey for core from 139.178.68.195 port 43216 ssh2: RSA SHA256:gikLJyEmpnCHkoekB3AFhFPt08JJAv/T+84MF6KEB0A Dec 13 13:28:23.574975 sshd-session[1746]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 13:28:23.582783 systemd-logind[1495]: New session 10 of user core. 
Dec 13 13:28:23.593864 systemd[1]: Started session-10.scope - Session 10 of User core. Dec 13 13:28:24.052483 sudo[1750]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Dec 13 13:28:24.053017 sudo[1750]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 13 13:28:24.059585 sudo[1750]: pam_unix(sudo:session): session closed for user root Dec 13 13:28:24.068535 sudo[1749]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Dec 13 13:28:24.069004 sudo[1749]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 13 13:28:24.091838 systemd[1]: Starting audit-rules.service - Load Audit Rules... Dec 13 13:28:24.132166 augenrules[1772]: No rules Dec 13 13:28:24.134109 systemd[1]: audit-rules.service: Deactivated successfully. Dec 13 13:28:24.134462 systemd[1]: Finished audit-rules.service - Load Audit Rules. Dec 13 13:28:24.135965 sudo[1749]: pam_unix(sudo:session): session closed for user root Dec 13 13:28:24.279160 sshd[1748]: Connection closed by 139.178.68.195 port 43216 Dec 13 13:28:24.280190 sshd-session[1746]: pam_unix(sshd:session): session closed for user core Dec 13 13:28:24.285715 systemd[1]: sshd@7-10.230.57.46:22-139.178.68.195:43216.service: Deactivated successfully. Dec 13 13:28:24.288154 systemd[1]: session-10.scope: Deactivated successfully. Dec 13 13:28:24.289144 systemd-logind[1495]: Session 10 logged out. Waiting for processes to exit. Dec 13 13:28:24.290585 systemd-logind[1495]: Removed session 10. Dec 13 13:28:24.446824 systemd[1]: Started sshd@8-10.230.57.46:22-139.178.68.195:43228.service - OpenSSH per-connection server daemon (139.178.68.195:43228). 
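The session above removes the default audit rule files and restarts audit-rules, after which augenrules correctly reports "No rules". A small helper (name hypothetical) for extracting the COMMAND= field from sudo entries like the ones in this session:

```python
import re

# A sudo entry from the session above.
SUDO_LINE = ("core : PWD=/home/core ; USER=root ; "
             "COMMAND=/usr/sbin/systemctl restart audit-rules")

def sudo_command(line: str):
    """Pull the COMMAND= field out of a sudo journal entry."""
    match = re.search(r"COMMAND=(.+)$", line)
    return match.group(1) if match else None
```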
Dec 13 13:28:25.335782 sshd[1780]: Accepted publickey for core from 139.178.68.195 port 43228 ssh2: RSA SHA256:gikLJyEmpnCHkoekB3AFhFPt08JJAv/T+84MF6KEB0A Dec 13 13:28:25.337623 sshd-session[1780]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Dec 13 13:28:25.344884 systemd-logind[1495]: New session 11 of user core. Dec 13 13:28:25.354628 systemd[1]: Started session-11.scope - Session 11 of User core. Dec 13 13:28:25.815169 sudo[1783]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Dec 13 13:28:25.815704 sudo[1783]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Dec 13 13:28:26.657681 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 13:28:26.672745 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 13:28:26.702456 systemd[1]: Reloading requested from client PID 1821 ('systemctl') (unit session-11.scope)... Dec 13 13:28:26.702707 systemd[1]: Reloading... Dec 13 13:28:26.850454 zram_generator::config[1864]: No configuration found. Dec 13 13:28:27.025311 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Dec 13 13:28:27.138756 systemd[1]: Reloading finished in 435 ms. Dec 13 13:28:27.218237 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 13:28:27.224453 systemd[1]: kubelet.service: Deactivated successfully. Dec 13 13:28:27.224975 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Dec 13 13:28:27.232809 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Dec 13 13:28:27.366260 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
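The reload warning above flags a ListenStream= path under the legacy /var/run/ directory. A drop-in that would silence it, following the substitution the message itself suggests (drop-in path hypothetical; the empty ListenStream= line clears the inherited value before setting the new one):

```ini
# /etc/systemd/system/docker.socket.d/10-run-path.conf (hypothetical drop-in)
[Socket]
ListenStream=
ListenStream=/run/docker.sock
```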
Dec 13 13:28:27.377957 (kubelet)[1929]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Dec 13 13:28:27.432341 kubelet[1929]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 13:28:27.432341 kubelet[1929]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Dec 13 13:28:27.432341 kubelet[1929]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Dec 13 13:28:27.432942 kubelet[1929]: I1213 13:28:27.432433 1929 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Dec 13 13:28:27.770522 kubelet[1929]: I1213 13:28:27.769434 1929 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Dec 13 13:28:27.770522 kubelet[1929]: I1213 13:28:27.769472 1929 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Dec 13 13:28:27.770522 kubelet[1929]: I1213 13:28:27.769828 1929 server.go:927] "Client rotation is on, will bootstrap in background" Dec 13 13:28:27.787017 kubelet[1929]: I1213 13:28:27.786249 1929 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Dec 13 13:28:27.808926 kubelet[1929]: I1213 13:28:27.808447 1929 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Dec 13 13:28:27.818350 kubelet[1929]: I1213 13:28:27.817835 1929 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Dec 13 13:28:27.818350 kubelet[1929]: I1213 13:28:27.817923 1929 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"10.230.57.46","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Dec 13 13:28:27.825109 kubelet[1929]: I1213 13:28:27.824694 1929 topology_manager.go:138] "Creating topology manager with none policy" Dec 13 
13:28:27.825109 kubelet[1929]: I1213 13:28:27.824728 1929 container_manager_linux.go:301] "Creating device plugin manager" Dec 13 13:28:27.825109 kubelet[1929]: I1213 13:28:27.824977 1929 state_mem.go:36] "Initialized new in-memory state store" Dec 13 13:28:27.826626 kubelet[1929]: I1213 13:28:27.826595 1929 kubelet.go:400] "Attempting to sync node with API server" Dec 13 13:28:27.826696 kubelet[1929]: I1213 13:28:27.826641 1929 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Dec 13 13:28:27.826696 kubelet[1929]: I1213 13:28:27.826692 1929 kubelet.go:312] "Adding apiserver pod source" Dec 13 13:28:27.826835 kubelet[1929]: I1213 13:28:27.826752 1929 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Dec 13 13:28:27.828022 kubelet[1929]: E1213 13:28:27.827983 1929 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:28:27.828759 kubelet[1929]: E1213 13:28:27.828191 1929 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:28:27.831236 kubelet[1929]: I1213 13:28:27.831197 1929 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Dec 13 13:28:27.832919 kubelet[1929]: I1213 13:28:27.832854 1929 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Dec 13 13:28:27.833008 kubelet[1929]: W1213 13:28:27.832988 1929 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
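The nodeConfig dump above carries the stock hard-eviction thresholds. Transcribed from the HardEvictionThresholds array, with the logged fractional percentages rendered in the usual kubelet notation:

```python
# Hard eviction thresholds as logged by container_manager_linux.go above.
hard_eviction = {
    "memory.available": "100Mi",   # Quantity 100Mi
    "nodefs.available": "10%",     # Percentage 0.1
    "nodefs.inodesFree": "5%",     # Percentage 0.05
    "imagefs.available": "15%",    # Percentage 0.15
    "imagefs.inodesFree": "5%",    # Percentage 0.05
}
```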
Dec 13 13:28:27.834096 kubelet[1929]: I1213 13:28:27.833975 1929 server.go:1264] "Started kubelet" Dec 13 13:28:27.838856 kubelet[1929]: I1213 13:28:27.838462 1929 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Dec 13 13:28:27.847163 kubelet[1929]: E1213 13:28:27.845753 1929 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{10.230.57.46.1810bf9613afe281 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:10.230.57.46,UID:10.230.57.46,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:10.230.57.46,},FirstTimestamp:2024-12-13 13:28:27.833934465 +0000 UTC m=+0.451613668,LastTimestamp:2024-12-13 13:28:27.833934465 +0000 UTC m=+0.451613668,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:10.230.57.46,}" Dec 13 13:28:27.848182 kubelet[1929]: I1213 13:28:27.848143 1929 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Dec 13 13:28:27.849796 kubelet[1929]: I1213 13:28:27.849772 1929 server.go:455] "Adding debug handlers to kubelet server" Dec 13 13:28:27.851422 kubelet[1929]: I1213 13:28:27.851338 1929 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Dec 13 13:28:27.851791 kubelet[1929]: I1213 13:28:27.851768 1929 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Dec 13 13:28:27.855204 kubelet[1929]: I1213 13:28:27.855182 1929 volume_manager.go:291] "Starting Kubelet Volume Manager" Dec 13 13:28:27.855811 kubelet[1929]: I1213 13:28:27.855787 1929 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Dec 13 
13:28:27.856037 kubelet[1929]: I1213 13:28:27.856017 1929 reconciler.go:26] "Reconciler: start to sync state" Dec 13 13:28:27.860347 kubelet[1929]: I1213 13:28:27.860322 1929 factory.go:221] Registration of the systemd container factory successfully Dec 13 13:28:27.860581 kubelet[1929]: I1213 13:28:27.860552 1929 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Dec 13 13:28:27.868173 kubelet[1929]: I1213 13:28:27.867983 1929 factory.go:221] Registration of the containerd container factory successfully Dec 13 13:28:27.868825 kubelet[1929]: E1213 13:28:27.868797 1929 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Dec 13 13:28:27.876585 kubelet[1929]: E1213 13:28:27.876383 1929 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"10.230.57.46\" not found" node="10.230.57.46" Dec 13 13:28:27.900837 kubelet[1929]: I1213 13:28:27.900803 1929 cpu_manager.go:214] "Starting CPU manager" policy="none" Dec 13 13:28:27.900837 kubelet[1929]: I1213 13:28:27.900829 1929 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Dec 13 13:28:27.901032 kubelet[1929]: I1213 13:28:27.900859 1929 state_mem.go:36] "Initialized new in-memory state store" Dec 13 13:28:27.904657 kubelet[1929]: I1213 13:28:27.904526 1929 policy_none.go:49] "None policy: Start" Dec 13 13:28:27.905258 kubelet[1929]: I1213 13:28:27.905194 1929 memory_manager.go:170] "Starting memorymanager" policy="None" Dec 13 13:28:27.905258 kubelet[1929]: I1213 13:28:27.905224 1929 state_mem.go:35] "Initializing new in-memory state store" Dec 13 13:28:27.914814 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. 
Dec 13 13:28:27.931889 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Dec 13 13:28:27.938883 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Dec 13 13:28:27.948631 kubelet[1929]: I1213 13:28:27.948597 1929 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Dec 13 13:28:27.948944 kubelet[1929]: I1213 13:28:27.948867 1929 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Dec 13 13:28:27.949085 kubelet[1929]: I1213 13:28:27.949062 1929 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Dec 13 13:28:27.953965 kubelet[1929]: I1213 13:28:27.953842 1929 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Dec 13 13:28:27.955323 kubelet[1929]: E1213 13:28:27.954703 1929 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"10.230.57.46\" not found" Dec 13 13:28:27.956416 kubelet[1929]: I1213 13:28:27.956392 1929 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Dec 13 13:28:27.956526 kubelet[1929]: I1213 13:28:27.956432 1929 status_manager.go:217] "Starting to sync pod status with apiserver" Dec 13 13:28:27.956526 kubelet[1929]: I1213 13:28:27.956466 1929 kubelet.go:2337] "Starting kubelet main sync loop" Dec 13 13:28:27.956759 kubelet[1929]: E1213 13:28:27.956532 1929 kubelet.go:2361] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful" Dec 13 13:28:27.956923 kubelet[1929]: I1213 13:28:27.956872 1929 kubelet_node_status.go:73] "Attempting to register node" node="10.230.57.46" Dec 13 13:28:27.976722 kubelet[1929]: I1213 13:28:27.976635 1929 kubelet_node_status.go:76] "Successfully registered node" node="10.230.57.46" Dec 13 13:28:28.087972 kubelet[1929]: I1213 13:28:28.086758 1929 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24" Dec 13 13:28:28.089628 containerd[1511]: time="2024-12-13T13:28:28.089410352Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Dec 13 13:28:28.090228 kubelet[1929]: I1213 13:28:28.089941 1929 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24" Dec 13 13:28:28.393788 sudo[1783]: pam_unix(sudo:session): session closed for user root Dec 13 13:28:28.538476 sshd[1782]: Connection closed by 139.178.68.195 port 43228 Dec 13 13:28:28.539522 sshd-session[1780]: pam_unix(sshd:session): session closed for user core Dec 13 13:28:28.544652 systemd[1]: sshd@8-10.230.57.46:22-139.178.68.195:43228.service: Deactivated successfully. Dec 13 13:28:28.548077 systemd[1]: session-11.scope: Deactivated successfully. Dec 13 13:28:28.549357 systemd-logind[1495]: Session 11 logged out. Waiting for processes to exit. Dec 13 13:28:28.551586 systemd-logind[1495]: Removed session 11. 
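The runtime config update above assigns PodCIDR 192.168.1.0/24 to this node. With Python's ipaddress module the range can be inspected directly:

```python
import ipaddress

# Pod CIDR pushed to the runtime in the kuberuntime_manager line above.
pod_cidr = ipaddress.ip_network("192.168.1.0/24")
first_pod_ip = next(pod_cidr.hosts())  # first usable address in the range
```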
Dec 13 13:28:28.773667 kubelet[1929]: I1213 13:28:28.772711 1929 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Dec 13 13:28:28.773667 kubelet[1929]: W1213 13:28:28.773026 1929 reflector.go:470] k8s.io/client-go/informers/factory.go:160: watch of *v1.CSIDriver ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Dec 13 13:28:28.773667 kubelet[1929]: W1213 13:28:28.773432 1929 reflector.go:470] k8s.io/client-go/informers/factory.go:160: watch of *v1.Service ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Dec 13 13:28:28.773667 kubelet[1929]: W1213 13:28:28.773484 1929 reflector.go:470] k8s.io/client-go/informers/factory.go:160: watch of *v1.RuntimeClass ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Dec 13 13:28:28.829057 kubelet[1929]: I1213 13:28:28.828993 1929 apiserver.go:52] "Watching apiserver" Dec 13 13:28:28.829577 kubelet[1929]: E1213 13:28:28.829035 1929 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:28:28.835215 kubelet[1929]: I1213 13:28:28.835173 1929 topology_manager.go:215] "Topology Admit Handler" podUID="b6e02dbc-fc1e-43ce-a7ab-fd96999c6203" podNamespace="kube-system" podName="kube-proxy-98lkl" Dec 13 13:28:28.835363 kubelet[1929]: I1213 13:28:28.835337 1929 topology_manager.go:215] "Topology Admit Handler" podUID="53910cbd-f61b-4662-94e0-c6c2de6020ca" podNamespace="calico-system" podName="calico-node-4jn6f" Dec 13 13:28:28.836850 kubelet[1929]: I1213 13:28:28.835500 1929 topology_manager.go:215] "Topology Admit Handler" podUID="d9a9389a-36c6-4bf6-9c2f-53cc83c3820e" podNamespace="calico-system" 
podName="csi-node-driver-dhlrm" Dec 13 13:28:28.836850 kubelet[1929]: E1213 13:28:28.835695 1929 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-dhlrm" podUID="d9a9389a-36c6-4bf6-9c2f-53cc83c3820e" Dec 13 13:28:28.845862 systemd[1]: Created slice kubepods-besteffort-pod53910cbd_f61b_4662_94e0_c6c2de6020ca.slice - libcontainer container kubepods-besteffort-pod53910cbd_f61b_4662_94e0_c6c2de6020ca.slice. Dec 13 13:28:28.857276 kubelet[1929]: I1213 13:28:28.856765 1929 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Dec 13 13:28:28.861973 kubelet[1929]: I1213 13:28:28.861105 1929 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4dhxs\" (UniqueName: \"kubernetes.io/projected/53910cbd-f61b-4662-94e0-c6c2de6020ca-kube-api-access-4dhxs\") pod \"calico-node-4jn6f\" (UID: \"53910cbd-f61b-4662-94e0-c6c2de6020ca\") " pod="calico-system/calico-node-4jn6f" Dec 13 13:28:28.861973 kubelet[1929]: I1213 13:28:28.861161 1929 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/d9a9389a-36c6-4bf6-9c2f-53cc83c3820e-varrun\") pod \"csi-node-driver-dhlrm\" (UID: \"d9a9389a-36c6-4bf6-9c2f-53cc83c3820e\") " pod="calico-system/csi-node-driver-dhlrm" Dec 13 13:28:28.861973 kubelet[1929]: I1213 13:28:28.861190 1929 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b6e02dbc-fc1e-43ce-a7ab-fd96999c6203-xtables-lock\") pod \"kube-proxy-98lkl\" (UID: \"b6e02dbc-fc1e-43ce-a7ab-fd96999c6203\") " pod="kube-system/kube-proxy-98lkl" Dec 13 13:28:28.861973 kubelet[1929]: I1213 13:28:28.861215 
1929 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/53910cbd-f61b-4662-94e0-c6c2de6020ca-lib-modules\") pod \"calico-node-4jn6f\" (UID: \"53910cbd-f61b-4662-94e0-c6c2de6020ca\") " pod="calico-system/calico-node-4jn6f" Dec 13 13:28:28.861973 kubelet[1929]: I1213 13:28:28.861254 1929 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/53910cbd-f61b-4662-94e0-c6c2de6020ca-xtables-lock\") pod \"calico-node-4jn6f\" (UID: \"53910cbd-f61b-4662-94e0-c6c2de6020ca\") " pod="calico-system/calico-node-4jn6f" Dec 13 13:28:28.861303 systemd[1]: Created slice kubepods-besteffort-podb6e02dbc_fc1e_43ce_a7ab_fd96999c6203.slice - libcontainer container kubepods-besteffort-podb6e02dbc_fc1e_43ce_a7ab_fd96999c6203.slice. Dec 13 13:28:28.862454 kubelet[1929]: I1213 13:28:28.861284 1929 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/53910cbd-f61b-4662-94e0-c6c2de6020ca-node-certs\") pod \"calico-node-4jn6f\" (UID: \"53910cbd-f61b-4662-94e0-c6c2de6020ca\") " pod="calico-system/calico-node-4jn6f" Dec 13 13:28:28.862454 kubelet[1929]: I1213 13:28:28.861324 1929 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/53910cbd-f61b-4662-94e0-c6c2de6020ca-var-run-calico\") pod \"calico-node-4jn6f\" (UID: \"53910cbd-f61b-4662-94e0-c6c2de6020ca\") " pod="calico-system/calico-node-4jn6f" Dec 13 13:28:28.862454 kubelet[1929]: I1213 13:28:28.861371 1929 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/53910cbd-f61b-4662-94e0-c6c2de6020ca-cni-log-dir\") pod \"calico-node-4jn6f\" (UID: 
\"53910cbd-f61b-4662-94e0-c6c2de6020ca\") " pod="calico-system/calico-node-4jn6f" Dec 13 13:28:28.862454 kubelet[1929]: I1213 13:28:28.861399 1929 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d9a9389a-36c6-4bf6-9c2f-53cc83c3820e-kubelet-dir\") pod \"csi-node-driver-dhlrm\" (UID: \"d9a9389a-36c6-4bf6-9c2f-53cc83c3820e\") " pod="calico-system/csi-node-driver-dhlrm" Dec 13 13:28:28.862454 kubelet[1929]: I1213 13:28:28.861463 1929 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b6e02dbc-fc1e-43ce-a7ab-fd96999c6203-lib-modules\") pod \"kube-proxy-98lkl\" (UID: \"b6e02dbc-fc1e-43ce-a7ab-fd96999c6203\") " pod="kube-system/kube-proxy-98lkl" Dec 13 13:28:28.862691 kubelet[1929]: I1213 13:28:28.861517 1929 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/53910cbd-f61b-4662-94e0-c6c2de6020ca-tigera-ca-bundle\") pod \"calico-node-4jn6f\" (UID: \"53910cbd-f61b-4662-94e0-c6c2de6020ca\") " pod="calico-system/calico-node-4jn6f" Dec 13 13:28:28.862691 kubelet[1929]: I1213 13:28:28.861550 1929 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/53910cbd-f61b-4662-94e0-c6c2de6020ca-cni-net-dir\") pod \"calico-node-4jn6f\" (UID: \"53910cbd-f61b-4662-94e0-c6c2de6020ca\") " pod="calico-system/calico-node-4jn6f" Dec 13 13:28:28.862691 kubelet[1929]: I1213 13:28:28.861574 1929 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/b6e02dbc-fc1e-43ce-a7ab-fd96999c6203-kube-proxy\") pod \"kube-proxy-98lkl\" (UID: \"b6e02dbc-fc1e-43ce-a7ab-fd96999c6203\") " pod="kube-system/kube-proxy-98lkl" Dec 
13 13:28:28.862691 kubelet[1929]: I1213 13:28:28.861608 1929 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/53910cbd-f61b-4662-94e0-c6c2de6020ca-policysync\") pod \"calico-node-4jn6f\" (UID: \"53910cbd-f61b-4662-94e0-c6c2de6020ca\") " pod="calico-system/calico-node-4jn6f" Dec 13 13:28:28.862691 kubelet[1929]: I1213 13:28:28.861632 1929 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/53910cbd-f61b-4662-94e0-c6c2de6020ca-var-lib-calico\") pod \"calico-node-4jn6f\" (UID: \"53910cbd-f61b-4662-94e0-c6c2de6020ca\") " pod="calico-system/calico-node-4jn6f" Dec 13 13:28:28.862929 kubelet[1929]: I1213 13:28:28.861666 1929 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/53910cbd-f61b-4662-94e0-c6c2de6020ca-cni-bin-dir\") pod \"calico-node-4jn6f\" (UID: \"53910cbd-f61b-4662-94e0-c6c2de6020ca\") " pod="calico-system/calico-node-4jn6f" Dec 13 13:28:28.862929 kubelet[1929]: I1213 13:28:28.861691 1929 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/d9a9389a-36c6-4bf6-9c2f-53cc83c3820e-registration-dir\") pod \"csi-node-driver-dhlrm\" (UID: \"d9a9389a-36c6-4bf6-9c2f-53cc83c3820e\") " pod="calico-system/csi-node-driver-dhlrm" Dec 13 13:28:28.862929 kubelet[1929]: I1213 13:28:28.861743 1929 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-flc7l\" (UniqueName: \"kubernetes.io/projected/b6e02dbc-fc1e-43ce-a7ab-fd96999c6203-kube-api-access-flc7l\") pod \"kube-proxy-98lkl\" (UID: \"b6e02dbc-fc1e-43ce-a7ab-fd96999c6203\") " pod="kube-system/kube-proxy-98lkl" Dec 13 13:28:28.862929 kubelet[1929]: I1213 13:28:28.861771 
1929 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/53910cbd-f61b-4662-94e0-c6c2de6020ca-flexvol-driver-host\") pod \"calico-node-4jn6f\" (UID: \"53910cbd-f61b-4662-94e0-c6c2de6020ca\") " pod="calico-system/calico-node-4jn6f"
Dec 13 13:28:28.862929 kubelet[1929]: I1213 13:28:28.861851 1929 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/d9a9389a-36c6-4bf6-9c2f-53cc83c3820e-socket-dir\") pod \"csi-node-driver-dhlrm\" (UID: \"d9a9389a-36c6-4bf6-9c2f-53cc83c3820e\") " pod="calico-system/csi-node-driver-dhlrm"
Dec 13 13:28:28.863133 kubelet[1929]: I1213 13:28:28.861891 1929 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sfgrn\" (UniqueName: \"kubernetes.io/projected/d9a9389a-36c6-4bf6-9c2f-53cc83c3820e-kube-api-access-sfgrn\") pod \"csi-node-driver-dhlrm\" (UID: \"d9a9389a-36c6-4bf6-9c2f-53cc83c3820e\") " pod="calico-system/csi-node-driver-dhlrm"
Dec 13 13:28:28.875812 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Dec 13 13:28:28.967716 kubelet[1929]: E1213 13:28:28.967658 1929 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 13:28:28.968147 kubelet[1929]: W1213 13:28:28.967931 1929 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 13:28:28.968147 kubelet[1929]: E1213 13:28:28.967970 1929 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 13:28:28.968551 kubelet[1929]: E1213 13:28:28.968512 1929 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 13:28:28.968803 kubelet[1929]: W1213 13:28:28.968726 1929 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 13:28:28.968803 kubelet[1929]: E1213 13:28:28.968753 1929 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 13:28:28.969259 kubelet[1929]: E1213 13:28:28.969239 1929 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 13:28:28.969521 kubelet[1929]: W1213 13:28:28.969423 1929 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 13:28:28.970034 kubelet[1929]: E1213 13:28:28.969913 1929 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 13:28:28.970034 kubelet[1929]: W1213 13:28:28.969930 1929 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 13:28:28.970386 kubelet[1929]: E1213 13:28:28.970351 1929 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 13:28:28.970555 kubelet[1929]: W1213 13:28:28.970483 1929 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 13:28:28.970555 kubelet[1929]: E1213 13:28:28.970509 1929 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 13:28:28.971122 kubelet[1929]: E1213 13:28:28.970971 1929 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 13:28:28.971122 kubelet[1929]: W1213 13:28:28.970989 1929 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 13:28:28.971122 kubelet[1929]: E1213 13:28:28.971015 1929 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 13:28:28.971611 kubelet[1929]: E1213 13:28:28.971491 1929 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 13:28:28.971611 kubelet[1929]: W1213 13:28:28.971509 1929 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 13:28:28.971611 kubelet[1929]: E1213 13:28:28.971547 1929 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 13:28:28.973086 kubelet[1929]: E1213 13:28:28.972086 1929 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 13:28:28.973086 kubelet[1929]: W1213 13:28:28.972103 1929 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 13:28:28.973086 kubelet[1929]: E1213 13:28:28.972129 1929 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 13:28:28.973569 kubelet[1929]: E1213 13:28:28.973533 1929 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 13:28:28.973719 kubelet[1929]: W1213 13:28:28.973696 1929 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 13:28:28.973892 kubelet[1929]: E1213 13:28:28.973839 1929 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 13:28:28.975046 kubelet[1929]: E1213 13:28:28.975005 1929 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 13:28:28.975978 kubelet[1929]: E1213 13:28:28.975948 1929 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 13:28:28.976798 kubelet[1929]: W1213 13:28:28.976758 1929 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 13:28:28.977168 kubelet[1929]: E1213 13:28:28.977144 1929 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 13:28:28.978452 kubelet[1929]: E1213 13:28:28.975880 1929 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 13:28:28.978906 kubelet[1929]: E1213 13:28:28.978847 1929 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 13:28:28.979147 kubelet[1929]: W1213 13:28:28.979012 1929 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 13:28:28.979147 kubelet[1929]: E1213 13:28:28.979038 1929 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 13:28:28.980364 kubelet[1929]: E1213 13:28:28.980219 1929 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 13:28:28.980364 kubelet[1929]: W1213 13:28:28.980262 1929 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 13:28:28.980364 kubelet[1929]: E1213 13:28:28.980293 1929 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 13:28:28.981083 kubelet[1929]: E1213 13:28:28.980884 1929 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 13:28:28.981083 kubelet[1929]: W1213 13:28:28.980934 1929 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 13:28:28.981083 kubelet[1929]: E1213 13:28:28.980952 1929 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 13:28:28.982194 kubelet[1929]: E1213 13:28:28.981972 1929 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 13:28:28.982194 kubelet[1929]: W1213 13:28:28.981991 1929 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 13:28:28.982194 kubelet[1929]: E1213 13:28:28.982028 1929 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 13:28:28.993748 kubelet[1929]: E1213 13:28:28.993497 1929 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 13:28:28.993748 kubelet[1929]: W1213 13:28:28.993521 1929 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 13:28:28.993748 kubelet[1929]: E1213 13:28:28.993544 1929 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 13:28:28.996074 kubelet[1929]: E1213 13:28:28.995958 1929 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 13:28:28.996074 kubelet[1929]: W1213 13:28:28.995979 1929 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 13:28:28.996074 kubelet[1929]: E1213 13:28:28.995996 1929 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 13:28:28.997902 kubelet[1929]: E1213 13:28:28.997882 1929 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 13:28:28.998080 kubelet[1929]: W1213 13:28:28.997977 1929 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 13:28:28.998080 kubelet[1929]: E1213 13:28:28.997998 1929 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 13:28:29.001628 kubelet[1929]: E1213 13:28:29.001542 1929 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 13:28:29.001628 kubelet[1929]: W1213 13:28:29.001561 1929 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 13:28:29.001628 kubelet[1929]: E1213 13:28:29.001577 1929 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 13:28:29.176299 containerd[1511]: time="2024-12-13T13:28:29.158906879Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-4jn6f,Uid:53910cbd-f61b-4662-94e0-c6c2de6020ca,Namespace:calico-system,Attempt:0,}"
Dec 13 13:28:29.177426 containerd[1511]: time="2024-12-13T13:28:29.176693440Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-98lkl,Uid:b6e02dbc-fc1e-43ce-a7ab-fd96999c6203,Namespace:kube-system,Attempt:0,}"
Dec 13 13:28:29.829691 kubelet[1929]: E1213 13:28:29.829569 1929 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 13:28:30.093790 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1850971014.mount: Deactivated successfully.
Dec 13 13:28:30.105328 containerd[1511]: time="2024-12-13T13:28:30.105237446Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Dec 13 13:28:30.106876 containerd[1511]: time="2024-12-13T13:28:30.106761585Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Dec 13 13:28:30.107989 containerd[1511]: time="2024-12-13T13:28:30.107943322Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312064"
Dec 13 13:28:30.108083 containerd[1511]: time="2024-12-13T13:28:30.107992483Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Dec 13 13:28:30.108875 containerd[1511]: time="2024-12-13T13:28:30.108618304Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Dec 13 13:28:30.112941 containerd[1511]: time="2024-12-13T13:28:30.112865818Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Dec 13 13:28:30.114975 containerd[1511]: time="2024-12-13T13:28:30.114118770Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 937.33312ms"
Dec 13 13:28:30.117349 containerd[1511]: time="2024-12-13T13:28:30.117070036Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 957.964631ms"
Dec 13 13:28:30.302590 containerd[1511]: time="2024-12-13T13:28:30.302097642Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 13:28:30.302590 containerd[1511]: time="2024-12-13T13:28:30.302225986Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 13:28:30.302590 containerd[1511]: time="2024-12-13T13:28:30.302260805Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 13:28:30.302590 containerd[1511]: time="2024-12-13T13:28:30.302416204Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 13:28:30.308740 containerd[1511]: time="2024-12-13T13:28:30.300791873Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Dec 13 13:28:30.310491 containerd[1511]: time="2024-12-13T13:28:30.310273116Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Dec 13 13:28:30.310763 containerd[1511]: time="2024-12-13T13:28:30.310362472Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 13:28:30.311163 containerd[1511]: time="2024-12-13T13:28:30.311031716Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Dec 13 13:28:30.421655 systemd[1]: Started cri-containerd-a474a5edf538b4f71af4c8ad981e671c64ffba0a6256fda12229048f3565a744.scope - libcontainer container a474a5edf538b4f71af4c8ad981e671c64ffba0a6256fda12229048f3565a744.
Dec 13 13:28:30.424495 systemd[1]: Started cri-containerd-a4cac69886b00b2c3361eef0738ecac4d8e67ebde29f85797559d6ecc147095b.scope - libcontainer container a4cac69886b00b2c3361eef0738ecac4d8e67ebde29f85797559d6ecc147095b.
Dec 13 13:28:30.468782 containerd[1511]: time="2024-12-13T13:28:30.468700622Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-98lkl,Uid:b6e02dbc-fc1e-43ce-a7ab-fd96999c6203,Namespace:kube-system,Attempt:0,} returns sandbox id \"a474a5edf538b4f71af4c8ad981e671c64ffba0a6256fda12229048f3565a744\""
Dec 13 13:28:30.472730 containerd[1511]: time="2024-12-13T13:28:30.472699933Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.8\""
Dec 13 13:28:30.475415 containerd[1511]: time="2024-12-13T13:28:30.475267380Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-4jn6f,Uid:53910cbd-f61b-4662-94e0-c6c2de6020ca,Namespace:calico-system,Attempt:0,} returns sandbox id \"a4cac69886b00b2c3361eef0738ecac4d8e67ebde29f85797559d6ecc147095b\""
Dec 13 13:28:30.830439 kubelet[1929]: E1213 13:28:30.830235 1929 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 13:28:30.957615 kubelet[1929]: E1213 13:28:30.957360 1929 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-dhlrm" podUID="d9a9389a-36c6-4bf6-9c2f-53cc83c3820e"
Dec 13 13:28:31.831884 kubelet[1929]: E1213 13:28:31.831779 1929 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 13:28:32.030565 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2035930289.mount: Deactivated successfully.
Dec 13 13:28:32.683521 containerd[1511]: time="2024-12-13T13:28:32.683449000Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 13:28:32.685049 containerd[1511]: time="2024-12-13T13:28:32.684984326Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.8: active requests=0, bytes read=29057478"
Dec 13 13:28:32.685679 containerd[1511]: time="2024-12-13T13:28:32.685190097Z" level=info msg="ImageCreate event name:\"sha256:ce61fda67eb41cf09d2b984e7979e289b5042e3983ddfc67be678425632cc0d2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 13:28:32.688214 containerd[1511]: time="2024-12-13T13:28:32.688155794Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:f6d6be9417e22af78905000ac4fd134896bacd2188ea63c7cac8edd7a5d7e9b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Dec 13 13:28:32.689647 containerd[1511]: time="2024-12-13T13:28:32.689439960Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.8\" with image id \"sha256:ce61fda67eb41cf09d2b984e7979e289b5042e3983ddfc67be678425632cc0d2\", repo tag \"registry.k8s.io/kube-proxy:v1.30.8\", repo digest \"registry.k8s.io/kube-proxy@sha256:f6d6be9417e22af78905000ac4fd134896bacd2188ea63c7cac8edd7a5d7e9b5\", size \"29056489\" in 2.216527418s"
Dec 13 13:28:32.689647 containerd[1511]: time="2024-12-13T13:28:32.689481273Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.8\" returns image reference \"sha256:ce61fda67eb41cf09d2b984e7979e289b5042e3983ddfc67be678425632cc0d2\""
Dec 13 13:28:32.692432 containerd[1511]: time="2024-12-13T13:28:32.692402473Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\""
Dec 13 13:28:32.694045 containerd[1511]: time="2024-12-13T13:28:32.693999553Z" level=info msg="CreateContainer within sandbox \"a474a5edf538b4f71af4c8ad981e671c64ffba0a6256fda12229048f3565a744\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Dec 13 13:28:32.715084 containerd[1511]: time="2024-12-13T13:28:32.715042373Z" level=info msg="CreateContainer within sandbox \"a474a5edf538b4f71af4c8ad981e671c64ffba0a6256fda12229048f3565a744\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"3b6cb876e9f5f3d4ffef84fcc624492076d6d4a36359cc417dda7dd76d556c88\""
Dec 13 13:28:32.716157 containerd[1511]: time="2024-12-13T13:28:32.716128296Z" level=info msg="StartContainer for \"3b6cb876e9f5f3d4ffef84fcc624492076d6d4a36359cc417dda7dd76d556c88\""
Dec 13 13:28:32.769664 systemd[1]: Started cri-containerd-3b6cb876e9f5f3d4ffef84fcc624492076d6d4a36359cc417dda7dd76d556c88.scope - libcontainer container 3b6cb876e9f5f3d4ffef84fcc624492076d6d4a36359cc417dda7dd76d556c88.
Dec 13 13:28:32.827307 containerd[1511]: time="2024-12-13T13:28:32.826224155Z" level=info msg="StartContainer for \"3b6cb876e9f5f3d4ffef84fcc624492076d6d4a36359cc417dda7dd76d556c88\" returns successfully"
Dec 13 13:28:32.832600 kubelet[1929]: E1213 13:28:32.832541 1929 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 13:28:32.957479 kubelet[1929]: E1213 13:28:32.957253 1929 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-dhlrm" podUID="d9a9389a-36c6-4bf6-9c2f-53cc83c3820e"
Dec 13 13:28:32.990453 kubelet[1929]: E1213 13:28:32.990317 1929 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 13:28:32.990453 kubelet[1929]: W1213 13:28:32.990356 1929 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 13:28:32.990453 kubelet[1929]: E1213 13:28:32.990446 1929 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 13:28:32.990879 kubelet[1929]: E1213 13:28:32.990827 1929 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 13:28:32.990879 kubelet[1929]: W1213 13:28:32.990878 1929 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 13:28:32.991013 kubelet[1929]: E1213 13:28:32.990896 1929 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 13:28:32.991352 kubelet[1929]: E1213 13:28:32.991304 1929 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 13:28:32.991352 kubelet[1929]: W1213 13:28:32.991327 1929 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 13:28:32.991529 kubelet[1929]: E1213 13:28:32.991384 1929 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 13:28:32.991797 kubelet[1929]: E1213 13:28:32.991764 1929 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 13:28:32.991797 kubelet[1929]: W1213 13:28:32.991787 1929 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 13:28:32.991945 kubelet[1929]: E1213 13:28:32.991802 1929 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 13:28:32.992159 kubelet[1929]: E1213 13:28:32.992136 1929 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 13:28:32.992240 kubelet[1929]: W1213 13:28:32.992174 1929 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 13:28:32.992240 kubelet[1929]: E1213 13:28:32.992192 1929 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 13:28:32.992604 kubelet[1929]: E1213 13:28:32.992582 1929 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 13:28:32.992604 kubelet[1929]: W1213 13:28:32.992604 1929 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 13:28:32.992748 kubelet[1929]: E1213 13:28:32.992620 1929 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 13:28:32.993007 kubelet[1929]: E1213 13:28:32.992976 1929 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 13:28:32.993090 kubelet[1929]: W1213 13:28:32.993014 1929 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 13:28:32.993090 kubelet[1929]: E1213 13:28:32.993032 1929 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 13:28:32.993429 kubelet[1929]: E1213 13:28:32.993348 1929 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 13:28:32.993429 kubelet[1929]: W1213 13:28:32.993419 1929 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 13:28:32.993546 kubelet[1929]: E1213 13:28:32.993437 1929 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 13:28:32.993897 kubelet[1929]: E1213 13:28:32.993875 1929 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 13:28:32.993897 kubelet[1929]: W1213 13:28:32.993895 1929 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 13:28:32.994054 kubelet[1929]: E1213 13:28:32.993932 1929 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 13:28:32.994233 kubelet[1929]: E1213 13:28:32.994200 1929 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 13:28:32.994233 kubelet[1929]: W1213 13:28:32.994231 1929 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 13:28:32.994362 kubelet[1929]: E1213 13:28:32.994245 1929 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 13:28:32.994585 kubelet[1929]: E1213 13:28:32.994564 1929 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 13:28:32.994585 kubelet[1929]: W1213 13:28:32.994584 1929 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 13:28:32.994711 kubelet[1929]: E1213 13:28:32.994600 1929 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 13:28:32.994892 kubelet[1929]: E1213 13:28:32.994872 1929 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 13:28:32.994892 kubelet[1929]: W1213 13:28:32.994890 1929 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 13:28:32.995037 kubelet[1929]: E1213 13:28:32.994916 1929 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 13:28:32.995259 kubelet[1929]: E1213 13:28:32.995238 1929 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 13:28:32.995259 kubelet[1929]: W1213 13:28:32.995258 1929 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 13:28:32.995435 kubelet[1929]: E1213 13:28:32.995273 1929 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 13:28:32.995624 kubelet[1929]: E1213 13:28:32.995603 1929 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 13:28:32.995624 kubelet[1929]: W1213 13:28:32.995622 1929 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 13:28:32.995748 kubelet[1929]: E1213 13:28:32.995638 1929 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 13:28:32.995903 kubelet[1929]: E1213 13:28:32.995883 1929 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 13:28:32.995973 kubelet[1929]: W1213 13:28:32.995913 1929 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 13:28:32.995973 kubelet[1929]: E1213 13:28:32.995939 1929 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 13:28:32.996231 kubelet[1929]: E1213 13:28:32.996210 1929 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 13:28:32.996231 kubelet[1929]: W1213 13:28:32.996230 1929 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 13:28:32.996345 kubelet[1929]: E1213 13:28:32.996246 1929 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 13:28:32.996566 kubelet[1929]: E1213 13:28:32.996546 1929 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 13:28:32.996566 kubelet[1929]: W1213 13:28:32.996565 1929 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 13:28:32.996698 kubelet[1929]: E1213 13:28:32.996581 1929 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 13:28:32.996859 kubelet[1929]: E1213 13:28:32.996838 1929 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 13:28:32.996859 kubelet[1929]: W1213 13:28:32.996857 1929 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 13:28:32.997004 kubelet[1929]: E1213 13:28:32.996872 1929 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 13:28:32.997160 kubelet[1929]: E1213 13:28:32.997139 1929 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 13:28:32.997160 kubelet[1929]: W1213 13:28:32.997159 1929 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 13:28:32.997285 kubelet[1929]: E1213 13:28:32.997174 1929 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 13:28:32.997491 kubelet[1929]: E1213 13:28:32.997470 1929 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 13:28:32.997491 kubelet[1929]: W1213 13:28:32.997489 1929 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 13:28:32.997646 kubelet[1929]: E1213 13:28:32.997503 1929 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 13:28:33.095198 kubelet[1929]: E1213 13:28:33.094991 1929 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 13:28:33.095198 kubelet[1929]: W1213 13:28:33.095023 1929 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 13:28:33.095198 kubelet[1929]: E1213 13:28:33.095053 1929 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 13:28:33.095947 kubelet[1929]: E1213 13:28:33.095786 1929 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 13:28:33.095947 kubelet[1929]: W1213 13:28:33.095805 1929 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 13:28:33.095947 kubelet[1929]: E1213 13:28:33.095838 1929 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Dec 13 13:28:33.096532 kubelet[1929]: E1213 13:28:33.096147 1929 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Dec 13 13:28:33.096532 kubelet[1929]: W1213 13:28:33.096164 1929 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Dec 13 13:28:33.096532 kubelet[1929]: E1213 13:28:33.096190 1929 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input" Dec 13 13:28:33.097277 kubelet[1929]: E1213 13:28:33.097009 1929 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 13:28:33.097277 kubelet[1929]: W1213 13:28:33.097027 1929 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 13:28:33.097277 kubelet[1929]: E1213 13:28:33.097052 1929 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 13:28:33.097712 kubelet[1929]: E1213 13:28:33.097562 1929 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 13:28:33.097712 kubelet[1929]: W1213 13:28:33.097579 1929 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 13:28:33.097712 kubelet[1929]: E1213 13:28:33.097608 1929 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 13:28:33.098789 kubelet[1929]: E1213 13:28:33.098535 1929 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 13:28:33.098789 kubelet[1929]: W1213 13:28:33.098553 1929 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 13:28:33.098789 kubelet[1929]: E1213 13:28:33.098608 1929 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 13:28:33.099237 kubelet[1929]: E1213 13:28:33.099057 1929 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 13:28:33.099237 kubelet[1929]: W1213 13:28:33.099074 1929 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 13:28:33.099237 kubelet[1929]: E1213 13:28:33.099108 1929 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 13:28:33.099879 kubelet[1929]: E1213 13:28:33.099627 1929 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 13:28:33.099879 kubelet[1929]: W1213 13:28:33.099645 1929 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 13:28:33.099879 kubelet[1929]: E1213 13:28:33.099680 1929 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 13:28:33.100429 kubelet[1929]: E1213 13:28:33.100226 1929 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 13:28:33.100429 kubelet[1929]: W1213 13:28:33.100244 1929 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 13:28:33.101102 kubelet[1929]: E1213 13:28:33.100553 1929 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 13:28:33.101416 kubelet[1929]: E1213 13:28:33.101366 1929 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 13:28:33.101551 kubelet[1929]: W1213 13:28:33.101520 1929 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 13:28:33.101704 kubelet[1929]: E1213 13:28:33.101670 1929 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 13:28:33.102095 kubelet[1929]: E1213 13:28:33.102065 1929 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 13:28:33.102095 kubelet[1929]: W1213 13:28:33.102087 1929 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 13:28:33.102216 kubelet[1929]: E1213 13:28:33.102103 1929 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 13:28:33.102823 kubelet[1929]: E1213 13:28:33.102733 1929 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 13:28:33.102823 kubelet[1929]: W1213 13:28:33.102763 1929 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 13:28:33.102823 kubelet[1929]: E1213 13:28:33.102778 1929 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 13:28:33.832807 kubelet[1929]: E1213 13:28:33.832752 1929 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:28:33.960188 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount229447663.mount: Deactivated successfully. Dec 13 13:28:34.012607 kubelet[1929]: E1213 13:28:34.011743 1929 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 13:28:34.012607 kubelet[1929]: W1213 13:28:34.011786 1929 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 13:28:34.012607 kubelet[1929]: E1213 13:28:34.011818 1929 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 13:28:34.012607 kubelet[1929]: E1213 13:28:34.012507 1929 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 13:28:34.012607 kubelet[1929]: W1213 13:28:34.012520 1929 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 13:28:34.012607 kubelet[1929]: E1213 13:28:34.012535 1929 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 13:28:34.013169 kubelet[1929]: E1213 13:28:34.013144 1929 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 13:28:34.013169 kubelet[1929]: W1213 13:28:34.013166 1929 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 13:28:34.013306 kubelet[1929]: E1213 13:28:34.013183 1929 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 13:28:34.013847 kubelet[1929]: E1213 13:28:34.013479 1929 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 13:28:34.013847 kubelet[1929]: W1213 13:28:34.013499 1929 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 13:28:34.013847 kubelet[1929]: E1213 13:28:34.013594 1929 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 13:28:34.015103 kubelet[1929]: E1213 13:28:34.015079 1929 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 13:28:34.015103 kubelet[1929]: W1213 13:28:34.015099 1929 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 13:28:34.015513 kubelet[1929]: E1213 13:28:34.015116 1929 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 13:28:34.015513 kubelet[1929]: E1213 13:28:34.015481 1929 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 13:28:34.015513 kubelet[1929]: W1213 13:28:34.015496 1929 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 13:28:34.015513 kubelet[1929]: E1213 13:28:34.015510 1929 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 13:28:34.015993 kubelet[1929]: E1213 13:28:34.015865 1929 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 13:28:34.015993 kubelet[1929]: W1213 13:28:34.015888 1929 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 13:28:34.015993 kubelet[1929]: E1213 13:28:34.015903 1929 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 13:28:34.016581 kubelet[1929]: E1213 13:28:34.016557 1929 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 13:28:34.016581 kubelet[1929]: W1213 13:28:34.016577 1929 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 13:28:34.017030 kubelet[1929]: E1213 13:28:34.016593 1929 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 13:28:34.017467 kubelet[1929]: E1213 13:28:34.017088 1929 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 13:28:34.017467 kubelet[1929]: W1213 13:28:34.017229 1929 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 13:28:34.017467 kubelet[1929]: E1213 13:28:34.017251 1929 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 13:28:34.017993 kubelet[1929]: E1213 13:28:34.017969 1929 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 13:28:34.017993 kubelet[1929]: W1213 13:28:34.017991 1929 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 13:28:34.018141 kubelet[1929]: E1213 13:28:34.018007 1929 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 13:28:34.018519 kubelet[1929]: E1213 13:28:34.018468 1929 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 13:28:34.018519 kubelet[1929]: W1213 13:28:34.018488 1929 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 13:28:34.018519 kubelet[1929]: E1213 13:28:34.018506 1929 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 13:28:34.019025 kubelet[1929]: E1213 13:28:34.018998 1929 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 13:28:34.019025 kubelet[1929]: W1213 13:28:34.019013 1929 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 13:28:34.019025 kubelet[1929]: E1213 13:28:34.019028 1929 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 13:28:34.019796 kubelet[1929]: E1213 13:28:34.019675 1929 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 13:28:34.019796 kubelet[1929]: W1213 13:28:34.019701 1929 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 13:28:34.019796 kubelet[1929]: E1213 13:28:34.019716 1929 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 13:28:34.020194 kubelet[1929]: E1213 13:28:34.020171 1929 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 13:28:34.020194 kubelet[1929]: W1213 13:28:34.020191 1929 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 13:28:34.020332 kubelet[1929]: E1213 13:28:34.020207 1929 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 13:28:34.020733 kubelet[1929]: E1213 13:28:34.020712 1929 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 13:28:34.020733 kubelet[1929]: W1213 13:28:34.020730 1929 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 13:28:34.020966 kubelet[1929]: E1213 13:28:34.020746 1929 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 13:28:34.021701 kubelet[1929]: E1213 13:28:34.021484 1929 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 13:28:34.021785 kubelet[1929]: W1213 13:28:34.021702 1929 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 13:28:34.021785 kubelet[1929]: E1213 13:28:34.021719 1929 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 13:28:34.022261 kubelet[1929]: E1213 13:28:34.022238 1929 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 13:28:34.022261 kubelet[1929]: W1213 13:28:34.022258 1929 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 13:28:34.022461 kubelet[1929]: E1213 13:28:34.022276 1929 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 13:28:34.022959 kubelet[1929]: E1213 13:28:34.022918 1929 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 13:28:34.022959 kubelet[1929]: W1213 13:28:34.022957 1929 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 13:28:34.023119 kubelet[1929]: E1213 13:28:34.022974 1929 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 13:28:34.023786 kubelet[1929]: E1213 13:28:34.023661 1929 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 13:28:34.023786 kubelet[1929]: W1213 13:28:34.023768 1929 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 13:28:34.023786 kubelet[1929]: E1213 13:28:34.023788 1929 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 13:28:34.024352 kubelet[1929]: E1213 13:28:34.024320 1929 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 13:28:34.024352 kubelet[1929]: W1213 13:28:34.024350 1929 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 13:28:34.024352 kubelet[1929]: E1213 13:28:34.024365 1929 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 13:28:34.102645 kubelet[1929]: E1213 13:28:34.102577 1929 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 13:28:34.102645 kubelet[1929]: W1213 13:28:34.102639 1929 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 13:28:34.102849 kubelet[1929]: E1213 13:28:34.102682 1929 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 13:28:34.103428 kubelet[1929]: E1213 13:28:34.103391 1929 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 13:28:34.103428 kubelet[1929]: W1213 13:28:34.103425 1929 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 13:28:34.103551 kubelet[1929]: E1213 13:28:34.103459 1929 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 13:28:34.103968 kubelet[1929]: E1213 13:28:34.103907 1929 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 13:28:34.103968 kubelet[1929]: W1213 13:28:34.103950 1929 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 13:28:34.104231 kubelet[1929]: E1213 13:28:34.104195 1929 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 13:28:34.104604 kubelet[1929]: E1213 13:28:34.104557 1929 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 13:28:34.104604 kubelet[1929]: W1213 13:28:34.104600 1929 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 13:28:34.104864 kubelet[1929]: E1213 13:28:34.104803 1929 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 13:28:34.105153 kubelet[1929]: E1213 13:28:34.105126 1929 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 13:28:34.105153 kubelet[1929]: W1213 13:28:34.105150 1929 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 13:28:34.105288 kubelet[1929]: E1213 13:28:34.105261 1929 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 13:28:34.106008 kubelet[1929]: E1213 13:28:34.105889 1929 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 13:28:34.106008 kubelet[1929]: W1213 13:28:34.105916 1929 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 13:28:34.106008 kubelet[1929]: E1213 13:28:34.105961 1929 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 13:28:34.106750 kubelet[1929]: E1213 13:28:34.106706 1929 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 13:28:34.106750 kubelet[1929]: W1213 13:28:34.106742 1929 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 13:28:34.106887 kubelet[1929]: E1213 13:28:34.106798 1929 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 13:28:34.107453 kubelet[1929]: E1213 13:28:34.107411 1929 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 13:28:34.107453 kubelet[1929]: W1213 13:28:34.107446 1929 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 13:28:34.107629 kubelet[1929]: E1213 13:28:34.107596 1929 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 13:28:34.108520 kubelet[1929]: E1213 13:28:34.108487 1929 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 13:28:34.108520 kubelet[1929]: W1213 13:28:34.108509 1929 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 13:28:34.108631 kubelet[1929]: E1213 13:28:34.108533 1929 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 13:28:34.108990 kubelet[1929]: E1213 13:28:34.108920 1929 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 13:28:34.108990 kubelet[1929]: W1213 13:28:34.108962 1929 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 13:28:34.108990 kubelet[1929]: E1213 13:28:34.108979 1929 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 13:28:34.109849 kubelet[1929]: E1213 13:28:34.109803 1929 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 13:28:34.109849 kubelet[1929]: W1213 13:28:34.109839 1929 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 13:28:34.110130 kubelet[1929]: E1213 13:28:34.110102 1929 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Dec 13 13:28:34.110472 kubelet[1929]: E1213 13:28:34.110448 1929 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Dec 13 13:28:34.110472 kubelet[1929]: W1213 13:28:34.110469 1929 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Dec 13 13:28:34.110587 kubelet[1929]: E1213 13:28:34.110486 1929 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Dec 13 13:28:34.138076 containerd[1511]: time="2024-12-13T13:28:34.138005106Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:28:34.139469 containerd[1511]: time="2024-12-13T13:28:34.139409046Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1: active requests=0, bytes read=6855343" Dec 13 13:28:34.140806 containerd[1511]: time="2024-12-13T13:28:34.140738815Z" level=info msg="ImageCreate event name:\"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:28:34.144659 containerd[1511]: time="2024-12-13T13:28:34.144617407Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:28:34.146090 containerd[1511]: time="2024-12-13T13:28:34.145852289Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" with image id \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\", repo tag 
\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\", size \"6855165\" in 1.452853477s" Dec 13 13:28:34.146090 containerd[1511]: time="2024-12-13T13:28:34.145910377Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" returns image reference \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\"" Dec 13 13:28:34.149404 containerd[1511]: time="2024-12-13T13:28:34.149353645Z" level=info msg="CreateContainer within sandbox \"a4cac69886b00b2c3361eef0738ecac4d8e67ebde29f85797559d6ecc147095b\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Dec 13 13:28:34.187123 containerd[1511]: time="2024-12-13T13:28:34.187033609Z" level=info msg="CreateContainer within sandbox \"a4cac69886b00b2c3361eef0738ecac4d8e67ebde29f85797559d6ecc147095b\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"42d0faf027350fd68314438d70f4677cac6b1be14367296bad9a7199389c21e7\"" Dec 13 13:28:34.187941 containerd[1511]: time="2024-12-13T13:28:34.187891063Z" level=info msg="StartContainer for \"42d0faf027350fd68314438d70f4677cac6b1be14367296bad9a7199389c21e7\"" Dec 13 13:28:34.232617 systemd[1]: Started cri-containerd-42d0faf027350fd68314438d70f4677cac6b1be14367296bad9a7199389c21e7.scope - libcontainer container 42d0faf027350fd68314438d70f4677cac6b1be14367296bad9a7199389c21e7. Dec 13 13:28:34.279008 containerd[1511]: time="2024-12-13T13:28:34.278932115Z" level=info msg="StartContainer for \"42d0faf027350fd68314438d70f4677cac6b1be14367296bad9a7199389c21e7\" returns successfully" Dec 13 13:28:34.300271 systemd[1]: cri-containerd-42d0faf027350fd68314438d70f4677cac6b1be14367296bad9a7199389c21e7.scope: Deactivated successfully. 
Dec 13 13:28:34.609225 containerd[1511]: time="2024-12-13T13:28:34.609117223Z" level=info msg="shim disconnected" id=42d0faf027350fd68314438d70f4677cac6b1be14367296bad9a7199389c21e7 namespace=k8s.io Dec 13 13:28:34.609678 containerd[1511]: time="2024-12-13T13:28:34.609264176Z" level=warning msg="cleaning up after shim disconnected" id=42d0faf027350fd68314438d70f4677cac6b1be14367296bad9a7199389c21e7 namespace=k8s.io Dec 13 13:28:34.609678 containerd[1511]: time="2024-12-13T13:28:34.609290509Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 13:28:34.833090 kubelet[1929]: E1213 13:28:34.833019 1929 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:28:34.873363 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-42d0faf027350fd68314438d70f4677cac6b1be14367296bad9a7199389c21e7-rootfs.mount: Deactivated successfully. Dec 13 13:28:34.957355 kubelet[1929]: E1213 13:28:34.956849 1929 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-dhlrm" podUID="d9a9389a-36c6-4bf6-9c2f-53cc83c3820e" Dec 13 13:28:34.981673 containerd[1511]: time="2024-12-13T13:28:34.981634593Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\"" Dec 13 13:28:35.008012 kubelet[1929]: I1213 13:28:35.007907 1929 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-98lkl" podStartSLOduration=5.788327742 podStartE2EDuration="8.007870132s" podCreationTimestamp="2024-12-13 13:28:27 +0000 UTC" firstStartedPulling="2024-12-13 13:28:30.471861321 +0000 UTC m=+3.089540506" lastFinishedPulling="2024-12-13 13:28:32.6914037 +0000 UTC m=+5.309082896" observedRunningTime="2024-12-13 13:28:33.413572235 +0000 UTC m=+6.031251436" 
watchObservedRunningTime="2024-12-13 13:28:35.007870132 +0000 UTC m=+7.625549333" Dec 13 13:28:35.833468 kubelet[1929]: E1213 13:28:35.833353 1929 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:28:36.834046 kubelet[1929]: E1213 13:28:36.833797 1929 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:28:36.958037 kubelet[1929]: E1213 13:28:36.957534 1929 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-dhlrm" podUID="d9a9389a-36c6-4bf6-9c2f-53cc83c3820e" Dec 13 13:28:37.834527 kubelet[1929]: E1213 13:28:37.834223 1929 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:28:38.834983 kubelet[1929]: E1213 13:28:38.834819 1929 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:28:38.958054 kubelet[1929]: E1213 13:28:38.957978 1929 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-dhlrm" podUID="d9a9389a-36c6-4bf6-9c2f-53cc83c3820e" Dec 13 13:28:39.835885 kubelet[1929]: E1213 13:28:39.835808 1929 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:28:39.976432 containerd[1511]: time="2024-12-13T13:28:39.976263559Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:28:39.980080 
containerd[1511]: time="2024-12-13T13:28:39.979880209Z" level=info msg="ImageCreate event name:\"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:28:39.980080 containerd[1511]: time="2024-12-13T13:28:39.980014932Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.1: active requests=0, bytes read=96154154" Dec 13 13:28:39.985597 containerd[1511]: time="2024-12-13T13:28:39.985562033Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:28:39.986780 containerd[1511]: time="2024-12-13T13:28:39.986732393Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.1\" with image id \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\", size \"97647238\" in 5.004823897s" Dec 13 13:28:39.986874 containerd[1511]: time="2024-12-13T13:28:39.986792005Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\" returns image reference \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\"" Dec 13 13:28:39.991604 containerd[1511]: time="2024-12-13T13:28:39.991569840Z" level=info msg="CreateContainer within sandbox \"a4cac69886b00b2c3361eef0738ecac4d8e67ebde29f85797559d6ecc147095b\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Dec 13 13:28:40.012523 containerd[1511]: time="2024-12-13T13:28:40.012467366Z" level=info msg="CreateContainer within sandbox \"a4cac69886b00b2c3361eef0738ecac4d8e67ebde29f85797559d6ecc147095b\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"2b71fdede8aa793b82ce53d615980a46ecae226356697c7ccbfe494223630504\"" Dec 13 
13:28:40.013540 containerd[1511]: time="2024-12-13T13:28:40.013489900Z" level=info msg="StartContainer for \"2b71fdede8aa793b82ce53d615980a46ecae226356697c7ccbfe494223630504\"" Dec 13 13:28:40.061624 systemd[1]: Started cri-containerd-2b71fdede8aa793b82ce53d615980a46ecae226356697c7ccbfe494223630504.scope - libcontainer container 2b71fdede8aa793b82ce53d615980a46ecae226356697c7ccbfe494223630504. Dec 13 13:28:40.105704 containerd[1511]: time="2024-12-13T13:28:40.105662530Z" level=info msg="StartContainer for \"2b71fdede8aa793b82ce53d615980a46ecae226356697c7ccbfe494223630504\" returns successfully" Dec 13 13:28:40.837889 kubelet[1929]: E1213 13:28:40.837775 1929 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:28:40.876305 containerd[1511]: time="2024-12-13T13:28:40.876207981Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Dec 13 13:28:40.879293 systemd[1]: cri-containerd-2b71fdede8aa793b82ce53d615980a46ecae226356697c7ccbfe494223630504.scope: Deactivated successfully. Dec 13 13:28:40.893120 kubelet[1929]: I1213 13:28:40.892961 1929 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Dec 13 13:28:40.913314 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2b71fdede8aa793b82ce53d615980a46ecae226356697c7ccbfe494223630504-rootfs.mount: Deactivated successfully. Dec 13 13:28:40.965721 systemd[1]: Created slice kubepods-besteffort-podd9a9389a_36c6_4bf6_9c2f_53cc83c3820e.slice - libcontainer container kubepods-besteffort-podd9a9389a_36c6_4bf6_9c2f_53cc83c3820e.slice. 
Dec 13 13:28:41.003874 containerd[1511]: time="2024-12-13T13:28:41.001516117Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-dhlrm,Uid:d9a9389a-36c6-4bf6-9c2f-53cc83c3820e,Namespace:calico-system,Attempt:0,}" Dec 13 13:28:41.225776 containerd[1511]: time="2024-12-13T13:28:41.225658064Z" level=info msg="shim disconnected" id=2b71fdede8aa793b82ce53d615980a46ecae226356697c7ccbfe494223630504 namespace=k8s.io Dec 13 13:28:41.226055 containerd[1511]: time="2024-12-13T13:28:41.225800271Z" level=warning msg="cleaning up after shim disconnected" id=2b71fdede8aa793b82ce53d615980a46ecae226356697c7ccbfe494223630504 namespace=k8s.io Dec 13 13:28:41.226055 containerd[1511]: time="2024-12-13T13:28:41.225824792Z" level=info msg="cleaning up dead shim" namespace=k8s.io Dec 13 13:28:41.318118 containerd[1511]: time="2024-12-13T13:28:41.317993238Z" level=error msg="Failed to destroy network for sandbox \"a61227f7a37566b960694a22ddeb2becbc8da38c71932b3cf75a4174066eb180\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:28:41.320247 containerd[1511]: time="2024-12-13T13:28:41.318695312Z" level=error msg="encountered an error cleaning up failed sandbox \"a61227f7a37566b960694a22ddeb2becbc8da38c71932b3cf75a4174066eb180\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:28:41.320247 containerd[1511]: time="2024-12-13T13:28:41.318852732Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-dhlrm,Uid:d9a9389a-36c6-4bf6-9c2f-53cc83c3820e,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"a61227f7a37566b960694a22ddeb2becbc8da38c71932b3cf75a4174066eb180\": plugin 
type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:28:41.320434 kubelet[1929]: E1213 13:28:41.319219 1929 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a61227f7a37566b960694a22ddeb2becbc8da38c71932b3cf75a4174066eb180\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:28:41.320434 kubelet[1929]: E1213 13:28:41.319352 1929 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a61227f7a37566b960694a22ddeb2becbc8da38c71932b3cf75a4174066eb180\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-dhlrm" Dec 13 13:28:41.320434 kubelet[1929]: E1213 13:28:41.319454 1929 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a61227f7a37566b960694a22ddeb2becbc8da38c71932b3cf75a4174066eb180\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-dhlrm" Dec 13 13:28:41.320853 kubelet[1929]: E1213 13:28:41.319593 1929 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-dhlrm_calico-system(d9a9389a-36c6-4bf6-9c2f-53cc83c3820e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-dhlrm_calico-system(d9a9389a-36c6-4bf6-9c2f-53cc83c3820e)\\\": rpc error: code 
= Unknown desc = failed to setup network for sandbox \\\"a61227f7a37566b960694a22ddeb2becbc8da38c71932b3cf75a4174066eb180\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-dhlrm" podUID="d9a9389a-36c6-4bf6-9c2f-53cc83c3820e" Dec 13 13:28:41.321401 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-a61227f7a37566b960694a22ddeb2becbc8da38c71932b3cf75a4174066eb180-shm.mount: Deactivated successfully. Dec 13 13:28:41.838959 kubelet[1929]: E1213 13:28:41.838865 1929 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:28:42.002507 containerd[1511]: time="2024-12-13T13:28:42.002329014Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\"" Dec 13 13:28:42.002690 kubelet[1929]: I1213 13:28:42.002575 1929 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a61227f7a37566b960694a22ddeb2becbc8da38c71932b3cf75a4174066eb180" Dec 13 13:28:42.003717 containerd[1511]: time="2024-12-13T13:28:42.003235400Z" level=info msg="StopPodSandbox for \"a61227f7a37566b960694a22ddeb2becbc8da38c71932b3cf75a4174066eb180\"" Dec 13 13:28:42.003717 containerd[1511]: time="2024-12-13T13:28:42.003501041Z" level=info msg="Ensure that sandbox a61227f7a37566b960694a22ddeb2becbc8da38c71932b3cf75a4174066eb180 in task-service has been cleanup successfully" Dec 13 13:28:42.004216 containerd[1511]: time="2024-12-13T13:28:42.003916957Z" level=info msg="TearDown network for sandbox \"a61227f7a37566b960694a22ddeb2becbc8da38c71932b3cf75a4174066eb180\" successfully" Dec 13 13:28:42.004216 containerd[1511]: time="2024-12-13T13:28:42.003946249Z" level=info msg="StopPodSandbox for \"a61227f7a37566b960694a22ddeb2becbc8da38c71932b3cf75a4174066eb180\" returns successfully" Dec 13 13:28:42.006951 containerd[1511]: 
time="2024-12-13T13:28:42.006914363Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-dhlrm,Uid:d9a9389a-36c6-4bf6-9c2f-53cc83c3820e,Namespace:calico-system,Attempt:1,}" Dec 13 13:28:42.007100 systemd[1]: run-netns-cni\x2d67a260bc\x2d3743\x2de78c\x2d14ba\x2d177358cef461.mount: Deactivated successfully. Dec 13 13:28:42.087956 containerd[1511]: time="2024-12-13T13:28:42.085905785Z" level=error msg="Failed to destroy network for sandbox \"1be0200b1dcc06604d770fde094070a996dab926a13c943dad139929344c775d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:28:42.087956 containerd[1511]: time="2024-12-13T13:28:42.087794432Z" level=error msg="encountered an error cleaning up failed sandbox \"1be0200b1dcc06604d770fde094070a996dab926a13c943dad139929344c775d\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:28:42.087956 containerd[1511]: time="2024-12-13T13:28:42.087870211Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-dhlrm,Uid:d9a9389a-36c6-4bf6-9c2f-53cc83c3820e,Namespace:calico-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"1be0200b1dcc06604d770fde094070a996dab926a13c943dad139929344c775d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:28:42.088541 kubelet[1929]: E1213 13:28:42.088240 1929 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1be0200b1dcc06604d770fde094070a996dab926a13c943dad139929344c775d\": plugin 
type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:28:42.088541 kubelet[1929]: E1213 13:28:42.088321 1929 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1be0200b1dcc06604d770fde094070a996dab926a13c943dad139929344c775d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-dhlrm" Dec 13 13:28:42.088541 kubelet[1929]: E1213 13:28:42.088350 1929 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1be0200b1dcc06604d770fde094070a996dab926a13c943dad139929344c775d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-dhlrm" Dec 13 13:28:42.089246 kubelet[1929]: E1213 13:28:42.088841 1929 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-dhlrm_calico-system(d9a9389a-36c6-4bf6-9c2f-53cc83c3820e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-dhlrm_calico-system(d9a9389a-36c6-4bf6-9c2f-53cc83c3820e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1be0200b1dcc06604d770fde094070a996dab926a13c943dad139929344c775d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-dhlrm" podUID="d9a9389a-36c6-4bf6-9c2f-53cc83c3820e" Dec 13 13:28:42.089708 systemd[1]: 
run-containerd-io.containerd.grpc.v1.cri-sandboxes-1be0200b1dcc06604d770fde094070a996dab926a13c943dad139929344c775d-shm.mount: Deactivated successfully. Dec 13 13:28:42.839556 kubelet[1929]: E1213 13:28:42.839493 1929 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:28:43.005963 kubelet[1929]: I1213 13:28:43.005905 1929 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1be0200b1dcc06604d770fde094070a996dab926a13c943dad139929344c775d" Dec 13 13:28:43.006617 containerd[1511]: time="2024-12-13T13:28:43.006562132Z" level=info msg="StopPodSandbox for \"1be0200b1dcc06604d770fde094070a996dab926a13c943dad139929344c775d\"" Dec 13 13:28:43.007153 containerd[1511]: time="2024-12-13T13:28:43.006792090Z" level=info msg="Ensure that sandbox 1be0200b1dcc06604d770fde094070a996dab926a13c943dad139929344c775d in task-service has been cleanup successfully" Dec 13 13:28:43.009514 containerd[1511]: time="2024-12-13T13:28:43.007339265Z" level=info msg="TearDown network for sandbox \"1be0200b1dcc06604d770fde094070a996dab926a13c943dad139929344c775d\" successfully" Dec 13 13:28:43.009514 containerd[1511]: time="2024-12-13T13:28:43.007362001Z" level=info msg="StopPodSandbox for \"1be0200b1dcc06604d770fde094070a996dab926a13c943dad139929344c775d\" returns successfully" Dec 13 13:28:43.009514 containerd[1511]: time="2024-12-13T13:28:43.009194398Z" level=info msg="StopPodSandbox for \"a61227f7a37566b960694a22ddeb2becbc8da38c71932b3cf75a4174066eb180\"" Dec 13 13:28:43.009514 containerd[1511]: time="2024-12-13T13:28:43.009295151Z" level=info msg="TearDown network for sandbox \"a61227f7a37566b960694a22ddeb2becbc8da38c71932b3cf75a4174066eb180\" successfully" Dec 13 13:28:43.009514 containerd[1511]: time="2024-12-13T13:28:43.009314721Z" level=info msg="StopPodSandbox for \"a61227f7a37566b960694a22ddeb2becbc8da38c71932b3cf75a4174066eb180\" returns successfully" Dec 13 13:28:43.010495 
containerd[1511]: time="2024-12-13T13:28:43.009986357Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-dhlrm,Uid:d9a9389a-36c6-4bf6-9c2f-53cc83c3820e,Namespace:calico-system,Attempt:2,}" Dec 13 13:28:43.010076 systemd[1]: run-netns-cni\x2d522f7882\x2d79d4\x2d5062\x2d6687\x2d77059bb48bcd.mount: Deactivated successfully. Dec 13 13:28:43.088721 containerd[1511]: time="2024-12-13T13:28:43.088612400Z" level=error msg="Failed to destroy network for sandbox \"0bf54a32b4c693d35d3a15c60c5ed7af72de82ba4ca9e1f875a317d3564ae396\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:28:43.091068 containerd[1511]: time="2024-12-13T13:28:43.089321060Z" level=error msg="encountered an error cleaning up failed sandbox \"0bf54a32b4c693d35d3a15c60c5ed7af72de82ba4ca9e1f875a317d3564ae396\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:28:43.091068 containerd[1511]: time="2024-12-13T13:28:43.089425816Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-dhlrm,Uid:d9a9389a-36c6-4bf6-9c2f-53cc83c3820e,Namespace:calico-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"0bf54a32b4c693d35d3a15c60c5ed7af72de82ba4ca9e1f875a317d3564ae396\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:28:43.091220 kubelet[1929]: E1213 13:28:43.090527 1929 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"0bf54a32b4c693d35d3a15c60c5ed7af72de82ba4ca9e1f875a317d3564ae396\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:28:43.091220 kubelet[1929]: E1213 13:28:43.090609 1929 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0bf54a32b4c693d35d3a15c60c5ed7af72de82ba4ca9e1f875a317d3564ae396\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-dhlrm" Dec 13 13:28:43.091220 kubelet[1929]: E1213 13:28:43.090639 1929 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0bf54a32b4c693d35d3a15c60c5ed7af72de82ba4ca9e1f875a317d3564ae396\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-dhlrm" Dec 13 13:28:43.091435 kubelet[1929]: E1213 13:28:43.090705 1929 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-dhlrm_calico-system(d9a9389a-36c6-4bf6-9c2f-53cc83c3820e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-dhlrm_calico-system(d9a9389a-36c6-4bf6-9c2f-53cc83c3820e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0bf54a32b4c693d35d3a15c60c5ed7af72de82ba4ca9e1f875a317d3564ae396\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-dhlrm" 
podUID="d9a9389a-36c6-4bf6-9c2f-53cc83c3820e" Dec 13 13:28:43.092056 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-0bf54a32b4c693d35d3a15c60c5ed7af72de82ba4ca9e1f875a317d3564ae396-shm.mount: Deactivated successfully. Dec 13 13:28:43.101865 update_engine[1496]: I20241213 13:28:43.101717 1496 update_attempter.cc:509] Updating boot flags... Dec 13 13:28:43.164421 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (2551) Dec 13 13:28:43.277445 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 36 scanned by (udev-worker) (2552) Dec 13 13:28:43.840622 kubelet[1929]: E1213 13:28:43.840538 1929 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:28:44.012514 kubelet[1929]: I1213 13:28:44.011112 1929 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0bf54a32b4c693d35d3a15c60c5ed7af72de82ba4ca9e1f875a317d3564ae396" Dec 13 13:28:44.013208 containerd[1511]: time="2024-12-13T13:28:44.012643809Z" level=info msg="StopPodSandbox for \"0bf54a32b4c693d35d3a15c60c5ed7af72de82ba4ca9e1f875a317d3564ae396\"" Dec 13 13:28:44.013208 containerd[1511]: time="2024-12-13T13:28:44.013137733Z" level=info msg="Ensure that sandbox 0bf54a32b4c693d35d3a15c60c5ed7af72de82ba4ca9e1f875a317d3564ae396 in task-service has been cleanup successfully" Dec 13 13:28:44.017038 containerd[1511]: time="2024-12-13T13:28:44.016984162Z" level=info msg="TearDown network for sandbox \"0bf54a32b4c693d35d3a15c60c5ed7af72de82ba4ca9e1f875a317d3564ae396\" successfully" Dec 13 13:28:44.017140 containerd[1511]: time="2024-12-13T13:28:44.017039589Z" level=info msg="StopPodSandbox for \"0bf54a32b4c693d35d3a15c60c5ed7af72de82ba4ca9e1f875a317d3564ae396\" returns successfully" Dec 13 13:28:44.019187 containerd[1511]: time="2024-12-13T13:28:44.017527177Z" level=info msg="StopPodSandbox for 
\"1be0200b1dcc06604d770fde094070a996dab926a13c943dad139929344c775d\"" Dec 13 13:28:44.019187 containerd[1511]: time="2024-12-13T13:28:44.017665263Z" level=info msg="TearDown network for sandbox \"1be0200b1dcc06604d770fde094070a996dab926a13c943dad139929344c775d\" successfully" Dec 13 13:28:44.019187 containerd[1511]: time="2024-12-13T13:28:44.017685214Z" level=info msg="StopPodSandbox for \"1be0200b1dcc06604d770fde094070a996dab926a13c943dad139929344c775d\" returns successfully" Dec 13 13:28:44.017964 systemd[1]: run-netns-cni\x2ddc898ef7\x2d19cd\x2ddb6e\x2d45d4\x2d23da69e89b45.mount: Deactivated successfully. Dec 13 13:28:44.021015 containerd[1511]: time="2024-12-13T13:28:44.020972209Z" level=info msg="StopPodSandbox for \"a61227f7a37566b960694a22ddeb2becbc8da38c71932b3cf75a4174066eb180\"" Dec 13 13:28:44.021410 containerd[1511]: time="2024-12-13T13:28:44.021095735Z" level=info msg="TearDown network for sandbox \"a61227f7a37566b960694a22ddeb2becbc8da38c71932b3cf75a4174066eb180\" successfully" Dec 13 13:28:44.021410 containerd[1511]: time="2024-12-13T13:28:44.021122583Z" level=info msg="StopPodSandbox for \"a61227f7a37566b960694a22ddeb2becbc8da38c71932b3cf75a4174066eb180\" returns successfully" Dec 13 13:28:44.022402 containerd[1511]: time="2024-12-13T13:28:44.022357081Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-dhlrm,Uid:d9a9389a-36c6-4bf6-9c2f-53cc83c3820e,Namespace:calico-system,Attempt:3,}" Dec 13 13:28:44.126394 containerd[1511]: time="2024-12-13T13:28:44.123684537Z" level=error msg="Failed to destroy network for sandbox \"b3e76a379482abcaea3b805b9586e31607fb78966540100ba3130fb61547679c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:28:44.126394 containerd[1511]: time="2024-12-13T13:28:44.124260167Z" level=error msg="encountered an error cleaning up failed sandbox 
\"b3e76a379482abcaea3b805b9586e31607fb78966540100ba3130fb61547679c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 13:28:44.126394 containerd[1511]: time="2024-12-13T13:28:44.124392017Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-dhlrm,Uid:d9a9389a-36c6-4bf6-9c2f-53cc83c3820e,Namespace:calico-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"b3e76a379482abcaea3b805b9586e31607fb78966540100ba3130fb61547679c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 13:28:44.127433 kubelet[1929]: E1213 13:28:44.126826 1929 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b3e76a379482abcaea3b805b9586e31607fb78966540100ba3130fb61547679c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 13:28:44.127433 kubelet[1929]: E1213 13:28:44.126978 1929 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b3e76a379482abcaea3b805b9586e31607fb78966540100ba3130fb61547679c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-dhlrm"
Dec 13 13:28:44.127433 kubelet[1929]: E1213 13:28:44.127048 1929 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b3e76a379482abcaea3b805b9586e31607fb78966540100ba3130fb61547679c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-dhlrm"
Dec 13 13:28:44.127660 kubelet[1929]: E1213 13:28:44.127130 1929 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-dhlrm_calico-system(d9a9389a-36c6-4bf6-9c2f-53cc83c3820e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-dhlrm_calico-system(d9a9389a-36c6-4bf6-9c2f-53cc83c3820e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b3e76a379482abcaea3b805b9586e31607fb78966540100ba3130fb61547679c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-dhlrm" podUID="d9a9389a-36c6-4bf6-9c2f-53cc83c3820e"
Dec 13 13:28:44.128046 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-b3e76a379482abcaea3b805b9586e31607fb78966540100ba3130fb61547679c-shm.mount: Deactivated successfully.
Dec 13 13:28:44.842121 kubelet[1929]: E1213 13:28:44.841982 1929 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 13:28:45.021408 kubelet[1929]: I1213 13:28:45.021078 1929 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b3e76a379482abcaea3b805b9586e31607fb78966540100ba3130fb61547679c"
Dec 13 13:28:45.024385 containerd[1511]: time="2024-12-13T13:28:45.023428878Z" level=info msg="StopPodSandbox for \"b3e76a379482abcaea3b805b9586e31607fb78966540100ba3130fb61547679c\""
Dec 13 13:28:45.024385 containerd[1511]: time="2024-12-13T13:28:45.024111388Z" level=info msg="Ensure that sandbox b3e76a379482abcaea3b805b9586e31607fb78966540100ba3130fb61547679c in task-service has been cleanup successfully"
Dec 13 13:28:45.026600 containerd[1511]: time="2024-12-13T13:28:45.026456129Z" level=info msg="TearDown network for sandbox \"b3e76a379482abcaea3b805b9586e31607fb78966540100ba3130fb61547679c\" successfully"
Dec 13 13:28:45.026600 containerd[1511]: time="2024-12-13T13:28:45.026508927Z" level=info msg="StopPodSandbox for \"b3e76a379482abcaea3b805b9586e31607fb78966540100ba3130fb61547679c\" returns successfully"
Dec 13 13:28:45.027059 containerd[1511]: time="2024-12-13T13:28:45.027024628Z" level=info msg="StopPodSandbox for \"0bf54a32b4c693d35d3a15c60c5ed7af72de82ba4ca9e1f875a317d3564ae396\""
Dec 13 13:28:45.029390 containerd[1511]: time="2024-12-13T13:28:45.027161493Z" level=info msg="TearDown network for sandbox \"0bf54a32b4c693d35d3a15c60c5ed7af72de82ba4ca9e1f875a317d3564ae396\" successfully"
Dec 13 13:28:45.029390 containerd[1511]: time="2024-12-13T13:28:45.027186766Z" level=info msg="StopPodSandbox for \"0bf54a32b4c693d35d3a15c60c5ed7af72de82ba4ca9e1f875a317d3564ae396\" returns successfully"
Dec 13 13:28:45.029390 containerd[1511]: time="2024-12-13T13:28:45.027499774Z" level=info msg="StopPodSandbox for \"1be0200b1dcc06604d770fde094070a996dab926a13c943dad139929344c775d\""
Dec 13 13:28:45.029390 containerd[1511]: time="2024-12-13T13:28:45.027597275Z" level=info msg="TearDown network for sandbox \"1be0200b1dcc06604d770fde094070a996dab926a13c943dad139929344c775d\" successfully"
Dec 13 13:28:45.029390 containerd[1511]: time="2024-12-13T13:28:45.027615501Z" level=info msg="StopPodSandbox for \"1be0200b1dcc06604d770fde094070a996dab926a13c943dad139929344c775d\" returns successfully"
Dec 13 13:28:45.029390 containerd[1511]: time="2024-12-13T13:28:45.027916895Z" level=info msg="StopPodSandbox for \"a61227f7a37566b960694a22ddeb2becbc8da38c71932b3cf75a4174066eb180\""
Dec 13 13:28:45.029390 containerd[1511]: time="2024-12-13T13:28:45.028074115Z" level=info msg="TearDown network for sandbox \"a61227f7a37566b960694a22ddeb2becbc8da38c71932b3cf75a4174066eb180\" successfully"
Dec 13 13:28:45.029390 containerd[1511]: time="2024-12-13T13:28:45.028093395Z" level=info msg="StopPodSandbox for \"a61227f7a37566b960694a22ddeb2becbc8da38c71932b3cf75a4174066eb180\" returns successfully"
Dec 13 13:28:45.029390 containerd[1511]: time="2024-12-13T13:28:45.029077702Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-dhlrm,Uid:d9a9389a-36c6-4bf6-9c2f-53cc83c3820e,Namespace:calico-system,Attempt:4,}"
Dec 13 13:28:45.028222 systemd[1]: run-netns-cni\x2d3337dd9e\x2d53e0\x2d2ad7\x2d534a\x2d9a79fb889ea6.mount: Deactivated successfully.
Dec 13 13:28:45.140218 containerd[1511]: time="2024-12-13T13:28:45.140132477Z" level=error msg="Failed to destroy network for sandbox \"9a543f875ae45cc991b7bc0fe141f0ee3869cdd6deb5212aec0da28faf666998\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 13:28:45.141267 containerd[1511]: time="2024-12-13T13:28:45.141232632Z" level=error msg="encountered an error cleaning up failed sandbox \"9a543f875ae45cc991b7bc0fe141f0ee3869cdd6deb5212aec0da28faf666998\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 13:28:45.141532 containerd[1511]: time="2024-12-13T13:28:45.141485271Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-dhlrm,Uid:d9a9389a-36c6-4bf6-9c2f-53cc83c3820e,Namespace:calico-system,Attempt:4,} failed, error" error="failed to setup network for sandbox \"9a543f875ae45cc991b7bc0fe141f0ee3869cdd6deb5212aec0da28faf666998\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 13:28:45.144748 kubelet[1929]: E1213 13:28:45.144680 1929 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9a543f875ae45cc991b7bc0fe141f0ee3869cdd6deb5212aec0da28faf666998\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 13:28:45.144854 kubelet[1929]: E1213 13:28:45.144822 1929 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9a543f875ae45cc991b7bc0fe141f0ee3869cdd6deb5212aec0da28faf666998\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-dhlrm"
Dec 13 13:28:45.144941 kubelet[1929]: E1213 13:28:45.144898 1929 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9a543f875ae45cc991b7bc0fe141f0ee3869cdd6deb5212aec0da28faf666998\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-dhlrm"
Dec 13 13:28:45.145110 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-9a543f875ae45cc991b7bc0fe141f0ee3869cdd6deb5212aec0da28faf666998-shm.mount: Deactivated successfully.
Dec 13 13:28:45.145259 kubelet[1929]: E1213 13:28:45.145036 1929 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-dhlrm_calico-system(d9a9389a-36c6-4bf6-9c2f-53cc83c3820e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-dhlrm_calico-system(d9a9389a-36c6-4bf6-9c2f-53cc83c3820e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9a543f875ae45cc991b7bc0fe141f0ee3869cdd6deb5212aec0da28faf666998\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-dhlrm" podUID="d9a9389a-36c6-4bf6-9c2f-53cc83c3820e"
Dec 13 13:28:45.843069 kubelet[1929]: E1213 13:28:45.842872 1929 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 13:28:46.030289 kubelet[1929]: I1213 13:28:46.030016 1929 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9a543f875ae45cc991b7bc0fe141f0ee3869cdd6deb5212aec0da28faf666998"
Dec 13 13:28:46.031147 containerd[1511]: time="2024-12-13T13:28:46.031088617Z" level=info msg="StopPodSandbox for \"9a543f875ae45cc991b7bc0fe141f0ee3869cdd6deb5212aec0da28faf666998\""
Dec 13 13:28:46.031956 containerd[1511]: time="2024-12-13T13:28:46.031479545Z" level=info msg="Ensure that sandbox 9a543f875ae45cc991b7bc0fe141f0ee3869cdd6deb5212aec0da28faf666998 in task-service has been cleanup successfully"
Dec 13 13:28:46.036647 systemd[1]: run-netns-cni\x2d06760b1f\x2da420\x2d0e1a\x2da77b\x2d25d9bbc708be.mount: Deactivated successfully.
Dec 13 13:28:46.037779 containerd[1511]: time="2024-12-13T13:28:46.037632321Z" level=info msg="TearDown network for sandbox \"9a543f875ae45cc991b7bc0fe141f0ee3869cdd6deb5212aec0da28faf666998\" successfully"
Dec 13 13:28:46.037779 containerd[1511]: time="2024-12-13T13:28:46.037665873Z" level=info msg="StopPodSandbox for \"9a543f875ae45cc991b7bc0fe141f0ee3869cdd6deb5212aec0da28faf666998\" returns successfully"
Dec 13 13:28:46.038361 containerd[1511]: time="2024-12-13T13:28:46.038204268Z" level=info msg="StopPodSandbox for \"b3e76a379482abcaea3b805b9586e31607fb78966540100ba3130fb61547679c\""
Dec 13 13:28:46.038361 containerd[1511]: time="2024-12-13T13:28:46.038323057Z" level=info msg="TearDown network for sandbox \"b3e76a379482abcaea3b805b9586e31607fb78966540100ba3130fb61547679c\" successfully"
Dec 13 13:28:46.038669 containerd[1511]: time="2024-12-13T13:28:46.038344635Z" level=info msg="StopPodSandbox for \"b3e76a379482abcaea3b805b9586e31607fb78966540100ba3130fb61547679c\" returns successfully"
Dec 13 13:28:46.039161 containerd[1511]: time="2024-12-13T13:28:46.038904140Z" level=info msg="StopPodSandbox for \"0bf54a32b4c693d35d3a15c60c5ed7af72de82ba4ca9e1f875a317d3564ae396\""
Dec 13 13:28:46.039161 containerd[1511]: time="2024-12-13T13:28:46.039066464Z" level=info msg="TearDown network for sandbox \"0bf54a32b4c693d35d3a15c60c5ed7af72de82ba4ca9e1f875a317d3564ae396\" successfully"
Dec 13 13:28:46.039161 containerd[1511]: time="2024-12-13T13:28:46.039085314Z" level=info msg="StopPodSandbox for \"0bf54a32b4c693d35d3a15c60c5ed7af72de82ba4ca9e1f875a317d3564ae396\" returns successfully"
Dec 13 13:28:46.039651 containerd[1511]: time="2024-12-13T13:28:46.039612031Z" level=info msg="StopPodSandbox for \"1be0200b1dcc06604d770fde094070a996dab926a13c943dad139929344c775d\""
Dec 13 13:28:46.039756 containerd[1511]: time="2024-12-13T13:28:46.039725669Z" level=info msg="TearDown network for sandbox \"1be0200b1dcc06604d770fde094070a996dab926a13c943dad139929344c775d\" successfully"
Dec 13 13:28:46.039756 containerd[1511]: time="2024-12-13T13:28:46.039753433Z" level=info msg="StopPodSandbox for \"1be0200b1dcc06604d770fde094070a996dab926a13c943dad139929344c775d\" returns successfully"
Dec 13 13:28:46.040099 containerd[1511]: time="2024-12-13T13:28:46.040049829Z" level=info msg="StopPodSandbox for \"a61227f7a37566b960694a22ddeb2becbc8da38c71932b3cf75a4174066eb180\""
Dec 13 13:28:46.040210 containerd[1511]: time="2024-12-13T13:28:46.040186447Z" level=info msg="TearDown network for sandbox \"a61227f7a37566b960694a22ddeb2becbc8da38c71932b3cf75a4174066eb180\" successfully"
Dec 13 13:28:46.040344 containerd[1511]: time="2024-12-13T13:28:46.040210117Z" level=info msg="StopPodSandbox for \"a61227f7a37566b960694a22ddeb2becbc8da38c71932b3cf75a4174066eb180\" returns successfully"
Dec 13 13:28:46.041038 containerd[1511]: time="2024-12-13T13:28:46.041004769Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-dhlrm,Uid:d9a9389a-36c6-4bf6-9c2f-53cc83c3820e,Namespace:calico-system,Attempt:5,}"
Dec 13 13:28:46.158702 containerd[1511]: time="2024-12-13T13:28:46.158611152Z" level=error msg="Failed to destroy network for sandbox \"7c3d8f86b9e4491808e2db8a5c789b09ddfbcabdfb430606207bafdefcfde2c5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 13:28:46.160605 containerd[1511]: time="2024-12-13T13:28:46.160547788Z" level=error msg="encountered an error cleaning up failed sandbox \"7c3d8f86b9e4491808e2db8a5c789b09ddfbcabdfb430606207bafdefcfde2c5\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 13:28:46.160694 containerd[1511]: time="2024-12-13T13:28:46.160653398Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-dhlrm,Uid:d9a9389a-36c6-4bf6-9c2f-53cc83c3820e,Namespace:calico-system,Attempt:5,} failed, error" error="failed to setup network for sandbox \"7c3d8f86b9e4491808e2db8a5c789b09ddfbcabdfb430606207bafdefcfde2c5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 13:28:46.162674 kubelet[1929]: E1213 13:28:46.162470 1929 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7c3d8f86b9e4491808e2db8a5c789b09ddfbcabdfb430606207bafdefcfde2c5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 13:28:46.162674 kubelet[1929]: E1213 13:28:46.162627 1929 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7c3d8f86b9e4491808e2db8a5c789b09ddfbcabdfb430606207bafdefcfde2c5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-dhlrm"
Dec 13 13:28:46.164472 kubelet[1929]: E1213 13:28:46.162890 1929 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7c3d8f86b9e4491808e2db8a5c789b09ddfbcabdfb430606207bafdefcfde2c5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-dhlrm"
Dec 13 13:28:46.162913 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-7c3d8f86b9e4491808e2db8a5c789b09ddfbcabdfb430606207bafdefcfde2c5-shm.mount: Deactivated successfully.
Dec 13 13:28:46.164731 kubelet[1929]: E1213 13:28:46.163426 1929 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-dhlrm_calico-system(d9a9389a-36c6-4bf6-9c2f-53cc83c3820e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-dhlrm_calico-system(d9a9389a-36c6-4bf6-9c2f-53cc83c3820e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7c3d8f86b9e4491808e2db8a5c789b09ddfbcabdfb430606207bafdefcfde2c5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-dhlrm" podUID="d9a9389a-36c6-4bf6-9c2f-53cc83c3820e"
Dec 13 13:28:46.425599 kubelet[1929]: I1213 13:28:46.425351 1929 topology_manager.go:215] "Topology Admit Handler" podUID="0f4afe90-15b3-45bc-a79d-fc560555cfb7" podNamespace="default" podName="nginx-deployment-85f456d6dd-xbbxq"
Dec 13 13:28:46.441195 systemd[1]: Created slice kubepods-besteffort-pod0f4afe90_15b3_45bc_a79d_fc560555cfb7.slice - libcontainer container kubepods-besteffort-pod0f4afe90_15b3_45bc_a79d_fc560555cfb7.slice.
Dec 13 13:28:46.589547 kubelet[1929]: I1213 13:28:46.589330 1929 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xjggg\" (UniqueName: \"kubernetes.io/projected/0f4afe90-15b3-45bc-a79d-fc560555cfb7-kube-api-access-xjggg\") pod \"nginx-deployment-85f456d6dd-xbbxq\" (UID: \"0f4afe90-15b3-45bc-a79d-fc560555cfb7\") " pod="default/nginx-deployment-85f456d6dd-xbbxq"
Dec 13 13:28:46.747486 containerd[1511]: time="2024-12-13T13:28:46.747294584Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-xbbxq,Uid:0f4afe90-15b3-45bc-a79d-fc560555cfb7,Namespace:default,Attempt:0,}"
Dec 13 13:28:46.844088 kubelet[1929]: E1213 13:28:46.844027 1929 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 13:28:46.884893 containerd[1511]: time="2024-12-13T13:28:46.884084027Z" level=error msg="Failed to destroy network for sandbox \"591ad9151431b0c85d03c28d91438f44a06c8d65f70133b616797bc4ce33be5d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 13:28:46.884893 containerd[1511]: time="2024-12-13T13:28:46.884754028Z" level=error msg="encountered an error cleaning up failed sandbox \"591ad9151431b0c85d03c28d91438f44a06c8d65f70133b616797bc4ce33be5d\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 13:28:46.884893 containerd[1511]: time="2024-12-13T13:28:46.884858649Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-xbbxq,Uid:0f4afe90-15b3-45bc-a79d-fc560555cfb7,Namespace:default,Attempt:0,} failed, error" error="failed to setup network for sandbox \"591ad9151431b0c85d03c28d91438f44a06c8d65f70133b616797bc4ce33be5d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 13:28:46.885884 kubelet[1929]: E1213 13:28:46.885243 1929 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"591ad9151431b0c85d03c28d91438f44a06c8d65f70133b616797bc4ce33be5d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 13:28:46.885884 kubelet[1929]: E1213 13:28:46.885352 1929 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"591ad9151431b0c85d03c28d91438f44a06c8d65f70133b616797bc4ce33be5d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-85f456d6dd-xbbxq"
Dec 13 13:28:46.885884 kubelet[1929]: E1213 13:28:46.885412 1929 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"591ad9151431b0c85d03c28d91438f44a06c8d65f70133b616797bc4ce33be5d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-85f456d6dd-xbbxq"
Dec 13 13:28:46.887446 kubelet[1929]: E1213 13:28:46.887397 1929 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-85f456d6dd-xbbxq_default(0f4afe90-15b3-45bc-a79d-fc560555cfb7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-85f456d6dd-xbbxq_default(0f4afe90-15b3-45bc-a79d-fc560555cfb7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"591ad9151431b0c85d03c28d91438f44a06c8d65f70133b616797bc4ce33be5d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-85f456d6dd-xbbxq" podUID="0f4afe90-15b3-45bc-a79d-fc560555cfb7"
Dec 13 13:28:47.039625 kubelet[1929]: I1213 13:28:47.039303 1929 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="591ad9151431b0c85d03c28d91438f44a06c8d65f70133b616797bc4ce33be5d"
Dec 13 13:28:47.043502 containerd[1511]: time="2024-12-13T13:28:47.043450920Z" level=info msg="StopPodSandbox for \"591ad9151431b0c85d03c28d91438f44a06c8d65f70133b616797bc4ce33be5d\""
Dec 13 13:28:47.043961 containerd[1511]: time="2024-12-13T13:28:47.043793092Z" level=info msg="Ensure that sandbox 591ad9151431b0c85d03c28d91438f44a06c8d65f70133b616797bc4ce33be5d in task-service has been cleanup successfully"
Dec 13 13:28:47.046784 containerd[1511]: time="2024-12-13T13:28:47.046102674Z" level=info msg="TearDown network for sandbox \"591ad9151431b0c85d03c28d91438f44a06c8d65f70133b616797bc4ce33be5d\" successfully"
Dec 13 13:28:47.046784 containerd[1511]: time="2024-12-13T13:28:47.046136601Z" level=info msg="StopPodSandbox for \"591ad9151431b0c85d03c28d91438f44a06c8d65f70133b616797bc4ce33be5d\" returns successfully"
Dec 13 13:28:47.046930 systemd[1]: run-netns-cni\x2d657ee88c\x2d38c6\x2de95d\x2d5c79\x2d9c953d73a97b.mount: Deactivated successfully.
Dec 13 13:28:47.049510 containerd[1511]: time="2024-12-13T13:28:47.048955519Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-xbbxq,Uid:0f4afe90-15b3-45bc-a79d-fc560555cfb7,Namespace:default,Attempt:1,}"
Dec 13 13:28:47.052769 kubelet[1929]: I1213 13:28:47.052730 1929 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7c3d8f86b9e4491808e2db8a5c789b09ddfbcabdfb430606207bafdefcfde2c5"
Dec 13 13:28:47.054412 containerd[1511]: time="2024-12-13T13:28:47.054314506Z" level=info msg="StopPodSandbox for \"7c3d8f86b9e4491808e2db8a5c789b09ddfbcabdfb430606207bafdefcfde2c5\""
Dec 13 13:28:47.054593 containerd[1511]: time="2024-12-13T13:28:47.054550146Z" level=info msg="Ensure that sandbox 7c3d8f86b9e4491808e2db8a5c789b09ddfbcabdfb430606207bafdefcfde2c5 in task-service has been cleanup successfully"
Dec 13 13:28:47.055462 containerd[1511]: time="2024-12-13T13:28:47.054894039Z" level=info msg="TearDown network for sandbox \"7c3d8f86b9e4491808e2db8a5c789b09ddfbcabdfb430606207bafdefcfde2c5\" successfully"
Dec 13 13:28:47.055462 containerd[1511]: time="2024-12-13T13:28:47.054922354Z" level=info msg="StopPodSandbox for \"7c3d8f86b9e4491808e2db8a5c789b09ddfbcabdfb430606207bafdefcfde2c5\" returns successfully"
Dec 13 13:28:47.057541 containerd[1511]: time="2024-12-13T13:28:47.057003366Z" level=info msg="StopPodSandbox for \"9a543f875ae45cc991b7bc0fe141f0ee3869cdd6deb5212aec0da28faf666998\""
Dec 13 13:28:47.057541 containerd[1511]: time="2024-12-13T13:28:47.057112714Z" level=info msg="TearDown network for sandbox \"9a543f875ae45cc991b7bc0fe141f0ee3869cdd6deb5212aec0da28faf666998\" successfully"
Dec 13 13:28:47.057541 containerd[1511]: time="2024-12-13T13:28:47.057134596Z" level=info msg="StopPodSandbox for \"9a543f875ae45cc991b7bc0fe141f0ee3869cdd6deb5212aec0da28faf666998\" returns successfully"
Dec 13 13:28:47.059209 systemd[1]: run-netns-cni\x2d1d23e12b\x2d79aa\x2d9a0f\x2de6d9\x2d96fce8a49e6e.mount: Deactivated successfully.
Dec 13 13:28:47.067700 containerd[1511]: time="2024-12-13T13:28:47.067652603Z" level=info msg="StopPodSandbox for \"b3e76a379482abcaea3b805b9586e31607fb78966540100ba3130fb61547679c\""
Dec 13 13:28:47.067797 containerd[1511]: time="2024-12-13T13:28:47.067768373Z" level=info msg="TearDown network for sandbox \"b3e76a379482abcaea3b805b9586e31607fb78966540100ba3130fb61547679c\" successfully"
Dec 13 13:28:47.067797 containerd[1511]: time="2024-12-13T13:28:47.067787698Z" level=info msg="StopPodSandbox for \"b3e76a379482abcaea3b805b9586e31607fb78966540100ba3130fb61547679c\" returns successfully"
Dec 13 13:28:47.068683 containerd[1511]: time="2024-12-13T13:28:47.068598689Z" level=info msg="StopPodSandbox for \"0bf54a32b4c693d35d3a15c60c5ed7af72de82ba4ca9e1f875a317d3564ae396\""
Dec 13 13:28:47.068769 containerd[1511]: time="2024-12-13T13:28:47.068714288Z" level=info msg="TearDown network for sandbox \"0bf54a32b4c693d35d3a15c60c5ed7af72de82ba4ca9e1f875a317d3564ae396\" successfully"
Dec 13 13:28:47.068769 containerd[1511]: time="2024-12-13T13:28:47.068744702Z" level=info msg="StopPodSandbox for \"0bf54a32b4c693d35d3a15c60c5ed7af72de82ba4ca9e1f875a317d3564ae396\" returns successfully"
Dec 13 13:28:47.069393 containerd[1511]: time="2024-12-13T13:28:47.069232038Z" level=info msg="StopPodSandbox for \"1be0200b1dcc06604d770fde094070a996dab926a13c943dad139929344c775d\""
Dec 13 13:28:47.069393 containerd[1511]: time="2024-12-13T13:28:47.069331543Z" level=info msg="TearDown network for sandbox \"1be0200b1dcc06604d770fde094070a996dab926a13c943dad139929344c775d\" successfully"
Dec 13 13:28:47.069393 containerd[1511]: time="2024-12-13T13:28:47.069348849Z" level=info msg="StopPodSandbox for \"1be0200b1dcc06604d770fde094070a996dab926a13c943dad139929344c775d\" returns successfully"
Dec 13 13:28:47.070641 containerd[1511]: time="2024-12-13T13:28:47.070079320Z" level=info msg="StopPodSandbox for \"a61227f7a37566b960694a22ddeb2becbc8da38c71932b3cf75a4174066eb180\""
Dec 13 13:28:47.070641 containerd[1511]: time="2024-12-13T13:28:47.070189954Z" level=info msg="TearDown network for sandbox \"a61227f7a37566b960694a22ddeb2becbc8da38c71932b3cf75a4174066eb180\" successfully"
Dec 13 13:28:47.070641 containerd[1511]: time="2024-12-13T13:28:47.070207860Z" level=info msg="StopPodSandbox for \"a61227f7a37566b960694a22ddeb2becbc8da38c71932b3cf75a4174066eb180\" returns successfully"
Dec 13 13:28:47.072026 containerd[1511]: time="2024-12-13T13:28:47.071897329Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-dhlrm,Uid:d9a9389a-36c6-4bf6-9c2f-53cc83c3820e,Namespace:calico-system,Attempt:6,}"
Dec 13 13:28:47.201289 containerd[1511]: time="2024-12-13T13:28:47.201087382Z" level=error msg="Failed to destroy network for sandbox \"7fa85c2458e079f24c352b9099024ebe935262546913c414882d3967ba803fe5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 13:28:47.202568 containerd[1511]: time="2024-12-13T13:28:47.202245817Z" level=error msg="encountered an error cleaning up failed sandbox \"7fa85c2458e079f24c352b9099024ebe935262546913c414882d3967ba803fe5\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 13:28:47.204011 containerd[1511]: time="2024-12-13T13:28:47.202356966Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-xbbxq,Uid:0f4afe90-15b3-45bc-a79d-fc560555cfb7,Namespace:default,Attempt:1,} failed, error" error="failed to setup network for sandbox \"7fa85c2458e079f24c352b9099024ebe935262546913c414882d3967ba803fe5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 13:28:47.204515 kubelet[1929]: E1213 13:28:47.204316 1929 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7fa85c2458e079f24c352b9099024ebe935262546913c414882d3967ba803fe5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 13:28:47.204808 kubelet[1929]: E1213 13:28:47.204491 1929 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7fa85c2458e079f24c352b9099024ebe935262546913c414882d3967ba803fe5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-85f456d6dd-xbbxq"
Dec 13 13:28:47.204808 kubelet[1929]: E1213 13:28:47.204685 1929 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7fa85c2458e079f24c352b9099024ebe935262546913c414882d3967ba803fe5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-85f456d6dd-xbbxq"
Dec 13 13:28:47.205197 kubelet[1929]: E1213 13:28:47.205114 1929 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-85f456d6dd-xbbxq_default(0f4afe90-15b3-45bc-a79d-fc560555cfb7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-85f456d6dd-xbbxq_default(0f4afe90-15b3-45bc-a79d-fc560555cfb7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7fa85c2458e079f24c352b9099024ebe935262546913c414882d3967ba803fe5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-85f456d6dd-xbbxq" podUID="0f4afe90-15b3-45bc-a79d-fc560555cfb7"
Dec 13 13:28:47.281041 containerd[1511]: time="2024-12-13T13:28:47.280893430Z" level=error msg="Failed to destroy network for sandbox \"916517386d81374e8722f2f6a26f4538760d9f5c3c152592355fece441eae59a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 13:28:47.282255 containerd[1511]: time="2024-12-13T13:28:47.282027153Z" level=error msg="encountered an error cleaning up failed sandbox \"916517386d81374e8722f2f6a26f4538760d9f5c3c152592355fece441eae59a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 13:28:47.282255 containerd[1511]: time="2024-12-13T13:28:47.282125323Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-dhlrm,Uid:d9a9389a-36c6-4bf6-9c2f-53cc83c3820e,Namespace:calico-system,Attempt:6,} failed, error" error="failed to setup network for sandbox \"916517386d81374e8722f2f6a26f4538760d9f5c3c152592355fece441eae59a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 13:28:47.283597 kubelet[1929]: E1213 13:28:47.282912 1929 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"916517386d81374e8722f2f6a26f4538760d9f5c3c152592355fece441eae59a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Dec 13 13:28:47.283597 kubelet[1929]: E1213 13:28:47.283038 1929 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"916517386d81374e8722f2f6a26f4538760d9f5c3c152592355fece441eae59a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-dhlrm"
Dec 13 13:28:47.283597 kubelet[1929]: E1213 13:28:47.283070 1929 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"916517386d81374e8722f2f6a26f4538760d9f5c3c152592355fece441eae59a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-dhlrm"
Dec 13 13:28:47.283828 kubelet[1929]: E1213 13:28:47.283134 1929 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-dhlrm_calico-system(d9a9389a-36c6-4bf6-9c2f-53cc83c3820e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-dhlrm_calico-system(d9a9389a-36c6-4bf6-9c2f-53cc83c3820e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"916517386d81374e8722f2f6a26f4538760d9f5c3c152592355fece441eae59a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-dhlrm" podUID="d9a9389a-36c6-4bf6-9c2f-53cc83c3820e"
Dec 13 13:28:47.828434 kubelet[1929]: E1213 13:28:47.828040 1929 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 13:28:47.845278 kubelet[1929]: E1213 13:28:47.845174 1929 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 13:28:48.040043 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-7fa85c2458e079f24c352b9099024ebe935262546913c414882d3967ba803fe5-shm.mount: Deactivated successfully.
Dec 13 13:28:48.060426 kubelet[1929]: I1213 13:28:48.059878 1929 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7fa85c2458e079f24c352b9099024ebe935262546913c414882d3967ba803fe5"
Dec 13 13:28:48.061414 containerd[1511]: time="2024-12-13T13:28:48.061203611Z" level=info msg="StopPodSandbox for \"7fa85c2458e079f24c352b9099024ebe935262546913c414882d3967ba803fe5\""
Dec 13 13:28:48.104849 containerd[1511]: time="2024-12-13T13:28:48.102674814Z" level=info msg="Ensure that sandbox 7fa85c2458e079f24c352b9099024ebe935262546913c414882d3967ba803fe5 in task-service has been cleanup successfully"
Dec 13 13:28:48.105205 containerd[1511]: time="2024-12-13T13:28:48.105169776Z" level=info msg="TearDown network for sandbox \"7fa85c2458e079f24c352b9099024ebe935262546913c414882d3967ba803fe5\" successfully"
Dec 13 13:28:48.105325 containerd[1511]: time="2024-12-13T13:28:48.105300876Z" level=info msg="StopPodSandbox for \"7fa85c2458e079f24c352b9099024ebe935262546913c414882d3967ba803fe5\" returns successfully"
Dec 13 13:28:48.107509 systemd[1]: run-netns-cni\x2d9edcc5c5\x2d9613\x2d5a51\x2d22aa\x2d0727a8b69bb4.mount: Deactivated successfully.
Dec 13 13:28:48.114398 containerd[1511]: time="2024-12-13T13:28:48.113655230Z" level=info msg="StopPodSandbox for \"591ad9151431b0c85d03c28d91438f44a06c8d65f70133b616797bc4ce33be5d\""
Dec 13 13:28:48.114398 containerd[1511]: time="2024-12-13T13:28:48.113797058Z" level=info msg="TearDown network for sandbox \"591ad9151431b0c85d03c28d91438f44a06c8d65f70133b616797bc4ce33be5d\" successfully"
Dec 13 13:28:48.114398 containerd[1511]: time="2024-12-13T13:28:48.113862871Z" level=info msg="StopPodSandbox for \"591ad9151431b0c85d03c28d91438f44a06c8d65f70133b616797bc4ce33be5d\" returns successfully"
Dec 13 13:28:48.118109 containerd[1511]: time="2024-12-13T13:28:48.118078287Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-xbbxq,Uid:0f4afe90-15b3-45bc-a79d-fc560555cfb7,Namespace:default,Attempt:2,}"
Dec 13 13:28:48.143460 kubelet[1929]: I1213 13:28:48.141930 1929 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="916517386d81374e8722f2f6a26f4538760d9f5c3c152592355fece441eae59a"
Dec 13 13:28:48.151758 containerd[1511]: time="2024-12-13T13:28:48.151720054Z" level=info msg="StopPodSandbox for \"916517386d81374e8722f2f6a26f4538760d9f5c3c152592355fece441eae59a\""
Dec 13 13:28:48.152471 containerd[1511]: time="2024-12-13T13:28:48.152032676Z" level=info msg="Ensure that sandbox 916517386d81374e8722f2f6a26f4538760d9f5c3c152592355fece441eae59a in task-service has been cleanup successfully"
Dec 13 13:28:48.157896 systemd[1]: run-netns-cni\x2dac79d4e2\x2d855b\x2d1dfa\x2d5454\x2decf4d1f2764a.mount: Deactivated successfully.
Dec 13 13:28:48.160029 containerd[1511]: time="2024-12-13T13:28:48.159998059Z" level=info msg="TearDown network for sandbox \"916517386d81374e8722f2f6a26f4538760d9f5c3c152592355fece441eae59a\" successfully" Dec 13 13:28:48.160213 containerd[1511]: time="2024-12-13T13:28:48.160089117Z" level=info msg="StopPodSandbox for \"916517386d81374e8722f2f6a26f4538760d9f5c3c152592355fece441eae59a\" returns successfully" Dec 13 13:28:48.163364 containerd[1511]: time="2024-12-13T13:28:48.162887058Z" level=info msg="StopPodSandbox for \"7c3d8f86b9e4491808e2db8a5c789b09ddfbcabdfb430606207bafdefcfde2c5\"" Dec 13 13:28:48.163364 containerd[1511]: time="2024-12-13T13:28:48.163041242Z" level=info msg="TearDown network for sandbox \"7c3d8f86b9e4491808e2db8a5c789b09ddfbcabdfb430606207bafdefcfde2c5\" successfully" Dec 13 13:28:48.163364 containerd[1511]: time="2024-12-13T13:28:48.163061981Z" level=info msg="StopPodSandbox for \"7c3d8f86b9e4491808e2db8a5c789b09ddfbcabdfb430606207bafdefcfde2c5\" returns successfully" Dec 13 13:28:48.163578 containerd[1511]: time="2024-12-13T13:28:48.163450143Z" level=info msg="StopPodSandbox for \"9a543f875ae45cc991b7bc0fe141f0ee3869cdd6deb5212aec0da28faf666998\"" Dec 13 13:28:48.163638 containerd[1511]: time="2024-12-13T13:28:48.163602010Z" level=info msg="TearDown network for sandbox \"9a543f875ae45cc991b7bc0fe141f0ee3869cdd6deb5212aec0da28faf666998\" successfully" Dec 13 13:28:48.163638 containerd[1511]: time="2024-12-13T13:28:48.163623644Z" level=info msg="StopPodSandbox for \"9a543f875ae45cc991b7bc0fe141f0ee3869cdd6deb5212aec0da28faf666998\" returns successfully" Dec 13 13:28:48.166489 containerd[1511]: time="2024-12-13T13:28:48.165804855Z" level=info msg="StopPodSandbox for \"b3e76a379482abcaea3b805b9586e31607fb78966540100ba3130fb61547679c\"" Dec 13 13:28:48.166489 containerd[1511]: time="2024-12-13T13:28:48.165917488Z" level=info msg="TearDown network for sandbox \"b3e76a379482abcaea3b805b9586e31607fb78966540100ba3130fb61547679c\" successfully" Dec 
13 13:28:48.166489 containerd[1511]: time="2024-12-13T13:28:48.165936749Z" level=info msg="StopPodSandbox for \"b3e76a379482abcaea3b805b9586e31607fb78966540100ba3130fb61547679c\" returns successfully" Dec 13 13:28:48.166699 containerd[1511]: time="2024-12-13T13:28:48.166616753Z" level=info msg="StopPodSandbox for \"0bf54a32b4c693d35d3a15c60c5ed7af72de82ba4ca9e1f875a317d3564ae396\"" Dec 13 13:28:48.166764 containerd[1511]: time="2024-12-13T13:28:48.166746896Z" level=info msg="TearDown network for sandbox \"0bf54a32b4c693d35d3a15c60c5ed7af72de82ba4ca9e1f875a317d3564ae396\" successfully" Dec 13 13:28:48.166828 containerd[1511]: time="2024-12-13T13:28:48.166764862Z" level=info msg="StopPodSandbox for \"0bf54a32b4c693d35d3a15c60c5ed7af72de82ba4ca9e1f875a317d3564ae396\" returns successfully" Dec 13 13:28:48.168639 containerd[1511]: time="2024-12-13T13:28:48.168049254Z" level=info msg="StopPodSandbox for \"1be0200b1dcc06604d770fde094070a996dab926a13c943dad139929344c775d\"" Dec 13 13:28:48.168986 containerd[1511]: time="2024-12-13T13:28:48.168937620Z" level=info msg="TearDown network for sandbox \"1be0200b1dcc06604d770fde094070a996dab926a13c943dad139929344c775d\" successfully" Dec 13 13:28:48.169184 containerd[1511]: time="2024-12-13T13:28:48.168965020Z" level=info msg="StopPodSandbox for \"1be0200b1dcc06604d770fde094070a996dab926a13c943dad139929344c775d\" returns successfully" Dec 13 13:28:48.170394 containerd[1511]: time="2024-12-13T13:28:48.170214507Z" level=info msg="StopPodSandbox for \"a61227f7a37566b960694a22ddeb2becbc8da38c71932b3cf75a4174066eb180\"" Dec 13 13:28:48.170394 containerd[1511]: time="2024-12-13T13:28:48.170331742Z" level=info msg="TearDown network for sandbox \"a61227f7a37566b960694a22ddeb2becbc8da38c71932b3cf75a4174066eb180\" successfully" Dec 13 13:28:48.170394 containerd[1511]: time="2024-12-13T13:28:48.170351677Z" level=info msg="StopPodSandbox for \"a61227f7a37566b960694a22ddeb2becbc8da38c71932b3cf75a4174066eb180\" returns successfully" Dec 13 
13:28:48.171427 containerd[1511]: time="2024-12-13T13:28:48.171123400Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-dhlrm,Uid:d9a9389a-36c6-4bf6-9c2f-53cc83c3820e,Namespace:calico-system,Attempt:7,}" Dec 13 13:28:48.315148 containerd[1511]: time="2024-12-13T13:28:48.314962321Z" level=error msg="Failed to destroy network for sandbox \"6ed27bbd8c2d1d82fd1a295f3adc36ea23dab317116433a60ecbddf8860b00bf\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:28:48.317420 containerd[1511]: time="2024-12-13T13:28:48.316685681Z" level=error msg="encountered an error cleaning up failed sandbox \"6ed27bbd8c2d1d82fd1a295f3adc36ea23dab317116433a60ecbddf8860b00bf\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:28:48.317420 containerd[1511]: time="2024-12-13T13:28:48.316841201Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-dhlrm,Uid:d9a9389a-36c6-4bf6-9c2f-53cc83c3820e,Namespace:calico-system,Attempt:7,} failed, error" error="failed to setup network for sandbox \"6ed27bbd8c2d1d82fd1a295f3adc36ea23dab317116433a60ecbddf8860b00bf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:28:48.317599 kubelet[1929]: E1213 13:28:48.317246 1929 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6ed27bbd8c2d1d82fd1a295f3adc36ea23dab317116433a60ecbddf8860b00bf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:28:48.317599 kubelet[1929]: E1213 13:28:48.317321 1929 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6ed27bbd8c2d1d82fd1a295f3adc36ea23dab317116433a60ecbddf8860b00bf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-dhlrm" Dec 13 13:28:48.318508 kubelet[1929]: E1213 13:28:48.317353 1929 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6ed27bbd8c2d1d82fd1a295f3adc36ea23dab317116433a60ecbddf8860b00bf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-dhlrm" Dec 13 13:28:48.318508 kubelet[1929]: E1213 13:28:48.318138 1929 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-dhlrm_calico-system(d9a9389a-36c6-4bf6-9c2f-53cc83c3820e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-dhlrm_calico-system(d9a9389a-36c6-4bf6-9c2f-53cc83c3820e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6ed27bbd8c2d1d82fd1a295f3adc36ea23dab317116433a60ecbddf8860b00bf\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-dhlrm" podUID="d9a9389a-36c6-4bf6-9c2f-53cc83c3820e" Dec 13 13:28:48.323360 containerd[1511]: time="2024-12-13T13:28:48.321909896Z" level=error msg="Failed to destroy network for sandbox 
\"df86770c8d4cabcba91e258fdd3b2dea69cbb7ed567a30f8468717dcf595d74e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:28:48.323360 containerd[1511]: time="2024-12-13T13:28:48.322699776Z" level=error msg="encountered an error cleaning up failed sandbox \"df86770c8d4cabcba91e258fdd3b2dea69cbb7ed567a30f8468717dcf595d74e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:28:48.323360 containerd[1511]: time="2024-12-13T13:28:48.322775351Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-xbbxq,Uid:0f4afe90-15b3-45bc-a79d-fc560555cfb7,Namespace:default,Attempt:2,} failed, error" error="failed to setup network for sandbox \"df86770c8d4cabcba91e258fdd3b2dea69cbb7ed567a30f8468717dcf595d74e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:28:48.323607 kubelet[1929]: E1213 13:28:48.322981 1929 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"df86770c8d4cabcba91e258fdd3b2dea69cbb7ed567a30f8468717dcf595d74e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:28:48.323607 kubelet[1929]: E1213 13:28:48.323030 1929 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"df86770c8d4cabcba91e258fdd3b2dea69cbb7ed567a30f8468717dcf595d74e\": plugin type=\"calico\" 
failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-85f456d6dd-xbbxq" Dec 13 13:28:48.323607 kubelet[1929]: E1213 13:28:48.323057 1929 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"df86770c8d4cabcba91e258fdd3b2dea69cbb7ed567a30f8468717dcf595d74e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-85f456d6dd-xbbxq" Dec 13 13:28:48.323766 kubelet[1929]: E1213 13:28:48.323100 1929 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-85f456d6dd-xbbxq_default(0f4afe90-15b3-45bc-a79d-fc560555cfb7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-85f456d6dd-xbbxq_default(0f4afe90-15b3-45bc-a79d-fc560555cfb7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"df86770c8d4cabcba91e258fdd3b2dea69cbb7ed567a30f8468717dcf595d74e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-85f456d6dd-xbbxq" podUID="0f4afe90-15b3-45bc-a79d-fc560555cfb7" Dec 13 13:28:48.848693 kubelet[1929]: E1213 13:28:48.848470 1929 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:28:49.040920 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-df86770c8d4cabcba91e258fdd3b2dea69cbb7ed567a30f8468717dcf595d74e-shm.mount: Deactivated successfully. 
Dec 13 13:28:49.153162 kubelet[1929]: I1213 13:28:49.152953 1929 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="df86770c8d4cabcba91e258fdd3b2dea69cbb7ed567a30f8468717dcf595d74e" Dec 13 13:28:49.155745 containerd[1511]: time="2024-12-13T13:28:49.155667662Z" level=info msg="StopPodSandbox for \"df86770c8d4cabcba91e258fdd3b2dea69cbb7ed567a30f8468717dcf595d74e\"" Dec 13 13:28:49.157365 containerd[1511]: time="2024-12-13T13:28:49.157079864Z" level=info msg="Ensure that sandbox df86770c8d4cabcba91e258fdd3b2dea69cbb7ed567a30f8468717dcf595d74e in task-service has been cleanup successfully" Dec 13 13:28:49.157897 containerd[1511]: time="2024-12-13T13:28:49.157580492Z" level=info msg="TearDown network for sandbox \"df86770c8d4cabcba91e258fdd3b2dea69cbb7ed567a30f8468717dcf595d74e\" successfully" Dec 13 13:28:49.157897 containerd[1511]: time="2024-12-13T13:28:49.157622576Z" level=info msg="StopPodSandbox for \"df86770c8d4cabcba91e258fdd3b2dea69cbb7ed567a30f8468717dcf595d74e\" returns successfully" Dec 13 13:28:49.158740 containerd[1511]: time="2024-12-13T13:28:49.158219551Z" level=info msg="StopPodSandbox for \"7fa85c2458e079f24c352b9099024ebe935262546913c414882d3967ba803fe5\"" Dec 13 13:28:49.158740 containerd[1511]: time="2024-12-13T13:28:49.158342401Z" level=info msg="TearDown network for sandbox \"7fa85c2458e079f24c352b9099024ebe935262546913c414882d3967ba803fe5\" successfully" Dec 13 13:28:49.158740 containerd[1511]: time="2024-12-13T13:28:49.158361672Z" level=info msg="StopPodSandbox for \"7fa85c2458e079f24c352b9099024ebe935262546913c414882d3967ba803fe5\" returns successfully" Dec 13 13:28:49.159191 containerd[1511]: time="2024-12-13T13:28:49.159160319Z" level=info msg="StopPodSandbox for \"591ad9151431b0c85d03c28d91438f44a06c8d65f70133b616797bc4ce33be5d\"" Dec 13 13:28:49.159435 containerd[1511]: time="2024-12-13T13:28:49.159399938Z" level=info msg="TearDown network for sandbox 
\"591ad9151431b0c85d03c28d91438f44a06c8d65f70133b616797bc4ce33be5d\" successfully" Dec 13 13:28:49.159563 containerd[1511]: time="2024-12-13T13:28:49.159539609Z" level=info msg="StopPodSandbox for \"591ad9151431b0c85d03c28d91438f44a06c8d65f70133b616797bc4ce33be5d\" returns successfully" Dec 13 13:28:49.160741 containerd[1511]: time="2024-12-13T13:28:49.160712796Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-xbbxq,Uid:0f4afe90-15b3-45bc-a79d-fc560555cfb7,Namespace:default,Attempt:3,}" Dec 13 13:28:49.163346 systemd[1]: run-netns-cni\x2dedc33e25\x2dcc5a\x2d9a2a\x2defe0\x2d5728e5d3c590.mount: Deactivated successfully. Dec 13 13:28:49.165940 kubelet[1929]: I1213 13:28:49.165229 1929 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6ed27bbd8c2d1d82fd1a295f3adc36ea23dab317116433a60ecbddf8860b00bf" Dec 13 13:28:49.167158 containerd[1511]: time="2024-12-13T13:28:49.166936836Z" level=info msg="StopPodSandbox for \"6ed27bbd8c2d1d82fd1a295f3adc36ea23dab317116433a60ecbddf8860b00bf\"" Dec 13 13:28:49.167271 containerd[1511]: time="2024-12-13T13:28:49.167177778Z" level=info msg="Ensure that sandbox 6ed27bbd8c2d1d82fd1a295f3adc36ea23dab317116433a60ecbddf8860b00bf in task-service has been cleanup successfully" Dec 13 13:28:49.168496 containerd[1511]: time="2024-12-13T13:28:49.168458164Z" level=info msg="TearDown network for sandbox \"6ed27bbd8c2d1d82fd1a295f3adc36ea23dab317116433a60ecbddf8860b00bf\" successfully" Dec 13 13:28:49.168496 containerd[1511]: time="2024-12-13T13:28:49.168491505Z" level=info msg="StopPodSandbox for \"6ed27bbd8c2d1d82fd1a295f3adc36ea23dab317116433a60ecbddf8860b00bf\" returns successfully" Dec 13 13:28:49.170446 containerd[1511]: time="2024-12-13T13:28:49.170300930Z" level=info msg="StopPodSandbox for \"916517386d81374e8722f2f6a26f4538760d9f5c3c152592355fece441eae59a\"" Dec 13 13:28:49.171059 containerd[1511]: time="2024-12-13T13:28:49.170995188Z" level=info msg="TearDown network 
for sandbox \"916517386d81374e8722f2f6a26f4538760d9f5c3c152592355fece441eae59a\" successfully" Dec 13 13:28:49.171059 containerd[1511]: time="2024-12-13T13:28:49.171025186Z" level=info msg="StopPodSandbox for \"916517386d81374e8722f2f6a26f4538760d9f5c3c152592355fece441eae59a\" returns successfully" Dec 13 13:28:49.173715 containerd[1511]: time="2024-12-13T13:28:49.172605621Z" level=info msg="StopPodSandbox for \"7c3d8f86b9e4491808e2db8a5c789b09ddfbcabdfb430606207bafdefcfde2c5\"" Dec 13 13:28:49.173715 containerd[1511]: time="2024-12-13T13:28:49.172722755Z" level=info msg="TearDown network for sandbox \"7c3d8f86b9e4491808e2db8a5c789b09ddfbcabdfb430606207bafdefcfde2c5\" successfully" Dec 13 13:28:49.173715 containerd[1511]: time="2024-12-13T13:28:49.172762780Z" level=info msg="StopPodSandbox for \"7c3d8f86b9e4491808e2db8a5c789b09ddfbcabdfb430606207bafdefcfde2c5\" returns successfully" Dec 13 13:28:49.173715 containerd[1511]: time="2024-12-13T13:28:49.173079673Z" level=info msg="StopPodSandbox for \"9a543f875ae45cc991b7bc0fe141f0ee3869cdd6deb5212aec0da28faf666998\"" Dec 13 13:28:49.173715 containerd[1511]: time="2024-12-13T13:28:49.173177145Z" level=info msg="TearDown network for sandbox \"9a543f875ae45cc991b7bc0fe141f0ee3869cdd6deb5212aec0da28faf666998\" successfully" Dec 13 13:28:49.173715 containerd[1511]: time="2024-12-13T13:28:49.173195155Z" level=info msg="StopPodSandbox for \"9a543f875ae45cc991b7bc0fe141f0ee3869cdd6deb5212aec0da28faf666998\" returns successfully" Dec 13 13:28:49.171578 systemd[1]: run-netns-cni\x2d28111081\x2d7ed0\x2dea81\x2d638d\x2d59af605a1f51.mount: Deactivated successfully. 
Dec 13 13:28:49.174149 containerd[1511]: time="2024-12-13T13:28:49.173818196Z" level=info msg="StopPodSandbox for \"b3e76a379482abcaea3b805b9586e31607fb78966540100ba3130fb61547679c\"" Dec 13 13:28:49.174149 containerd[1511]: time="2024-12-13T13:28:49.173933960Z" level=info msg="TearDown network for sandbox \"b3e76a379482abcaea3b805b9586e31607fb78966540100ba3130fb61547679c\" successfully" Dec 13 13:28:49.174149 containerd[1511]: time="2024-12-13T13:28:49.173951267Z" level=info msg="StopPodSandbox for \"b3e76a379482abcaea3b805b9586e31607fb78966540100ba3130fb61547679c\" returns successfully" Dec 13 13:28:49.174589 containerd[1511]: time="2024-12-13T13:28:49.174339379Z" level=info msg="StopPodSandbox for \"0bf54a32b4c693d35d3a15c60c5ed7af72de82ba4ca9e1f875a317d3564ae396\"" Dec 13 13:28:49.174589 containerd[1511]: time="2024-12-13T13:28:49.174495412Z" level=info msg="TearDown network for sandbox \"0bf54a32b4c693d35d3a15c60c5ed7af72de82ba4ca9e1f875a317d3564ae396\" successfully" Dec 13 13:28:49.174589 containerd[1511]: time="2024-12-13T13:28:49.174514519Z" level=info msg="StopPodSandbox for \"0bf54a32b4c693d35d3a15c60c5ed7af72de82ba4ca9e1f875a317d3564ae396\" returns successfully" Dec 13 13:28:49.174992 containerd[1511]: time="2024-12-13T13:28:49.174894624Z" level=info msg="StopPodSandbox for \"1be0200b1dcc06604d770fde094070a996dab926a13c943dad139929344c775d\"" Dec 13 13:28:49.175056 containerd[1511]: time="2024-12-13T13:28:49.175037757Z" level=info msg="TearDown network for sandbox \"1be0200b1dcc06604d770fde094070a996dab926a13c943dad139929344c775d\" successfully" Dec 13 13:28:49.175410 containerd[1511]: time="2024-12-13T13:28:49.175056847Z" level=info msg="StopPodSandbox for \"1be0200b1dcc06604d770fde094070a996dab926a13c943dad139929344c775d\" returns successfully" Dec 13 13:28:49.175806 containerd[1511]: time="2024-12-13T13:28:49.175749392Z" level=info msg="StopPodSandbox for \"a61227f7a37566b960694a22ddeb2becbc8da38c71932b3cf75a4174066eb180\"" Dec 13 13:28:49.175924 
containerd[1511]: time="2024-12-13T13:28:49.175859359Z" level=info msg="TearDown network for sandbox \"a61227f7a37566b960694a22ddeb2becbc8da38c71932b3cf75a4174066eb180\" successfully" Dec 13 13:28:49.175924 containerd[1511]: time="2024-12-13T13:28:49.175897384Z" level=info msg="StopPodSandbox for \"a61227f7a37566b960694a22ddeb2becbc8da38c71932b3cf75a4174066eb180\" returns successfully" Dec 13 13:28:49.181310 containerd[1511]: time="2024-12-13T13:28:49.181266797Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-dhlrm,Uid:d9a9389a-36c6-4bf6-9c2f-53cc83c3820e,Namespace:calico-system,Attempt:8,}" Dec 13 13:28:49.361247 containerd[1511]: time="2024-12-13T13:28:49.361151625Z" level=error msg="Failed to destroy network for sandbox \"42197c7dbc813c5d67cadd1ca51959c94290812d3697aee8f9e75e29c773c95b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:28:49.362227 containerd[1511]: time="2024-12-13T13:28:49.362191691Z" level=error msg="encountered an error cleaning up failed sandbox \"42197c7dbc813c5d67cadd1ca51959c94290812d3697aee8f9e75e29c773c95b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:28:49.362469 containerd[1511]: time="2024-12-13T13:28:49.362353550Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-dhlrm,Uid:d9a9389a-36c6-4bf6-9c2f-53cc83c3820e,Namespace:calico-system,Attempt:8,} failed, error" error="failed to setup network for sandbox \"42197c7dbc813c5d67cadd1ca51959c94290812d3697aee8f9e75e29c773c95b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" Dec 13 13:28:49.363400 kubelet[1929]: E1213 13:28:49.362834 1929 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"42197c7dbc813c5d67cadd1ca51959c94290812d3697aee8f9e75e29c773c95b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:28:49.363400 kubelet[1929]: E1213 13:28:49.363007 1929 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"42197c7dbc813c5d67cadd1ca51959c94290812d3697aee8f9e75e29c773c95b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-dhlrm" Dec 13 13:28:49.363400 kubelet[1929]: E1213 13:28:49.363062 1929 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"42197c7dbc813c5d67cadd1ca51959c94290812d3697aee8f9e75e29c773c95b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-dhlrm" Dec 13 13:28:49.363621 kubelet[1929]: E1213 13:28:49.363151 1929 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-dhlrm_calico-system(d9a9389a-36c6-4bf6-9c2f-53cc83c3820e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-dhlrm_calico-system(d9a9389a-36c6-4bf6-9c2f-53cc83c3820e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"42197c7dbc813c5d67cadd1ca51959c94290812d3697aee8f9e75e29c773c95b\\\": plugin 
type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-dhlrm" podUID="d9a9389a-36c6-4bf6-9c2f-53cc83c3820e" Dec 13 13:28:49.372790 containerd[1511]: time="2024-12-13T13:28:49.372404690Z" level=error msg="Failed to destroy network for sandbox \"8de336bb24a8cb0086b8f572c9837fbcef606e51222ec77fed8af6cf43d59aba\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:28:49.373530 containerd[1511]: time="2024-12-13T13:28:49.373347985Z" level=error msg="encountered an error cleaning up failed sandbox \"8de336bb24a8cb0086b8f572c9837fbcef606e51222ec77fed8af6cf43d59aba\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:28:49.374596 containerd[1511]: time="2024-12-13T13:28:49.373965597Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-xbbxq,Uid:0f4afe90-15b3-45bc-a79d-fc560555cfb7,Namespace:default,Attempt:3,} failed, error" error="failed to setup network for sandbox \"8de336bb24a8cb0086b8f572c9837fbcef606e51222ec77fed8af6cf43d59aba\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:28:49.374768 kubelet[1929]: E1213 13:28:49.374191 1929 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8de336bb24a8cb0086b8f572c9837fbcef606e51222ec77fed8af6cf43d59aba\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such 
file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:28:49.374768 kubelet[1929]: E1213 13:28:49.374231 1929 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8de336bb24a8cb0086b8f572c9837fbcef606e51222ec77fed8af6cf43d59aba\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-85f456d6dd-xbbxq" Dec 13 13:28:49.374768 kubelet[1929]: E1213 13:28:49.374256 1929 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8de336bb24a8cb0086b8f572c9837fbcef606e51222ec77fed8af6cf43d59aba\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-85f456d6dd-xbbxq" Dec 13 13:28:49.374977 kubelet[1929]: E1213 13:28:49.374298 1929 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-85f456d6dd-xbbxq_default(0f4afe90-15b3-45bc-a79d-fc560555cfb7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-85f456d6dd-xbbxq_default(0f4afe90-15b3-45bc-a79d-fc560555cfb7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8de336bb24a8cb0086b8f572c9837fbcef606e51222ec77fed8af6cf43d59aba\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-85f456d6dd-xbbxq" podUID="0f4afe90-15b3-45bc-a79d-fc560555cfb7" Dec 13 13:28:49.849798 kubelet[1929]: E1213 13:28:49.849704 1929 file_linux.go:61] "Unable to 
read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:28:50.038418 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-8de336bb24a8cb0086b8f572c9837fbcef606e51222ec77fed8af6cf43d59aba-shm.mount: Deactivated successfully. Dec 13 13:28:50.175569 kubelet[1929]: I1213 13:28:50.174585 1929 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="42197c7dbc813c5d67cadd1ca51959c94290812d3697aee8f9e75e29c773c95b" Dec 13 13:28:50.176796 containerd[1511]: time="2024-12-13T13:28:50.176282408Z" level=info msg="StopPodSandbox for \"42197c7dbc813c5d67cadd1ca51959c94290812d3697aee8f9e75e29c773c95b\"" Dec 13 13:28:50.178226 containerd[1511]: time="2024-12-13T13:28:50.177541766Z" level=info msg="Ensure that sandbox 42197c7dbc813c5d67cadd1ca51959c94290812d3697aee8f9e75e29c773c95b in task-service has been cleanup successfully" Dec 13 13:28:50.181499 containerd[1511]: time="2024-12-13T13:28:50.181407133Z" level=info msg="TearDown network for sandbox \"42197c7dbc813c5d67cadd1ca51959c94290812d3697aee8f9e75e29c773c95b\" successfully" Dec 13 13:28:50.181499 containerd[1511]: time="2024-12-13T13:28:50.181448637Z" level=info msg="StopPodSandbox for \"42197c7dbc813c5d67cadd1ca51959c94290812d3697aee8f9e75e29c773c95b\" returns successfully" Dec 13 13:28:50.182072 systemd[1]: run-netns-cni\x2d41cc72cd\x2d393c\x2db945\x2dc2c2\x2d71a9a5f07ee3.mount: Deactivated successfully. 
Dec 13 13:28:50.183553 containerd[1511]: time="2024-12-13T13:28:50.183514024Z" level=info msg="StopPodSandbox for \"6ed27bbd8c2d1d82fd1a295f3adc36ea23dab317116433a60ecbddf8860b00bf\"" Dec 13 13:28:50.183733 containerd[1511]: time="2024-12-13T13:28:50.183642492Z" level=info msg="TearDown network for sandbox \"6ed27bbd8c2d1d82fd1a295f3adc36ea23dab317116433a60ecbddf8860b00bf\" successfully" Dec 13 13:28:50.183797 containerd[1511]: time="2024-12-13T13:28:50.183730438Z" level=info msg="StopPodSandbox for \"6ed27bbd8c2d1d82fd1a295f3adc36ea23dab317116433a60ecbddf8860b00bf\" returns successfully" Dec 13 13:28:50.185030 containerd[1511]: time="2024-12-13T13:28:50.184998358Z" level=info msg="StopPodSandbox for \"916517386d81374e8722f2f6a26f4538760d9f5c3c152592355fece441eae59a\"" Dec 13 13:28:50.185788 containerd[1511]: time="2024-12-13T13:28:50.185741542Z" level=info msg="TearDown network for sandbox \"916517386d81374e8722f2f6a26f4538760d9f5c3c152592355fece441eae59a\" successfully" Dec 13 13:28:50.186097 containerd[1511]: time="2024-12-13T13:28:50.185925457Z" level=info msg="StopPodSandbox for \"916517386d81374e8722f2f6a26f4538760d9f5c3c152592355fece441eae59a\" returns successfully" Dec 13 13:28:50.187392 containerd[1511]: time="2024-12-13T13:28:50.187341608Z" level=info msg="StopPodSandbox for \"7c3d8f86b9e4491808e2db8a5c789b09ddfbcabdfb430606207bafdefcfde2c5\"" Dec 13 13:28:50.187837 containerd[1511]: time="2024-12-13T13:28:50.187595243Z" level=info msg="TearDown network for sandbox \"7c3d8f86b9e4491808e2db8a5c789b09ddfbcabdfb430606207bafdefcfde2c5\" successfully" Dec 13 13:28:50.187837 containerd[1511]: time="2024-12-13T13:28:50.187622314Z" level=info msg="StopPodSandbox for \"7c3d8f86b9e4491808e2db8a5c789b09ddfbcabdfb430606207bafdefcfde2c5\" returns successfully" Dec 13 13:28:50.188092 kubelet[1929]: I1213 13:28:50.187901 1929 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="8de336bb24a8cb0086b8f572c9837fbcef606e51222ec77fed8af6cf43d59aba" Dec 13 13:28:50.189645 containerd[1511]: time="2024-12-13T13:28:50.189605054Z" level=info msg="StopPodSandbox for \"9a543f875ae45cc991b7bc0fe141f0ee3869cdd6deb5212aec0da28faf666998\"" Dec 13 13:28:50.190529 containerd[1511]: time="2024-12-13T13:28:50.189874895Z" level=info msg="TearDown network for sandbox \"9a543f875ae45cc991b7bc0fe141f0ee3869cdd6deb5212aec0da28faf666998\" successfully" Dec 13 13:28:50.190529 containerd[1511]: time="2024-12-13T13:28:50.189902466Z" level=info msg="StopPodSandbox for \"9a543f875ae45cc991b7bc0fe141f0ee3869cdd6deb5212aec0da28faf666998\" returns successfully" Dec 13 13:28:50.190529 containerd[1511]: time="2024-12-13T13:28:50.190093394Z" level=info msg="StopPodSandbox for \"8de336bb24a8cb0086b8f572c9837fbcef606e51222ec77fed8af6cf43d59aba\"" Dec 13 13:28:50.190529 containerd[1511]: time="2024-12-13T13:28:50.190324358Z" level=info msg="Ensure that sandbox 8de336bb24a8cb0086b8f572c9837fbcef606e51222ec77fed8af6cf43d59aba in task-service has been cleanup successfully" Dec 13 13:28:50.191531 containerd[1511]: time="2024-12-13T13:28:50.191503408Z" level=info msg="TearDown network for sandbox \"8de336bb24a8cb0086b8f572c9837fbcef606e51222ec77fed8af6cf43d59aba\" successfully" Dec 13 13:28:50.191697 containerd[1511]: time="2024-12-13T13:28:50.191670876Z" level=info msg="StopPodSandbox for \"8de336bb24a8cb0086b8f572c9837fbcef606e51222ec77fed8af6cf43d59aba\" returns successfully" Dec 13 13:28:50.193448 containerd[1511]: time="2024-12-13T13:28:50.193416205Z" level=info msg="StopPodSandbox for \"b3e76a379482abcaea3b805b9586e31607fb78966540100ba3130fb61547679c\"" Dec 13 13:28:50.193750 containerd[1511]: time="2024-12-13T13:28:50.193702049Z" level=info msg="TearDown network for sandbox \"b3e76a379482abcaea3b805b9586e31607fb78966540100ba3130fb61547679c\" successfully" Dec 13 13:28:50.194165 containerd[1511]: time="2024-12-13T13:28:50.194010728Z" level=info msg="StopPodSandbox 
for \"b3e76a379482abcaea3b805b9586e31607fb78966540100ba3130fb61547679c\" returns successfully" Dec 13 13:28:50.194445 containerd[1511]: time="2024-12-13T13:28:50.194141080Z" level=info msg="StopPodSandbox for \"df86770c8d4cabcba91e258fdd3b2dea69cbb7ed567a30f8468717dcf595d74e\"" Dec 13 13:28:50.195056 containerd[1511]: time="2024-12-13T13:28:50.195027021Z" level=info msg="TearDown network for sandbox \"df86770c8d4cabcba91e258fdd3b2dea69cbb7ed567a30f8468717dcf595d74e\" successfully" Dec 13 13:28:50.195589 containerd[1511]: time="2024-12-13T13:28:50.195409277Z" level=info msg="StopPodSandbox for \"df86770c8d4cabcba91e258fdd3b2dea69cbb7ed567a30f8468717dcf595d74e\" returns successfully" Dec 13 13:28:50.196226 systemd[1]: run-netns-cni\x2dd348952a\x2d36a0\x2db9bf\x2d5187\x2d7f430aa8a779.mount: Deactivated successfully. Dec 13 13:28:50.197678 containerd[1511]: time="2024-12-13T13:28:50.197170424Z" level=info msg="StopPodSandbox for \"0bf54a32b4c693d35d3a15c60c5ed7af72de82ba4ca9e1f875a317d3564ae396\"" Dec 13 13:28:50.197678 containerd[1511]: time="2024-12-13T13:28:50.197285541Z" level=info msg="TearDown network for sandbox \"0bf54a32b4c693d35d3a15c60c5ed7af72de82ba4ca9e1f875a317d3564ae396\" successfully" Dec 13 13:28:50.197678 containerd[1511]: time="2024-12-13T13:28:50.197303903Z" level=info msg="StopPodSandbox for \"0bf54a32b4c693d35d3a15c60c5ed7af72de82ba4ca9e1f875a317d3564ae396\" returns successfully" Dec 13 13:28:50.197678 containerd[1511]: time="2024-12-13T13:28:50.197427773Z" level=info msg="StopPodSandbox for \"7fa85c2458e079f24c352b9099024ebe935262546913c414882d3967ba803fe5\"" Dec 13 13:28:50.197678 containerd[1511]: time="2024-12-13T13:28:50.197556558Z" level=info msg="TearDown network for sandbox \"7fa85c2458e079f24c352b9099024ebe935262546913c414882d3967ba803fe5\" successfully" Dec 13 13:28:50.197678 containerd[1511]: time="2024-12-13T13:28:50.197575996Z" level=info msg="StopPodSandbox for \"7fa85c2458e079f24c352b9099024ebe935262546913c414882d3967ba803fe5\" 
returns successfully" Dec 13 13:28:50.198451 containerd[1511]: time="2024-12-13T13:28:50.198127943Z" level=info msg="StopPodSandbox for \"1be0200b1dcc06604d770fde094070a996dab926a13c943dad139929344c775d\"" Dec 13 13:28:50.198451 containerd[1511]: time="2024-12-13T13:28:50.198251715Z" level=info msg="TearDown network for sandbox \"1be0200b1dcc06604d770fde094070a996dab926a13c943dad139929344c775d\" successfully" Dec 13 13:28:50.198451 containerd[1511]: time="2024-12-13T13:28:50.198271537Z" level=info msg="StopPodSandbox for \"1be0200b1dcc06604d770fde094070a996dab926a13c943dad139929344c775d\" returns successfully" Dec 13 13:28:50.198451 containerd[1511]: time="2024-12-13T13:28:50.198350963Z" level=info msg="StopPodSandbox for \"591ad9151431b0c85d03c28d91438f44a06c8d65f70133b616797bc4ce33be5d\"" Dec 13 13:28:50.199694 containerd[1511]: time="2024-12-13T13:28:50.199631601Z" level=info msg="TearDown network for sandbox \"591ad9151431b0c85d03c28d91438f44a06c8d65f70133b616797bc4ce33be5d\" successfully" Dec 13 13:28:50.199694 containerd[1511]: time="2024-12-13T13:28:50.199672345Z" level=info msg="StopPodSandbox for \"591ad9151431b0c85d03c28d91438f44a06c8d65f70133b616797bc4ce33be5d\" returns successfully" Dec 13 13:28:50.199970 containerd[1511]: time="2024-12-13T13:28:50.199872893Z" level=info msg="StopPodSandbox for \"a61227f7a37566b960694a22ddeb2becbc8da38c71932b3cf75a4174066eb180\"" Dec 13 13:28:50.200107 containerd[1511]: time="2024-12-13T13:28:50.199980430Z" level=info msg="TearDown network for sandbox \"a61227f7a37566b960694a22ddeb2becbc8da38c71932b3cf75a4174066eb180\" successfully" Dec 13 13:28:50.200107 containerd[1511]: time="2024-12-13T13:28:50.199998591Z" level=info msg="StopPodSandbox for \"a61227f7a37566b960694a22ddeb2becbc8da38c71932b3cf75a4174066eb180\" returns successfully" Dec 13 13:28:50.200980 containerd[1511]: time="2024-12-13T13:28:50.200883223Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:csi-node-driver-dhlrm,Uid:d9a9389a-36c6-4bf6-9c2f-53cc83c3820e,Namespace:calico-system,Attempt:9,}" Dec 13 13:28:50.201937 containerd[1511]: time="2024-12-13T13:28:50.201675100Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-xbbxq,Uid:0f4afe90-15b3-45bc-a79d-fc560555cfb7,Namespace:default,Attempt:4,}" Dec 13 13:28:50.404852 containerd[1511]: time="2024-12-13T13:28:50.404628557Z" level=error msg="Failed to destroy network for sandbox \"aba91ac9a086da8fd01a24cf9941c354cf1078a88e1638d77aeb8f6fed182941\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:28:50.411395 containerd[1511]: time="2024-12-13T13:28:50.408627071Z" level=error msg="encountered an error cleaning up failed sandbox \"aba91ac9a086da8fd01a24cf9941c354cf1078a88e1638d77aeb8f6fed182941\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:28:50.411395 containerd[1511]: time="2024-12-13T13:28:50.408732683Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-xbbxq,Uid:0f4afe90-15b3-45bc-a79d-fc560555cfb7,Namespace:default,Attempt:4,} failed, error" error="failed to setup network for sandbox \"aba91ac9a086da8fd01a24cf9941c354cf1078a88e1638d77aeb8f6fed182941\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:28:50.411614 kubelet[1929]: E1213 13:28:50.409077 1929 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"aba91ac9a086da8fd01a24cf9941c354cf1078a88e1638d77aeb8f6fed182941\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:28:50.411614 kubelet[1929]: E1213 13:28:50.409174 1929 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"aba91ac9a086da8fd01a24cf9941c354cf1078a88e1638d77aeb8f6fed182941\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-85f456d6dd-xbbxq" Dec 13 13:28:50.411614 kubelet[1929]: E1213 13:28:50.409219 1929 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"aba91ac9a086da8fd01a24cf9941c354cf1078a88e1638d77aeb8f6fed182941\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-85f456d6dd-xbbxq" Dec 13 13:28:50.411872 kubelet[1929]: E1213 13:28:50.409273 1929 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-85f456d6dd-xbbxq_default(0f4afe90-15b3-45bc-a79d-fc560555cfb7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-85f456d6dd-xbbxq_default(0f4afe90-15b3-45bc-a79d-fc560555cfb7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"aba91ac9a086da8fd01a24cf9941c354cf1078a88e1638d77aeb8f6fed182941\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="default/nginx-deployment-85f456d6dd-xbbxq" podUID="0f4afe90-15b3-45bc-a79d-fc560555cfb7" Dec 13 13:28:50.422306 containerd[1511]: time="2024-12-13T13:28:50.421353352Z" level=error msg="Failed to destroy network for sandbox \"0bae530531ea55da67ac247c565a436edd3fdc0b0801c1a4f2f426b24487a99e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:28:50.422306 containerd[1511]: time="2024-12-13T13:28:50.422045838Z" level=error msg="encountered an error cleaning up failed sandbox \"0bae530531ea55da67ac247c565a436edd3fdc0b0801c1a4f2f426b24487a99e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:28:50.422306 containerd[1511]: time="2024-12-13T13:28:50.422113663Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-dhlrm,Uid:d9a9389a-36c6-4bf6-9c2f-53cc83c3820e,Namespace:calico-system,Attempt:9,} failed, error" error="failed to setup network for sandbox \"0bae530531ea55da67ac247c565a436edd3fdc0b0801c1a4f2f426b24487a99e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:28:50.422572 kubelet[1929]: E1213 13:28:50.422327 1929 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0bae530531ea55da67ac247c565a436edd3fdc0b0801c1a4f2f426b24487a99e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:28:50.422671 kubelet[1929]: E1213 13:28:50.422621 1929 
kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0bae530531ea55da67ac247c565a436edd3fdc0b0801c1a4f2f426b24487a99e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-dhlrm" Dec 13 13:28:50.422770 kubelet[1929]: E1213 13:28:50.422680 1929 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0bae530531ea55da67ac247c565a436edd3fdc0b0801c1a4f2f426b24487a99e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-dhlrm" Dec 13 13:28:50.423089 kubelet[1929]: E1213 13:28:50.423047 1929 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-dhlrm_calico-system(d9a9389a-36c6-4bf6-9c2f-53cc83c3820e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-dhlrm_calico-system(d9a9389a-36c6-4bf6-9c2f-53cc83c3820e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0bae530531ea55da67ac247c565a436edd3fdc0b0801c1a4f2f426b24487a99e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-dhlrm" podUID="d9a9389a-36c6-4bf6-9c2f-53cc83c3820e" Dec 13 13:28:50.851175 kubelet[1929]: E1213 13:28:50.851083 1929 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:28:51.038405 systemd[1]: 
run-containerd-io.containerd.grpc.v1.cri-sandboxes-0bae530531ea55da67ac247c565a436edd3fdc0b0801c1a4f2f426b24487a99e-shm.mount: Deactivated successfully. Dec 13 13:28:51.195950 kubelet[1929]: I1213 13:28:51.195145 1929 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0bae530531ea55da67ac247c565a436edd3fdc0b0801c1a4f2f426b24487a99e" Dec 13 13:28:51.197030 containerd[1511]: time="2024-12-13T13:28:51.196968179Z" level=info msg="StopPodSandbox for \"0bae530531ea55da67ac247c565a436edd3fdc0b0801c1a4f2f426b24487a99e\"" Dec 13 13:28:51.197657 containerd[1511]: time="2024-12-13T13:28:51.197270947Z" level=info msg="Ensure that sandbox 0bae530531ea55da67ac247c565a436edd3fdc0b0801c1a4f2f426b24487a99e in task-service has been cleanup successfully" Dec 13 13:28:51.200139 systemd[1]: run-netns-cni\x2dd5244e71\x2dd867\x2d5076\x2d8f60\x2d63ec292e4b2d.mount: Deactivated successfully. Dec 13 13:28:51.201605 containerd[1511]: time="2024-12-13T13:28:51.201501317Z" level=info msg="TearDown network for sandbox \"0bae530531ea55da67ac247c565a436edd3fdc0b0801c1a4f2f426b24487a99e\" successfully" Dec 13 13:28:51.201605 containerd[1511]: time="2024-12-13T13:28:51.201531807Z" level=info msg="StopPodSandbox for \"0bae530531ea55da67ac247c565a436edd3fdc0b0801c1a4f2f426b24487a99e\" returns successfully" Dec 13 13:28:51.202190 containerd[1511]: time="2024-12-13T13:28:51.202154360Z" level=info msg="StopPodSandbox for \"42197c7dbc813c5d67cadd1ca51959c94290812d3697aee8f9e75e29c773c95b\"" Dec 13 13:28:51.202289 containerd[1511]: time="2024-12-13T13:28:51.202259271Z" level=info msg="TearDown network for sandbox \"42197c7dbc813c5d67cadd1ca51959c94290812d3697aee8f9e75e29c773c95b\" successfully" Dec 13 13:28:51.202387 containerd[1511]: time="2024-12-13T13:28:51.202290552Z" level=info msg="StopPodSandbox for \"42197c7dbc813c5d67cadd1ca51959c94290812d3697aee8f9e75e29c773c95b\" returns successfully" Dec 13 13:28:51.204228 containerd[1511]: time="2024-12-13T13:28:51.203716117Z" 
level=info msg="StopPodSandbox for \"6ed27bbd8c2d1d82fd1a295f3adc36ea23dab317116433a60ecbddf8860b00bf\"" Dec 13 13:28:51.204228 containerd[1511]: time="2024-12-13T13:28:51.203821030Z" level=info msg="TearDown network for sandbox \"6ed27bbd8c2d1d82fd1a295f3adc36ea23dab317116433a60ecbddf8860b00bf\" successfully" Dec 13 13:28:51.204228 containerd[1511]: time="2024-12-13T13:28:51.203839190Z" level=info msg="StopPodSandbox for \"6ed27bbd8c2d1d82fd1a295f3adc36ea23dab317116433a60ecbddf8860b00bf\" returns successfully" Dec 13 13:28:51.204683 containerd[1511]: time="2024-12-13T13:28:51.204639139Z" level=info msg="StopPodSandbox for \"916517386d81374e8722f2f6a26f4538760d9f5c3c152592355fece441eae59a\"" Dec 13 13:28:51.204802 containerd[1511]: time="2024-12-13T13:28:51.204777359Z" level=info msg="TearDown network for sandbox \"916517386d81374e8722f2f6a26f4538760d9f5c3c152592355fece441eae59a\" successfully" Dec 13 13:28:51.204878 containerd[1511]: time="2024-12-13T13:28:51.204805033Z" level=info msg="StopPodSandbox for \"916517386d81374e8722f2f6a26f4538760d9f5c3c152592355fece441eae59a\" returns successfully" Dec 13 13:28:51.206552 containerd[1511]: time="2024-12-13T13:28:51.206511149Z" level=info msg="StopPodSandbox for \"7c3d8f86b9e4491808e2db8a5c789b09ddfbcabdfb430606207bafdefcfde2c5\"" Dec 13 13:28:51.207014 containerd[1511]: time="2024-12-13T13:28:51.206741165Z" level=info msg="TearDown network for sandbox \"7c3d8f86b9e4491808e2db8a5c789b09ddfbcabdfb430606207bafdefcfde2c5\" successfully" Dec 13 13:28:51.207014 containerd[1511]: time="2024-12-13T13:28:51.206765630Z" level=info msg="StopPodSandbox for \"7c3d8f86b9e4491808e2db8a5c789b09ddfbcabdfb430606207bafdefcfde2c5\" returns successfully" Dec 13 13:28:51.207515 containerd[1511]: time="2024-12-13T13:28:51.207485247Z" level=info msg="StopPodSandbox for \"9a543f875ae45cc991b7bc0fe141f0ee3869cdd6deb5212aec0da28faf666998\"" Dec 13 13:28:51.207656 containerd[1511]: time="2024-12-13T13:28:51.207619713Z" level=info msg="TearDown 
network for sandbox \"9a543f875ae45cc991b7bc0fe141f0ee3869cdd6deb5212aec0da28faf666998\" successfully" Dec 13 13:28:51.207727 containerd[1511]: time="2024-12-13T13:28:51.207656348Z" level=info msg="StopPodSandbox for \"9a543f875ae45cc991b7bc0fe141f0ee3869cdd6deb5212aec0da28faf666998\" returns successfully" Dec 13 13:28:51.207997 kubelet[1929]: I1213 13:28:51.207970 1929 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="aba91ac9a086da8fd01a24cf9941c354cf1078a88e1638d77aeb8f6fed182941" Dec 13 13:28:51.209560 containerd[1511]: time="2024-12-13T13:28:51.209452869Z" level=info msg="StopPodSandbox for \"aba91ac9a086da8fd01a24cf9941c354cf1078a88e1638d77aeb8f6fed182941\"" Dec 13 13:28:51.210615 containerd[1511]: time="2024-12-13T13:28:51.210571805Z" level=info msg="Ensure that sandbox aba91ac9a086da8fd01a24cf9941c354cf1078a88e1638d77aeb8f6fed182941 in task-service has been cleanup successfully" Dec 13 13:28:51.212793 systemd[1]: run-netns-cni\x2d28112345\x2d111d\x2daea9\x2d8ae2\x2d99d661c95d0d.mount: Deactivated successfully. 
Dec 13 13:28:51.214422 containerd[1511]: time="2024-12-13T13:28:51.214346349Z" level=info msg="TearDown network for sandbox \"aba91ac9a086da8fd01a24cf9941c354cf1078a88e1638d77aeb8f6fed182941\" successfully" Dec 13 13:28:51.215159 containerd[1511]: time="2024-12-13T13:28:51.214410714Z" level=info msg="StopPodSandbox for \"aba91ac9a086da8fd01a24cf9941c354cf1078a88e1638d77aeb8f6fed182941\" returns successfully" Dec 13 13:28:51.215159 containerd[1511]: time="2024-12-13T13:28:51.214964835Z" level=info msg="StopPodSandbox for \"b3e76a379482abcaea3b805b9586e31607fb78966540100ba3130fb61547679c\"" Dec 13 13:28:51.215268 containerd[1511]: time="2024-12-13T13:28:51.215238404Z" level=info msg="TearDown network for sandbox \"b3e76a379482abcaea3b805b9586e31607fb78966540100ba3130fb61547679c\" successfully" Dec 13 13:28:51.215268 containerd[1511]: time="2024-12-13T13:28:51.215259009Z" level=info msg="StopPodSandbox for \"b3e76a379482abcaea3b805b9586e31607fb78966540100ba3130fb61547679c\" returns successfully" Dec 13 13:28:51.216634 containerd[1511]: time="2024-12-13T13:28:51.216592035Z" level=info msg="StopPodSandbox for \"8de336bb24a8cb0086b8f572c9837fbcef606e51222ec77fed8af6cf43d59aba\"" Dec 13 13:28:51.216751 containerd[1511]: time="2024-12-13T13:28:51.216718200Z" level=info msg="TearDown network for sandbox \"8de336bb24a8cb0086b8f572c9837fbcef606e51222ec77fed8af6cf43d59aba\" successfully" Dec 13 13:28:51.216751 containerd[1511]: time="2024-12-13T13:28:51.216744523Z" level=info msg="StopPodSandbox for \"8de336bb24a8cb0086b8f572c9837fbcef606e51222ec77fed8af6cf43d59aba\" returns successfully" Dec 13 13:28:51.216869 containerd[1511]: time="2024-12-13T13:28:51.216815437Z" level=info msg="StopPodSandbox for \"0bf54a32b4c693d35d3a15c60c5ed7af72de82ba4ca9e1f875a317d3564ae396\"" Dec 13 13:28:51.216952 containerd[1511]: time="2024-12-13T13:28:51.216901918Z" level=info msg="TearDown network for sandbox \"0bf54a32b4c693d35d3a15c60c5ed7af72de82ba4ca9e1f875a317d3564ae396\" successfully" Dec 
13 13:28:51.217046 containerd[1511]: time="2024-12-13T13:28:51.216950440Z" level=info msg="StopPodSandbox for \"0bf54a32b4c693d35d3a15c60c5ed7af72de82ba4ca9e1f875a317d3564ae396\" returns successfully" Dec 13 13:28:51.219117 containerd[1511]: time="2024-12-13T13:28:51.219087102Z" level=info msg="StopPodSandbox for \"df86770c8d4cabcba91e258fdd3b2dea69cbb7ed567a30f8468717dcf595d74e\"" Dec 13 13:28:51.219215 containerd[1511]: time="2024-12-13T13:28:51.219189637Z" level=info msg="TearDown network for sandbox \"df86770c8d4cabcba91e258fdd3b2dea69cbb7ed567a30f8468717dcf595d74e\" successfully" Dec 13 13:28:51.219316 containerd[1511]: time="2024-12-13T13:28:51.219213909Z" level=info msg="StopPodSandbox for \"df86770c8d4cabcba91e258fdd3b2dea69cbb7ed567a30f8468717dcf595d74e\" returns successfully" Dec 13 13:28:51.219316 containerd[1511]: time="2024-12-13T13:28:51.219287666Z" level=info msg="StopPodSandbox for \"1be0200b1dcc06604d770fde094070a996dab926a13c943dad139929344c775d\"" Dec 13 13:28:51.219445 containerd[1511]: time="2024-12-13T13:28:51.219412585Z" level=info msg="TearDown network for sandbox \"1be0200b1dcc06604d770fde094070a996dab926a13c943dad139929344c775d\" successfully" Dec 13 13:28:51.219445 containerd[1511]: time="2024-12-13T13:28:51.219431729Z" level=info msg="StopPodSandbox for \"1be0200b1dcc06604d770fde094070a996dab926a13c943dad139929344c775d\" returns successfully" Dec 13 13:28:51.220353 containerd[1511]: time="2024-12-13T13:28:51.219962204Z" level=info msg="StopPodSandbox for \"7fa85c2458e079f24c352b9099024ebe935262546913c414882d3967ba803fe5\"" Dec 13 13:28:51.220353 containerd[1511]: time="2024-12-13T13:28:51.220068821Z" level=info msg="TearDown network for sandbox \"7fa85c2458e079f24c352b9099024ebe935262546913c414882d3967ba803fe5\" successfully" Dec 13 13:28:51.220353 containerd[1511]: time="2024-12-13T13:28:51.220088319Z" level=info msg="StopPodSandbox for \"7fa85c2458e079f24c352b9099024ebe935262546913c414882d3967ba803fe5\" returns successfully" Dec 13 
13:28:51.220778 containerd[1511]: time="2024-12-13T13:28:51.220741707Z" level=info msg="StopPodSandbox for \"591ad9151431b0c85d03c28d91438f44a06c8d65f70133b616797bc4ce33be5d\"" Dec 13 13:28:51.220979 containerd[1511]: time="2024-12-13T13:28:51.220951766Z" level=info msg="TearDown network for sandbox \"591ad9151431b0c85d03c28d91438f44a06c8d65f70133b616797bc4ce33be5d\" successfully" Dec 13 13:28:51.221089 containerd[1511]: time="2024-12-13T13:28:51.221063699Z" level=info msg="StopPodSandbox for \"591ad9151431b0c85d03c28d91438f44a06c8d65f70133b616797bc4ce33be5d\" returns successfully" Dec 13 13:28:51.221253 containerd[1511]: time="2024-12-13T13:28:51.221226311Z" level=info msg="StopPodSandbox for \"a61227f7a37566b960694a22ddeb2becbc8da38c71932b3cf75a4174066eb180\"" Dec 13 13:28:51.221968 containerd[1511]: time="2024-12-13T13:28:51.221446631Z" level=info msg="TearDown network for sandbox \"a61227f7a37566b960694a22ddeb2becbc8da38c71932b3cf75a4174066eb180\" successfully" Dec 13 13:28:51.221968 containerd[1511]: time="2024-12-13T13:28:51.221469747Z" level=info msg="StopPodSandbox for \"a61227f7a37566b960694a22ddeb2becbc8da38c71932b3cf75a4174066eb180\" returns successfully" Dec 13 13:28:51.222359 containerd[1511]: time="2024-12-13T13:28:51.222328728Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-xbbxq,Uid:0f4afe90-15b3-45bc-a79d-fc560555cfb7,Namespace:default,Attempt:5,}" Dec 13 13:28:51.224380 containerd[1511]: time="2024-12-13T13:28:51.224328596Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-dhlrm,Uid:d9a9389a-36c6-4bf6-9c2f-53cc83c3820e,Namespace:calico-system,Attempt:10,}" Dec 13 13:28:51.454341 containerd[1511]: time="2024-12-13T13:28:51.453773849Z" level=error msg="Failed to destroy network for sandbox \"e1e20ee54d22cab850d16600e38553b21a2f683c14719334033ec82e75ba71c1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:28:51.458245 containerd[1511]: time="2024-12-13T13:28:51.457923620Z" level=error msg="encountered an error cleaning up failed sandbox \"e1e20ee54d22cab850d16600e38553b21a2f683c14719334033ec82e75ba71c1\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:28:51.458731 containerd[1511]: time="2024-12-13T13:28:51.458683145Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-xbbxq,Uid:0f4afe90-15b3-45bc-a79d-fc560555cfb7,Namespace:default,Attempt:5,} failed, error" error="failed to setup network for sandbox \"e1e20ee54d22cab850d16600e38553b21a2f683c14719334033ec82e75ba71c1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:28:51.460655 kubelet[1929]: E1213 13:28:51.459709 1929 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e1e20ee54d22cab850d16600e38553b21a2f683c14719334033ec82e75ba71c1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:28:51.460655 kubelet[1929]: E1213 13:28:51.459830 1929 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e1e20ee54d22cab850d16600e38553b21a2f683c14719334033ec82e75ba71c1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="default/nginx-deployment-85f456d6dd-xbbxq" Dec 13 13:28:51.460655 kubelet[1929]: E1213 13:28:51.459862 1929 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e1e20ee54d22cab850d16600e38553b21a2f683c14719334033ec82e75ba71c1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-85f456d6dd-xbbxq" Dec 13 13:28:51.460965 kubelet[1929]: E1213 13:28:51.459969 1929 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-85f456d6dd-xbbxq_default(0f4afe90-15b3-45bc-a79d-fc560555cfb7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-85f456d6dd-xbbxq_default(0f4afe90-15b3-45bc-a79d-fc560555cfb7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e1e20ee54d22cab850d16600e38553b21a2f683c14719334033ec82e75ba71c1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-85f456d6dd-xbbxq" podUID="0f4afe90-15b3-45bc-a79d-fc560555cfb7" Dec 13 13:28:51.465210 containerd[1511]: time="2024-12-13T13:28:51.465123338Z" level=error msg="Failed to destroy network for sandbox \"50ad52cdbff79cc0f3570b9b8ee04f1ccab000cab884c9d4bcdbdee0897ee3b1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:28:51.466460 containerd[1511]: time="2024-12-13T13:28:51.466412033Z" level=error msg="encountered an error cleaning up failed sandbox \"50ad52cdbff79cc0f3570b9b8ee04f1ccab000cab884c9d4bcdbdee0897ee3b1\", marking sandbox state as 
SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:28:51.467765 containerd[1511]: time="2024-12-13T13:28:51.467699442Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-dhlrm,Uid:d9a9389a-36c6-4bf6-9c2f-53cc83c3820e,Namespace:calico-system,Attempt:10,} failed, error" error="failed to setup network for sandbox \"50ad52cdbff79cc0f3570b9b8ee04f1ccab000cab884c9d4bcdbdee0897ee3b1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:28:51.469992 kubelet[1929]: E1213 13:28:51.469550 1929 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"50ad52cdbff79cc0f3570b9b8ee04f1ccab000cab884c9d4bcdbdee0897ee3b1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Dec 13 13:28:51.469992 kubelet[1929]: E1213 13:28:51.469600 1929 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"50ad52cdbff79cc0f3570b9b8ee04f1ccab000cab884c9d4bcdbdee0897ee3b1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-dhlrm" Dec 13 13:28:51.469992 kubelet[1929]: E1213 13:28:51.469629 1929 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"50ad52cdbff79cc0f3570b9b8ee04f1ccab000cab884c9d4bcdbdee0897ee3b1\": plugin type=\"calico\" failed (add): 
stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-dhlrm" Dec 13 13:28:51.470185 kubelet[1929]: E1213 13:28:51.469752 1929 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-dhlrm_calico-system(d9a9389a-36c6-4bf6-9c2f-53cc83c3820e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-dhlrm_calico-system(d9a9389a-36c6-4bf6-9c2f-53cc83c3820e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"50ad52cdbff79cc0f3570b9b8ee04f1ccab000cab884c9d4bcdbdee0897ee3b1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-dhlrm" podUID="d9a9389a-36c6-4bf6-9c2f-53cc83c3820e" Dec 13 13:28:51.633213 containerd[1511]: time="2024-12-13T13:28:51.633143005Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:28:51.634401 containerd[1511]: time="2024-12-13T13:28:51.634291712Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.1: active requests=0, bytes read=142742010" Dec 13 13:28:51.635495 containerd[1511]: time="2024-12-13T13:28:51.635439080Z" level=info msg="ImageCreate event name:\"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:28:51.641518 containerd[1511]: time="2024-12-13T13:28:51.641420969Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.1\" with image id \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.1\", repo digest 
\"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\", size \"142741872\" in 9.638968538s" Dec 13 13:28:51.641518 containerd[1511]: time="2024-12-13T13:28:51.641490109Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\" returns image reference \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\"" Dec 13 13:28:51.642089 containerd[1511]: time="2024-12-13T13:28:51.641832740Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:28:51.670877 containerd[1511]: time="2024-12-13T13:28:51.670691867Z" level=info msg="CreateContainer within sandbox \"a4cac69886b00b2c3361eef0738ecac4d8e67ebde29f85797559d6ecc147095b\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Dec 13 13:28:51.692617 containerd[1511]: time="2024-12-13T13:28:51.692438177Z" level=info msg="CreateContainer within sandbox \"a4cac69886b00b2c3361eef0738ecac4d8e67ebde29f85797559d6ecc147095b\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"5256e267445d264a9efabb8dfc63fcdfcdf0246d339bb5ce6d89aa1b91617055\"" Dec 13 13:28:51.693317 containerd[1511]: time="2024-12-13T13:28:51.693136155Z" level=info msg="StartContainer for \"5256e267445d264a9efabb8dfc63fcdfcdf0246d339bb5ce6d89aa1b91617055\"" Dec 13 13:28:51.807717 systemd[1]: Started cri-containerd-5256e267445d264a9efabb8dfc63fcdfcdf0246d339bb5ce6d89aa1b91617055.scope - libcontainer container 5256e267445d264a9efabb8dfc63fcdfcdf0246d339bb5ce6d89aa1b91617055. 
Dec 13 13:28:51.851300 kubelet[1929]: E1213 13:28:51.851245 1929 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:28:51.875511 containerd[1511]: time="2024-12-13T13:28:51.874900820Z" level=info msg="StartContainer for \"5256e267445d264a9efabb8dfc63fcdfcdf0246d339bb5ce6d89aa1b91617055\" returns successfully" Dec 13 13:28:51.991348 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Dec 13 13:28:51.991590 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Dec 13 13:28:52.040984 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-e1e20ee54d22cab850d16600e38553b21a2f683c14719334033ec82e75ba71c1-shm.mount: Deactivated successfully. Dec 13 13:28:52.041159 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3952083240.mount: Deactivated successfully. Dec 13 13:28:52.226289 kubelet[1929]: I1213 13:28:52.226219 1929 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="50ad52cdbff79cc0f3570b9b8ee04f1ccab000cab884c9d4bcdbdee0897ee3b1" Dec 13 13:28:52.231419 containerd[1511]: time="2024-12-13T13:28:52.228287181Z" level=info msg="StopPodSandbox for \"50ad52cdbff79cc0f3570b9b8ee04f1ccab000cab884c9d4bcdbdee0897ee3b1\"" Dec 13 13:28:52.231419 containerd[1511]: time="2024-12-13T13:28:52.228862327Z" level=info msg="Ensure that sandbox 50ad52cdbff79cc0f3570b9b8ee04f1ccab000cab884c9d4bcdbdee0897ee3b1 in task-service has been cleanup successfully" Dec 13 13:28:52.233002 containerd[1511]: time="2024-12-13T13:28:52.232964961Z" level=info msg="TearDown network for sandbox \"50ad52cdbff79cc0f3570b9b8ee04f1ccab000cab884c9d4bcdbdee0897ee3b1\" successfully" Dec 13 13:28:52.233253 containerd[1511]: time="2024-12-13T13:28:52.233211472Z" level=info msg="StopPodSandbox for \"50ad52cdbff79cc0f3570b9b8ee04f1ccab000cab884c9d4bcdbdee0897ee3b1\" returns successfully" Dec 13 13:28:52.233598 systemd[1]: 
run-netns-cni\x2d76dfdb61\x2d8ef2\x2d5bc0\x2d3bb9\x2d6f18f1523549.mount: Deactivated successfully. Dec 13 13:28:52.234008 containerd[1511]: time="2024-12-13T13:28:52.233842889Z" level=info msg="StopPodSandbox for \"0bae530531ea55da67ac247c565a436edd3fdc0b0801c1a4f2f426b24487a99e\"" Dec 13 13:28:52.234008 containerd[1511]: time="2024-12-13T13:28:52.233973162Z" level=info msg="TearDown network for sandbox \"0bae530531ea55da67ac247c565a436edd3fdc0b0801c1a4f2f426b24487a99e\" successfully" Dec 13 13:28:52.234008 containerd[1511]: time="2024-12-13T13:28:52.233993158Z" level=info msg="StopPodSandbox for \"0bae530531ea55da67ac247c565a436edd3fdc0b0801c1a4f2f426b24487a99e\" returns successfully" Dec 13 13:28:52.235444 containerd[1511]: time="2024-12-13T13:28:52.234728594Z" level=info msg="StopPodSandbox for \"42197c7dbc813c5d67cadd1ca51959c94290812d3697aee8f9e75e29c773c95b\"" Dec 13 13:28:52.235444 containerd[1511]: time="2024-12-13T13:28:52.235082137Z" level=info msg="TearDown network for sandbox \"42197c7dbc813c5d67cadd1ca51959c94290812d3697aee8f9e75e29c773c95b\" successfully" Dec 13 13:28:52.235444 containerd[1511]: time="2024-12-13T13:28:52.235103306Z" level=info msg="StopPodSandbox for \"42197c7dbc813c5d67cadd1ca51959c94290812d3697aee8f9e75e29c773c95b\" returns successfully" Dec 13 13:28:52.235614 kubelet[1929]: I1213 13:28:52.235163 1929 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e1e20ee54d22cab850d16600e38553b21a2f683c14719334033ec82e75ba71c1" Dec 13 13:28:52.236443 containerd[1511]: time="2024-12-13T13:28:52.236050293Z" level=info msg="StopPodSandbox for \"6ed27bbd8c2d1d82fd1a295f3adc36ea23dab317116433a60ecbddf8860b00bf\"" Dec 13 13:28:52.236443 containerd[1511]: time="2024-12-13T13:28:52.236170201Z" level=info msg="TearDown network for sandbox \"6ed27bbd8c2d1d82fd1a295f3adc36ea23dab317116433a60ecbddf8860b00bf\" successfully" Dec 13 13:28:52.236443 containerd[1511]: time="2024-12-13T13:28:52.236210577Z" level=info 
msg="StopPodSandbox for \"6ed27bbd8c2d1d82fd1a295f3adc36ea23dab317116433a60ecbddf8860b00bf\" returns successfully" Dec 13 13:28:52.238539 containerd[1511]: time="2024-12-13T13:28:52.237292051Z" level=info msg="StopPodSandbox for \"e1e20ee54d22cab850d16600e38553b21a2f683c14719334033ec82e75ba71c1\"" Dec 13 13:28:52.238539 containerd[1511]: time="2024-12-13T13:28:52.238109178Z" level=info msg="StopPodSandbox for \"916517386d81374e8722f2f6a26f4538760d9f5c3c152592355fece441eae59a\"" Dec 13 13:28:52.238539 containerd[1511]: time="2024-12-13T13:28:52.238114678Z" level=info msg="Ensure that sandbox e1e20ee54d22cab850d16600e38553b21a2f683c14719334033ec82e75ba71c1 in task-service has been cleanup successfully" Dec 13 13:28:52.238539 containerd[1511]: time="2024-12-13T13:28:52.238228534Z" level=info msg="TearDown network for sandbox \"916517386d81374e8722f2f6a26f4538760d9f5c3c152592355fece441eae59a\" successfully" Dec 13 13:28:52.238539 containerd[1511]: time="2024-12-13T13:28:52.238248461Z" level=info msg="StopPodSandbox for \"916517386d81374e8722f2f6a26f4538760d9f5c3c152592355fece441eae59a\" returns successfully" Dec 13 13:28:52.241942 containerd[1511]: time="2024-12-13T13:28:52.240670900Z" level=info msg="TearDown network for sandbox \"e1e20ee54d22cab850d16600e38553b21a2f683c14719334033ec82e75ba71c1\" successfully" Dec 13 13:28:52.241942 containerd[1511]: time="2024-12-13T13:28:52.240704647Z" level=info msg="StopPodSandbox for \"e1e20ee54d22cab850d16600e38553b21a2f683c14719334033ec82e75ba71c1\" returns successfully" Dec 13 13:28:52.242266 containerd[1511]: time="2024-12-13T13:28:52.242234400Z" level=info msg="StopPodSandbox for \"7c3d8f86b9e4491808e2db8a5c789b09ddfbcabdfb430606207bafdefcfde2c5\"" Dec 13 13:28:52.242686 containerd[1511]: time="2024-12-13T13:28:52.242570419Z" level=info msg="TearDown network for sandbox \"7c3d8f86b9e4491808e2db8a5c789b09ddfbcabdfb430606207bafdefcfde2c5\" successfully" Dec 13 13:28:52.242686 containerd[1511]: 
time="2024-12-13T13:28:52.242595195Z" level=info msg="StopPodSandbox for \"7c3d8f86b9e4491808e2db8a5c789b09ddfbcabdfb430606207bafdefcfde2c5\" returns successfully" Dec 13 13:28:52.242938 systemd[1]: run-netns-cni\x2ddc771cfb\x2de53d\x2d76c6\x2d442f\x2d84b89d67aa2e.mount: Deactivated successfully. Dec 13 13:28:52.244360 containerd[1511]: time="2024-12-13T13:28:52.244332667Z" level=info msg="StopPodSandbox for \"9a543f875ae45cc991b7bc0fe141f0ee3869cdd6deb5212aec0da28faf666998\"" Dec 13 13:28:52.244844 containerd[1511]: time="2024-12-13T13:28:52.244630306Z" level=info msg="StopPodSandbox for \"aba91ac9a086da8fd01a24cf9941c354cf1078a88e1638d77aeb8f6fed182941\"" Dec 13 13:28:52.244844 containerd[1511]: time="2024-12-13T13:28:52.244664635Z" level=info msg="TearDown network for sandbox \"9a543f875ae45cc991b7bc0fe141f0ee3869cdd6deb5212aec0da28faf666998\" successfully" Dec 13 13:28:52.244844 containerd[1511]: time="2024-12-13T13:28:52.244764314Z" level=info msg="StopPodSandbox for \"9a543f875ae45cc991b7bc0fe141f0ee3869cdd6deb5212aec0da28faf666998\" returns successfully" Dec 13 13:28:52.244844 containerd[1511]: time="2024-12-13T13:28:52.244781646Z" level=info msg="TearDown network for sandbox \"aba91ac9a086da8fd01a24cf9941c354cf1078a88e1638d77aeb8f6fed182941\" successfully" Dec 13 13:28:52.244844 containerd[1511]: time="2024-12-13T13:28:52.244811042Z" level=info msg="StopPodSandbox for \"aba91ac9a086da8fd01a24cf9941c354cf1078a88e1638d77aeb8f6fed182941\" returns successfully" Dec 13 13:28:52.245640 containerd[1511]: time="2024-12-13T13:28:52.245429709Z" level=info msg="StopPodSandbox for \"b3e76a379482abcaea3b805b9586e31607fb78966540100ba3130fb61547679c\"" Dec 13 13:28:52.245640 containerd[1511]: time="2024-12-13T13:28:52.245490905Z" level=info msg="StopPodSandbox for \"8de336bb24a8cb0086b8f572c9837fbcef606e51222ec77fed8af6cf43d59aba\"" Dec 13 13:28:52.245640 containerd[1511]: time="2024-12-13T13:28:52.245544481Z" level=info msg="TearDown network for sandbox 
\"b3e76a379482abcaea3b805b9586e31607fb78966540100ba3130fb61547679c\" successfully" Dec 13 13:28:52.245640 containerd[1511]: time="2024-12-13T13:28:52.245561775Z" level=info msg="StopPodSandbox for \"b3e76a379482abcaea3b805b9586e31607fb78966540100ba3130fb61547679c\" returns successfully" Dec 13 13:28:52.245891 containerd[1511]: time="2024-12-13T13:28:52.245607060Z" level=info msg="TearDown network for sandbox \"8de336bb24a8cb0086b8f572c9837fbcef606e51222ec77fed8af6cf43d59aba\" successfully" Dec 13 13:28:52.245891 containerd[1511]: time="2024-12-13T13:28:52.245842612Z" level=info msg="StopPodSandbox for \"8de336bb24a8cb0086b8f572c9837fbcef606e51222ec77fed8af6cf43d59aba\" returns successfully" Dec 13 13:28:52.246281 containerd[1511]: time="2024-12-13T13:28:52.246237241Z" level=info msg="StopPodSandbox for \"df86770c8d4cabcba91e258fdd3b2dea69cbb7ed567a30f8468717dcf595d74e\"" Dec 13 13:28:52.246365 containerd[1511]: time="2024-12-13T13:28:52.246341498Z" level=info msg="TearDown network for sandbox \"df86770c8d4cabcba91e258fdd3b2dea69cbb7ed567a30f8468717dcf595d74e\" successfully" Dec 13 13:28:52.246465 containerd[1511]: time="2024-12-13T13:28:52.246366767Z" level=info msg="StopPodSandbox for \"df86770c8d4cabcba91e258fdd3b2dea69cbb7ed567a30f8468717dcf595d74e\" returns successfully" Dec 13 13:28:52.247180 containerd[1511]: time="2024-12-13T13:28:52.246644853Z" level=info msg="StopPodSandbox for \"0bf54a32b4c693d35d3a15c60c5ed7af72de82ba4ca9e1f875a317d3564ae396\"" Dec 13 13:28:52.247903 containerd[1511]: time="2024-12-13T13:28:52.247683776Z" level=info msg="TearDown network for sandbox \"0bf54a32b4c693d35d3a15c60c5ed7af72de82ba4ca9e1f875a317d3564ae396\" successfully" Dec 13 13:28:52.247903 containerd[1511]: time="2024-12-13T13:28:52.247818854Z" level=info msg="StopPodSandbox for \"0bf54a32b4c693d35d3a15c60c5ed7af72de82ba4ca9e1f875a317d3564ae396\" returns successfully" Dec 13 13:28:52.248234 containerd[1511]: time="2024-12-13T13:28:52.248190529Z" level=info 
msg="StopPodSandbox for \"7fa85c2458e079f24c352b9099024ebe935262546913c414882d3967ba803fe5\"" Dec 13 13:28:52.248689 containerd[1511]: time="2024-12-13T13:28:52.248651258Z" level=info msg="TearDown network for sandbox \"7fa85c2458e079f24c352b9099024ebe935262546913c414882d3967ba803fe5\" successfully" Dec 13 13:28:52.248689 containerd[1511]: time="2024-12-13T13:28:52.248680632Z" level=info msg="StopPodSandbox for \"7fa85c2458e079f24c352b9099024ebe935262546913c414882d3967ba803fe5\" returns successfully" Dec 13 13:28:52.249863 containerd[1511]: time="2024-12-13T13:28:52.249412571Z" level=info msg="StopPodSandbox for \"1be0200b1dcc06604d770fde094070a996dab926a13c943dad139929344c775d\"" Dec 13 13:28:52.249863 containerd[1511]: time="2024-12-13T13:28:52.249446034Z" level=info msg="StopPodSandbox for \"591ad9151431b0c85d03c28d91438f44a06c8d65f70133b616797bc4ce33be5d\"" Dec 13 13:28:52.249863 containerd[1511]: time="2024-12-13T13:28:52.249527367Z" level=info msg="TearDown network for sandbox \"1be0200b1dcc06604d770fde094070a996dab926a13c943dad139929344c775d\" successfully" Dec 13 13:28:52.249863 containerd[1511]: time="2024-12-13T13:28:52.249545560Z" level=info msg="StopPodSandbox for \"1be0200b1dcc06604d770fde094070a996dab926a13c943dad139929344c775d\" returns successfully" Dec 13 13:28:52.249863 containerd[1511]: time="2024-12-13T13:28:52.249554099Z" level=info msg="TearDown network for sandbox \"591ad9151431b0c85d03c28d91438f44a06c8d65f70133b616797bc4ce33be5d\" successfully" Dec 13 13:28:52.249863 containerd[1511]: time="2024-12-13T13:28:52.249583690Z" level=info msg="StopPodSandbox for \"591ad9151431b0c85d03c28d91438f44a06c8d65f70133b616797bc4ce33be5d\" returns successfully" Dec 13 13:28:52.250901 containerd[1511]: time="2024-12-13T13:28:52.250814028Z" level=info msg="StopPodSandbox for \"a61227f7a37566b960694a22ddeb2becbc8da38c71932b3cf75a4174066eb180\"" Dec 13 13:28:52.251095 containerd[1511]: time="2024-12-13T13:28:52.250963634Z" level=info msg="TearDown network for 
sandbox \"a61227f7a37566b960694a22ddeb2becbc8da38c71932b3cf75a4174066eb180\" successfully" Dec 13 13:28:52.251189 containerd[1511]: time="2024-12-13T13:28:52.251091362Z" level=info msg="StopPodSandbox for \"a61227f7a37566b960694a22ddeb2becbc8da38c71932b3cf75a4174066eb180\" returns successfully" Dec 13 13:28:52.251189 containerd[1511]: time="2024-12-13T13:28:52.251037470Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-xbbxq,Uid:0f4afe90-15b3-45bc-a79d-fc560555cfb7,Namespace:default,Attempt:6,}" Dec 13 13:28:52.252232 containerd[1511]: time="2024-12-13T13:28:52.252169574Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-dhlrm,Uid:d9a9389a-36c6-4bf6-9c2f-53cc83c3820e,Namespace:calico-system,Attempt:11,}" Dec 13 13:28:52.259464 kubelet[1929]: I1213 13:28:52.258829 1929 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-4jn6f" podStartSLOduration=3.091691756 podStartE2EDuration="24.258778927s" podCreationTimestamp="2024-12-13 13:28:28 +0000 UTC" firstStartedPulling="2024-12-13 13:28:30.476497983 +0000 UTC m=+3.094177167" lastFinishedPulling="2024-12-13 13:28:51.643585147 +0000 UTC m=+24.261264338" observedRunningTime="2024-12-13 13:28:52.241613331 +0000 UTC m=+24.859292530" watchObservedRunningTime="2024-12-13 13:28:52.258778927 +0000 UTC m=+24.876458138" Dec 13 13:28:52.513735 systemd-networkd[1432]: cali80357a0f0f8: Link UP Dec 13 13:28:52.517169 systemd-networkd[1432]: cali80357a0f0f8: Gained carrier Dec 13 13:28:52.551084 containerd[1511]: 2024-12-13 13:28:52.332 [INFO][3053] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Dec 13 13:28:52.551084 containerd[1511]: 2024-12-13 13:28:52.378 [INFO][3053] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.230.57.46-k8s-csi--node--driver--dhlrm-eth0 csi-node-driver- calico-system d9a9389a-36c6-4bf6-9c2f-53cc83c3820e 1276 0 2024-12-13 
13:28:28 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:65bf684474 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s 10.230.57.46 csi-node-driver-dhlrm eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali80357a0f0f8 [] []}} ContainerID="044e7e2f0ce4ebbddfd0b57c8a60979d206cb342c91dd8d541fe8b232ac345e3" Namespace="calico-system" Pod="csi-node-driver-dhlrm" WorkloadEndpoint="10.230.57.46-k8s-csi--node--driver--dhlrm-" Dec 13 13:28:52.551084 containerd[1511]: 2024-12-13 13:28:52.378 [INFO][3053] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="044e7e2f0ce4ebbddfd0b57c8a60979d206cb342c91dd8d541fe8b232ac345e3" Namespace="calico-system" Pod="csi-node-driver-dhlrm" WorkloadEndpoint="10.230.57.46-k8s-csi--node--driver--dhlrm-eth0" Dec 13 13:28:52.551084 containerd[1511]: 2024-12-13 13:28:52.431 [INFO][3082] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="044e7e2f0ce4ebbddfd0b57c8a60979d206cb342c91dd8d541fe8b232ac345e3" HandleID="k8s-pod-network.044e7e2f0ce4ebbddfd0b57c8a60979d206cb342c91dd8d541fe8b232ac345e3" Workload="10.230.57.46-k8s-csi--node--driver--dhlrm-eth0" Dec 13 13:28:52.551084 containerd[1511]: 2024-12-13 13:28:52.449 [INFO][3082] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="044e7e2f0ce4ebbddfd0b57c8a60979d206cb342c91dd8d541fe8b232ac345e3" HandleID="k8s-pod-network.044e7e2f0ce4ebbddfd0b57c8a60979d206cb342c91dd8d541fe8b232ac345e3" Workload="10.230.57.46-k8s-csi--node--driver--dhlrm-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0000518e0), Attrs:map[string]string{"namespace":"calico-system", "node":"10.230.57.46", "pod":"csi-node-driver-dhlrm", "timestamp":"2024-12-13 13:28:52.431276991 +0000 UTC"}, Hostname:"10.230.57.46", 
IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 13:28:52.551084 containerd[1511]: 2024-12-13 13:28:52.450 [INFO][3082] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 13:28:52.551084 containerd[1511]: 2024-12-13 13:28:52.450 [INFO][3082] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 13:28:52.551084 containerd[1511]: 2024-12-13 13:28:52.450 [INFO][3082] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.230.57.46' Dec 13 13:28:52.551084 containerd[1511]: 2024-12-13 13:28:52.453 [INFO][3082] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.044e7e2f0ce4ebbddfd0b57c8a60979d206cb342c91dd8d541fe8b232ac345e3" host="10.230.57.46" Dec 13 13:28:52.551084 containerd[1511]: 2024-12-13 13:28:52.460 [INFO][3082] ipam/ipam.go 372: Looking up existing affinities for host host="10.230.57.46" Dec 13 13:28:52.551084 containerd[1511]: 2024-12-13 13:28:52.466 [INFO][3082] ipam/ipam.go 489: Trying affinity for 192.168.20.0/26 host="10.230.57.46" Dec 13 13:28:52.551084 containerd[1511]: 2024-12-13 13:28:52.469 [INFO][3082] ipam/ipam.go 155: Attempting to load block cidr=192.168.20.0/26 host="10.230.57.46" Dec 13 13:28:52.551084 containerd[1511]: 2024-12-13 13:28:52.475 [INFO][3082] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.20.0/26 host="10.230.57.46" Dec 13 13:28:52.551084 containerd[1511]: 2024-12-13 13:28:52.475 [INFO][3082] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.20.0/26 handle="k8s-pod-network.044e7e2f0ce4ebbddfd0b57c8a60979d206cb342c91dd8d541fe8b232ac345e3" host="10.230.57.46" Dec 13 13:28:52.551084 containerd[1511]: 2024-12-13 13:28:52.478 [INFO][3082] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.044e7e2f0ce4ebbddfd0b57c8a60979d206cb342c91dd8d541fe8b232ac345e3 
Dec 13 13:28:52.551084 containerd[1511]: 2024-12-13 13:28:52.485 [INFO][3082] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.20.0/26 handle="k8s-pod-network.044e7e2f0ce4ebbddfd0b57c8a60979d206cb342c91dd8d541fe8b232ac345e3" host="10.230.57.46" Dec 13 13:28:52.551084 containerd[1511]: 2024-12-13 13:28:52.493 [INFO][3082] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.20.1/26] block=192.168.20.0/26 handle="k8s-pod-network.044e7e2f0ce4ebbddfd0b57c8a60979d206cb342c91dd8d541fe8b232ac345e3" host="10.230.57.46" Dec 13 13:28:52.551084 containerd[1511]: 2024-12-13 13:28:52.493 [INFO][3082] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.20.1/26] handle="k8s-pod-network.044e7e2f0ce4ebbddfd0b57c8a60979d206cb342c91dd8d541fe8b232ac345e3" host="10.230.57.46" Dec 13 13:28:52.551084 containerd[1511]: 2024-12-13 13:28:52.493 [INFO][3082] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 13:28:52.551084 containerd[1511]: 2024-12-13 13:28:52.493 [INFO][3082] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.20.1/26] IPv6=[] ContainerID="044e7e2f0ce4ebbddfd0b57c8a60979d206cb342c91dd8d541fe8b232ac345e3" HandleID="k8s-pod-network.044e7e2f0ce4ebbddfd0b57c8a60979d206cb342c91dd8d541fe8b232ac345e3" Workload="10.230.57.46-k8s-csi--node--driver--dhlrm-eth0" Dec 13 13:28:52.552254 containerd[1511]: 2024-12-13 13:28:52.497 [INFO][3053] cni-plugin/k8s.go 386: Populated endpoint ContainerID="044e7e2f0ce4ebbddfd0b57c8a60979d206cb342c91dd8d541fe8b232ac345e3" Namespace="calico-system" Pod="csi-node-driver-dhlrm" WorkloadEndpoint="10.230.57.46-k8s-csi--node--driver--dhlrm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.230.57.46-k8s-csi--node--driver--dhlrm-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"d9a9389a-36c6-4bf6-9c2f-53cc83c3820e", ResourceVersion:"1276", Generation:0, 
CreationTimestamp:time.Date(2024, time.December, 13, 13, 28, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65bf684474", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.230.57.46", ContainerID:"", Pod:"csi-node-driver-dhlrm", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.20.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali80357a0f0f8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 13:28:52.552254 containerd[1511]: 2024-12-13 13:28:52.498 [INFO][3053] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.20.1/32] ContainerID="044e7e2f0ce4ebbddfd0b57c8a60979d206cb342c91dd8d541fe8b232ac345e3" Namespace="calico-system" Pod="csi-node-driver-dhlrm" WorkloadEndpoint="10.230.57.46-k8s-csi--node--driver--dhlrm-eth0" Dec 13 13:28:52.552254 containerd[1511]: 2024-12-13 13:28:52.498 [INFO][3053] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali80357a0f0f8 ContainerID="044e7e2f0ce4ebbddfd0b57c8a60979d206cb342c91dd8d541fe8b232ac345e3" Namespace="calico-system" Pod="csi-node-driver-dhlrm" WorkloadEndpoint="10.230.57.46-k8s-csi--node--driver--dhlrm-eth0" Dec 13 13:28:52.552254 containerd[1511]: 2024-12-13 13:28:52.518 [INFO][3053] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="044e7e2f0ce4ebbddfd0b57c8a60979d206cb342c91dd8d541fe8b232ac345e3" Namespace="calico-system" Pod="csi-node-driver-dhlrm" WorkloadEndpoint="10.230.57.46-k8s-csi--node--driver--dhlrm-eth0" Dec 13 13:28:52.552254 containerd[1511]: 2024-12-13 13:28:52.518 [INFO][3053] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="044e7e2f0ce4ebbddfd0b57c8a60979d206cb342c91dd8d541fe8b232ac345e3" Namespace="calico-system" Pod="csi-node-driver-dhlrm" WorkloadEndpoint="10.230.57.46-k8s-csi--node--driver--dhlrm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.230.57.46-k8s-csi--node--driver--dhlrm-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"d9a9389a-36c6-4bf6-9c2f-53cc83c3820e", ResourceVersion:"1276", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 13, 28, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"65bf684474", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.230.57.46", ContainerID:"044e7e2f0ce4ebbddfd0b57c8a60979d206cb342c91dd8d541fe8b232ac345e3", Pod:"csi-node-driver-dhlrm", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.20.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali80357a0f0f8", 
MAC:"f6:89:39:4c:34:6d", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 13:28:52.552254 containerd[1511]: 2024-12-13 13:28:52.548 [INFO][3053] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="044e7e2f0ce4ebbddfd0b57c8a60979d206cb342c91dd8d541fe8b232ac345e3" Namespace="calico-system" Pod="csi-node-driver-dhlrm" WorkloadEndpoint="10.230.57.46-k8s-csi--node--driver--dhlrm-eth0" Dec 13 13:28:52.587043 containerd[1511]: time="2024-12-13T13:28:52.585988372Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 13:28:52.587294 containerd[1511]: time="2024-12-13T13:28:52.586999000Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 13:28:52.587572 containerd[1511]: time="2024-12-13T13:28:52.587146612Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 13:28:52.587863 containerd[1511]: time="2024-12-13T13:28:52.587754818Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 13:28:52.600481 systemd-networkd[1432]: calia9d6444be3b: Link UP Dec 13 13:28:52.600864 systemd-networkd[1432]: calia9d6444be3b: Gained carrier Dec 13 13:28:52.619218 containerd[1511]: 2024-12-13 13:28:52.331 [INFO][3048] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Dec 13 13:28:52.619218 containerd[1511]: 2024-12-13 13:28:52.378 [INFO][3048] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.230.57.46-k8s-nginx--deployment--85f456d6dd--xbbxq-eth0 nginx-deployment-85f456d6dd- default 0f4afe90-15b3-45bc-a79d-fc560555cfb7 1369 0 2024-12-13 13:28:46 +0000 UTC map[app:nginx pod-template-hash:85f456d6dd projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s 10.230.57.46 nginx-deployment-85f456d6dd-xbbxq eth0 default [] [] [kns.default ksa.default.default] calia9d6444be3b [] []}} ContainerID="4ccf741465fbd3d43e351ad6596e36cb69d85a016283fd4953ec8b116a1e3e36" Namespace="default" Pod="nginx-deployment-85f456d6dd-xbbxq" WorkloadEndpoint="10.230.57.46-k8s-nginx--deployment--85f456d6dd--xbbxq-" Dec 13 13:28:52.619218 containerd[1511]: 2024-12-13 13:28:52.378 [INFO][3048] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="4ccf741465fbd3d43e351ad6596e36cb69d85a016283fd4953ec8b116a1e3e36" Namespace="default" Pod="nginx-deployment-85f456d6dd-xbbxq" WorkloadEndpoint="10.230.57.46-k8s-nginx--deployment--85f456d6dd--xbbxq-eth0" Dec 13 13:28:52.619218 containerd[1511]: 2024-12-13 13:28:52.429 [INFO][3086] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4ccf741465fbd3d43e351ad6596e36cb69d85a016283fd4953ec8b116a1e3e36" HandleID="k8s-pod-network.4ccf741465fbd3d43e351ad6596e36cb69d85a016283fd4953ec8b116a1e3e36" Workload="10.230.57.46-k8s-nginx--deployment--85f456d6dd--xbbxq-eth0" Dec 13 13:28:52.619218 
containerd[1511]: 2024-12-13 13:28:52.451 [INFO][3086] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="4ccf741465fbd3d43e351ad6596e36cb69d85a016283fd4953ec8b116a1e3e36" HandleID="k8s-pod-network.4ccf741465fbd3d43e351ad6596e36cb69d85a016283fd4953ec8b116a1e3e36" Workload="10.230.57.46-k8s-nginx--deployment--85f456d6dd--xbbxq-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000319370), Attrs:map[string]string{"namespace":"default", "node":"10.230.57.46", "pod":"nginx-deployment-85f456d6dd-xbbxq", "timestamp":"2024-12-13 13:28:52.428979331 +0000 UTC"}, Hostname:"10.230.57.46", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 13:28:52.619218 containerd[1511]: 2024-12-13 13:28:52.451 [INFO][3086] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 13:28:52.619218 containerd[1511]: 2024-12-13 13:28:52.493 [INFO][3086] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Dec 13 13:28:52.619218 containerd[1511]: 2024-12-13 13:28:52.494 [INFO][3086] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.230.57.46' Dec 13 13:28:52.619218 containerd[1511]: 2024-12-13 13:28:52.497 [INFO][3086] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.4ccf741465fbd3d43e351ad6596e36cb69d85a016283fd4953ec8b116a1e3e36" host="10.230.57.46" Dec 13 13:28:52.619218 containerd[1511]: 2024-12-13 13:28:52.503 [INFO][3086] ipam/ipam.go 372: Looking up existing affinities for host host="10.230.57.46" Dec 13 13:28:52.619218 containerd[1511]: 2024-12-13 13:28:52.511 [INFO][3086] ipam/ipam.go 489: Trying affinity for 192.168.20.0/26 host="10.230.57.46" Dec 13 13:28:52.619218 containerd[1511]: 2024-12-13 13:28:52.517 [INFO][3086] ipam/ipam.go 155: Attempting to load block cidr=192.168.20.0/26 host="10.230.57.46" Dec 13 13:28:52.619218 containerd[1511]: 2024-12-13 13:28:52.525 [INFO][3086] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.20.0/26 host="10.230.57.46" Dec 13 13:28:52.619218 containerd[1511]: 2024-12-13 13:28:52.525 [INFO][3086] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.20.0/26 handle="k8s-pod-network.4ccf741465fbd3d43e351ad6596e36cb69d85a016283fd4953ec8b116a1e3e36" host="10.230.57.46" Dec 13 13:28:52.619218 containerd[1511]: 2024-12-13 13:28:52.530 [INFO][3086] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.4ccf741465fbd3d43e351ad6596e36cb69d85a016283fd4953ec8b116a1e3e36 Dec 13 13:28:52.619218 containerd[1511]: 2024-12-13 13:28:52.556 [INFO][3086] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.20.0/26 handle="k8s-pod-network.4ccf741465fbd3d43e351ad6596e36cb69d85a016283fd4953ec8b116a1e3e36" host="10.230.57.46" Dec 13 13:28:52.619218 containerd[1511]: 2024-12-13 13:28:52.591 [INFO][3086] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.20.2/26] block=192.168.20.0/26 
handle="k8s-pod-network.4ccf741465fbd3d43e351ad6596e36cb69d85a016283fd4953ec8b116a1e3e36" host="10.230.57.46" Dec 13 13:28:52.619218 containerd[1511]: 2024-12-13 13:28:52.591 [INFO][3086] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.20.2/26] handle="k8s-pod-network.4ccf741465fbd3d43e351ad6596e36cb69d85a016283fd4953ec8b116a1e3e36" host="10.230.57.46" Dec 13 13:28:52.619218 containerd[1511]: 2024-12-13 13:28:52.591 [INFO][3086] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 13:28:52.619218 containerd[1511]: 2024-12-13 13:28:52.591 [INFO][3086] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.20.2/26] IPv6=[] ContainerID="4ccf741465fbd3d43e351ad6596e36cb69d85a016283fd4953ec8b116a1e3e36" HandleID="k8s-pod-network.4ccf741465fbd3d43e351ad6596e36cb69d85a016283fd4953ec8b116a1e3e36" Workload="10.230.57.46-k8s-nginx--deployment--85f456d6dd--xbbxq-eth0" Dec 13 13:28:52.622068 containerd[1511]: 2024-12-13 13:28:52.593 [INFO][3048] cni-plugin/k8s.go 386: Populated endpoint ContainerID="4ccf741465fbd3d43e351ad6596e36cb69d85a016283fd4953ec8b116a1e3e36" Namespace="default" Pod="nginx-deployment-85f456d6dd-xbbxq" WorkloadEndpoint="10.230.57.46-k8s-nginx--deployment--85f456d6dd--xbbxq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.230.57.46-k8s-nginx--deployment--85f456d6dd--xbbxq-eth0", GenerateName:"nginx-deployment-85f456d6dd-", Namespace:"default", SelfLink:"", UID:"0f4afe90-15b3-45bc-a79d-fc560555cfb7", ResourceVersion:"1369", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 13, 28, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"85f456d6dd", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.230.57.46", ContainerID:"", Pod:"nginx-deployment-85f456d6dd-xbbxq", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.20.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"calia9d6444be3b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 13:28:52.622068 containerd[1511]: 2024-12-13 13:28:52.593 [INFO][3048] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.20.2/32] ContainerID="4ccf741465fbd3d43e351ad6596e36cb69d85a016283fd4953ec8b116a1e3e36" Namespace="default" Pod="nginx-deployment-85f456d6dd-xbbxq" WorkloadEndpoint="10.230.57.46-k8s-nginx--deployment--85f456d6dd--xbbxq-eth0" Dec 13 13:28:52.622068 containerd[1511]: 2024-12-13 13:28:52.594 [INFO][3048] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia9d6444be3b ContainerID="4ccf741465fbd3d43e351ad6596e36cb69d85a016283fd4953ec8b116a1e3e36" Namespace="default" Pod="nginx-deployment-85f456d6dd-xbbxq" WorkloadEndpoint="10.230.57.46-k8s-nginx--deployment--85f456d6dd--xbbxq-eth0" Dec 13 13:28:52.622068 containerd[1511]: 2024-12-13 13:28:52.599 [INFO][3048] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4ccf741465fbd3d43e351ad6596e36cb69d85a016283fd4953ec8b116a1e3e36" Namespace="default" Pod="nginx-deployment-85f456d6dd-xbbxq" WorkloadEndpoint="10.230.57.46-k8s-nginx--deployment--85f456d6dd--xbbxq-eth0" Dec 13 13:28:52.622068 containerd[1511]: 2024-12-13 13:28:52.599 [INFO][3048] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="4ccf741465fbd3d43e351ad6596e36cb69d85a016283fd4953ec8b116a1e3e36" Namespace="default" Pod="nginx-deployment-85f456d6dd-xbbxq" 
WorkloadEndpoint="10.230.57.46-k8s-nginx--deployment--85f456d6dd--xbbxq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.230.57.46-k8s-nginx--deployment--85f456d6dd--xbbxq-eth0", GenerateName:"nginx-deployment-85f456d6dd-", Namespace:"default", SelfLink:"", UID:"0f4afe90-15b3-45bc-a79d-fc560555cfb7", ResourceVersion:"1369", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 13, 28, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"85f456d6dd", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.230.57.46", ContainerID:"4ccf741465fbd3d43e351ad6596e36cb69d85a016283fd4953ec8b116a1e3e36", Pod:"nginx-deployment-85f456d6dd-xbbxq", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.20.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"calia9d6444be3b", MAC:"be:b1:4e:c9:db:3b", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 13:28:52.622068 containerd[1511]: 2024-12-13 13:28:52.611 [INFO][3048] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="4ccf741465fbd3d43e351ad6596e36cb69d85a016283fd4953ec8b116a1e3e36" Namespace="default" Pod="nginx-deployment-85f456d6dd-xbbxq" WorkloadEndpoint="10.230.57.46-k8s-nginx--deployment--85f456d6dd--xbbxq-eth0" Dec 13 13:28:52.630614 systemd[1]: Started cri-containerd-044e7e2f0ce4ebbddfd0b57c8a60979d206cb342c91dd8d541fe8b232ac345e3.scope - libcontainer container 
044e7e2f0ce4ebbddfd0b57c8a60979d206cb342c91dd8d541fe8b232ac345e3. Dec 13 13:28:52.681958 containerd[1511]: time="2024-12-13T13:28:52.681614064Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 13:28:52.681958 containerd[1511]: time="2024-12-13T13:28:52.681702404Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 13:28:52.681958 containerd[1511]: time="2024-12-13T13:28:52.681721570Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 13:28:52.682436 containerd[1511]: time="2024-12-13T13:28:52.681997423Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 13:28:52.690109 containerd[1511]: time="2024-12-13T13:28:52.689945133Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-dhlrm,Uid:d9a9389a-36c6-4bf6-9c2f-53cc83c3820e,Namespace:calico-system,Attempt:11,} returns sandbox id \"044e7e2f0ce4ebbddfd0b57c8a60979d206cb342c91dd8d541fe8b232ac345e3\"" Dec 13 13:28:52.693079 containerd[1511]: time="2024-12-13T13:28:52.693016030Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\"" Dec 13 13:28:52.714600 systemd[1]: Started cri-containerd-4ccf741465fbd3d43e351ad6596e36cb69d85a016283fd4953ec8b116a1e3e36.scope - libcontainer container 4ccf741465fbd3d43e351ad6596e36cb69d85a016283fd4953ec8b116a1e3e36. 
Dec 13 13:28:52.779234 containerd[1511]: time="2024-12-13T13:28:52.778891005Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-xbbxq,Uid:0f4afe90-15b3-45bc-a79d-fc560555cfb7,Namespace:default,Attempt:6,} returns sandbox id \"4ccf741465fbd3d43e351ad6596e36cb69d85a016283fd4953ec8b116a1e3e36\"" Dec 13 13:28:52.851642 kubelet[1929]: E1213 13:28:52.851541 1929 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:28:53.278736 systemd[1]: run-containerd-runc-k8s.io-5256e267445d264a9efabb8dfc63fcdfcdf0246d339bb5ce6d89aa1b91617055-runc.NOz9EY.mount: Deactivated successfully. Dec 13 13:28:53.814577 kernel: bpftool[3386]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Dec 13 13:28:53.852435 kubelet[1929]: E1213 13:28:53.852183 1929 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:28:54.220810 systemd-networkd[1432]: vxlan.calico: Link UP Dec 13 13:28:54.220827 systemd-networkd[1432]: vxlan.calico: Gained carrier Dec 13 13:28:54.237541 systemd-networkd[1432]: calia9d6444be3b: Gained IPv6LL Dec 13 13:28:54.297890 systemd-networkd[1432]: cali80357a0f0f8: Gained IPv6LL Dec 13 13:28:54.346208 systemd[1]: run-containerd-runc-k8s.io-5256e267445d264a9efabb8dfc63fcdfcdf0246d339bb5ce6d89aa1b91617055-runc.G0BCy5.mount: Deactivated successfully. 
Dec 13 13:28:54.502229 containerd[1511]: time="2024-12-13T13:28:54.501933024Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:28:54.506430 containerd[1511]: time="2024-12-13T13:28:54.504827551Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.1: active requests=0, bytes read=7902632" Dec 13 13:28:54.506430 containerd[1511]: time="2024-12-13T13:28:54.505926709Z" level=info msg="ImageCreate event name:\"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:28:54.512202 containerd[1511]: time="2024-12-13T13:28:54.512148945Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:28:54.515512 containerd[1511]: time="2024-12-13T13:28:54.515470601Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.1\" with image id \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\", size \"9395716\" in 1.822367326s" Dec 13 13:28:54.515607 containerd[1511]: time="2024-12-13T13:28:54.515515455Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\" returns image reference \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\"" Dec 13 13:28:54.519477 containerd[1511]: time="2024-12-13T13:28:54.519437962Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Dec 13 13:28:54.526937 containerd[1511]: time="2024-12-13T13:28:54.526887312Z" level=info msg="CreateContainer within sandbox \"044e7e2f0ce4ebbddfd0b57c8a60979d206cb342c91dd8d541fe8b232ac345e3\" for container 
&ContainerMetadata{Name:calico-csi,Attempt:0,}" Dec 13 13:28:54.579252 containerd[1511]: time="2024-12-13T13:28:54.579185925Z" level=info msg="CreateContainer within sandbox \"044e7e2f0ce4ebbddfd0b57c8a60979d206cb342c91dd8d541fe8b232ac345e3\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"9c164f59e21eece261a6a022c6667511b4d01d2b004eec8b15c1148876c735f2\"" Dec 13 13:28:54.582436 containerd[1511]: time="2024-12-13T13:28:54.580952936Z" level=info msg="StartContainer for \"9c164f59e21eece261a6a022c6667511b4d01d2b004eec8b15c1148876c735f2\"" Dec 13 13:28:54.646921 systemd[1]: Started cri-containerd-9c164f59e21eece261a6a022c6667511b4d01d2b004eec8b15c1148876c735f2.scope - libcontainer container 9c164f59e21eece261a6a022c6667511b4d01d2b004eec8b15c1148876c735f2. Dec 13 13:28:54.728654 containerd[1511]: time="2024-12-13T13:28:54.728595604Z" level=info msg="StartContainer for \"9c164f59e21eece261a6a022c6667511b4d01d2b004eec8b15c1148876c735f2\" returns successfully" Dec 13 13:28:54.853436 kubelet[1929]: E1213 13:28:54.853335 1929 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:28:55.336519 systemd[1]: run-containerd-runc-k8s.io-9c164f59e21eece261a6a022c6667511b4d01d2b004eec8b15c1148876c735f2-runc.U2RFyh.mount: Deactivated successfully. 
Dec 13 13:28:55.854117 kubelet[1929]: E1213 13:28:55.854033 1929 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:28:56.087069 systemd-networkd[1432]: vxlan.calico: Gained IPv6LL Dec 13 13:28:56.855533 kubelet[1929]: E1213 13:28:56.855418 1929 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:28:57.858932 kubelet[1929]: E1213 13:28:57.858491 1929 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:28:58.859132 kubelet[1929]: E1213 13:28:58.858999 1929 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:28:58.864447 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1762802251.mount: Deactivated successfully. Dec 13 13:28:59.859199 kubelet[1929]: E1213 13:28:59.859149 1929 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:29:00.770300 containerd[1511]: time="2024-12-13T13:29:00.770086425Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:29:00.772258 containerd[1511]: time="2024-12-13T13:29:00.771961848Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=71036027" Dec 13 13:29:00.773100 containerd[1511]: time="2024-12-13T13:29:00.773018421Z" level=info msg="ImageCreate event name:\"sha256:fa0a8cea5e76ad962111c39c85bb312edaf5b89eccd8f404eeea66c9759641e3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:29:00.776986 containerd[1511]: time="2024-12-13T13:29:00.776899342Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx@sha256:e04edf30a4ea4c5a4107110797c72d3ee8a654415f00acd4019be17218afd9a1\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:29:00.778704 containerd[1511]: time="2024-12-13T13:29:00.778491504Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:fa0a8cea5e76ad962111c39c85bb312edaf5b89eccd8f404eeea66c9759641e3\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:e04edf30a4ea4c5a4107110797c72d3ee8a654415f00acd4019be17218afd9a1\", size \"71035905\" in 6.258998036s" Dec 13 13:29:00.778704 containerd[1511]: time="2024-12-13T13:29:00.778545150Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:fa0a8cea5e76ad962111c39c85bb312edaf5b89eccd8f404eeea66c9759641e3\"" Dec 13 13:29:00.781021 containerd[1511]: time="2024-12-13T13:29:00.780986290Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\"" Dec 13 13:29:00.792012 containerd[1511]: time="2024-12-13T13:29:00.791959097Z" level=info msg="CreateContainer within sandbox \"4ccf741465fbd3d43e351ad6596e36cb69d85a016283fd4953ec8b116a1e3e36\" for container &ContainerMetadata{Name:nginx,Attempt:0,}" Dec 13 13:29:00.828457 containerd[1511]: time="2024-12-13T13:29:00.827427407Z" level=info msg="CreateContainer within sandbox \"4ccf741465fbd3d43e351ad6596e36cb69d85a016283fd4953ec8b116a1e3e36\" for &ContainerMetadata{Name:nginx,Attempt:0,} returns container id \"f82f1bc416310ae1c02f07775efeed10538f578d446e06a854e52e1604291126\"" Dec 13 13:29:00.829731 containerd[1511]: time="2024-12-13T13:29:00.829682103Z" level=info msg="StartContainer for \"f82f1bc416310ae1c02f07775efeed10538f578d446e06a854e52e1604291126\"" Dec 13 13:29:00.860550 kubelet[1929]: E1213 13:29:00.860505 1929 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:29:00.891652 systemd[1]: Started cri-containerd-f82f1bc416310ae1c02f07775efeed10538f578d446e06a854e52e1604291126.scope - libcontainer container 
f82f1bc416310ae1c02f07775efeed10538f578d446e06a854e52e1604291126. Dec 13 13:29:00.937806 containerd[1511]: time="2024-12-13T13:29:00.937718703Z" level=info msg="StartContainer for \"f82f1bc416310ae1c02f07775efeed10538f578d446e06a854e52e1604291126\" returns successfully" Dec 13 13:29:01.317080 kubelet[1929]: I1213 13:29:01.316873 1929 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nginx-deployment-85f456d6dd-xbbxq" podStartSLOduration=7.318021287 podStartE2EDuration="15.316827042s" podCreationTimestamp="2024-12-13 13:28:46 +0000 UTC" firstStartedPulling="2024-12-13 13:28:52.781133663 +0000 UTC m=+25.398812848" lastFinishedPulling="2024-12-13 13:29:00.779939413 +0000 UTC m=+33.397618603" observedRunningTime="2024-12-13 13:29:01.315889378 +0000 UTC m=+33.933568606" watchObservedRunningTime="2024-12-13 13:29:01.316827042 +0000 UTC m=+33.934506234" Dec 13 13:29:01.862025 kubelet[1929]: E1213 13:29:01.861958 1929 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:29:02.564190 containerd[1511]: time="2024-12-13T13:29:02.564015472Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:29:02.565665 containerd[1511]: time="2024-12-13T13:29:02.565593292Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1: active requests=0, bytes read=10501081" Dec 13 13:29:02.566770 containerd[1511]: time="2024-12-13T13:29:02.566679085Z" level=info msg="ImageCreate event name:\"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:29:02.572300 containerd[1511]: time="2024-12-13T13:29:02.572229548Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:29:02.574043 containerd[1511]: time="2024-12-13T13:29:02.573279689Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" with image id \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\", size \"11994117\" in 1.792238464s" Dec 13 13:29:02.574043 containerd[1511]: time="2024-12-13T13:29:02.573336826Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" returns image reference \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\"" Dec 13 13:29:02.577524 containerd[1511]: time="2024-12-13T13:29:02.577479773Z" level=info msg="CreateContainer within sandbox \"044e7e2f0ce4ebbddfd0b57c8a60979d206cb342c91dd8d541fe8b232ac345e3\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Dec 13 13:29:02.599855 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2368019391.mount: Deactivated successfully. 
Dec 13 13:29:02.601558 containerd[1511]: time="2024-12-13T13:29:02.600940821Z" level=info msg="CreateContainer within sandbox \"044e7e2f0ce4ebbddfd0b57c8a60979d206cb342c91dd8d541fe8b232ac345e3\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"5bc2c57364a3d9cc74ca41ccd5cf6adb3d89aba603c5bc5753fa8b1711ab0005\"" Dec 13 13:29:02.602054 containerd[1511]: time="2024-12-13T13:29:02.602013309Z" level=info msg="StartContainer for \"5bc2c57364a3d9cc74ca41ccd5cf6adb3d89aba603c5bc5753fa8b1711ab0005\"" Dec 13 13:29:02.645278 systemd[1]: run-containerd-runc-k8s.io-5bc2c57364a3d9cc74ca41ccd5cf6adb3d89aba603c5bc5753fa8b1711ab0005-runc.Yug5SO.mount: Deactivated successfully. Dec 13 13:29:02.654582 systemd[1]: Started cri-containerd-5bc2c57364a3d9cc74ca41ccd5cf6adb3d89aba603c5bc5753fa8b1711ab0005.scope - libcontainer container 5bc2c57364a3d9cc74ca41ccd5cf6adb3d89aba603c5bc5753fa8b1711ab0005. Dec 13 13:29:02.718843 containerd[1511]: time="2024-12-13T13:29:02.718782172Z" level=info msg="StartContainer for \"5bc2c57364a3d9cc74ca41ccd5cf6adb3d89aba603c5bc5753fa8b1711ab0005\" returns successfully" Dec 13 13:29:02.862634 kubelet[1929]: E1213 13:29:02.862535 1929 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:29:02.981364 kubelet[1929]: I1213 13:29:02.981051 1929 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Dec 13 13:29:02.981364 kubelet[1929]: I1213 13:29:02.981146 1929 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Dec 13 13:29:03.343550 kubelet[1929]: I1213 13:29:03.343235 1929 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-dhlrm" podStartSLOduration=25.460803851 
podStartE2EDuration="35.343196004s" podCreationTimestamp="2024-12-13 13:28:28 +0000 UTC" firstStartedPulling="2024-12-13 13:28:52.692344795 +0000 UTC m=+25.310023992" lastFinishedPulling="2024-12-13 13:29:02.574736948 +0000 UTC m=+35.192416145" observedRunningTime="2024-12-13 13:29:03.341531615 +0000 UTC m=+35.959210820" watchObservedRunningTime="2024-12-13 13:29:03.343196004 +0000 UTC m=+35.960875209" Dec 13 13:29:03.863164 kubelet[1929]: E1213 13:29:03.863041 1929 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:29:04.863635 kubelet[1929]: E1213 13:29:04.863545 1929 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:29:05.863900 kubelet[1929]: E1213 13:29:05.863808 1929 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:29:06.865075 kubelet[1929]: E1213 13:29:06.864979 1929 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:29:07.827185 kubelet[1929]: E1213 13:29:07.827100 1929 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:29:07.865914 kubelet[1929]: E1213 13:29:07.865871 1929 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:29:08.024325 kubelet[1929]: I1213 13:29:08.024254 1929 topology_manager.go:215] "Topology Admit Handler" podUID="7c5ce348-bce5-4b6e-ac6b-b8103aace0a7" podNamespace="default" podName="nfs-server-provisioner-0" Dec 13 13:29:08.040129 systemd[1]: Created slice kubepods-besteffort-pod7c5ce348_bce5_4b6e_ac6b_b8103aace0a7.slice - libcontainer container kubepods-besteffort-pod7c5ce348_bce5_4b6e_ac6b_b8103aace0a7.slice. 
Dec 13 13:29:08.056782 kubelet[1929]: I1213 13:29:08.056678 1929 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/7c5ce348-bce5-4b6e-ac6b-b8103aace0a7-data\") pod \"nfs-server-provisioner-0\" (UID: \"7c5ce348-bce5-4b6e-ac6b-b8103aace0a7\") " pod="default/nfs-server-provisioner-0" Dec 13 13:29:08.056782 kubelet[1929]: I1213 13:29:08.056738 1929 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-45522\" (UniqueName: \"kubernetes.io/projected/7c5ce348-bce5-4b6e-ac6b-b8103aace0a7-kube-api-access-45522\") pod \"nfs-server-provisioner-0\" (UID: \"7c5ce348-bce5-4b6e-ac6b-b8103aace0a7\") " pod="default/nfs-server-provisioner-0" Dec 13 13:29:08.344872 containerd[1511]: time="2024-12-13T13:29:08.344716956Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:7c5ce348-bce5-4b6e-ac6b-b8103aace0a7,Namespace:default,Attempt:0,}" Dec 13 13:29:08.552035 systemd-networkd[1432]: cali60e51b789ff: Link UP Dec 13 13:29:08.552926 systemd-networkd[1432]: cali60e51b789ff: Gained carrier Dec 13 13:29:08.568478 containerd[1511]: 2024-12-13 13:29:08.432 [INFO][3687] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.230.57.46-k8s-nfs--server--provisioner--0-eth0 nfs-server-provisioner- default 7c5ce348-bce5-4b6e-ac6b-b8103aace0a7 1484 0 2024-12-13 13:29:08 +0000 UTC map[app:nfs-server-provisioner apps.kubernetes.io/pod-index:0 chart:nfs-server-provisioner-1.8.0 controller-revision-hash:nfs-server-provisioner-d5cbb7f57 heritage:Helm projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:nfs-server-provisioner release:nfs-server-provisioner statefulset.kubernetes.io/pod-name:nfs-server-provisioner-0] map[] [] [] []} {k8s 10.230.57.46 nfs-server-provisioner-0 eth0 nfs-server-provisioner [] [] 
[kns.default ksa.default.nfs-server-provisioner] cali60e51b789ff [{nfs TCP 2049 0 } {nfs-udp UDP 2049 0 } {nlockmgr TCP 32803 0 } {nlockmgr-udp UDP 32803 0 } {mountd TCP 20048 0 } {mountd-udp UDP 20048 0 } {rquotad TCP 875 0 } {rquotad-udp UDP 875 0 } {rpcbind TCP 111 0 } {rpcbind-udp UDP 111 0 } {statd TCP 662 0 } {statd-udp UDP 662 0 }] []}} ContainerID="a1e31eac24c7172cb158a9af73e0a00096dce95905a57bfc2bbd43f33b0afe48" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.230.57.46-k8s-nfs--server--provisioner--0-" Dec 13 13:29:08.568478 containerd[1511]: 2024-12-13 13:29:08.433 [INFO][3687] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="a1e31eac24c7172cb158a9af73e0a00096dce95905a57bfc2bbd43f33b0afe48" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.230.57.46-k8s-nfs--server--provisioner--0-eth0" Dec 13 13:29:08.568478 containerd[1511]: 2024-12-13 13:29:08.478 [INFO][3697] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a1e31eac24c7172cb158a9af73e0a00096dce95905a57bfc2bbd43f33b0afe48" HandleID="k8s-pod-network.a1e31eac24c7172cb158a9af73e0a00096dce95905a57bfc2bbd43f33b0afe48" Workload="10.230.57.46-k8s-nfs--server--provisioner--0-eth0" Dec 13 13:29:08.568478 containerd[1511]: 2024-12-13 13:29:08.500 [INFO][3697] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="a1e31eac24c7172cb158a9af73e0a00096dce95905a57bfc2bbd43f33b0afe48" HandleID="k8s-pod-network.a1e31eac24c7172cb158a9af73e0a00096dce95905a57bfc2bbd43f33b0afe48" Workload="10.230.57.46-k8s-nfs--server--provisioner--0-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000319570), Attrs:map[string]string{"namespace":"default", "node":"10.230.57.46", "pod":"nfs-server-provisioner-0", "timestamp":"2024-12-13 13:29:08.478837848 +0000 UTC"}, Hostname:"10.230.57.46", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), 
HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 13:29:08.568478 containerd[1511]: 2024-12-13 13:29:08.501 [INFO][3697] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 13:29:08.568478 containerd[1511]: 2024-12-13 13:29:08.501 [INFO][3697] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Dec 13 13:29:08.568478 containerd[1511]: 2024-12-13 13:29:08.501 [INFO][3697] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.230.57.46' Dec 13 13:29:08.568478 containerd[1511]: 2024-12-13 13:29:08.506 [INFO][3697] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.a1e31eac24c7172cb158a9af73e0a00096dce95905a57bfc2bbd43f33b0afe48" host="10.230.57.46" Dec 13 13:29:08.568478 containerd[1511]: 2024-12-13 13:29:08.513 [INFO][3697] ipam/ipam.go 372: Looking up existing affinities for host host="10.230.57.46" Dec 13 13:29:08.568478 containerd[1511]: 2024-12-13 13:29:08.519 [INFO][3697] ipam/ipam.go 489: Trying affinity for 192.168.20.0/26 host="10.230.57.46" Dec 13 13:29:08.568478 containerd[1511]: 2024-12-13 13:29:08.522 [INFO][3697] ipam/ipam.go 155: Attempting to load block cidr=192.168.20.0/26 host="10.230.57.46" Dec 13 13:29:08.568478 containerd[1511]: 2024-12-13 13:29:08.526 [INFO][3697] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.20.0/26 host="10.230.57.46" Dec 13 13:29:08.568478 containerd[1511]: 2024-12-13 13:29:08.526 [INFO][3697] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.20.0/26 handle="k8s-pod-network.a1e31eac24c7172cb158a9af73e0a00096dce95905a57bfc2bbd43f33b0afe48" host="10.230.57.46" Dec 13 13:29:08.568478 containerd[1511]: 2024-12-13 13:29:08.528 [INFO][3697] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.a1e31eac24c7172cb158a9af73e0a00096dce95905a57bfc2bbd43f33b0afe48 Dec 13 13:29:08.568478 containerd[1511]: 2024-12-13 13:29:08.535 [INFO][3697] ipam/ipam.go 1203: Writing block in order to 
claim IPs block=192.168.20.0/26 handle="k8s-pod-network.a1e31eac24c7172cb158a9af73e0a00096dce95905a57bfc2bbd43f33b0afe48" host="10.230.57.46" Dec 13 13:29:08.568478 containerd[1511]: 2024-12-13 13:29:08.544 [INFO][3697] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.20.3/26] block=192.168.20.0/26 handle="k8s-pod-network.a1e31eac24c7172cb158a9af73e0a00096dce95905a57bfc2bbd43f33b0afe48" host="10.230.57.46" Dec 13 13:29:08.568478 containerd[1511]: 2024-12-13 13:29:08.544 [INFO][3697] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.20.3/26] handle="k8s-pod-network.a1e31eac24c7172cb158a9af73e0a00096dce95905a57bfc2bbd43f33b0afe48" host="10.230.57.46" Dec 13 13:29:08.568478 containerd[1511]: 2024-12-13 13:29:08.544 [INFO][3697] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 13:29:08.568478 containerd[1511]: 2024-12-13 13:29:08.544 [INFO][3697] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.20.3/26] IPv6=[] ContainerID="a1e31eac24c7172cb158a9af73e0a00096dce95905a57bfc2bbd43f33b0afe48" HandleID="k8s-pod-network.a1e31eac24c7172cb158a9af73e0a00096dce95905a57bfc2bbd43f33b0afe48" Workload="10.230.57.46-k8s-nfs--server--provisioner--0-eth0" Dec 13 13:29:08.569696 containerd[1511]: 2024-12-13 13:29:08.546 [INFO][3687] cni-plugin/k8s.go 386: Populated endpoint ContainerID="a1e31eac24c7172cb158a9af73e0a00096dce95905a57bfc2bbd43f33b0afe48" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.230.57.46-k8s-nfs--server--provisioner--0-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.230.57.46-k8s-nfs--server--provisioner--0-eth0", GenerateName:"nfs-server-provisioner-", Namespace:"default", SelfLink:"", UID:"7c5ce348-bce5-4b6e-ac6b-b8103aace0a7", ResourceVersion:"1484", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 13, 29, 8, 0, time.Local), DeletionTimestamp:, 
DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nfs-server-provisioner", "apps.kubernetes.io/pod-index":"0", "chart":"nfs-server-provisioner-1.8.0", "controller-revision-hash":"nfs-server-provisioner-d5cbb7f57", "heritage":"Helm", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"nfs-server-provisioner", "release":"nfs-server-provisioner", "statefulset.kubernetes.io/pod-name":"nfs-server-provisioner-0"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.230.57.46", ContainerID:"", Pod:"nfs-server-provisioner-0", Endpoint:"eth0", ServiceAccountName:"nfs-server-provisioner", IPNetworks:[]string{"192.168.20.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.nfs-server-provisioner"}, InterfaceName:"cali60e51b789ff", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"nfs", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nfs-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad", 
Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x296, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x296, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 13:29:08.569696 containerd[1511]: 2024-12-13 13:29:08.547 [INFO][3687] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.20.3/32] ContainerID="a1e31eac24c7172cb158a9af73e0a00096dce95905a57bfc2bbd43f33b0afe48" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.230.57.46-k8s-nfs--server--provisioner--0-eth0" Dec 13 13:29:08.569696 containerd[1511]: 2024-12-13 13:29:08.547 [INFO][3687] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali60e51b789ff ContainerID="a1e31eac24c7172cb158a9af73e0a00096dce95905a57bfc2bbd43f33b0afe48" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.230.57.46-k8s-nfs--server--provisioner--0-eth0" Dec 13 13:29:08.569696 containerd[1511]: 2024-12-13 13:29:08.552 [INFO][3687] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a1e31eac24c7172cb158a9af73e0a00096dce95905a57bfc2bbd43f33b0afe48" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.230.57.46-k8s-nfs--server--provisioner--0-eth0" Dec 13 13:29:08.570003 containerd[1511]: 2024-12-13 13:29:08.553 [INFO][3687] 
cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="a1e31eac24c7172cb158a9af73e0a00096dce95905a57bfc2bbd43f33b0afe48" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.230.57.46-k8s-nfs--server--provisioner--0-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.230.57.46-k8s-nfs--server--provisioner--0-eth0", GenerateName:"nfs-server-provisioner-", Namespace:"default", SelfLink:"", UID:"7c5ce348-bce5-4b6e-ac6b-b8103aace0a7", ResourceVersion:"1484", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 13, 29, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nfs-server-provisioner", "apps.kubernetes.io/pod-index":"0", "chart":"nfs-server-provisioner-1.8.0", "controller-revision-hash":"nfs-server-provisioner-d5cbb7f57", "heritage":"Helm", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"nfs-server-provisioner", "release":"nfs-server-provisioner", "statefulset.kubernetes.io/pod-name":"nfs-server-provisioner-0"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.230.57.46", ContainerID:"a1e31eac24c7172cb158a9af73e0a00096dce95905a57bfc2bbd43f33b0afe48", Pod:"nfs-server-provisioner-0", Endpoint:"eth0", ServiceAccountName:"nfs-server-provisioner", IPNetworks:[]string{"192.168.20.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.nfs-server-provisioner"}, InterfaceName:"cali60e51b789ff", MAC:"f2:9b:3b:95:db:03", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"nfs", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, 
StrVal:"TCP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nfs-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x296, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x296, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 13:29:08.570003 containerd[1511]: 2024-12-13 13:29:08.564 [INFO][3687] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="a1e31eac24c7172cb158a9af73e0a00096dce95905a57bfc2bbd43f33b0afe48" Namespace="default" Pod="nfs-server-provisioner-0" 
WorkloadEndpoint="10.230.57.46-k8s-nfs--server--provisioner--0-eth0" Dec 13 13:29:08.661449 containerd[1511]: time="2024-12-13T13:29:08.659053644Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 13:29:08.661449 containerd[1511]: time="2024-12-13T13:29:08.659163418Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 13:29:08.661449 containerd[1511]: time="2024-12-13T13:29:08.659196134Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 13:29:08.661449 containerd[1511]: time="2024-12-13T13:29:08.659343366Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 13:29:08.695653 systemd[1]: Started cri-containerd-a1e31eac24c7172cb158a9af73e0a00096dce95905a57bfc2bbd43f33b0afe48.scope - libcontainer container a1e31eac24c7172cb158a9af73e0a00096dce95905a57bfc2bbd43f33b0afe48. 
Dec 13 13:29:08.762974 containerd[1511]: time="2024-12-13T13:29:08.762899820Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:7c5ce348-bce5-4b6e-ac6b-b8103aace0a7,Namespace:default,Attempt:0,} returns sandbox id \"a1e31eac24c7172cb158a9af73e0a00096dce95905a57bfc2bbd43f33b0afe48\"" Dec 13 13:29:08.792485 containerd[1511]: time="2024-12-13T13:29:08.792403481Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\"" Dec 13 13:29:08.866499 kubelet[1929]: E1213 13:29:08.866415 1929 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:29:09.866826 kubelet[1929]: E1213 13:29:09.866754 1929 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:29:10.167526 systemd-networkd[1432]: cali60e51b789ff: Gained IPv6LL Dec 13 13:29:10.868101 kubelet[1929]: E1213 13:29:10.867930 1929 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:29:11.868519 kubelet[1929]: E1213 13:29:11.868404 1929 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:29:12.056993 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1103407852.mount: Deactivated successfully. 
Dec 13 13:29:12.869967 kubelet[1929]: E1213 13:29:12.869637 1929 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:29:13.870959 kubelet[1929]: E1213 13:29:13.870853 1929 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:29:14.871976 kubelet[1929]: E1213 13:29:14.871904 1929 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:29:15.311801 containerd[1511]: time="2024-12-13T13:29:15.310172842Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:29:15.319688 containerd[1511]: time="2024-12-13T13:29:15.319597219Z" level=info msg="stop pulling image registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8: active requests=0, bytes read=91039414" Dec 13 13:29:15.328548 containerd[1511]: time="2024-12-13T13:29:15.328502419Z" level=info msg="ImageCreate event name:\"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:29:15.333059 containerd[1511]: time="2024-12-13T13:29:15.332988650Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:29:15.335666 containerd[1511]: time="2024-12-13T13:29:15.334661430Z" level=info msg="Pulled image \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" with image id \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\", repo tag \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\", repo digest \"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\", size 
\"91036984\" in 6.542194989s" Dec 13 13:29:15.335666 containerd[1511]: time="2024-12-13T13:29:15.334728681Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" returns image reference \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\"" Dec 13 13:29:15.339583 containerd[1511]: time="2024-12-13T13:29:15.339549063Z" level=info msg="CreateContainer within sandbox \"a1e31eac24c7172cb158a9af73e0a00096dce95905a57bfc2bbd43f33b0afe48\" for container &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,}" Dec 13 13:29:15.360661 containerd[1511]: time="2024-12-13T13:29:15.360610324Z" level=info msg="CreateContainer within sandbox \"a1e31eac24c7172cb158a9af73e0a00096dce95905a57bfc2bbd43f33b0afe48\" for &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,} returns container id \"74e43b4cbad6cdf6a44e993c367253b0f5d65630f83feb861fb96056b31603ca\"" Dec 13 13:29:15.362407 containerd[1511]: time="2024-12-13T13:29:15.361639059Z" level=info msg="StartContainer for \"74e43b4cbad6cdf6a44e993c367253b0f5d65630f83feb861fb96056b31603ca\"" Dec 13 13:29:15.415836 systemd[1]: run-containerd-runc-k8s.io-74e43b4cbad6cdf6a44e993c367253b0f5d65630f83feb861fb96056b31603ca-runc.pYq1sY.mount: Deactivated successfully. Dec 13 13:29:15.425690 systemd[1]: Started cri-containerd-74e43b4cbad6cdf6a44e993c367253b0f5d65630f83feb861fb96056b31603ca.scope - libcontainer container 74e43b4cbad6cdf6a44e993c367253b0f5d65630f83feb861fb96056b31603ca. 
Dec 13 13:29:15.474867 containerd[1511]: time="2024-12-13T13:29:15.474783728Z" level=info msg="StartContainer for \"74e43b4cbad6cdf6a44e993c367253b0f5d65630f83feb861fb96056b31603ca\" returns successfully" Dec 13 13:29:15.872793 kubelet[1929]: E1213 13:29:15.872674 1929 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:29:16.395813 kubelet[1929]: I1213 13:29:16.395687 1929 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nfs-server-provisioner-0" podStartSLOduration=1.850609778 podStartE2EDuration="8.395629758s" podCreationTimestamp="2024-12-13 13:29:08 +0000 UTC" firstStartedPulling="2024-12-13 13:29:08.791507476 +0000 UTC m=+41.409186661" lastFinishedPulling="2024-12-13 13:29:15.336527451 +0000 UTC m=+47.954206641" observedRunningTime="2024-12-13 13:29:16.393593194 +0000 UTC m=+49.011272412" watchObservedRunningTime="2024-12-13 13:29:16.395629758 +0000 UTC m=+49.013308946" Dec 13 13:29:16.873770 kubelet[1929]: E1213 13:29:16.873706 1929 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:29:17.875015 kubelet[1929]: E1213 13:29:17.874900 1929 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:29:18.875225 kubelet[1929]: E1213 13:29:18.875114 1929 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:29:19.876569 kubelet[1929]: E1213 13:29:19.876351 1929 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:29:20.876715 kubelet[1929]: E1213 13:29:20.876639 1929 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:29:21.878248 kubelet[1929]: E1213 13:29:21.878111 1929 file_linux.go:61] "Unable to read 
config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:29:22.810267 systemd[1]: run-containerd-runc-k8s.io-5256e267445d264a9efabb8dfc63fcdfcdf0246d339bb5ce6d89aa1b91617055-runc.tZNtcY.mount: Deactivated successfully. Dec 13 13:29:22.878967 kubelet[1929]: E1213 13:29:22.878882 1929 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:29:23.879214 kubelet[1929]: E1213 13:29:23.879099 1929 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:29:24.879517 kubelet[1929]: E1213 13:29:24.879301 1929 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:29:25.217850 kubelet[1929]: I1213 13:29:25.216583 1929 topology_manager.go:215] "Topology Admit Handler" podUID="b03a52a2-aa13-4baf-bed3-60ca20d3e483" podNamespace="default" podName="test-pod-1" Dec 13 13:29:25.228686 systemd[1]: Created slice kubepods-besteffort-podb03a52a2_aa13_4baf_bed3_60ca20d3e483.slice - libcontainer container kubepods-besteffort-podb03a52a2_aa13_4baf_bed3_60ca20d3e483.slice. 
Dec 13 13:29:25.387120 kubelet[1929]: I1213 13:29:25.387010 1929 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g2mx6\" (UniqueName: \"kubernetes.io/projected/b03a52a2-aa13-4baf-bed3-60ca20d3e483-kube-api-access-g2mx6\") pod \"test-pod-1\" (UID: \"b03a52a2-aa13-4baf-bed3-60ca20d3e483\") " pod="default/test-pod-1" Dec 13 13:29:25.387120 kubelet[1929]: I1213 13:29:25.387108 1929 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-6388ae60-926a-47b3-87e0-147c975fc62d\" (UniqueName: \"kubernetes.io/nfs/b03a52a2-aa13-4baf-bed3-60ca20d3e483-pvc-6388ae60-926a-47b3-87e0-147c975fc62d\") pod \"test-pod-1\" (UID: \"b03a52a2-aa13-4baf-bed3-60ca20d3e483\") " pod="default/test-pod-1" Dec 13 13:29:25.540433 kernel: FS-Cache: Loaded Dec 13 13:29:25.623026 kernel: RPC: Registered named UNIX socket transport module. Dec 13 13:29:25.623973 kernel: RPC: Registered udp transport module. Dec 13 13:29:25.624942 kernel: RPC: Registered tcp transport module. Dec 13 13:29:25.625028 kernel: RPC: Registered tcp-with-tls transport module. Dec 13 13:29:25.625779 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module. 
Dec 13 13:29:25.880430 kubelet[1929]: E1213 13:29:25.879683 1929 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:29:25.910527 kernel: NFS: Registering the id_resolver key type Dec 13 13:29:25.910647 kernel: Key type id_resolver registered Dec 13 13:29:25.910706 kernel: Key type id_legacy registered Dec 13 13:29:25.966580 nfsidmap[3906]: nss_getpwnam: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'gb1.brightbox.com' Dec 13 13:29:25.975967 nfsidmap[3909]: nss_name_to_gid: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'gb1.brightbox.com' Dec 13 13:29:26.135896 containerd[1511]: time="2024-12-13T13:29:26.134204868Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:b03a52a2-aa13-4baf-bed3-60ca20d3e483,Namespace:default,Attempt:0,}" Dec 13 13:29:26.372262 systemd-networkd[1432]: cali5ec59c6bf6e: Link UP Dec 13 13:29:26.372779 systemd-networkd[1432]: cali5ec59c6bf6e: Gained carrier Dec 13 13:29:26.387539 containerd[1511]: 2024-12-13 13:29:26.222 [INFO][3912] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.230.57.46-k8s-test--pod--1-eth0 default b03a52a2-aa13-4baf-bed3-60ca20d3e483 1551 0 2024-12-13 13:29:10 +0000 UTC map[projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s 10.230.57.46 test-pod-1 eth0 default [] [] [kns.default ksa.default.default] cali5ec59c6bf6e [] []}} ContainerID="8132bcf6b3ca181ff03413432564fb09c1f30dd5b03c0880f7dd2ead1ba0a1cd" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.230.57.46-k8s-test--pod--1-" Dec 13 13:29:26.387539 containerd[1511]: 2024-12-13 13:29:26.225 [INFO][3912] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="8132bcf6b3ca181ff03413432564fb09c1f30dd5b03c0880f7dd2ead1ba0a1cd" 
Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.230.57.46-k8s-test--pod--1-eth0" Dec 13 13:29:26.387539 containerd[1511]: 2024-12-13 13:29:26.285 [INFO][3923] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8132bcf6b3ca181ff03413432564fb09c1f30dd5b03c0880f7dd2ead1ba0a1cd" HandleID="k8s-pod-network.8132bcf6b3ca181ff03413432564fb09c1f30dd5b03c0880f7dd2ead1ba0a1cd" Workload="10.230.57.46-k8s-test--pod--1-eth0" Dec 13 13:29:26.387539 containerd[1511]: 2024-12-13 13:29:26.305 [INFO][3923] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="8132bcf6b3ca181ff03413432564fb09c1f30dd5b03c0880f7dd2ead1ba0a1cd" HandleID="k8s-pod-network.8132bcf6b3ca181ff03413432564fb09c1f30dd5b03c0880f7dd2ead1ba0a1cd" Workload="10.230.57.46-k8s-test--pod--1-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000338d90), Attrs:map[string]string{"namespace":"default", "node":"10.230.57.46", "pod":"test-pod-1", "timestamp":"2024-12-13 13:29:26.285832596 +0000 UTC"}, Hostname:"10.230.57.46", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Dec 13 13:29:26.387539 containerd[1511]: 2024-12-13 13:29:26.306 [INFO][3923] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Dec 13 13:29:26.387539 containerd[1511]: 2024-12-13 13:29:26.306 [INFO][3923] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Dec 13 13:29:26.387539 containerd[1511]: 2024-12-13 13:29:26.306 [INFO][3923] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.230.57.46' Dec 13 13:29:26.387539 containerd[1511]: 2024-12-13 13:29:26.311 [INFO][3923] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.8132bcf6b3ca181ff03413432564fb09c1f30dd5b03c0880f7dd2ead1ba0a1cd" host="10.230.57.46" Dec 13 13:29:26.387539 containerd[1511]: 2024-12-13 13:29:26.322 [INFO][3923] ipam/ipam.go 372: Looking up existing affinities for host host="10.230.57.46" Dec 13 13:29:26.387539 containerd[1511]: 2024-12-13 13:29:26.331 [INFO][3923] ipam/ipam.go 489: Trying affinity for 192.168.20.0/26 host="10.230.57.46" Dec 13 13:29:26.387539 containerd[1511]: 2024-12-13 13:29:26.334 [INFO][3923] ipam/ipam.go 155: Attempting to load block cidr=192.168.20.0/26 host="10.230.57.46" Dec 13 13:29:26.387539 containerd[1511]: 2024-12-13 13:29:26.339 [INFO][3923] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.20.0/26 host="10.230.57.46" Dec 13 13:29:26.387539 containerd[1511]: 2024-12-13 13:29:26.339 [INFO][3923] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.20.0/26 handle="k8s-pod-network.8132bcf6b3ca181ff03413432564fb09c1f30dd5b03c0880f7dd2ead1ba0a1cd" host="10.230.57.46" Dec 13 13:29:26.387539 containerd[1511]: 2024-12-13 13:29:26.342 [INFO][3923] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.8132bcf6b3ca181ff03413432564fb09c1f30dd5b03c0880f7dd2ead1ba0a1cd Dec 13 13:29:26.387539 containerd[1511]: 2024-12-13 13:29:26.351 [INFO][3923] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.20.0/26 handle="k8s-pod-network.8132bcf6b3ca181ff03413432564fb09c1f30dd5b03c0880f7dd2ead1ba0a1cd" host="10.230.57.46" Dec 13 13:29:26.387539 containerd[1511]: 2024-12-13 13:29:26.362 [INFO][3923] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.20.4/26] block=192.168.20.0/26 
handle="k8s-pod-network.8132bcf6b3ca181ff03413432564fb09c1f30dd5b03c0880f7dd2ead1ba0a1cd" host="10.230.57.46" Dec 13 13:29:26.387539 containerd[1511]: 2024-12-13 13:29:26.362 [INFO][3923] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.20.4/26] handle="k8s-pod-network.8132bcf6b3ca181ff03413432564fb09c1f30dd5b03c0880f7dd2ead1ba0a1cd" host="10.230.57.46" Dec 13 13:29:26.387539 containerd[1511]: 2024-12-13 13:29:26.362 [INFO][3923] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Dec 13 13:29:26.387539 containerd[1511]: 2024-12-13 13:29:26.362 [INFO][3923] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.20.4/26] IPv6=[] ContainerID="8132bcf6b3ca181ff03413432564fb09c1f30dd5b03c0880f7dd2ead1ba0a1cd" HandleID="k8s-pod-network.8132bcf6b3ca181ff03413432564fb09c1f30dd5b03c0880f7dd2ead1ba0a1cd" Workload="10.230.57.46-k8s-test--pod--1-eth0" Dec 13 13:29:26.387539 containerd[1511]: 2024-12-13 13:29:26.365 [INFO][3912] cni-plugin/k8s.go 386: Populated endpoint ContainerID="8132bcf6b3ca181ff03413432564fb09c1f30dd5b03c0880f7dd2ead1ba0a1cd" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.230.57.46-k8s-test--pod--1-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.230.57.46-k8s-test--pod--1-eth0", GenerateName:"", Namespace:"default", SelfLink:"", UID:"b03a52a2-aa13-4baf-bed3-60ca20d3e483", ResourceVersion:"1551", Generation:0, CreationTimestamp:time.Date(2024, time.December, 13, 13, 29, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", 
Node:"10.230.57.46", ContainerID:"", Pod:"test-pod-1", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.20.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali5ec59c6bf6e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 13:29:26.394109 containerd[1511]: 2024-12-13 13:29:26.365 [INFO][3912] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.20.4/32] ContainerID="8132bcf6b3ca181ff03413432564fb09c1f30dd5b03c0880f7dd2ead1ba0a1cd" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.230.57.46-k8s-test--pod--1-eth0" Dec 13 13:29:26.394109 containerd[1511]: 2024-12-13 13:29:26.365 [INFO][3912] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5ec59c6bf6e ContainerID="8132bcf6b3ca181ff03413432564fb09c1f30dd5b03c0880f7dd2ead1ba0a1cd" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.230.57.46-k8s-test--pod--1-eth0" Dec 13 13:29:26.394109 containerd[1511]: 2024-12-13 13:29:26.371 [INFO][3912] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="8132bcf6b3ca181ff03413432564fb09c1f30dd5b03c0880f7dd2ead1ba0a1cd" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.230.57.46-k8s-test--pod--1-eth0" Dec 13 13:29:26.394109 containerd[1511]: 2024-12-13 13:29:26.372 [INFO][3912] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="8132bcf6b3ca181ff03413432564fb09c1f30dd5b03c0880f7dd2ead1ba0a1cd" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.230.57.46-k8s-test--pod--1-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.230.57.46-k8s-test--pod--1-eth0", GenerateName:"", Namespace:"default", SelfLink:"", UID:"b03a52a2-aa13-4baf-bed3-60ca20d3e483", ResourceVersion:"1551", Generation:0, 
CreationTimestamp:time.Date(2024, time.December, 13, 13, 29, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.230.57.46", ContainerID:"8132bcf6b3ca181ff03413432564fb09c1f30dd5b03c0880f7dd2ead1ba0a1cd", Pod:"test-pod-1", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.20.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali5ec59c6bf6e", MAC:"e2:41:ed:cc:28:dc", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Dec 13 13:29:26.394109 containerd[1511]: 2024-12-13 13:29:26.383 [INFO][3912] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="8132bcf6b3ca181ff03413432564fb09c1f30dd5b03c0880f7dd2ead1ba0a1cd" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.230.57.46-k8s-test--pod--1-eth0" Dec 13 13:29:26.444002 containerd[1511]: time="2024-12-13T13:29:26.443720431Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Dec 13 13:29:26.444002 containerd[1511]: time="2024-12-13T13:29:26.443887769Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Dec 13 13:29:26.444002 containerd[1511]: time="2024-12-13T13:29:26.443913740Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 13:29:26.444861 containerd[1511]: time="2024-12-13T13:29:26.444099302Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Dec 13 13:29:26.473838 systemd[1]: Started cri-containerd-8132bcf6b3ca181ff03413432564fb09c1f30dd5b03c0880f7dd2ead1ba0a1cd.scope - libcontainer container 8132bcf6b3ca181ff03413432564fb09c1f30dd5b03c0880f7dd2ead1ba0a1cd. Dec 13 13:29:26.548432 containerd[1511]: time="2024-12-13T13:29:26.548273054Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:b03a52a2-aa13-4baf-bed3-60ca20d3e483,Namespace:default,Attempt:0,} returns sandbox id \"8132bcf6b3ca181ff03413432564fb09c1f30dd5b03c0880f7dd2ead1ba0a1cd\"" Dec 13 13:29:26.552749 containerd[1511]: time="2024-12-13T13:29:26.552605826Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Dec 13 13:29:26.880203 kubelet[1929]: E1213 13:29:26.880089 1929 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Dec 13 13:29:26.892980 containerd[1511]: time="2024-12-13T13:29:26.892789113Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Dec 13 13:29:26.893741 containerd[1511]: time="2024-12-13T13:29:26.893631493Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=61" Dec 13 13:29:26.898890 containerd[1511]: time="2024-12-13T13:29:26.898685258Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:fa0a8cea5e76ad962111c39c85bb312edaf5b89eccd8f404eeea66c9759641e3\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:e04edf30a4ea4c5a4107110797c72d3ee8a654415f00acd4019be17218afd9a1\", size \"71035905\" in 345.912883ms" Dec 13 13:29:26.898890 containerd[1511]: 
time="2024-12-13T13:29:26.898727986Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:fa0a8cea5e76ad962111c39c85bb312edaf5b89eccd8f404eeea66c9759641e3\"" Dec 13 13:29:26.902175 containerd[1511]: time="2024-12-13T13:29:26.902141057Z" level=info msg="CreateContainer within sandbox \"8132bcf6b3ca181ff03413432564fb09c1f30dd5b03c0880f7dd2ead1ba0a1cd\" for container &ContainerMetadata{Name:test,Attempt:0,}" Dec 13 13:29:26.931518 containerd[1511]: time="2024-12-13T13:29:26.931457866Z" level=info msg="CreateContainer within sandbox \"8132bcf6b3ca181ff03413432564fb09c1f30dd5b03c0880f7dd2ead1ba0a1cd\" for &ContainerMetadata{Name:test,Attempt:0,} returns container id \"3b1bd0e0092fb47e0b655a29755a4a9ae0aa1985bd2d5e6bd5f362499bf6e71c\"" Dec 13 13:29:26.932680 containerd[1511]: time="2024-12-13T13:29:26.932629418Z" level=info msg="StartContainer for \"3b1bd0e0092fb47e0b655a29755a4a9ae0aa1985bd2d5e6bd5f362499bf6e71c\"" Dec 13 13:29:26.982601 systemd[1]: Started cri-containerd-3b1bd0e0092fb47e0b655a29755a4a9ae0aa1985bd2d5e6bd5f362499bf6e71c.scope - libcontainer container 3b1bd0e0092fb47e0b655a29755a4a9ae0aa1985bd2d5e6bd5f362499bf6e71c. 
Dec 13 13:29:27.033454 containerd[1511]: time="2024-12-13T13:29:27.033279252Z" level=info msg="StartContainer for \"3b1bd0e0092fb47e0b655a29755a4a9ae0aa1985bd2d5e6bd5f362499bf6e71c\" returns successfully"
Dec 13 13:29:27.429184 kubelet[1929]: I1213 13:29:27.428972 1929 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/test-pod-1" podStartSLOduration=17.080774421 podStartE2EDuration="17.428912176s" podCreationTimestamp="2024-12-13 13:29:10 +0000 UTC" firstStartedPulling="2024-12-13 13:29:26.551470552 +0000 UTC m=+59.169149735" lastFinishedPulling="2024-12-13 13:29:26.899608306 +0000 UTC m=+59.517287490" observedRunningTime="2024-12-13 13:29:27.427850096 +0000 UTC m=+60.045529321" watchObservedRunningTime="2024-12-13 13:29:27.428912176 +0000 UTC m=+60.046591376"
Dec 13 13:29:27.510223 systemd[1]: run-containerd-runc-k8s.io-3b1bd0e0092fb47e0b655a29755a4a9ae0aa1985bd2d5e6bd5f362499bf6e71c-runc.aMKDpY.mount: Deactivated successfully.
Dec 13 13:29:27.766883 systemd-networkd[1432]: cali5ec59c6bf6e: Gained IPv6LL
Dec 13 13:29:27.827818 kubelet[1929]: E1213 13:29:27.827766 1929 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 13:29:27.867387 containerd[1511]: time="2024-12-13T13:29:27.867331653Z" level=info msg="StopPodSandbox for \"a61227f7a37566b960694a22ddeb2becbc8da38c71932b3cf75a4174066eb180\""
Dec 13 13:29:27.868319 containerd[1511]: time="2024-12-13T13:29:27.868162846Z" level=info msg="TearDown network for sandbox \"a61227f7a37566b960694a22ddeb2becbc8da38c71932b3cf75a4174066eb180\" successfully"
Dec 13 13:29:27.868319 containerd[1511]: time="2024-12-13T13:29:27.868189847Z" level=info msg="StopPodSandbox for \"a61227f7a37566b960694a22ddeb2becbc8da38c71932b3cf75a4174066eb180\" returns successfully"
Dec 13 13:29:27.873232 containerd[1511]: time="2024-12-13T13:29:27.872983943Z" level=info msg="RemovePodSandbox for \"a61227f7a37566b960694a22ddeb2becbc8da38c71932b3cf75a4174066eb180\""
Dec 13 13:29:27.881429 kubelet[1929]: E1213 13:29:27.880799 1929 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 13:29:27.885921 containerd[1511]: time="2024-12-13T13:29:27.885890674Z" level=info msg="Forcibly stopping sandbox \"a61227f7a37566b960694a22ddeb2becbc8da38c71932b3cf75a4174066eb180\""
Dec 13 13:29:27.886246 containerd[1511]: time="2024-12-13T13:29:27.886142437Z" level=info msg="TearDown network for sandbox \"a61227f7a37566b960694a22ddeb2becbc8da38c71932b3cf75a4174066eb180\" successfully"
Dec 13 13:29:27.899574 containerd[1511]: time="2024-12-13T13:29:27.899522663Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a61227f7a37566b960694a22ddeb2becbc8da38c71932b3cf75a4174066eb180\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Dec 13 13:29:27.899763 containerd[1511]: time="2024-12-13T13:29:27.899733525Z" level=info msg="RemovePodSandbox \"a61227f7a37566b960694a22ddeb2becbc8da38c71932b3cf75a4174066eb180\" returns successfully"
Dec 13 13:29:27.900442 containerd[1511]: time="2024-12-13T13:29:27.900402423Z" level=info msg="StopPodSandbox for \"1be0200b1dcc06604d770fde094070a996dab926a13c943dad139929344c775d\""
Dec 13 13:29:27.900864 containerd[1511]: time="2024-12-13T13:29:27.900837628Z" level=info msg="TearDown network for sandbox \"1be0200b1dcc06604d770fde094070a996dab926a13c943dad139929344c775d\" successfully"
Dec 13 13:29:27.900995 containerd[1511]: time="2024-12-13T13:29:27.900963807Z" level=info msg="StopPodSandbox for \"1be0200b1dcc06604d770fde094070a996dab926a13c943dad139929344c775d\" returns successfully"
Dec 13 13:29:27.901619 containerd[1511]: time="2024-12-13T13:29:27.901591471Z" level=info msg="RemovePodSandbox for \"1be0200b1dcc06604d770fde094070a996dab926a13c943dad139929344c775d\""
Dec 13 13:29:27.902083 containerd[1511]: time="2024-12-13T13:29:27.901857445Z" level=info msg="Forcibly stopping sandbox \"1be0200b1dcc06604d770fde094070a996dab926a13c943dad139929344c775d\""
Dec 13 13:29:27.902083 containerd[1511]: time="2024-12-13T13:29:27.901989895Z" level=info msg="TearDown network for sandbox \"1be0200b1dcc06604d770fde094070a996dab926a13c943dad139929344c775d\" successfully"
Dec 13 13:29:27.933349 containerd[1511]: time="2024-12-13T13:29:27.933147060Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"1be0200b1dcc06604d770fde094070a996dab926a13c943dad139929344c775d\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Dec 13 13:29:27.933349 containerd[1511]: time="2024-12-13T13:29:27.933209848Z" level=info msg="RemovePodSandbox \"1be0200b1dcc06604d770fde094070a996dab926a13c943dad139929344c775d\" returns successfully"
Dec 13 13:29:27.933902 containerd[1511]: time="2024-12-13T13:29:27.933803098Z" level=info msg="StopPodSandbox for \"0bf54a32b4c693d35d3a15c60c5ed7af72de82ba4ca9e1f875a317d3564ae396\""
Dec 13 13:29:27.934251 containerd[1511]: time="2024-12-13T13:29:27.934103045Z" level=info msg="TearDown network for sandbox \"0bf54a32b4c693d35d3a15c60c5ed7af72de82ba4ca9e1f875a317d3564ae396\" successfully"
Dec 13 13:29:27.934251 containerd[1511]: time="2024-12-13T13:29:27.934169500Z" level=info msg="StopPodSandbox for \"0bf54a32b4c693d35d3a15c60c5ed7af72de82ba4ca9e1f875a317d3564ae396\" returns successfully"
Dec 13 13:29:27.936233 containerd[1511]: time="2024-12-13T13:29:27.934772746Z" level=info msg="RemovePodSandbox for \"0bf54a32b4c693d35d3a15c60c5ed7af72de82ba4ca9e1f875a317d3564ae396\""
Dec 13 13:29:27.936233 containerd[1511]: time="2024-12-13T13:29:27.934808833Z" level=info msg="Forcibly stopping sandbox \"0bf54a32b4c693d35d3a15c60c5ed7af72de82ba4ca9e1f875a317d3564ae396\""
Dec 13 13:29:27.936233 containerd[1511]: time="2024-12-13T13:29:27.934934369Z" level=info msg="TearDown network for sandbox \"0bf54a32b4c693d35d3a15c60c5ed7af72de82ba4ca9e1f875a317d3564ae396\" successfully"
Dec 13 13:29:27.939304 containerd[1511]: time="2024-12-13T13:29:27.939269271Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"0bf54a32b4c693d35d3a15c60c5ed7af72de82ba4ca9e1f875a317d3564ae396\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Dec 13 13:29:27.939485 containerd[1511]: time="2024-12-13T13:29:27.939455324Z" level=info msg="RemovePodSandbox \"0bf54a32b4c693d35d3a15c60c5ed7af72de82ba4ca9e1f875a317d3564ae396\" returns successfully"
Dec 13 13:29:27.940096 containerd[1511]: time="2024-12-13T13:29:27.940048700Z" level=info msg="StopPodSandbox for \"b3e76a379482abcaea3b805b9586e31607fb78966540100ba3130fb61547679c\""
Dec 13 13:29:27.940184 containerd[1511]: time="2024-12-13T13:29:27.940165252Z" level=info msg="TearDown network for sandbox \"b3e76a379482abcaea3b805b9586e31607fb78966540100ba3130fb61547679c\" successfully"
Dec 13 13:29:27.940264 containerd[1511]: time="2024-12-13T13:29:27.940184232Z" level=info msg="StopPodSandbox for \"b3e76a379482abcaea3b805b9586e31607fb78966540100ba3130fb61547679c\" returns successfully"
Dec 13 13:29:27.940931 containerd[1511]: time="2024-12-13T13:29:27.940872252Z" level=info msg="RemovePodSandbox for \"b3e76a379482abcaea3b805b9586e31607fb78966540100ba3130fb61547679c\""
Dec 13 13:29:27.940931 containerd[1511]: time="2024-12-13T13:29:27.940927776Z" level=info msg="Forcibly stopping sandbox \"b3e76a379482abcaea3b805b9586e31607fb78966540100ba3130fb61547679c\""
Dec 13 13:29:27.941078 containerd[1511]: time="2024-12-13T13:29:27.941039097Z" level=info msg="TearDown network for sandbox \"b3e76a379482abcaea3b805b9586e31607fb78966540100ba3130fb61547679c\" successfully"
Dec 13 13:29:27.943902 containerd[1511]: time="2024-12-13T13:29:27.943851692Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b3e76a379482abcaea3b805b9586e31607fb78966540100ba3130fb61547679c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Dec 13 13:29:27.944025 containerd[1511]: time="2024-12-13T13:29:27.943911332Z" level=info msg="RemovePodSandbox \"b3e76a379482abcaea3b805b9586e31607fb78966540100ba3130fb61547679c\" returns successfully"
Dec 13 13:29:27.944559 containerd[1511]: time="2024-12-13T13:29:27.944351852Z" level=info msg="StopPodSandbox for \"9a543f875ae45cc991b7bc0fe141f0ee3869cdd6deb5212aec0da28faf666998\""
Dec 13 13:29:27.944657 containerd[1511]: time="2024-12-13T13:29:27.944532781Z" level=info msg="TearDown network for sandbox \"9a543f875ae45cc991b7bc0fe141f0ee3869cdd6deb5212aec0da28faf666998\" successfully"
Dec 13 13:29:27.944657 containerd[1511]: time="2024-12-13T13:29:27.944591958Z" level=info msg="StopPodSandbox for \"9a543f875ae45cc991b7bc0fe141f0ee3869cdd6deb5212aec0da28faf666998\" returns successfully"
Dec 13 13:29:27.945469 containerd[1511]: time="2024-12-13T13:29:27.945051156Z" level=info msg="RemovePodSandbox for \"9a543f875ae45cc991b7bc0fe141f0ee3869cdd6deb5212aec0da28faf666998\""
Dec 13 13:29:27.945469 containerd[1511]: time="2024-12-13T13:29:27.945087412Z" level=info msg="Forcibly stopping sandbox \"9a543f875ae45cc991b7bc0fe141f0ee3869cdd6deb5212aec0da28faf666998\""
Dec 13 13:29:27.945469 containerd[1511]: time="2024-12-13T13:29:27.945192072Z" level=info msg="TearDown network for sandbox \"9a543f875ae45cc991b7bc0fe141f0ee3869cdd6deb5212aec0da28faf666998\" successfully"
Dec 13 13:29:27.948320 containerd[1511]: time="2024-12-13T13:29:27.947953649Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"9a543f875ae45cc991b7bc0fe141f0ee3869cdd6deb5212aec0da28faf666998\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Dec 13 13:29:27.948320 containerd[1511]: time="2024-12-13T13:29:27.948023134Z" level=info msg="RemovePodSandbox \"9a543f875ae45cc991b7bc0fe141f0ee3869cdd6deb5212aec0da28faf666998\" returns successfully"
Dec 13 13:29:27.948496 containerd[1511]: time="2024-12-13T13:29:27.948427543Z" level=info msg="StopPodSandbox for \"7c3d8f86b9e4491808e2db8a5c789b09ddfbcabdfb430606207bafdefcfde2c5\""
Dec 13 13:29:27.948558 containerd[1511]: time="2024-12-13T13:29:27.948530920Z" level=info msg="TearDown network for sandbox \"7c3d8f86b9e4491808e2db8a5c789b09ddfbcabdfb430606207bafdefcfde2c5\" successfully"
Dec 13 13:29:27.948558 containerd[1511]: time="2024-12-13T13:29:27.948550463Z" level=info msg="StopPodSandbox for \"7c3d8f86b9e4491808e2db8a5c789b09ddfbcabdfb430606207bafdefcfde2c5\" returns successfully"
Dec 13 13:29:27.949053 containerd[1511]: time="2024-12-13T13:29:27.948942323Z" level=info msg="RemovePodSandbox for \"7c3d8f86b9e4491808e2db8a5c789b09ddfbcabdfb430606207bafdefcfde2c5\""
Dec 13 13:29:27.949053 containerd[1511]: time="2024-12-13T13:29:27.948984951Z" level=info msg="Forcibly stopping sandbox \"7c3d8f86b9e4491808e2db8a5c789b09ddfbcabdfb430606207bafdefcfde2c5\""
Dec 13 13:29:27.949214 containerd[1511]: time="2024-12-13T13:29:27.949095627Z" level=info msg="TearDown network for sandbox \"7c3d8f86b9e4491808e2db8a5c789b09ddfbcabdfb430606207bafdefcfde2c5\" successfully"
Dec 13 13:29:27.951490 containerd[1511]: time="2024-12-13T13:29:27.951444198Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"7c3d8f86b9e4491808e2db8a5c789b09ddfbcabdfb430606207bafdefcfde2c5\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Dec 13 13:29:27.951587 containerd[1511]: time="2024-12-13T13:29:27.951501619Z" level=info msg="RemovePodSandbox \"7c3d8f86b9e4491808e2db8a5c789b09ddfbcabdfb430606207bafdefcfde2c5\" returns successfully"
Dec 13 13:29:27.952277 containerd[1511]: time="2024-12-13T13:29:27.952194629Z" level=info msg="StopPodSandbox for \"916517386d81374e8722f2f6a26f4538760d9f5c3c152592355fece441eae59a\""
Dec 13 13:29:27.952579 containerd[1511]: time="2024-12-13T13:29:27.952307799Z" level=info msg="TearDown network for sandbox \"916517386d81374e8722f2f6a26f4538760d9f5c3c152592355fece441eae59a\" successfully"
Dec 13 13:29:27.952579 containerd[1511]: time="2024-12-13T13:29:27.952327148Z" level=info msg="StopPodSandbox for \"916517386d81374e8722f2f6a26f4538760d9f5c3c152592355fece441eae59a\" returns successfully"
Dec 13 13:29:27.953427 containerd[1511]: time="2024-12-13T13:29:27.952830800Z" level=info msg="RemovePodSandbox for \"916517386d81374e8722f2f6a26f4538760d9f5c3c152592355fece441eae59a\""
Dec 13 13:29:27.953427 containerd[1511]: time="2024-12-13T13:29:27.952927327Z" level=info msg="Forcibly stopping sandbox \"916517386d81374e8722f2f6a26f4538760d9f5c3c152592355fece441eae59a\""
Dec 13 13:29:27.953427 containerd[1511]: time="2024-12-13T13:29:27.953050632Z" level=info msg="TearDown network for sandbox \"916517386d81374e8722f2f6a26f4538760d9f5c3c152592355fece441eae59a\" successfully"
Dec 13 13:29:27.956072 containerd[1511]: time="2024-12-13T13:29:27.955935166Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"916517386d81374e8722f2f6a26f4538760d9f5c3c152592355fece441eae59a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Dec 13 13:29:27.956249 containerd[1511]: time="2024-12-13T13:29:27.956221264Z" level=info msg="RemovePodSandbox \"916517386d81374e8722f2f6a26f4538760d9f5c3c152592355fece441eae59a\" returns successfully"
Dec 13 13:29:27.959920 containerd[1511]: time="2024-12-13T13:29:27.959696707Z" level=info msg="StopPodSandbox for \"6ed27bbd8c2d1d82fd1a295f3adc36ea23dab317116433a60ecbddf8860b00bf\""
Dec 13 13:29:27.959920 containerd[1511]: time="2024-12-13T13:29:27.959821204Z" level=info msg="TearDown network for sandbox \"6ed27bbd8c2d1d82fd1a295f3adc36ea23dab317116433a60ecbddf8860b00bf\" successfully"
Dec 13 13:29:27.959920 containerd[1511]: time="2024-12-13T13:29:27.959840782Z" level=info msg="StopPodSandbox for \"6ed27bbd8c2d1d82fd1a295f3adc36ea23dab317116433a60ecbddf8860b00bf\" returns successfully"
Dec 13 13:29:27.961409 containerd[1511]: time="2024-12-13T13:29:27.960547930Z" level=info msg="RemovePodSandbox for \"6ed27bbd8c2d1d82fd1a295f3adc36ea23dab317116433a60ecbddf8860b00bf\""
Dec 13 13:29:27.961409 containerd[1511]: time="2024-12-13T13:29:27.960580641Z" level=info msg="Forcibly stopping sandbox \"6ed27bbd8c2d1d82fd1a295f3adc36ea23dab317116433a60ecbddf8860b00bf\""
Dec 13 13:29:27.961409 containerd[1511]: time="2024-12-13T13:29:27.960662152Z" level=info msg="TearDown network for sandbox \"6ed27bbd8c2d1d82fd1a295f3adc36ea23dab317116433a60ecbddf8860b00bf\" successfully"
Dec 13 13:29:27.965951 containerd[1511]: time="2024-12-13T13:29:27.965917726Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"6ed27bbd8c2d1d82fd1a295f3adc36ea23dab317116433a60ecbddf8860b00bf\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Dec 13 13:29:27.966577 containerd[1511]: time="2024-12-13T13:29:27.966453796Z" level=info msg="RemovePodSandbox \"6ed27bbd8c2d1d82fd1a295f3adc36ea23dab317116433a60ecbddf8860b00bf\" returns successfully"
Dec 13 13:29:27.967234 containerd[1511]: time="2024-12-13T13:29:27.967204023Z" level=info msg="StopPodSandbox for \"42197c7dbc813c5d67cadd1ca51959c94290812d3697aee8f9e75e29c773c95b\""
Dec 13 13:29:27.969410 containerd[1511]: time="2024-12-13T13:29:27.968819633Z" level=info msg="TearDown network for sandbox \"42197c7dbc813c5d67cadd1ca51959c94290812d3697aee8f9e75e29c773c95b\" successfully"
Dec 13 13:29:27.969410 containerd[1511]: time="2024-12-13T13:29:27.968847733Z" level=info msg="StopPodSandbox for \"42197c7dbc813c5d67cadd1ca51959c94290812d3697aee8f9e75e29c773c95b\" returns successfully"
Dec 13 13:29:27.969410 containerd[1511]: time="2024-12-13T13:29:27.969174216Z" level=info msg="RemovePodSandbox for \"42197c7dbc813c5d67cadd1ca51959c94290812d3697aee8f9e75e29c773c95b\""
Dec 13 13:29:27.969410 containerd[1511]: time="2024-12-13T13:29:27.969200739Z" level=info msg="Forcibly stopping sandbox \"42197c7dbc813c5d67cadd1ca51959c94290812d3697aee8f9e75e29c773c95b\""
Dec 13 13:29:27.969410 containerd[1511]: time="2024-12-13T13:29:27.969284559Z" level=info msg="TearDown network for sandbox \"42197c7dbc813c5d67cadd1ca51959c94290812d3697aee8f9e75e29c773c95b\" successfully"
Dec 13 13:29:27.971793 containerd[1511]: time="2024-12-13T13:29:27.971745245Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"42197c7dbc813c5d67cadd1ca51959c94290812d3697aee8f9e75e29c773c95b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Dec 13 13:29:27.972074 containerd[1511]: time="2024-12-13T13:29:27.971800260Z" level=info msg="RemovePodSandbox \"42197c7dbc813c5d67cadd1ca51959c94290812d3697aee8f9e75e29c773c95b\" returns successfully"
Dec 13 13:29:27.972504 containerd[1511]: time="2024-12-13T13:29:27.972251933Z" level=info msg="StopPodSandbox for \"0bae530531ea55da67ac247c565a436edd3fdc0b0801c1a4f2f426b24487a99e\""
Dec 13 13:29:27.972504 containerd[1511]: time="2024-12-13T13:29:27.972407604Z" level=info msg="TearDown network for sandbox \"0bae530531ea55da67ac247c565a436edd3fdc0b0801c1a4f2f426b24487a99e\" successfully"
Dec 13 13:29:27.972504 containerd[1511]: time="2024-12-13T13:29:27.972430177Z" level=info msg="StopPodSandbox for \"0bae530531ea55da67ac247c565a436edd3fdc0b0801c1a4f2f426b24487a99e\" returns successfully"
Dec 13 13:29:27.972873 containerd[1511]: time="2024-12-13T13:29:27.972834250Z" level=info msg="RemovePodSandbox for \"0bae530531ea55da67ac247c565a436edd3fdc0b0801c1a4f2f426b24487a99e\""
Dec 13 13:29:27.972958 containerd[1511]: time="2024-12-13T13:29:27.972873332Z" level=info msg="Forcibly stopping sandbox \"0bae530531ea55da67ac247c565a436edd3fdc0b0801c1a4f2f426b24487a99e\""
Dec 13 13:29:27.973037 containerd[1511]: time="2024-12-13T13:29:27.972964965Z" level=info msg="TearDown network for sandbox \"0bae530531ea55da67ac247c565a436edd3fdc0b0801c1a4f2f426b24487a99e\" successfully"
Dec 13 13:29:28.007744 containerd[1511]: time="2024-12-13T13:29:28.007654017Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"0bae530531ea55da67ac247c565a436edd3fdc0b0801c1a4f2f426b24487a99e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Dec 13 13:29:28.007744 containerd[1511]: time="2024-12-13T13:29:28.007726512Z" level=info msg="RemovePodSandbox \"0bae530531ea55da67ac247c565a436edd3fdc0b0801c1a4f2f426b24487a99e\" returns successfully"
Dec 13 13:29:28.008686 containerd[1511]: time="2024-12-13T13:29:28.008430744Z" level=info msg="StopPodSandbox for \"50ad52cdbff79cc0f3570b9b8ee04f1ccab000cab884c9d4bcdbdee0897ee3b1\""
Dec 13 13:29:28.008686 containerd[1511]: time="2024-12-13T13:29:28.008561383Z" level=info msg="TearDown network for sandbox \"50ad52cdbff79cc0f3570b9b8ee04f1ccab000cab884c9d4bcdbdee0897ee3b1\" successfully"
Dec 13 13:29:28.008686 containerd[1511]: time="2024-12-13T13:29:28.008580769Z" level=info msg="StopPodSandbox for \"50ad52cdbff79cc0f3570b9b8ee04f1ccab000cab884c9d4bcdbdee0897ee3b1\" returns successfully"
Dec 13 13:29:28.010529 containerd[1511]: time="2024-12-13T13:29:28.009362890Z" level=info msg="RemovePodSandbox for \"50ad52cdbff79cc0f3570b9b8ee04f1ccab000cab884c9d4bcdbdee0897ee3b1\""
Dec 13 13:29:28.010529 containerd[1511]: time="2024-12-13T13:29:28.009424512Z" level=info msg="Forcibly stopping sandbox \"50ad52cdbff79cc0f3570b9b8ee04f1ccab000cab884c9d4bcdbdee0897ee3b1\""
Dec 13 13:29:28.010529 containerd[1511]: time="2024-12-13T13:29:28.009513546Z" level=info msg="TearDown network for sandbox \"50ad52cdbff79cc0f3570b9b8ee04f1ccab000cab884c9d4bcdbdee0897ee3b1\" successfully"
Dec 13 13:29:28.012831 containerd[1511]: time="2024-12-13T13:29:28.012788882Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"50ad52cdbff79cc0f3570b9b8ee04f1ccab000cab884c9d4bcdbdee0897ee3b1\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Dec 13 13:29:28.012996 containerd[1511]: time="2024-12-13T13:29:28.012968163Z" level=info msg="RemovePodSandbox \"50ad52cdbff79cc0f3570b9b8ee04f1ccab000cab884c9d4bcdbdee0897ee3b1\" returns successfully"
Dec 13 13:29:28.013563 containerd[1511]: time="2024-12-13T13:29:28.013529592Z" level=info msg="StopPodSandbox for \"591ad9151431b0c85d03c28d91438f44a06c8d65f70133b616797bc4ce33be5d\""
Dec 13 13:29:28.013682 containerd[1511]: time="2024-12-13T13:29:28.013655374Z" level=info msg="TearDown network for sandbox \"591ad9151431b0c85d03c28d91438f44a06c8d65f70133b616797bc4ce33be5d\" successfully"
Dec 13 13:29:28.013769 containerd[1511]: time="2024-12-13T13:29:28.013689303Z" level=info msg="StopPodSandbox for \"591ad9151431b0c85d03c28d91438f44a06c8d65f70133b616797bc4ce33be5d\" returns successfully"
Dec 13 13:29:28.014568 containerd[1511]: time="2024-12-13T13:29:28.014478255Z" level=info msg="RemovePodSandbox for \"591ad9151431b0c85d03c28d91438f44a06c8d65f70133b616797bc4ce33be5d\""
Dec 13 13:29:28.014568 containerd[1511]: time="2024-12-13T13:29:28.014512139Z" level=info msg="Forcibly stopping sandbox \"591ad9151431b0c85d03c28d91438f44a06c8d65f70133b616797bc4ce33be5d\""
Dec 13 13:29:28.014930 containerd[1511]: time="2024-12-13T13:29:28.014598205Z" level=info msg="TearDown network for sandbox \"591ad9151431b0c85d03c28d91438f44a06c8d65f70133b616797bc4ce33be5d\" successfully"
Dec 13 13:29:28.017190 containerd[1511]: time="2024-12-13T13:29:28.017147090Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"591ad9151431b0c85d03c28d91438f44a06c8d65f70133b616797bc4ce33be5d\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Dec 13 13:29:28.017302 containerd[1511]: time="2024-12-13T13:29:28.017197253Z" level=info msg="RemovePodSandbox \"591ad9151431b0c85d03c28d91438f44a06c8d65f70133b616797bc4ce33be5d\" returns successfully"
Dec 13 13:29:28.018178 containerd[1511]: time="2024-12-13T13:29:28.017701794Z" level=info msg="StopPodSandbox for \"7fa85c2458e079f24c352b9099024ebe935262546913c414882d3967ba803fe5\""
Dec 13 13:29:28.018178 containerd[1511]: time="2024-12-13T13:29:28.017835495Z" level=info msg="TearDown network for sandbox \"7fa85c2458e079f24c352b9099024ebe935262546913c414882d3967ba803fe5\" successfully"
Dec 13 13:29:28.018178 containerd[1511]: time="2024-12-13T13:29:28.017853484Z" level=info msg="StopPodSandbox for \"7fa85c2458e079f24c352b9099024ebe935262546913c414882d3967ba803fe5\" returns successfully"
Dec 13 13:29:28.018784 containerd[1511]: time="2024-12-13T13:29:28.018583471Z" level=info msg="RemovePodSandbox for \"7fa85c2458e079f24c352b9099024ebe935262546913c414882d3967ba803fe5\""
Dec 13 13:29:28.018784 containerd[1511]: time="2024-12-13T13:29:28.018613707Z" level=info msg="Forcibly stopping sandbox \"7fa85c2458e079f24c352b9099024ebe935262546913c414882d3967ba803fe5\""
Dec 13 13:29:28.018784 containerd[1511]: time="2024-12-13T13:29:28.018700675Z" level=info msg="TearDown network for sandbox \"7fa85c2458e079f24c352b9099024ebe935262546913c414882d3967ba803fe5\" successfully"
Dec 13 13:29:28.022035 containerd[1511]: time="2024-12-13T13:29:28.021752996Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"7fa85c2458e079f24c352b9099024ebe935262546913c414882d3967ba803fe5\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Dec 13 13:29:28.022035 containerd[1511]: time="2024-12-13T13:29:28.021801406Z" level=info msg="RemovePodSandbox \"7fa85c2458e079f24c352b9099024ebe935262546913c414882d3967ba803fe5\" returns successfully"
Dec 13 13:29:28.022531 containerd[1511]: time="2024-12-13T13:29:28.022295925Z" level=info msg="StopPodSandbox for \"df86770c8d4cabcba91e258fdd3b2dea69cbb7ed567a30f8468717dcf595d74e\""
Dec 13 13:29:28.022531 containerd[1511]: time="2024-12-13T13:29:28.022432557Z" level=info msg="TearDown network for sandbox \"df86770c8d4cabcba91e258fdd3b2dea69cbb7ed567a30f8468717dcf595d74e\" successfully"
Dec 13 13:29:28.022531 containerd[1511]: time="2024-12-13T13:29:28.022451219Z" level=info msg="StopPodSandbox for \"df86770c8d4cabcba91e258fdd3b2dea69cbb7ed567a30f8468717dcf595d74e\" returns successfully"
Dec 13 13:29:28.023001 containerd[1511]: time="2024-12-13T13:29:28.022798421Z" level=info msg="RemovePodSandbox for \"df86770c8d4cabcba91e258fdd3b2dea69cbb7ed567a30f8468717dcf595d74e\""
Dec 13 13:29:28.023100 containerd[1511]: time="2024-12-13T13:29:28.023030506Z" level=info msg="Forcibly stopping sandbox \"df86770c8d4cabcba91e258fdd3b2dea69cbb7ed567a30f8468717dcf595d74e\""
Dec 13 13:29:28.023211 containerd[1511]: time="2024-12-13T13:29:28.023124643Z" level=info msg="TearDown network for sandbox \"df86770c8d4cabcba91e258fdd3b2dea69cbb7ed567a30f8468717dcf595d74e\" successfully"
Dec 13 13:29:28.025845 containerd[1511]: time="2024-12-13T13:29:28.025804563Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"df86770c8d4cabcba91e258fdd3b2dea69cbb7ed567a30f8468717dcf595d74e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Dec 13 13:29:28.026022 containerd[1511]: time="2024-12-13T13:29:28.025864732Z" level=info msg="RemovePodSandbox \"df86770c8d4cabcba91e258fdd3b2dea69cbb7ed567a30f8468717dcf595d74e\" returns successfully"
Dec 13 13:29:28.027131 containerd[1511]: time="2024-12-13T13:29:28.026722333Z" level=info msg="StopPodSandbox for \"8de336bb24a8cb0086b8f572c9837fbcef606e51222ec77fed8af6cf43d59aba\""
Dec 13 13:29:28.027131 containerd[1511]: time="2024-12-13T13:29:28.026851241Z" level=info msg="TearDown network for sandbox \"8de336bb24a8cb0086b8f572c9837fbcef606e51222ec77fed8af6cf43d59aba\" successfully"
Dec 13 13:29:28.027131 containerd[1511]: time="2024-12-13T13:29:28.026870963Z" level=info msg="StopPodSandbox for \"8de336bb24a8cb0086b8f572c9837fbcef606e51222ec77fed8af6cf43d59aba\" returns successfully"
Dec 13 13:29:28.028580 containerd[1511]: time="2024-12-13T13:29:28.027509559Z" level=info msg="RemovePodSandbox for \"8de336bb24a8cb0086b8f572c9837fbcef606e51222ec77fed8af6cf43d59aba\""
Dec 13 13:29:28.028580 containerd[1511]: time="2024-12-13T13:29:28.027548027Z" level=info msg="Forcibly stopping sandbox \"8de336bb24a8cb0086b8f572c9837fbcef606e51222ec77fed8af6cf43d59aba\""
Dec 13 13:29:28.028580 containerd[1511]: time="2024-12-13T13:29:28.027640570Z" level=info msg="TearDown network for sandbox \"8de336bb24a8cb0086b8f572c9837fbcef606e51222ec77fed8af6cf43d59aba\" successfully"
Dec 13 13:29:28.030569 containerd[1511]: time="2024-12-13T13:29:28.030534760Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"8de336bb24a8cb0086b8f572c9837fbcef606e51222ec77fed8af6cf43d59aba\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Dec 13 13:29:28.030757 containerd[1511]: time="2024-12-13T13:29:28.030707840Z" level=info msg="RemovePodSandbox \"8de336bb24a8cb0086b8f572c9837fbcef606e51222ec77fed8af6cf43d59aba\" returns successfully"
Dec 13 13:29:28.031325 containerd[1511]: time="2024-12-13T13:29:28.031296011Z" level=info msg="StopPodSandbox for \"aba91ac9a086da8fd01a24cf9941c354cf1078a88e1638d77aeb8f6fed182941\""
Dec 13 13:29:28.031560 containerd[1511]: time="2024-12-13T13:29:28.031534572Z" level=info msg="TearDown network for sandbox \"aba91ac9a086da8fd01a24cf9941c354cf1078a88e1638d77aeb8f6fed182941\" successfully"
Dec 13 13:29:28.031933 containerd[1511]: time="2024-12-13T13:29:28.031664409Z" level=info msg="StopPodSandbox for \"aba91ac9a086da8fd01a24cf9941c354cf1078a88e1638d77aeb8f6fed182941\" returns successfully"
Dec 13 13:29:28.032999 containerd[1511]: time="2024-12-13T13:29:28.032126146Z" level=info msg="RemovePodSandbox for \"aba91ac9a086da8fd01a24cf9941c354cf1078a88e1638d77aeb8f6fed182941\""
Dec 13 13:29:28.032999 containerd[1511]: time="2024-12-13T13:29:28.032154082Z" level=info msg="Forcibly stopping sandbox \"aba91ac9a086da8fd01a24cf9941c354cf1078a88e1638d77aeb8f6fed182941\""
Dec 13 13:29:28.032999 containerd[1511]: time="2024-12-13T13:29:28.032238176Z" level=info msg="TearDown network for sandbox \"aba91ac9a086da8fd01a24cf9941c354cf1078a88e1638d77aeb8f6fed182941\" successfully"
Dec 13 13:29:28.035315 containerd[1511]: time="2024-12-13T13:29:28.035276640Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"aba91ac9a086da8fd01a24cf9941c354cf1078a88e1638d77aeb8f6fed182941\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Dec 13 13:29:28.035483 containerd[1511]: time="2024-12-13T13:29:28.035454942Z" level=info msg="RemovePodSandbox \"aba91ac9a086da8fd01a24cf9941c354cf1078a88e1638d77aeb8f6fed182941\" returns successfully"
Dec 13 13:29:28.036060 containerd[1511]: time="2024-12-13T13:29:28.036029908Z" level=info msg="StopPodSandbox for \"e1e20ee54d22cab850d16600e38553b21a2f683c14719334033ec82e75ba71c1\""
Dec 13 13:29:28.036270 containerd[1511]: time="2024-12-13T13:29:28.036243469Z" level=info msg="TearDown network for sandbox \"e1e20ee54d22cab850d16600e38553b21a2f683c14719334033ec82e75ba71c1\" successfully"
Dec 13 13:29:28.036398 containerd[1511]: time="2024-12-13T13:29:28.036349246Z" level=info msg="StopPodSandbox for \"e1e20ee54d22cab850d16600e38553b21a2f683c14719334033ec82e75ba71c1\" returns successfully"
Dec 13 13:29:28.036922 containerd[1511]: time="2024-12-13T13:29:28.036892675Z" level=info msg="RemovePodSandbox for \"e1e20ee54d22cab850d16600e38553b21a2f683c14719334033ec82e75ba71c1\""
Dec 13 13:29:28.037094 containerd[1511]: time="2024-12-13T13:29:28.037057727Z" level=info msg="Forcibly stopping sandbox \"e1e20ee54d22cab850d16600e38553b21a2f683c14719334033ec82e75ba71c1\""
Dec 13 13:29:28.038224 containerd[1511]: time="2024-12-13T13:29:28.037255884Z" level=info msg="TearDown network for sandbox \"e1e20ee54d22cab850d16600e38553b21a2f683c14719334033ec82e75ba71c1\" successfully"
Dec 13 13:29:28.039874 containerd[1511]: time="2024-12-13T13:29:28.039738171Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"e1e20ee54d22cab850d16600e38553b21a2f683c14719334033ec82e75ba71c1\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Dec 13 13:29:28.039874 containerd[1511]: time="2024-12-13T13:29:28.039785577Z" level=info msg="RemovePodSandbox \"e1e20ee54d22cab850d16600e38553b21a2f683c14719334033ec82e75ba71c1\" returns successfully"
Dec 13 13:29:28.881762 kubelet[1929]: E1213 13:29:28.881660 1929 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 13:29:29.882361 kubelet[1929]: E1213 13:29:29.882288 1929 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 13:29:30.883598 kubelet[1929]: E1213 13:29:30.883498 1929 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 13:29:31.884352 kubelet[1929]: E1213 13:29:31.884276 1929 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 13:29:32.884646 kubelet[1929]: E1213 13:29:32.884527 1929 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 13:29:33.885159 kubelet[1929]: E1213 13:29:33.885048 1929 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 13:29:34.885517 kubelet[1929]: E1213 13:29:34.885436 1929 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Dec 13 13:29:35.886327 kubelet[1929]: E1213 13:29:35.886232 1929 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"