Feb 13 20:21:15.046106 kernel: Linux version 6.6.74-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.43 p3) 2.43.1) #1 SMP PREEMPT_DYNAMIC Thu Feb 13 17:40:15 -00 2025
Feb 13 20:21:15.046158 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=f28373bbaddf11103b551b595069cf5faacb27d62f1aab4f9911393ba418b416
Feb 13 20:21:15.046173 kernel: BIOS-provided physical RAM map:
Feb 13 20:21:15.046188 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Feb 13 20:21:15.046199 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Feb 13 20:21:15.046209 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Feb 13 20:21:15.046221 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdbfff] usable
Feb 13 20:21:15.046232 kernel: BIOS-e820: [mem 0x000000007ffdc000-0x000000007fffffff] reserved
Feb 13 20:21:15.046243 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Feb 13 20:21:15.046253 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Feb 13 20:21:15.046264 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Feb 13 20:21:15.046275 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Feb 13 20:21:15.046289 kernel: NX (Execute Disable) protection: active
Feb 13 20:21:15.046300 kernel: APIC: Static calls initialized
Feb 13 20:21:15.046313 kernel: SMBIOS 2.8 present.
Feb 13 20:21:15.046325 kernel: DMI: Red Hat KVM/RHEL-AV, BIOS 1.13.0-2.module_el8.5.0+2608+72063365 04/01/2014
Feb 13 20:21:15.046337 kernel: Hypervisor detected: KVM
Feb 13 20:21:15.046352 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Feb 13 20:21:15.046364 kernel: kvm-clock: using sched offset of 4477641317 cycles
Feb 13 20:21:15.046377 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Feb 13 20:21:15.046389 kernel: tsc: Detected 2499.998 MHz processor
Feb 13 20:21:15.046401 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Feb 13 20:21:15.046413 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Feb 13 20:21:15.046424 kernel: last_pfn = 0x7ffdc max_arch_pfn = 0x400000000
Feb 13 20:21:15.046436 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Feb 13 20:21:15.046448 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Feb 13 20:21:15.046464 kernel: Using GB pages for direct mapping
Feb 13 20:21:15.046476 kernel: ACPI: Early table checksum verification disabled
Feb 13 20:21:15.046488 kernel: ACPI: RSDP 0x00000000000F5AA0 000014 (v00 BOCHS )
Feb 13 20:21:15.046500 kernel: ACPI: RSDT 0x000000007FFE47A5 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 20:21:15.046511 kernel: ACPI: FACP 0x000000007FFE438D 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 20:21:15.046535 kernel: ACPI: DSDT 0x000000007FFDFD80 00460D (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 20:21:15.046548 kernel: ACPI: FACS 0x000000007FFDFD40 000040
Feb 13 20:21:15.046560 kernel: ACPI: APIC 0x000000007FFE4481 0000F0 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 20:21:15.046572 kernel: ACPI: SRAT 0x000000007FFE4571 0001D0 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 20:21:15.046589 kernel: ACPI: MCFG 0x000000007FFE4741 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 20:21:15.046601 kernel: ACPI: WAET 0x000000007FFE477D 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 20:21:15.046613 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe438d-0x7ffe4480]
Feb 13 20:21:15.046625 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffdfd80-0x7ffe438c]
Feb 13 20:21:15.046637 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffdfd40-0x7ffdfd7f]
Feb 13 20:21:15.046654 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe4481-0x7ffe4570]
Feb 13 20:21:15.046666 kernel: ACPI: Reserving SRAT table memory at [mem 0x7ffe4571-0x7ffe4740]
Feb 13 20:21:15.046683 kernel: ACPI: Reserving MCFG table memory at [mem 0x7ffe4741-0x7ffe477c]
Feb 13 20:21:15.046695 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe477d-0x7ffe47a4]
Feb 13 20:21:15.046707 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Feb 13 20:21:15.046720 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Feb 13 20:21:15.046732 kernel: SRAT: PXM 0 -> APIC 0x02 -> Node 0
Feb 13 20:21:15.046744 kernel: SRAT: PXM 0 -> APIC 0x03 -> Node 0
Feb 13 20:21:15.046756 kernel: SRAT: PXM 0 -> APIC 0x04 -> Node 0
Feb 13 20:21:15.046768 kernel: SRAT: PXM 0 -> APIC 0x05 -> Node 0
Feb 13 20:21:15.046784 kernel: SRAT: PXM 0 -> APIC 0x06 -> Node 0
Feb 13 20:21:15.046796 kernel: SRAT: PXM 0 -> APIC 0x07 -> Node 0
Feb 13 20:21:15.046808 kernel: SRAT: PXM 0 -> APIC 0x08 -> Node 0
Feb 13 20:21:15.046821 kernel: SRAT: PXM 0 -> APIC 0x09 -> Node 0
Feb 13 20:21:15.046833 kernel: SRAT: PXM 0 -> APIC 0x0a -> Node 0
Feb 13 20:21:15.046845 kernel: SRAT: PXM 0 -> APIC 0x0b -> Node 0
Feb 13 20:21:15.046857 kernel: SRAT: PXM 0 -> APIC 0x0c -> Node 0
Feb 13 20:21:15.046869 kernel: SRAT: PXM 0 -> APIC 0x0d -> Node 0
Feb 13 20:21:15.046881 kernel: SRAT: PXM 0 -> APIC 0x0e -> Node 0
Feb 13 20:21:15.046893 kernel: SRAT: PXM 0 -> APIC 0x0f -> Node 0
Feb 13 20:21:15.046941 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff]
Feb 13 20:21:15.046954 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff]
Feb 13 20:21:15.046967 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x20800fffff] hotplug
Feb 13 20:21:15.046979 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0x7ffdbfff] -> [mem 0x00000000-0x7ffdbfff]
Feb 13 20:21:15.046992 kernel: NODE_DATA(0) allocated [mem 0x7ffd6000-0x7ffdbfff]
Feb 13 20:21:15.047004 kernel: Zone ranges:
Feb 13 20:21:15.047016 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Feb 13 20:21:15.047029 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffdbfff]
Feb 13 20:21:15.047041 kernel: Normal empty
Feb 13 20:21:15.047059 kernel: Movable zone start for each node
Feb 13 20:21:15.047071 kernel: Early memory node ranges
Feb 13 20:21:15.047083 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Feb 13 20:21:15.047102 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdbfff]
Feb 13 20:21:15.047114 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffdbfff]
Feb 13 20:21:15.047126 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Feb 13 20:21:15.047139 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Feb 13 20:21:15.047152 kernel: On node 0, zone DMA32: 36 pages in unavailable ranges
Feb 13 20:21:15.047164 kernel: ACPI: PM-Timer IO Port: 0x608
Feb 13 20:21:15.047180 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Feb 13 20:21:15.047193 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Feb 13 20:21:15.047205 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Feb 13 20:21:15.047217 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Feb 13 20:21:15.047230 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Feb 13 20:21:15.047242 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Feb 13 20:21:15.047254 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Feb 13 20:21:15.047267 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Feb 13 20:21:15.047279 kernel: TSC deadline timer available
Feb 13 20:21:15.047295 kernel: smpboot: Allowing 16 CPUs, 14 hotplug CPUs
Feb 13 20:21:15.047308 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Feb 13 20:21:15.047320 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Feb 13 20:21:15.047332 kernel: Booting paravirtualized kernel on KVM
Feb 13 20:21:15.047345 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Feb 13 20:21:15.047357 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:16 nr_cpu_ids:16 nr_node_ids:1
Feb 13 20:21:15.047370 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u262144
Feb 13 20:21:15.047382 kernel: pcpu-alloc: s197032 r8192 d32344 u262144 alloc=1*2097152
Feb 13 20:21:15.047406 kernel: pcpu-alloc: [0] 00 01 02 03 04 05 06 07 [0] 08 09 10 11 12 13 14 15
Feb 13 20:21:15.047422 kernel: kvm-guest: PV spinlocks enabled
Feb 13 20:21:15.047434 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Feb 13 20:21:15.047447 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=f28373bbaddf11103b551b595069cf5faacb27d62f1aab4f9911393ba418b416
Feb 13 20:21:15.047460 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Feb 13 20:21:15.047472 kernel: random: crng init done
Feb 13 20:21:15.047483 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Feb 13 20:21:15.047495 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Feb 13 20:21:15.047519 kernel: Fallback order for Node 0: 0
Feb 13 20:21:15.047549 kernel: Built 1 zonelists, mobility grouping on. Total pages: 515804
Feb 13 20:21:15.047562 kernel: Policy zone: DMA32
Feb 13 20:21:15.047574 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Feb 13 20:21:15.047586 kernel: software IO TLB: area num 16.
Feb 13 20:21:15.047599 kernel: Memory: 1899476K/2096616K available (14336K kernel code, 2301K rwdata, 22852K rodata, 43476K init, 1596K bss, 196880K reserved, 0K cma-reserved)
Feb 13 20:21:15.047612 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=16, Nodes=1
Feb 13 20:21:15.047624 kernel: Kernel/User page tables isolation: enabled
Feb 13 20:21:15.047636 kernel: ftrace: allocating 37893 entries in 149 pages
Feb 13 20:21:15.047648 kernel: ftrace: allocated 149 pages with 4 groups
Feb 13 20:21:15.047665 kernel: Dynamic Preempt: voluntary
Feb 13 20:21:15.047677 kernel: rcu: Preemptible hierarchical RCU implementation.
Feb 13 20:21:15.047691 kernel: rcu: RCU event tracing is enabled.
Feb 13 20:21:15.047703 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=16.
Feb 13 20:21:15.047716 kernel: Trampoline variant of Tasks RCU enabled.
Feb 13 20:21:15.047739 kernel: Rude variant of Tasks RCU enabled.
Feb 13 20:21:15.047755 kernel: Tracing variant of Tasks RCU enabled.
Feb 13 20:21:15.047769 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Feb 13 20:21:15.047782 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=16
Feb 13 20:21:15.047795 kernel: NR_IRQS: 33024, nr_irqs: 552, preallocated irqs: 16
Feb 13 20:21:15.047808 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Feb 13 20:21:15.047820 kernel: Console: colour VGA+ 80x25
Feb 13 20:21:15.047837 kernel: printk: console [tty0] enabled
Feb 13 20:21:15.047851 kernel: printk: console [ttyS0] enabled
Feb 13 20:21:15.047864 kernel: ACPI: Core revision 20230628
Feb 13 20:21:15.047877 kernel: APIC: Switch to symmetric I/O mode setup
Feb 13 20:21:15.047889 kernel: x2apic enabled
Feb 13 20:21:15.047929 kernel: APIC: Switched APIC routing to: physical x2apic
Feb 13 20:21:15.047943 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x240937b9988, max_idle_ns: 440795218083 ns
Feb 13 20:21:15.047956 kernel: Calibrating delay loop (skipped) preset value.. 4999.99 BogoMIPS (lpj=2499998)
Feb 13 20:21:15.047969 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Feb 13 20:21:15.047982 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Feb 13 20:21:15.047995 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Feb 13 20:21:15.048008 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Feb 13 20:21:15.048021 kernel: Spectre V2 : Mitigation: Retpolines
Feb 13 20:21:15.048033 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Feb 13 20:21:15.048058 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Feb 13 20:21:15.048076 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls
Feb 13 20:21:15.048089 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Feb 13 20:21:15.048114 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Feb 13 20:21:15.048126 kernel: MDS: Mitigation: Clear CPU buffers
Feb 13 20:21:15.048137 kernel: MMIO Stale Data: Unknown: No mitigations
Feb 13 20:21:15.048149 kernel: SRBDS: Unknown: Dependent on hypervisor status
Feb 13 20:21:15.048161 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Feb 13 20:21:15.048173 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Feb 13 20:21:15.048185 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Feb 13 20:21:15.048197 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Feb 13 20:21:15.048213 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
Feb 13 20:21:15.048226 kernel: Freeing SMP alternatives memory: 32K
Feb 13 20:21:15.048238 kernel: pid_max: default: 32768 minimum: 301
Feb 13 20:21:15.048249 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Feb 13 20:21:15.048261 kernel: landlock: Up and running.
Feb 13 20:21:15.048273 kernel: SELinux: Initializing.
Feb 13 20:21:15.048285 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Feb 13 20:21:15.048297 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Feb 13 20:21:15.048310 kernel: smpboot: CPU0: Intel Xeon E3-12xx v2 (Ivy Bridge, IBRS) (family: 0x6, model: 0x3a, stepping: 0x9)
Feb 13 20:21:15.048322 kernel: RCU Tasks: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16.
Feb 13 20:21:15.048334 kernel: RCU Tasks Rude: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16.
Feb 13 20:21:15.048350 kernel: RCU Tasks Trace: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16.
Feb 13 20:21:15.048363 kernel: Performance Events: unsupported p6 CPU model 58 no PMU driver, software events only.
Feb 13 20:21:15.048375 kernel: signal: max sigframe size: 1776
Feb 13 20:21:15.048387 kernel: rcu: Hierarchical SRCU implementation.
Feb 13 20:21:15.048400 kernel: rcu: Max phase no-delay instances is 400.
Feb 13 20:21:15.048412 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Feb 13 20:21:15.048424 kernel: smp: Bringing up secondary CPUs ...
Feb 13 20:21:15.048436 kernel: smpboot: x86: Booting SMP configuration:
Feb 13 20:21:15.048448 kernel: .... node #0, CPUs: #1
Feb 13 20:21:15.048464 kernel: smpboot: CPU 1 Converting physical 0 to logical die 1
Feb 13 20:21:15.048477 kernel: smp: Brought up 1 node, 2 CPUs
Feb 13 20:21:15.048489 kernel: smpboot: Max logical packages: 16
Feb 13 20:21:15.048501 kernel: smpboot: Total of 2 processors activated (9999.99 BogoMIPS)
Feb 13 20:21:15.048537 kernel: devtmpfs: initialized
Feb 13 20:21:15.048551 kernel: x86/mm: Memory block size: 128MB
Feb 13 20:21:15.048564 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Feb 13 20:21:15.048577 kernel: futex hash table entries: 4096 (order: 6, 262144 bytes, linear)
Feb 13 20:21:15.048590 kernel: pinctrl core: initialized pinctrl subsystem
Feb 13 20:21:15.048608 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Feb 13 20:21:15.048621 kernel: audit: initializing netlink subsys (disabled)
Feb 13 20:21:15.048634 kernel: thermal_sys: Registered thermal governor 'step_wise'
Feb 13 20:21:15.048647 kernel: thermal_sys: Registered thermal governor 'user_space'
Feb 13 20:21:15.048660 kernel: audit: type=2000 audit(1739478073.494:1): state=initialized audit_enabled=0 res=1
Feb 13 20:21:15.048673 kernel: cpuidle: using governor menu
Feb 13 20:21:15.048686 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Feb 13 20:21:15.048699 kernel: dca service started, version 1.12.1
Feb 13 20:21:15.048712 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000)
Feb 13 20:21:15.048730 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Feb 13 20:21:15.048743 kernel: PCI: Using configuration type 1 for base access
Feb 13 20:21:15.048756 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Feb 13 20:21:15.048769 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Feb 13 20:21:15.048782 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Feb 13 20:21:15.048795 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Feb 13 20:21:15.048808 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Feb 13 20:21:15.048821 kernel: ACPI: Added _OSI(Module Device)
Feb 13 20:21:15.048834 kernel: ACPI: Added _OSI(Processor Device)
Feb 13 20:21:15.048850 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Feb 13 20:21:15.048864 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Feb 13 20:21:15.048877 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Feb 13 20:21:15.048889 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Feb 13 20:21:15.048926 kernel: ACPI: Interpreter enabled
Feb 13 20:21:15.048941 kernel: ACPI: PM: (supports S0 S5)
Feb 13 20:21:15.048954 kernel: ACPI: Using IOAPIC for interrupt routing
Feb 13 20:21:15.048967 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Feb 13 20:21:15.048980 kernel: PCI: Using E820 reservations for host bridge windows
Feb 13 20:21:15.048998 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Feb 13 20:21:15.049024 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Feb 13 20:21:15.049334 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Feb 13 20:21:15.049537 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Feb 13 20:21:15.049718 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Feb 13 20:21:15.049738 kernel: PCI host bridge to bus 0000:00
Feb 13 20:21:15.051318 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Feb 13 20:21:15.051564 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Feb 13 20:21:15.051737 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Feb 13 20:21:15.052980 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xafffffff window]
Feb 13 20:21:15.053152 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Feb 13 20:21:15.053341 kernel: pci_bus 0000:00: root bus resource [mem 0x20c0000000-0x28bfffffff window]
Feb 13 20:21:15.053508 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Feb 13 20:21:15.053762 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000
Feb 13 20:21:15.056057 kernel: pci 0000:00:01.0: [1013:00b8] type 00 class 0x030000
Feb 13 20:21:15.056276 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfa000000-0xfbffffff pref]
Feb 13 20:21:15.056470 kernel: pci 0000:00:01.0: reg 0x14: [mem 0xfea50000-0xfea50fff]
Feb 13 20:21:15.056665 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfea40000-0xfea4ffff pref]
Feb 13 20:21:15.056847 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Feb 13 20:21:15.057091 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400
Feb 13 20:21:15.057287 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfea51000-0xfea51fff]
Feb 13 20:21:15.057498 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400
Feb 13 20:21:15.057689 kernel: pci 0000:00:02.1: reg 0x10: [mem 0xfea52000-0xfea52fff]
Feb 13 20:21:15.057875 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400
Feb 13 20:21:15.059745 kernel: pci 0000:00:02.2: reg 0x10: [mem 0xfea53000-0xfea53fff]
Feb 13 20:21:15.060009 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400
Feb 13 20:21:15.060195 kernel: pci 0000:00:02.3: reg 0x10: [mem 0xfea54000-0xfea54fff]
Feb 13 20:21:15.060427 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400
Feb 13 20:21:15.060634 kernel: pci 0000:00:02.4: reg 0x10: [mem 0xfea55000-0xfea55fff]
Feb 13 20:21:15.060849 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400
Feb 13 20:21:15.062120 kernel: pci 0000:00:02.5: reg 0x10: [mem 0xfea56000-0xfea56fff]
Feb 13 20:21:15.062331 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400
Feb 13 20:21:15.062530 kernel: pci 0000:00:02.6: reg 0x10: [mem 0xfea57000-0xfea57fff]
Feb 13 20:21:15.062761 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400
Feb 13 20:21:15.064070 kernel: pci 0000:00:02.7: reg 0x10: [mem 0xfea58000-0xfea58fff]
Feb 13 20:21:15.064295 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000
Feb 13 20:21:15.064517 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc0c0-0xc0df]
Feb 13 20:21:15.064721 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfea59000-0xfea59fff]
Feb 13 20:21:15.064908 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfd000000-0xfd003fff 64bit pref]
Feb 13 20:21:15.067176 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfea00000-0xfea3ffff pref]
Feb 13 20:21:15.067401 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000
Feb 13 20:21:15.067602 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc07f]
Feb 13 20:21:15.067783 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfea5a000-0xfea5afff]
Feb 13 20:21:15.067984 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfd004000-0xfd007fff 64bit pref]
Feb 13 20:21:15.068198 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100
Feb 13 20:21:15.068380 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Feb 13 20:21:15.068632 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601
Feb 13 20:21:15.068813 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc0e0-0xc0ff]
Feb 13 20:21:15.069030 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfea5b000-0xfea5bfff]
Feb 13 20:21:15.069251 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500
Feb 13 20:21:15.069430 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f]
Feb 13 20:21:15.069682 kernel: pci 0000:01:00.0: [1b36:000e] type 01 class 0x060400
Feb 13 20:21:15.069890 kernel: pci 0000:01:00.0: reg 0x10: [mem 0xfda00000-0xfda000ff 64bit]
Feb 13 20:21:15.072145 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02]
Feb 13 20:21:15.072344 kernel: pci 0000:00:02.0: bridge window [mem 0xfd800000-0xfdbfffff]
Feb 13 20:21:15.072540 kernel: pci 0000:00:02.0: bridge window [mem 0xfce00000-0xfcffffff 64bit pref]
Feb 13 20:21:15.072759 kernel: pci_bus 0000:02: extended config space not accessible
Feb 13 20:21:15.073005 kernel: pci 0000:02:01.0: [8086:25ab] type 00 class 0x088000
Feb 13 20:21:15.073209 kernel: pci 0000:02:01.0: reg 0x10: [mem 0xfd800000-0xfd80000f]
Feb 13 20:21:15.073393 kernel: pci 0000:01:00.0: PCI bridge to [bus 02]
Feb 13 20:21:15.073590 kernel: pci 0000:01:00.0: bridge window [mem 0xfd800000-0xfd9fffff]
Feb 13 20:21:15.073814 kernel: pci 0000:03:00.0: [1b36:000d] type 00 class 0x0c0330
Feb 13 20:21:15.076087 kernel: pci 0000:03:00.0: reg 0x10: [mem 0xfe800000-0xfe803fff 64bit]
Feb 13 20:21:15.076286 kernel: pci 0000:00:02.1: PCI bridge to [bus 03]
Feb 13 20:21:15.076467 kernel: pci 0000:00:02.1: bridge window [mem 0xfe800000-0xfe9fffff]
Feb 13 20:21:15.076661 kernel: pci 0000:00:02.1: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref]
Feb 13 20:21:15.076886 kernel: pci 0000:04:00.0: [1af4:1044] type 00 class 0x00ff00
Feb 13 20:21:15.077105 kernel: pci 0000:04:00.0: reg 0x20: [mem 0xfca00000-0xfca03fff 64bit pref]
Feb 13 20:21:15.077294 kernel: pci 0000:00:02.2: PCI bridge to [bus 04]
Feb 13 20:21:15.077472 kernel: pci 0000:00:02.2: bridge window [mem 0xfe600000-0xfe7fffff]
Feb 13 20:21:15.077665 kernel: pci 0000:00:02.2: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref]
Feb 13 20:21:15.077855 kernel: pci 0000:00:02.3: PCI bridge to [bus 05]
Feb 13 20:21:15.080102 kernel: pci 0000:00:02.3: bridge window [mem 0xfe400000-0xfe5fffff]
Feb 13 20:21:15.080306 kernel: pci 0000:00:02.3: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref]
Feb 13 20:21:15.080486 kernel: pci 0000:00:02.4: PCI bridge to [bus 06]
Feb 13 20:21:15.080678 kernel: pci 0000:00:02.4: bridge window [mem 0xfe200000-0xfe3fffff]
Feb 13 20:21:15.080857 kernel: pci 0000:00:02.4: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref]
Feb 13 20:21:15.081075 kernel: pci 0000:00:02.5: PCI bridge to [bus 07]
Feb 13 20:21:15.081273 kernel: pci 0000:00:02.5: bridge window [mem 0xfe000000-0xfe1fffff]
Feb 13 20:21:15.081460 kernel: pci 0000:00:02.5: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref]
Feb 13 20:21:15.081652 kernel: pci 0000:00:02.6: PCI bridge to [bus 08]
Feb 13 20:21:15.081837 kernel: pci 0000:00:02.6: bridge window [mem 0xfde00000-0xfdffffff]
Feb 13 20:21:15.084054 kernel: pci 0000:00:02.6: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref]
Feb 13 20:21:15.084241 kernel: pci 0000:00:02.7: PCI bridge to [bus 09]
Feb 13 20:21:15.084443 kernel: pci 0000:00:02.7: bridge window [mem 0xfdc00000-0xfddfffff]
Feb 13 20:21:15.084634 kernel: pci 0000:00:02.7: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref]
Feb 13 20:21:15.084656 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Feb 13 20:21:15.084670 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Feb 13 20:21:15.084683 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Feb 13 20:21:15.084697 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Feb 13 20:21:15.084718 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Feb 13 20:21:15.084731 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Feb 13 20:21:15.084744 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Feb 13 20:21:15.084758 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Feb 13 20:21:15.084770 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Feb 13 20:21:15.084784 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Feb 13 20:21:15.084797 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Feb 13 20:21:15.084810 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Feb 13 20:21:15.084823 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Feb 13 20:21:15.084841 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Feb 13 20:21:15.084854 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Feb 13 20:21:15.084867 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Feb 13 20:21:15.084880 kernel: iommu: Default domain type: Translated
Feb 13 20:21:15.084893 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Feb 13 20:21:15.084939 kernel: PCI: Using ACPI for IRQ routing
Feb 13 20:21:15.084953 kernel: PCI: pci_cache_line_size set to 64 bytes
Feb 13 20:21:15.084967 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Feb 13 20:21:15.084980 kernel: e820: reserve RAM buffer [mem 0x7ffdc000-0x7fffffff]
Feb 13 20:21:15.085162 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Feb 13 20:21:15.085339 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Feb 13 20:21:15.085518 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Feb 13 20:21:15.085551 kernel: vgaarb: loaded
Feb 13 20:21:15.085564 kernel: clocksource: Switched to clocksource kvm-clock
Feb 13 20:21:15.085577 kernel: VFS: Disk quotas dquot_6.6.0
Feb 13 20:21:15.085591 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Feb 13 20:21:15.085604 kernel: pnp: PnP ACPI init
Feb 13 20:21:15.085806 kernel: system 00:04: [mem 0xb0000000-0xbfffffff window] has been reserved
Feb 13 20:21:15.085828 kernel: pnp: PnP ACPI: found 5 devices
Feb 13 20:21:15.085842 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Feb 13 20:21:15.085855 kernel: NET: Registered PF_INET protocol family
Feb 13 20:21:15.085868 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Feb 13 20:21:15.085881 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Feb 13 20:21:15.091142 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Feb 13 20:21:15.091159 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Feb 13 20:21:15.091181 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
Feb 13 20:21:15.091194 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Feb 13 20:21:15.091207 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Feb 13 20:21:15.091221 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Feb 13 20:21:15.091234 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Feb 13 20:21:15.091247 kernel: NET: Registered PF_XDP protocol family
Feb 13 20:21:15.091432 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 01-02] add_size 1000
Feb 13 20:21:15.091630 kernel: pci 0000:00:02.1: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000
Feb 13 20:21:15.091817 kernel: pci 0000:00:02.2: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000
Feb 13 20:21:15.092026 kernel: pci 0000:00:02.3: bridge window [io 0x1000-0x0fff] to [bus 05] add_size 1000
Feb 13 20:21:15.092213 kernel: pci 0000:00:02.4: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000
Feb 13 20:21:15.092389 kernel: pci 0000:00:02.5: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000
Feb 13 20:21:15.092580 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000
Feb 13 20:21:15.092756 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000
Feb 13 20:21:15.092974 kernel: pci 0000:00:02.0: BAR 13: assigned [io 0x1000-0x1fff]
Feb 13 20:21:15.093166 kernel: pci 0000:00:02.1: BAR 13: assigned [io 0x2000-0x2fff]
Feb 13 20:21:15.093369 kernel: pci 0000:00:02.2: BAR 13: assigned [io 0x3000-0x3fff]
Feb 13 20:21:15.093583 kernel: pci 0000:00:02.3: BAR 13: assigned [io 0x4000-0x4fff]
Feb 13 20:21:15.093775 kernel: pci 0000:00:02.4: BAR 13: assigned [io 0x5000-0x5fff]
Feb 13 20:21:15.093998 kernel: pci 0000:00:02.5: BAR 13: assigned [io 0x6000-0x6fff]
Feb 13 20:21:15.094202 kernel: pci 0000:00:02.6: BAR 13: assigned [io 0x7000-0x7fff]
Feb 13 20:21:15.094381 kernel: pci 0000:00:02.7: BAR 13: assigned [io 0x8000-0x8fff]
Feb 13 20:21:15.094622 kernel: pci 0000:01:00.0: PCI bridge to [bus 02]
Feb 13 20:21:15.094827 kernel: pci 0000:01:00.0: bridge window [mem 0xfd800000-0xfd9fffff]
Feb 13 20:21:15.095086 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02]
Feb 13 20:21:15.095268 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x1fff]
Feb 13 20:21:15.095456 kernel: pci 0000:00:02.0: bridge window [mem 0xfd800000-0xfdbfffff]
Feb 13 20:21:15.095646 kernel: pci 0000:00:02.0: bridge window [mem 0xfce00000-0xfcffffff 64bit pref]
Feb 13 20:21:15.095821 kernel: pci 0000:00:02.1: PCI bridge to [bus 03]
Feb 13 20:21:15.096013 kernel: pci 0000:00:02.1: bridge window [io 0x2000-0x2fff]
Feb 13 20:21:15.096196 kernel: pci 0000:00:02.1: bridge window [mem 0xfe800000-0xfe9fffff]
Feb 13 20:21:15.096371 kernel: pci 0000:00:02.1: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref]
Feb 13 20:21:15.096593 kernel: pci 0000:00:02.2: PCI bridge to [bus 04]
Feb 13 20:21:15.096768 kernel: pci 0000:00:02.2: bridge window [io 0x3000-0x3fff]
Feb 13 20:21:15.101324 kernel: pci 0000:00:02.2: bridge window [mem 0xfe600000-0xfe7fffff]
Feb 13 20:21:15.101561 kernel: pci 0000:00:02.2: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref]
Feb 13 20:21:15.101752 kernel: pci 0000:00:02.3: PCI bridge to [bus 05]
Feb 13 20:21:15.101952 kernel: pci 0000:00:02.3: bridge window [io 0x4000-0x4fff]
Feb 13 20:21:15.102131 kernel: pci 0000:00:02.3: bridge window [mem 0xfe400000-0xfe5fffff]
Feb 13 20:21:15.102308 kernel: pci 0000:00:02.3: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref]
Feb 13 20:21:15.102494 kernel: pci 0000:00:02.4: PCI bridge to [bus 06]
Feb 13 20:21:15.102696 kernel: pci 0000:00:02.4: bridge window [io 0x5000-0x5fff]
Feb 13 20:21:15.102872 kernel: pci 0000:00:02.4: bridge window [mem 0xfe200000-0xfe3fffff]
Feb 13 20:21:15.103067 kernel: pci 0000:00:02.4: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref]
Feb 13 20:21:15.103242 kernel: pci 0000:00:02.5: PCI bridge to [bus 07]
Feb 13 20:21:15.103429 kernel: pci 0000:00:02.5: bridge window [io 0x6000-0x6fff]
Feb 13 20:21:15.103622 kernel: pci 0000:00:02.5: bridge window [mem 0xfe000000-0xfe1fffff]
Feb 13 20:21:15.103804 kernel: pci 0000:00:02.5: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref]
Feb 13 20:21:15.108894 kernel: pci 0000:00:02.6: PCI bridge to [bus 08]
Feb 13 20:21:15.109115 kernel: pci 0000:00:02.6: bridge window [io 0x7000-0x7fff]
Feb 13 20:21:15.109308 kernel: pci 0000:00:02.6: bridge window [mem 0xfde00000-0xfdffffff]
Feb 13 20:21:15.109506 kernel: pci 0000:00:02.6: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref]
Feb 13 20:21:15.109698 kernel: pci 0000:00:02.7: PCI bridge to [bus 09]
Feb 13 20:21:15.109874 kernel: pci 0000:00:02.7: bridge window [io 0x8000-0x8fff]
Feb 13 20:21:15.110091 kernel: pci 0000:00:02.7: bridge window [mem 0xfdc00000-0xfddfffff]
Feb 13 20:21:15.110272 kernel: pci 0000:00:02.7: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref]
Feb 13 20:21:15.110448 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Feb 13 20:21:15.110632 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Feb 13 20:21:15.110793 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Feb 13 20:21:15.111037 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xafffffff window]
Feb 13 20:21:15.111196 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Feb 13 20:21:15.111356 kernel: pci_bus 0000:00: resource 9 [mem 0x20c0000000-0x28bfffffff window]
Feb 13 20:21:15.111559 kernel: pci_bus 0000:01: resource 0 [io 0x1000-0x1fff]
Feb 13 20:21:15.111728 kernel: pci_bus 0000:01: resource 1 [mem 0xfd800000-0xfdbfffff]
Feb 13 20:21:15.111916 kernel: pci_bus 0000:01: resource 2 [mem 0xfce00000-0xfcffffff 64bit pref]
Feb 13 20:21:15.112160 kernel: pci_bus 0000:02: resource 1 [mem 0xfd800000-0xfd9fffff]
Feb 13 20:21:15.112358 kernel: pci_bus 0000:03: resource 0 [io 0x2000-0x2fff]
Feb 13 20:21:15.112568 kernel: pci_bus 0000:03: resource 1 [mem 0xfe800000-0xfe9fffff]
Feb 13 20:21:15.112734 kernel: pci_bus 0000:03: resource 2 [mem 0xfcc00000-0xfcdfffff 64bit pref]
Feb 13 20:21:15.112944 kernel: pci_bus 0000:04: resource 0 [io 0x3000-0x3fff]
Feb 13 20:21:15.113126 kernel: pci_bus 0000:04: resource 1 [mem 0xfe600000-0xfe7fffff]
Feb 13 20:21:15.113294 kernel: pci_bus 0000:04: resource 2 [mem 0xfca00000-0xfcbfffff 64bit pref]
Feb 13 20:21:15.113514 kernel: pci_bus 0000:05: resource 0 [io 0x4000-0x4fff]
Feb 13 20:21:15.113721 kernel: pci_bus 0000:05: resource 1 [mem 0xfe400000-0xfe5fffff]
Feb 13 20:21:15.113906 kernel: pci_bus 0000:05: resource 2 [mem 0xfc800000-0xfc9fffff 64bit pref]
Feb 13 20:21:15.116214 kernel: pci_bus 0000:06: resource 0 [io 0x5000-0x5fff]
Feb 13 20:21:15.116389 kernel: pci_bus 0000:06: resource 1 [mem 0xfe200000-0xfe3fffff]
Feb 13 20:21:15.116573 kernel: pci_bus 0000:06: resource 2 [mem 0xfc600000-0xfc7fffff 64bit pref]
Feb 13 20:21:15.116772 kernel: pci_bus 0000:07: resource 0 [io 0x6000-0x6fff]
Feb 13 20:21:15.116970 kernel: pci_bus 0000:07: resource 1 [mem 0xfe000000-0xfe1fffff]
Feb 13 20:21:15.117139 kernel: pci_bus 0000:07: resource 2 [mem 0xfc400000-0xfc5fffff 64bit pref]
Feb 13 20:21:15.117335 kernel: pci_bus 0000:08: resource 0 [io 0x7000-0x7fff]
Feb 13 20:21:15.117503 kernel: pci_bus 0000:08: resource 1 [mem 0xfde00000-0xfdffffff]
Feb 13 20:21:15.117684 kernel: pci_bus 0000:08: resource 2 [mem 0xfc200000-0xfc3fffff 64bit pref]
Feb 13 20:21:15.120045 kernel: pci_bus 0000:09: resource 0 [io 0x8000-0x8fff]
Feb 13 20:21:15.120246 kernel: pci_bus 0000:09: resource 1 [mem 0xfdc00000-0xfddfffff]
Feb 13 20:21:15.120432 kernel: pci_bus 0000:09: resource 2 [mem 0xfc000000-0xfc1fffff 64bit pref]
Feb 13 20:21:15.120455 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Feb 13 20:21:15.120470 kernel: PCI: CLS 0 bytes, default 64
Feb 13 20:21:15.120491 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Feb
13 20:21:15.120506 kernel: software IO TLB: mapped [mem 0x0000000071000000-0x0000000075000000] (64MB) Feb 13 20:21:15.120532 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Feb 13 20:21:15.120548 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x240937b9988, max_idle_ns: 440795218083 ns Feb 13 20:21:15.120562 kernel: Initialise system trusted keyrings Feb 13 20:21:15.120581 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Feb 13 20:21:15.120595 kernel: Key type asymmetric registered Feb 13 20:21:15.120609 kernel: Asymmetric key parser 'x509' registered Feb 13 20:21:15.120622 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Feb 13 20:21:15.120636 kernel: io scheduler mq-deadline registered Feb 13 20:21:15.120650 kernel: io scheduler kyber registered Feb 13 20:21:15.120663 kernel: io scheduler bfq registered Feb 13 20:21:15.120846 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 24 Feb 13 20:21:15.121751 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 24 Feb 13 20:21:15.122028 kernel: pcieport 0000:00:02.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Feb 13 20:21:15.122235 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 25 Feb 13 20:21:15.122414 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 25 Feb 13 20:21:15.122609 kernel: pcieport 0000:00:02.1: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Feb 13 20:21:15.122788 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 26 Feb 13 20:21:15.122992 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 26 Feb 13 20:21:15.123172 kernel: pcieport 0000:00:02.2: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Feb 13 20:21:15.123361 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 27 Feb 13 
20:21:15.123553 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 27 Feb 13 20:21:15.123730 kernel: pcieport 0000:00:02.3: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Feb 13 20:21:15.123928 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 28 Feb 13 20:21:15.124122 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 28 Feb 13 20:21:15.124311 kernel: pcieport 0000:00:02.4: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Feb 13 20:21:15.124491 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 29 Feb 13 20:21:15.124683 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 29 Feb 13 20:21:15.124885 kernel: pcieport 0000:00:02.5: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Feb 13 20:21:15.125080 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 30 Feb 13 20:21:15.125258 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 30 Feb 13 20:21:15.125444 kernel: pcieport 0000:00:02.6: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Feb 13 20:21:15.125642 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 31 Feb 13 20:21:15.125837 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 31 Feb 13 20:21:15.126057 kernel: pcieport 0000:00:02.7: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Feb 13 20:21:15.126080 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Feb 13 20:21:15.126094 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Feb 13 20:21:15.126122 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 Feb 13 20:21:15.126135 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Feb 13 20:21:15.126149 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Feb 13 20:21:15.126162 kernel: 
i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Feb 13 20:21:15.126175 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Feb 13 20:21:15.126188 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Feb 13 20:21:15.126214 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Feb 13 20:21:15.126424 kernel: rtc_cmos 00:03: RTC can wake from S4 Feb 13 20:21:15.126612 kernel: rtc_cmos 00:03: registered as rtc0 Feb 13 20:21:15.126792 kernel: rtc_cmos 00:03: setting system clock to 2025-02-13T20:21:14 UTC (1739478074) Feb 13 20:21:15.127035 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram Feb 13 20:21:15.127057 kernel: intel_pstate: CPU model not supported Feb 13 20:21:15.127071 kernel: NET: Registered PF_INET6 protocol family Feb 13 20:21:15.127085 kernel: Segment Routing with IPv6 Feb 13 20:21:15.127098 kernel: In-situ OAM (IOAM) with IPv6 Feb 13 20:21:15.127112 kernel: NET: Registered PF_PACKET protocol family Feb 13 20:21:15.127126 kernel: Key type dns_resolver registered Feb 13 20:21:15.127147 kernel: IPI shorthand broadcast: enabled Feb 13 20:21:15.127161 kernel: sched_clock: Marking stable (1247003686, 241195968)->(1612866495, -124666841) Feb 13 20:21:15.127175 kernel: registered taskstats version 1 Feb 13 20:21:15.127189 kernel: Loading compiled-in X.509 certificates Feb 13 20:21:15.127207 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.74-flatcar: 6c364ddae48101e091a28279a8d953535f596d53' Feb 13 20:21:15.127220 kernel: Key type .fscrypt registered Feb 13 20:21:15.127234 kernel: Key type fscrypt-provisioning registered Feb 13 20:21:15.127247 kernel: ima: No TPM chip found, activating TPM-bypass! 
Feb 13 20:21:15.127261 kernel: ima: Allocated hash algorithm: sha1
Feb 13 20:21:15.127280 kernel: ima: No architecture policies found
Feb 13 20:21:15.127293 kernel: clk: Disabling unused clocks
Feb 13 20:21:15.127307 kernel: Freeing unused kernel image (initmem) memory: 43476K
Feb 13 20:21:15.127321 kernel: Write protecting the kernel read-only data: 38912k
Feb 13 20:21:15.127334 kernel: Freeing unused kernel image (rodata/data gap) memory: 1724K
Feb 13 20:21:15.127348 kernel: Run /init as init process
Feb 13 20:21:15.127361 kernel: with arguments:
Feb 13 20:21:15.127375 kernel: /init
Feb 13 20:21:15.127388 kernel: with environment:
Feb 13 20:21:15.127406 kernel: HOME=/
Feb 13 20:21:15.127419 kernel: TERM=linux
Feb 13 20:21:15.127432 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Feb 13 20:21:15.127456 systemd[1]: Successfully made /usr/ read-only.
Feb 13 20:21:15.127476 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Feb 13 20:21:15.127492 systemd[1]: Detected virtualization kvm.
Feb 13 20:21:15.127506 systemd[1]: Detected architecture x86-64.
Feb 13 20:21:15.127529 systemd[1]: Running in initrd.
Feb 13 20:21:15.127551 systemd[1]: No hostname configured, using default hostname.
Feb 13 20:21:15.127566 systemd[1]: Hostname set to .
Feb 13 20:21:15.127580 systemd[1]: Initializing machine ID from VM UUID.
Feb 13 20:21:15.127594 systemd[1]: Queued start job for default target initrd.target.
Feb 13 20:21:15.127609 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Feb 13 20:21:15.127623 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Feb 13 20:21:15.127639 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Feb 13 20:21:15.127653 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Feb 13 20:21:15.127673 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Feb 13 20:21:15.127689 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Feb 13 20:21:15.127705 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Feb 13 20:21:15.127720 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Feb 13 20:21:15.127734 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Feb 13 20:21:15.127749 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Feb 13 20:21:15.127768 systemd[1]: Reached target paths.target - Path Units.
Feb 13 20:21:15.127783 systemd[1]: Reached target slices.target - Slice Units.
Feb 13 20:21:15.127798 systemd[1]: Reached target swap.target - Swaps.
Feb 13 20:21:15.127812 systemd[1]: Reached target timers.target - Timer Units.
Feb 13 20:21:15.127827 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Feb 13 20:21:15.127841 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Feb 13 20:21:15.127856 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Feb 13 20:21:15.127871 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Feb 13 20:21:15.127885 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Feb 13 20:21:15.127932 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Feb 13 20:21:15.127948 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Feb 13 20:21:15.127963 systemd[1]: Reached target sockets.target - Socket Units.
Feb 13 20:21:15.127977 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Feb 13 20:21:15.127992 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Feb 13 20:21:15.128007 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Feb 13 20:21:15.128021 systemd[1]: Starting systemd-fsck-usr.service...
Feb 13 20:21:15.128036 systemd[1]: Starting systemd-journald.service - Journal Service...
Feb 13 20:21:15.128050 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Feb 13 20:21:15.128071 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 20:21:15.128086 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Feb 13 20:21:15.128100 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Feb 13 20:21:15.128116 systemd[1]: Finished systemd-fsck-usr.service.
Feb 13 20:21:15.128190 systemd-journald[200]: Collecting audit messages is disabled.
Feb 13 20:21:15.128226 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Feb 13 20:21:15.128242 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Feb 13 20:21:15.128256 kernel: Bridge firewalling registered
Feb 13 20:21:15.128277 systemd-journald[200]: Journal started
Feb 13 20:21:15.128310 systemd-journald[200]: Runtime Journal (/run/log/journal/ffe85df70d00443e93da176fde014290) is 4.7M, max 37.9M, 33.2M free.
Feb 13 20:21:15.057989 systemd-modules-load[201]: Inserted module 'overlay'
Feb 13 20:21:15.131517 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Feb 13 20:21:15.120982 systemd-modules-load[201]: Inserted module 'br_netfilter'
Feb 13 20:21:15.134775 systemd[1]: Started systemd-journald.service - Journal Service.
Feb 13 20:21:15.136025 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 20:21:15.137052 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Feb 13 20:21:15.152252 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 20:21:15.155346 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Feb 13 20:21:15.158097 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Feb 13 20:21:15.165068 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Feb 13 20:21:15.183731 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Feb 13 20:21:15.185072 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Feb 13 20:21:15.200125 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Feb 13 20:21:15.202259 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 20:21:15.204415 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Feb 13 20:21:15.216321 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Feb 13 20:21:15.230930 dracut-cmdline[239]: dracut-dracut-053
Feb 13 20:21:15.239101 dracut-cmdline[239]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=f28373bbaddf11103b551b595069cf5faacb27d62f1aab4f9911393ba418b416
Feb 13 20:21:15.242852 systemd-resolved[236]: Positive Trust Anchors:
Feb 13 20:21:15.242882 systemd-resolved[236]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Feb 13 20:21:15.242945 systemd-resolved[236]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Feb 13 20:21:15.247560 systemd-resolved[236]: Defaulting to hostname 'linux'.
Feb 13 20:21:15.250460 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Feb 13 20:21:15.252101 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Feb 13 20:21:15.347950 kernel: SCSI subsystem initialized
Feb 13 20:21:15.359961 kernel: Loading iSCSI transport class v2.0-870.
Feb 13 20:21:15.372923 kernel: iscsi: registered transport (tcp)
Feb 13 20:21:15.399294 kernel: iscsi: registered transport (qla4xxx)
Feb 13 20:21:15.399341 kernel: QLogic iSCSI HBA Driver
Feb 13 20:21:15.454811 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Feb 13 20:21:15.464158 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Feb 13 20:21:15.495833 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Feb 13 20:21:15.495893 kernel: device-mapper: uevent: version 1.0.3
Feb 13 20:21:15.498924 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Feb 13 20:21:15.546958 kernel: raid6: sse2x4 gen() 13209 MB/s
Feb 13 20:21:15.563974 kernel: raid6: sse2x2 gen() 9203 MB/s
Feb 13 20:21:15.582644 kernel: raid6: sse2x1 gen() 9401 MB/s
Feb 13 20:21:15.582684 kernel: raid6: using algorithm sse2x4 gen() 13209 MB/s
Feb 13 20:21:15.601772 kernel: raid6: .... xor() 7570 MB/s, rmw enabled
Feb 13 20:21:15.601811 kernel: raid6: using ssse3x2 recovery algorithm
Feb 13 20:21:15.627950 kernel: xor: automatically using best checksumming function avx
Feb 13 20:21:15.801949 kernel: Btrfs loaded, zoned=no, fsverity=no
Feb 13 20:21:15.816869 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Feb 13 20:21:15.824159 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Feb 13 20:21:15.860332 systemd-udevd[421]: Using default interface naming scheme 'v255'.
Feb 13 20:21:15.869497 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Feb 13 20:21:15.878129 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Feb 13 20:21:15.898654 dracut-pre-trigger[427]: rd.md=0: removing MD RAID activation
Feb 13 20:21:15.938622 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Feb 13 20:21:15.952217 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Feb 13 20:21:16.068970 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Feb 13 20:21:16.074324 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Feb 13 20:21:16.103623 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Feb 13 20:21:16.106078 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Feb 13 20:21:16.108141 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Feb 13 20:21:16.109180 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Feb 13 20:21:16.117239 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Feb 13 20:21:16.146659 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Feb 13 20:21:16.235316 kernel: virtio_blk virtio1: 2/0/0 default/read/poll queues
Feb 13 20:21:16.300857 kernel: libata version 3.00 loaded.
Feb 13 20:21:16.300884 kernel: cryptd: max_cpu_qlen set to 1000
Feb 13 20:21:16.301118 kernel: ACPI: bus type USB registered
Feb 13 20:21:16.301145 kernel: usbcore: registered new interface driver usbfs
Feb 13 20:21:16.301163 kernel: virtio_blk virtio1: [vda] 125829120 512-byte logical blocks (64.4 GB/60.0 GiB)
Feb 13 20:21:16.301418 kernel: ahci 0000:00:1f.2: version 3.0
Feb 13 20:21:16.337257 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Feb 13 20:21:16.337300 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Feb 13 20:21:16.337334 kernel: usbcore: registered new interface driver hub
Feb 13 20:21:16.337353 kernel: GPT:17805311 != 125829119
Feb 13 20:21:16.337372 kernel: GPT:Alternate GPT header not at the end of the disk.
Feb 13 20:21:16.337390 kernel: GPT:17805311 != 125829119
Feb 13 20:21:16.337408 kernel: GPT: Use GNU Parted to correct GPT errors.
Feb 13 20:21:16.337426 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Feb 13 20:21:16.337445 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode
Feb 13 20:21:16.337707 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Feb 13 20:21:16.337970 kernel: scsi host0: ahci
Feb 13 20:21:16.338240 kernel: scsi host1: ahci
Feb 13 20:21:16.338500 kernel: AVX version of gcm_enc/dec engaged.
Feb 13 20:21:16.338525 kernel: usbcore: registered new device driver usb
Feb 13 20:21:16.338544 kernel: AES CTR mode by8 optimization enabled
Feb 13 20:21:16.338562 kernel: scsi host2: ahci
Feb 13 20:21:16.338814 kernel: scsi host3: ahci
Feb 13 20:21:16.340023 kernel: scsi host4: ahci
Feb 13 20:21:16.340298 kernel: scsi host5: ahci
Feb 13 20:21:16.340550 kernel: ata1: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b100 irq 38
Feb 13 20:21:16.340574 kernel: ata2: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b180 irq 38
Feb 13 20:21:16.340593 kernel: ata3: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b200 irq 38
Feb 13 20:21:16.340612 kernel: ata4: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b280 irq 38
Feb 13 20:21:16.340630 kernel: ata5: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b300 irq 38
Feb 13 20:21:16.340658 kernel: ata6: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b380 irq 38
Feb 13 20:21:16.304290 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Feb 13 20:21:16.304552 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 20:21:16.311108 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 20:21:16.311869 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Feb 13 20:21:16.312146 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 20:21:16.370169 kernel: BTRFS: device fsid 60f89c25-9096-4268-99ca-ef7992742f2b devid 1 transid 39 /dev/vda3 scanned by (udev-worker) (472)
Feb 13 20:21:16.315706 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 20:21:16.341514 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 20:21:16.383998 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by (udev-worker) (473)
Feb 13 20:21:16.449018 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Feb 13 20:21:16.481793 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 20:21:16.495831 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Feb 13 20:21:16.509098 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Feb 13 20:21:16.519942 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Feb 13 20:21:16.520774 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Feb 13 20:21:16.534214 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Feb 13 20:21:16.539081 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 20:21:16.541361 disk-uuid[559]: Primary Header is updated.
Feb 13 20:21:16.541361 disk-uuid[559]: Secondary Entries is updated.
Feb 13 20:21:16.541361 disk-uuid[559]: Secondary Header is updated.
Feb 13 20:21:16.551918 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Feb 13 20:21:16.560965 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Feb 13 20:21:16.570844 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 20:21:16.649809 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Feb 13 20:21:16.649887 kernel: ata3: SATA link down (SStatus 0 SControl 300)
Feb 13 20:21:16.651928 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Feb 13 20:21:16.655366 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Feb 13 20:21:16.655409 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Feb 13 20:21:16.658917 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Feb 13 20:21:16.674925 kernel: xhci_hcd 0000:03:00.0: xHCI Host Controller
Feb 13 20:21:16.706633 kernel: xhci_hcd 0000:03:00.0: new USB bus registered, assigned bus number 1
Feb 13 20:21:16.706872 kernel: xhci_hcd 0000:03:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010
Feb 13 20:21:16.707114 kernel: xhci_hcd 0000:03:00.0: xHCI Host Controller
Feb 13 20:21:16.707338 kernel: xhci_hcd 0000:03:00.0: new USB bus registered, assigned bus number 2
Feb 13 20:21:16.707573 kernel: xhci_hcd 0000:03:00.0: Host supports USB 3.0 SuperSpeed
Feb 13 20:21:16.707795 kernel: hub 1-0:1.0: USB hub found
Feb 13 20:21:16.708064 kernel: hub 1-0:1.0: 4 ports detected
Feb 13 20:21:16.708282 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM.
Feb 13 20:21:16.708530 kernel: hub 2-0:1.0: USB hub found
Feb 13 20:21:16.708749 kernel: hub 2-0:1.0: 4 ports detected
Feb 13 20:21:16.939929 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd
Feb 13 20:21:17.082957 kernel: hid: raw HID events driver (C) Jiri Kosina
Feb 13 20:21:17.090354 kernel: usbcore: registered new interface driver usbhid
Feb 13 20:21:17.090394 kernel: usbhid: USB HID core driver
Feb 13 20:21:17.098566 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:03:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input2
Feb 13 20:21:17.098608 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:03:00.0-1/input0
Feb 13 20:21:17.560930 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Feb 13 20:21:17.564349 disk-uuid[560]: The operation has completed successfully.
Feb 13 20:21:17.627549 systemd[1]: disk-uuid.service: Deactivated successfully.
Feb 13 20:21:17.627732 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Feb 13 20:21:17.686169 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Feb 13 20:21:17.690337 sh[586]: Success
Feb 13 20:21:17.706931 kernel: device-mapper: verity: sha256 using implementation "sha256-avx"
Feb 13 20:21:17.770473 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Feb 13 20:21:17.784057 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Feb 13 20:21:17.786493 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Feb 13 20:21:17.812939 kernel: BTRFS info (device dm-0): first mount of filesystem 60f89c25-9096-4268-99ca-ef7992742f2b
Feb 13 20:21:17.812998 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Feb 13 20:21:17.817624 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Feb 13 20:21:17.817671 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Feb 13 20:21:17.820887 kernel: BTRFS info (device dm-0): using free space tree
Feb 13 20:21:17.830045 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Feb 13 20:21:17.832167 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Feb 13 20:21:17.841104 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Feb 13 20:21:17.845081 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Feb 13 20:21:17.864940 kernel: BTRFS info (device vda6): first mount of filesystem 9d862461-eab1-477f-8790-b61f63b2958e
Feb 13 20:21:17.864990 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Feb 13 20:21:17.865010 kernel: BTRFS info (device vda6): using free space tree
Feb 13 20:21:17.870991 kernel: BTRFS info (device vda6): auto enabling async discard
Feb 13 20:21:17.886745 kernel: BTRFS info (device vda6): last unmount of filesystem 9d862461-eab1-477f-8790-b61f63b2958e
Feb 13 20:21:17.886331 systemd[1]: mnt-oem.mount: Deactivated successfully.
Feb 13 20:21:17.895478 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Feb 13 20:21:17.902082 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Feb 13 20:21:18.017793 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Feb 13 20:21:18.029291 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Feb 13 20:21:18.041507 ignition[690]: Ignition 2.20.0
Feb 13 20:21:18.041533 ignition[690]: Stage: fetch-offline
Feb 13 20:21:18.041616 ignition[690]: no configs at "/usr/lib/ignition/base.d"
Feb 13 20:21:18.041635 ignition[690]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Feb 13 20:21:18.041773 ignition[690]: parsed url from cmdline: ""
Feb 13 20:21:18.041780 ignition[690]: no config URL provided
Feb 13 20:21:18.041790 ignition[690]: reading system config file "/usr/lib/ignition/user.ign"
Feb 13 20:21:18.041806 ignition[690]: no config at "/usr/lib/ignition/user.ign"
Feb 13 20:21:18.041822 ignition[690]: failed to fetch config: resource requires networking
Feb 13 20:21:18.049301 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Feb 13 20:21:18.047220 ignition[690]: Ignition finished successfully
Feb 13 20:21:18.070948 systemd-networkd[773]: lo: Link UP
Feb 13 20:21:18.070959 systemd-networkd[773]: lo: Gained carrier
Feb 13 20:21:18.075431 systemd-networkd[773]: Enumeration completed
Feb 13 20:21:18.076008 systemd-networkd[773]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 20:21:18.076015 systemd-networkd[773]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Feb 13 20:21:18.077376 systemd[1]: Started systemd-networkd.service - Network Configuration.
Feb 13 20:21:18.078109 systemd-networkd[773]: eth0: Link UP
Feb 13 20:21:18.078115 systemd-networkd[773]: eth0: Gained carrier
Feb 13 20:21:18.078128 systemd-networkd[773]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 20:21:18.081082 systemd[1]: Reached target network.target - Network.
Feb 13 20:21:18.088990 systemd-networkd[773]: eth0: DHCPv4 address 10.230.12.214/30, gateway 10.230.12.213 acquired from 10.230.12.213
Feb 13 20:21:18.091455 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Feb 13 20:21:18.109026 ignition[778]: Ignition 2.20.0
Feb 13 20:21:18.109046 ignition[778]: Stage: fetch
Feb 13 20:21:18.109278 ignition[778]: no configs at "/usr/lib/ignition/base.d"
Feb 13 20:21:18.109298 ignition[778]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Feb 13 20:21:18.109429 ignition[778]: parsed url from cmdline: ""
Feb 13 20:21:18.109448 ignition[778]: no config URL provided
Feb 13 20:21:18.109463 ignition[778]: reading system config file "/usr/lib/ignition/user.ign"
Feb 13 20:21:18.109480 ignition[778]: no config at "/usr/lib/ignition/user.ign"
Feb 13 20:21:18.109626 ignition[778]: GET http://169.254.169.254/openstack/latest/user_data: attempt #1
Feb 13 20:21:18.109780 ignition[778]: config drive ("/dev/disk/by-label/config-2") not found. Waiting...
Feb 13 20:21:18.109811 ignition[778]: config drive ("/dev/disk/by-label/CONFIG-2") not found. Waiting...
Feb 13 20:21:18.124472 ignition[778]: GET result: OK
Feb 13 20:21:18.124620 ignition[778]: parsing config with SHA512: 8cd11f4afd3d527b012d88d43e03c8b2d47ca94f9a6aaff0c9effb9dfec31206335ac9adb08d0692e6c4179bdeda53a72a5b11bf6b942824d19bc8fce513e3ef
Feb 13 20:21:18.130101 unknown[778]: fetched base config from "system"
Feb 13 20:21:18.130134 unknown[778]: fetched base config from "system"
Feb 13 20:21:18.130542 ignition[778]: fetch: fetch complete
Feb 13 20:21:18.130143 unknown[778]: fetched user config from "openstack"
Feb 13 20:21:18.130552 ignition[778]: fetch: fetch passed
Feb 13 20:21:18.130651 ignition[778]: Ignition finished successfully
Feb 13 20:21:18.133373 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Feb 13 20:21:18.151188 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Feb 13 20:21:18.171137 ignition[785]: Ignition 2.20.0
Feb 13 20:21:18.171178 ignition[785]: Stage: kargs
Feb 13 20:21:18.173556 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Feb 13 20:21:18.171432 ignition[785]: no configs at "/usr/lib/ignition/base.d"
Feb 13 20:21:18.171478 ignition[785]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Feb 13 20:21:18.172340 ignition[785]: kargs: kargs passed
Feb 13 20:21:18.172405 ignition[785]: Ignition finished successfully
Feb 13 20:21:18.185153 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Feb 13 20:21:18.201003 ignition[791]: Ignition 2.20.0
Feb 13 20:21:18.201026 ignition[791]: Stage: disks
Feb 13 20:21:18.201233 ignition[791]: no configs at "/usr/lib/ignition/base.d"
Feb 13 20:21:18.203260 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Feb 13 20:21:18.201253 ignition[791]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Feb 13 20:21:18.204605 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Feb 13 20:21:18.202096 ignition[791]: disks: disks passed
Feb 13 20:21:18.205493 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Feb 13 20:21:18.202166 ignition[791]: Ignition finished successfully
Feb 13 20:21:18.207027 systemd[1]: Reached target local-fs.target - Local File Systems.
Feb 13 20:21:18.208582 systemd[1]: Reached target sysinit.target - System Initialization.
Feb 13 20:21:18.210192 systemd[1]: Reached target basic.target - Basic System.
Feb 13 20:21:18.225104 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Feb 13 20:21:18.244129 systemd-fsck[800]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks
Feb 13 20:21:18.248811 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Feb 13 20:21:18.819046 systemd[1]: Mounting sysroot.mount - /sysroot...
Feb 13 20:21:18.928219 kernel: EXT4-fs (vda9): mounted filesystem 157595f2-1515-4117-a2d1-73fe2ed647fc r/w with ordered data mode. Quota mode: none.
Feb 13 20:21:18.929378 systemd[1]: Mounted sysroot.mount - /sysroot.
Feb 13 20:21:18.930721 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Feb 13 20:21:18.946047 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Feb 13 20:21:18.949645 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Feb 13 20:21:18.951285 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Feb 13 20:21:18.958093 systemd[1]: Starting flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent...
Feb 13 20:21:18.970242 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 scanned by mount (808)
Feb 13 20:21:18.970276 kernel: BTRFS info (device vda6): first mount of filesystem 9d862461-eab1-477f-8790-b61f63b2958e
Feb 13 20:21:18.970296 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Feb 13 20:21:18.970315 kernel: BTRFS info (device vda6): using free space tree
Feb 13 20:21:18.969213 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Feb 13 20:21:18.969261 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Feb 13 20:21:18.973163 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Feb 13 20:21:18.986256 kernel: BTRFS info (device vda6): auto enabling async discard
Feb 13 20:21:18.990124 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Feb 13 20:21:18.994063 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Feb 13 20:21:19.059809 initrd-setup-root[836]: cut: /sysroot/etc/passwd: No such file or directory
Feb 13 20:21:19.070087 initrd-setup-root[843]: cut: /sysroot/etc/group: No such file or directory
Feb 13 20:21:19.077383 initrd-setup-root[850]: cut: /sysroot/etc/shadow: No such file or directory
Feb 13 20:21:19.082945 initrd-setup-root[857]: cut: /sysroot/etc/gshadow: No such file or directory
Feb 13 20:21:19.190470 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Feb 13 20:21:19.197023 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Feb 13 20:21:19.200144 systemd-networkd[773]: eth0: Gained IPv6LL
Feb 13 20:21:19.202306 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Feb 13 20:21:19.214933 kernel: BTRFS info (device vda6): last unmount of filesystem 9d862461-eab1-477f-8790-b61f63b2958e
Feb 13 20:21:19.247684 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Feb 13 20:21:19.251934 ignition[924]: INFO : Ignition 2.20.0
Feb 13 20:21:19.251934 ignition[924]: INFO : Stage: mount
Feb 13 20:21:19.251934 ignition[924]: INFO : no configs at "/usr/lib/ignition/base.d"
Feb 13 20:21:19.251934 ignition[924]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Feb 13 20:21:19.256167 ignition[924]: INFO : mount: mount passed
Feb 13 20:21:19.256167 ignition[924]: INFO : Ignition finished successfully
Feb 13 20:21:19.255309 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Feb 13 20:21:19.811665 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Feb 13 20:21:20.707132 systemd-networkd[773]: eth0: Ignoring DHCPv6 address 2a02:1348:179:8335:24:19ff:fee6:cd6/128 (valid for 59min 59s, preferred for 59min 59s) which conflicts with 2a02:1348:179:8335:24:19ff:fee6:cd6/64 assigned by NDisc.
Feb 13 20:21:20.707149 systemd-networkd[773]: eth0: Hint: use IPv6Token= setting to change the address generated by NDisc or set UseAutonomousPrefix=no.
Feb 13 20:21:26.106294 coreos-metadata[810]: Feb 13 20:21:26.106 WARN failed to locate config-drive, using the metadata service API instead
Feb 13 20:21:26.132340 coreos-metadata[810]: Feb 13 20:21:26.132 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1
Feb 13 20:21:26.145760 coreos-metadata[810]: Feb 13 20:21:26.145 INFO Fetch successful
Feb 13 20:21:26.147307 coreos-metadata[810]: Feb 13 20:21:26.147 INFO wrote hostname srv-xiw8u.gb1.brightbox.com to /sysroot/etc/hostname
Feb 13 20:21:26.153002 systemd[1]: flatcar-openstack-hostname.service: Deactivated successfully.
Feb 13 20:21:26.153329 systemd[1]: Finished flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent.
Feb 13 20:21:26.163048 systemd[1]: Starting ignition-files.service - Ignition (files)...
Feb 13 20:21:26.189310 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Feb 13 20:21:26.201963 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/vda6 scanned by mount (942)
Feb 13 20:21:26.207475 kernel: BTRFS info (device vda6): first mount of filesystem 9d862461-eab1-477f-8790-b61f63b2958e
Feb 13 20:21:26.207560 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Feb 13 20:21:26.207590 kernel: BTRFS info (device vda6): using free space tree
Feb 13 20:21:26.214000 kernel: BTRFS info (device vda6): auto enabling async discard
Feb 13 20:21:26.216893 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Feb 13 20:21:26.244058 ignition[960]: INFO : Ignition 2.20.0
Feb 13 20:21:26.244058 ignition[960]: INFO : Stage: files
Feb 13 20:21:26.245932 ignition[960]: INFO : no configs at "/usr/lib/ignition/base.d"
Feb 13 20:21:26.245932 ignition[960]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Feb 13 20:21:26.245932 ignition[960]: DEBUG : files: compiled without relabeling support, skipping
Feb 13 20:21:26.248733 ignition[960]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Feb 13 20:21:26.248733 ignition[960]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Feb 13 20:21:26.250763 ignition[960]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Feb 13 20:21:26.251936 ignition[960]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Feb 13 20:21:26.253370 unknown[960]: wrote ssh authorized keys file for user: core
Feb 13 20:21:26.254366 ignition[960]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Feb 13 20:21:26.255431 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/home/core/install.sh"
Feb 13 20:21:26.256569 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/home/core/install.sh"
Feb 13 20:21:26.256569 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/etc/flatcar/update.conf"
Feb 13 20:21:26.256569 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Feb 13 20:21:26.256569 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw"
Feb 13 20:21:26.256569 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw"
Feb 13 20:21:26.256569 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw"
Feb 13 20:21:26.256569 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.32.0-x86-64.raw: attempt #1
Feb 13 20:21:26.858207 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET result: OK
Feb 13 20:21:30.062927 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw"
Feb 13 20:21:30.064866 ignition[960]: INFO : files: createResultFile: createFiles: op(7): [started] writing file "/sysroot/etc/.ignition-result.json"
Feb 13 20:21:30.064866 ignition[960]: INFO : files: createResultFile: createFiles: op(7): [finished] writing file "/sysroot/etc/.ignition-result.json"
Feb 13 20:21:30.064866 ignition[960]: INFO : files: files passed
Feb 13 20:21:30.064866 ignition[960]: INFO : Ignition finished successfully
Feb 13 20:21:30.066005 systemd[1]: Finished ignition-files.service - Ignition (files).
Feb 13 20:21:30.076107 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Feb 13 20:21:30.079124 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Feb 13 20:21:30.086102 systemd[1]: ignition-quench.service: Deactivated successfully.
Feb 13 20:21:30.086306 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Feb 13 20:21:30.104515 initrd-setup-root-after-ignition[989]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Feb 13 20:21:30.105928 initrd-setup-root-after-ignition[989]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Feb 13 20:21:30.107943 initrd-setup-root-after-ignition[993]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Feb 13 20:21:30.110199 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Feb 13 20:21:30.111725 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Feb 13 20:21:30.123173 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Feb 13 20:21:30.160348 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Feb 13 20:21:30.160599 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Feb 13 20:21:30.162747 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Feb 13 20:21:30.163841 systemd[1]: Reached target initrd.target - Initrd Default Target.
Feb 13 20:21:30.165500 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Feb 13 20:21:30.176181 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Feb 13 20:21:30.194961 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Feb 13 20:21:30.201183 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Feb 13 20:21:30.218132 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Feb 13 20:21:30.219090 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Feb 13 20:21:30.220035 systemd[1]: Stopped target timers.target - Timer Units.
Feb 13 20:21:30.221612 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Feb 13 20:21:30.221809 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Feb 13 20:21:30.223697 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Feb 13 20:21:30.224595 systemd[1]: Stopped target basic.target - Basic System.
Feb 13 20:21:30.225990 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Feb 13 20:21:30.227615 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Feb 13 20:21:30.229085 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Feb 13 20:21:30.230454 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Feb 13 20:21:30.231968 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Feb 13 20:21:30.233620 systemd[1]: Stopped target sysinit.target - System Initialization.
Feb 13 20:21:30.235166 systemd[1]: Stopped target local-fs.target - Local File Systems.
Feb 13 20:21:30.236718 systemd[1]: Stopped target swap.target - Swaps.
Feb 13 20:21:30.238258 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Feb 13 20:21:30.238451 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Feb 13 20:21:30.240294 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Feb 13 20:21:30.241246 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Feb 13 20:21:30.242657 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Feb 13 20:21:30.243041 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Feb 13 20:21:30.244256 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Feb 13 20:21:30.244433 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Feb 13 20:21:30.246336 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Feb 13 20:21:30.246517 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Feb 13 20:21:30.248349 systemd[1]: ignition-files.service: Deactivated successfully.
Feb 13 20:21:30.248521 systemd[1]: Stopped ignition-files.service - Ignition (files).
Feb 13 20:21:30.256168 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Feb 13 20:21:30.260144 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Feb 13 20:21:30.261571 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Feb 13 20:21:30.261781 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Feb 13 20:21:30.262737 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Feb 13 20:21:30.268973 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Feb 13 20:21:30.278803 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Feb 13 20:21:30.279751 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Feb 13 20:21:30.287429 ignition[1013]: INFO : Ignition 2.20.0
Feb 13 20:21:30.287429 ignition[1013]: INFO : Stage: umount
Feb 13 20:21:30.290014 ignition[1013]: INFO : no configs at "/usr/lib/ignition/base.d"
Feb 13 20:21:30.290014 ignition[1013]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Feb 13 20:21:30.290014 ignition[1013]: INFO : umount: umount passed
Feb 13 20:21:30.290014 ignition[1013]: INFO : Ignition finished successfully
Feb 13 20:21:30.291307 systemd[1]: ignition-mount.service: Deactivated successfully.
Feb 13 20:21:30.292195 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Feb 13 20:21:30.293786 systemd[1]: ignition-disks.service: Deactivated successfully.
Feb 13 20:21:30.296082 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Feb 13 20:21:30.297086 systemd[1]: ignition-kargs.service: Deactivated successfully.
Feb 13 20:21:30.297175 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Feb 13 20:21:30.297897 systemd[1]: ignition-fetch.service: Deactivated successfully.
Feb 13 20:21:30.300096 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Feb 13 20:21:30.303182 systemd[1]: Stopped target network.target - Network.
Feb 13 20:21:30.304203 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Feb 13 20:21:30.304283 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Feb 13 20:21:30.305084 systemd[1]: Stopped target paths.target - Path Units.
Feb 13 20:21:30.305718 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Feb 13 20:21:30.306654 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Feb 13 20:21:30.309991 systemd[1]: Stopped target slices.target - Slice Units.
Feb 13 20:21:30.310618 systemd[1]: Stopped target sockets.target - Socket Units.
Feb 13 20:21:30.311419 systemd[1]: iscsid.socket: Deactivated successfully.
Feb 13 20:21:30.311495 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Feb 13 20:21:30.312776 systemd[1]: iscsiuio.socket: Deactivated successfully.
Feb 13 20:21:30.312842 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Feb 13 20:21:30.320499 systemd[1]: ignition-setup.service: Deactivated successfully.
Feb 13 20:21:30.320595 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Feb 13 20:21:30.322311 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Feb 13 20:21:30.322387 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Feb 13 20:21:30.323962 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Feb 13 20:21:30.325758 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Feb 13 20:21:30.329681 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Feb 13 20:21:30.332577 systemd[1]: systemd-resolved.service: Deactivated successfully.
Feb 13 20:21:30.332861 systemd-networkd[773]: eth0: DHCPv6 lease lost
Feb 13 20:21:30.333442 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Feb 13 20:21:30.339538 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Feb 13 20:21:30.340333 systemd[1]: sysroot-boot.service: Deactivated successfully.
Feb 13 20:21:30.340496 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Feb 13 20:21:30.342263 systemd[1]: systemd-networkd.service: Deactivated successfully.
Feb 13 20:21:30.342436 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Feb 13 20:21:30.344931 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Feb 13 20:21:30.347377 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Feb 13 20:21:30.347479 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Feb 13 20:21:30.349030 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Feb 13 20:21:30.349123 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Feb 13 20:21:30.359214 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Feb 13 20:21:30.360271 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Feb 13 20:21:30.360357 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Feb 13 20:21:30.362411 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Feb 13 20:21:30.362508 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Feb 13 20:21:30.365227 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Feb 13 20:21:30.365301 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Feb 13 20:21:30.366262 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Feb 13 20:21:30.366336 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Feb 13 20:21:30.368078 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Feb 13 20:21:30.370871 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Feb 13 20:21:30.370991 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Feb 13 20:21:30.384969 systemd[1]: systemd-udevd.service: Deactivated successfully.
Feb 13 20:21:30.385238 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Feb 13 20:21:30.386586 systemd[1]: network-cleanup.service: Deactivated successfully.
Feb 13 20:21:30.386737 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Feb 13 20:21:30.389398 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Feb 13 20:21:30.389526 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Feb 13 20:21:30.390966 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Feb 13 20:21:30.391039 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Feb 13 20:21:30.392447 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Feb 13 20:21:30.392526 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Feb 13 20:21:30.394849 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Feb 13 20:21:30.395022 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Feb 13 20:21:30.397285 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Feb 13 20:21:30.397369 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 20:21:30.405175 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Feb 13 20:21:30.405989 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Feb 13 20:21:30.406092 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Feb 13 20:21:30.408809 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Feb 13 20:21:30.408884 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Feb 13 20:21:30.409687 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Feb 13 20:21:30.409760 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Feb 13 20:21:30.413223 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Feb 13 20:21:30.413297 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 20:21:30.417319 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Feb 13 20:21:30.417416 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Feb 13 20:21:30.418259 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Feb 13 20:21:30.418432 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Feb 13 20:21:30.420670 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Feb 13 20:21:30.431597 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Feb 13 20:21:30.440880 systemd[1]: Switching root.
Feb 13 20:21:30.479621 systemd-journald[200]: Journal stopped
Feb 13 20:21:32.124875 systemd-journald[200]: Received SIGTERM from PID 1 (systemd).
Feb 13 20:21:32.125002 kernel: SELinux: policy capability network_peer_controls=1
Feb 13 20:21:32.125071 kernel: SELinux: policy capability open_perms=1
Feb 13 20:21:32.125102 kernel: SELinux: policy capability extended_socket_class=1
Feb 13 20:21:32.125122 kernel: SELinux: policy capability always_check_network=0
Feb 13 20:21:32.125140 kernel: SELinux: policy capability cgroup_seclabel=1
Feb 13 20:21:32.125159 kernel: SELinux: policy capability nnp_nosuid_transition=1
Feb 13 20:21:32.125178 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Feb 13 20:21:32.125197 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Feb 13 20:21:32.125216 kernel: audit: type=1403 audit(1739478090.725:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Feb 13 20:21:32.125258 systemd[1]: Successfully loaded SELinux policy in 56.567ms.
Feb 13 20:21:32.125297 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 25.424ms.
Feb 13 20:21:32.125321 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Feb 13 20:21:32.125350 systemd[1]: Detected virtualization kvm.
Feb 13 20:21:32.125379 systemd[1]: Detected architecture x86-64.
Feb 13 20:21:32.125400 systemd[1]: Detected first boot.
Feb 13 20:21:32.125421 systemd[1]: Hostname set to .
Feb 13 20:21:32.125447 systemd[1]: Initializing machine ID from VM UUID.
Feb 13 20:21:32.125467 zram_generator::config[1058]: No configuration found.
Feb 13 20:21:32.125501 kernel: Guest personality initialized and is inactive
Feb 13 20:21:32.125522 kernel: VMCI host device registered (name=vmci, major=10, minor=125)
Feb 13 20:21:32.125541 kernel: Initialized host personality
Feb 13 20:21:32.125560 kernel: NET: Registered PF_VSOCK protocol family
Feb 13 20:21:32.125579 systemd[1]: Populated /etc with preset unit settings.
Feb 13 20:21:32.125601 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Feb 13 20:21:32.125623 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Feb 13 20:21:32.125651 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Feb 13 20:21:32.125686 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Feb 13 20:21:32.125709 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Feb 13 20:21:32.125731 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Feb 13 20:21:32.125752 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Feb 13 20:21:32.125785 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Feb 13 20:21:32.125815 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Feb 13 20:21:32.125837 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Feb 13 20:21:32.125859 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Feb 13 20:21:32.136421 systemd[1]: Created slice user.slice - User and Session Slice.
Feb 13 20:21:32.136481 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Feb 13 20:21:32.136506 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Feb 13 20:21:32.136539 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Feb 13 20:21:32.136562 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Feb 13 20:21:32.136586 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Feb 13 20:21:32.136626 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Feb 13 20:21:32.136650 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Feb 13 20:21:32.136677 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Feb 13 20:21:32.136698 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Feb 13 20:21:32.136719 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Feb 13 20:21:32.136740 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Feb 13 20:21:32.136761 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Feb 13 20:21:32.136783 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Feb 13 20:21:32.136804 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Feb 13 20:21:32.136825 systemd[1]: Reached target slices.target - Slice Units.
Feb 13 20:21:32.136867 systemd[1]: Reached target swap.target - Swaps.
Feb 13 20:21:32.136890 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Feb 13 20:21:32.136933 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Feb 13 20:21:32.136956 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Feb 13 20:21:32.136978 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Feb 13 20:21:32.137000 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Feb 13 20:21:32.137030 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Feb 13 20:21:32.137052 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Feb 13 20:21:32.137092 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Feb 13 20:21:32.137140 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Feb 13 20:21:32.137177 systemd[1]: Mounting media.mount - External Media Directory...
Feb 13 20:21:32.137200 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 13 20:21:32.137222 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Feb 13 20:21:32.137243 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Feb 13 20:21:32.137277 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Feb 13 20:21:32.137302 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Feb 13 20:21:32.137323 systemd[1]: Reached target machines.target - Containers.
Feb 13 20:21:32.137345 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Feb 13 20:21:32.137366 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Feb 13 20:21:32.137388 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Feb 13 20:21:32.137410 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Feb 13 20:21:32.137445 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Feb 13 20:21:32.137469 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Feb 13 20:21:32.137503 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Feb 13 20:21:32.137526 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Feb 13 20:21:32.137547 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Feb 13 20:21:32.137569 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Feb 13 20:21:32.137590 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Feb 13 20:21:32.137611 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Feb 13 20:21:32.137632 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Feb 13 20:21:32.137653 systemd[1]: Stopped systemd-fsck-usr.service.
Feb 13 20:21:32.137687 kernel: fuse: init (API version 7.39)
Feb 13 20:21:32.137712 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Feb 13 20:21:32.137743 systemd[1]: Starting systemd-journald.service - Journal Service...
Feb 13 20:21:32.137770 kernel: loop: module loaded
Feb 13 20:21:32.137789 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Feb 13 20:21:32.137809 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Feb 13 20:21:32.137838 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Feb 13 20:21:32.137859 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Feb 13 20:21:32.137879 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Feb 13 20:21:32.137935 systemd[1]: verity-setup.service: Deactivated successfully.
Feb 13 20:21:32.143684 systemd[1]: Stopped verity-setup.service.
Feb 13 20:21:32.143732 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 13 20:21:32.143784 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Feb 13 20:21:32.143808 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Feb 13 20:21:32.143834 systemd[1]: Mounted media.mount - External Media Directory.
Feb 13 20:21:32.143855 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Feb 13 20:21:32.143885 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Feb 13 20:21:32.144015 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Feb 13 20:21:32.144043 kernel: ACPI: bus type drm_connector registered
Feb 13 20:21:32.144097 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Feb 13 20:21:32.144122 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Feb 13 20:21:32.144144 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Feb 13 20:21:32.144165 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Feb 13 20:21:32.144186 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Feb 13 20:21:32.144208 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Feb 13 20:21:32.144229 systemd[1]: modprobe@drm.service: Deactivated successfully.
Feb 13 20:21:32.144308 systemd-journald[1155]: Collecting audit messages is disabled.
Feb 13 20:21:32.144369 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Feb 13 20:21:32.144393 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Feb 13 20:21:32.144415 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Feb 13 20:21:32.144436 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Feb 13 20:21:32.144458 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Feb 13 20:21:32.144492 systemd[1]: modprobe@loop.service: Deactivated successfully.
Feb 13 20:21:32.144514 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Feb 13 20:21:32.144549 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Feb 13 20:21:32.144572 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Feb 13 20:21:32.144594 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Feb 13 20:21:32.144627 systemd-journald[1155]: Journal started
Feb 13 20:21:32.144672 systemd-journald[1155]: Runtime Journal (/run/log/journal/ffe85df70d00443e93da176fde014290) is 4.7M, max 37.9M, 33.2M free.
Feb 13 20:21:31.647348 systemd[1]: Queued start job for default target multi-user.target.
Feb 13 20:21:31.660284 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Feb 13 20:21:31.661139 systemd[1]: systemd-journald.service: Deactivated successfully.
Feb 13 20:21:32.153095 systemd[1]: Started systemd-journald.service - Journal Service.
Feb 13 20:21:32.151183 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Feb 13 20:21:32.169994 systemd[1]: Reached target network-pre.target - Preparation for Network.
Feb 13 20:21:32.180852 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Feb 13 20:21:32.189054 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Feb 13 20:21:32.190047 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Feb 13 20:21:32.190147 systemd[1]: Reached target local-fs.target - Local File Systems.
Feb 13 20:21:32.193421 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Feb 13 20:21:32.200841 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Feb 13 20:21:32.210020 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Feb 13 20:21:32.211562 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Feb 13 20:21:32.219133 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Feb 13 20:21:32.223171 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Feb 13 20:21:32.225429 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Feb 13 20:21:32.233991 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Feb 13 20:21:32.236038 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Feb 13 20:21:32.247954 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Feb 13 20:21:32.259176 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Feb 13 20:21:32.269230 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Feb 13 20:21:32.281312 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Feb 13 20:21:32.286768 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Feb 13 20:21:32.290405 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Feb 13 20:21:32.332166 systemd-journald[1155]: Time spent on flushing to /var/log/journal/ffe85df70d00443e93da176fde014290 is 76.231ms for 1140 entries.
Feb 13 20:21:32.332166 systemd-journald[1155]: System Journal (/var/log/journal/ffe85df70d00443e93da176fde014290) is 8M, max 584.8M, 576.8M free.
Feb 13 20:21:32.445490 systemd-journald[1155]: Received client request to flush runtime journal.
Feb 13 20:21:32.445651 kernel: loop0: detected capacity change from 0 to 147912
Feb 13 20:21:32.445685 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Feb 13 20:21:32.331087 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Feb 13 20:21:32.334401 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Feb 13 20:21:32.346179 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Feb 13 20:21:32.414419 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Feb 13 20:21:32.433346 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Feb 13 20:21:32.449666 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Feb 13 20:21:32.453702 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Feb 13 20:21:32.464021 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Feb 13 20:21:32.468489 systemd-tmpfiles[1198]: ACLs are not supported, ignoring.
Feb 13 20:21:32.468519 systemd-tmpfiles[1198]: ACLs are not supported, ignoring.
Feb 13 20:21:32.485980 kernel: loop1: detected capacity change from 0 to 8
Feb 13 20:21:32.488853 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Feb 13 20:21:32.502085 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Feb 13 20:21:32.505591 udevadm[1209]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Feb 13 20:21:32.541506 kernel: loop2: detected capacity change from 0 to 138176
Feb 13 20:21:32.583400 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Feb 13 20:21:32.594194 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Feb 13 20:21:32.601997 kernel: loop3: detected capacity change from 0 to 218376
Feb 13 20:21:32.650953 kernel: loop4: detected capacity change from 0 to 147912
Feb 13 20:21:32.664559 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Feb 13 20:21:32.681848 systemd-tmpfiles[1222]: ACLs are not supported, ignoring.
Feb 13 20:21:32.681879 systemd-tmpfiles[1222]: ACLs are not supported, ignoring.
Feb 13 20:21:32.695340 kernel: loop5: detected capacity change from 0 to 8
Feb 13 20:21:32.703063 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Feb 13 20:21:32.710108 kernel: loop6: detected capacity change from 0 to 138176
Feb 13 20:21:32.749939 kernel: loop7: detected capacity change from 0 to 218376
Feb 13 20:21:32.777475 (sd-merge)[1225]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-openstack'.
Feb 13 20:21:32.778547 (sd-merge)[1225]: Merged extensions into '/usr'.
Feb 13 20:21:32.792120 systemd[1]: Reload requested from client PID 1197 ('systemd-sysext') (unit systemd-sysext.service)...
Feb 13 20:21:32.792167 systemd[1]: Reloading...
Feb 13 20:21:32.934991 zram_generator::config[1253]: No configuration found.
Feb 13 20:21:33.013327 ldconfig[1192]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Feb 13 20:21:33.227151 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb 13 20:21:33.331630 systemd[1]: Reloading finished in 538 ms.
Feb 13 20:21:33.346219 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Feb 13 20:21:33.352120 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Feb 13 20:21:33.371241 systemd[1]: Starting ensure-sysext.service...
Feb 13 20:21:33.375114 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Feb 13 20:21:33.413966 systemd[1]: Reload requested from client PID 1310 ('systemctl') (unit ensure-sysext.service)...
Feb 13 20:21:33.414002 systemd[1]: Reloading...
Feb 13 20:21:33.424127 systemd-tmpfiles[1311]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Feb 13 20:21:33.424619 systemd-tmpfiles[1311]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Feb 13 20:21:33.428106 systemd-tmpfiles[1311]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Feb 13 20:21:33.428522 systemd-tmpfiles[1311]: ACLs are not supported, ignoring.
Feb 13 20:21:33.428650 systemd-tmpfiles[1311]: ACLs are not supported, ignoring.
Feb 13 20:21:33.439279 systemd-tmpfiles[1311]: Detected autofs mount point /boot during canonicalization of boot.
Feb 13 20:21:33.439297 systemd-tmpfiles[1311]: Skipping /boot
Feb 13 20:21:33.473389 systemd-tmpfiles[1311]: Detected autofs mount point /boot during canonicalization of boot.
Feb 13 20:21:33.473410 systemd-tmpfiles[1311]: Skipping /boot
Feb 13 20:21:33.526130 zram_generator::config[1340]: No configuration found.
Feb 13 20:21:33.737238 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb 13 20:21:33.840689 systemd[1]: Reloading finished in 426 ms.
Feb 13 20:21:33.860726 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Feb 13 20:21:33.876543 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Feb 13 20:21:33.891333 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Feb 13 20:21:33.899494 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Feb 13 20:21:33.905297 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Feb 13 20:21:33.917388 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Feb 13 20:21:33.922783 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Feb 13 20:21:33.932259 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Feb 13 20:21:33.940213 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 13 20:21:33.940528 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Feb 13 20:21:33.946625 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Feb 13 20:21:33.953312 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Feb 13 20:21:33.963235 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Feb 13 20:21:33.965052 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Feb 13 20:21:33.965563 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Feb 13 20:21:33.965887 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 13 20:21:33.983259 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Feb 13 20:21:33.987942 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 13 20:21:33.988253 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Feb 13 20:21:33.988544 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Feb 13 20:21:33.988717 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Feb 13 20:21:33.988856 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 13 20:21:34.000317 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 13 20:21:34.003139 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Feb 13 20:21:34.018480 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Feb 13 20:21:34.019518 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Feb 13 20:21:34.019694 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Feb 13 20:21:34.019999 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 13 20:21:34.032716 systemd[1]: Finished ensure-sysext.service.
Feb 13 20:21:34.034678 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Feb 13 20:21:34.035723 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Feb 13 20:21:34.037879 systemd-udevd[1403]: Using default interface naming scheme 'v255'.
Feb 13 20:21:34.048460 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Feb 13 20:21:34.052485 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Feb 13 20:21:34.053802 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Feb 13 20:21:34.055305 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Feb 13 20:21:34.055599 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Feb 13 20:21:34.057789 systemd[1]: modprobe@loop.service: Deactivated successfully.
Feb 13 20:21:34.058115 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Feb 13 20:21:34.068301 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Feb 13 20:21:34.068466 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Feb 13 20:21:34.077164 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Feb 13 20:21:34.094385 systemd[1]: modprobe@drm.service: Deactivated successfully.
Feb 13 20:21:34.096401 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Feb 13 20:21:34.101029 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Feb 13 20:21:34.102224 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Feb 13 20:21:34.119054 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Feb 13 20:21:34.129968 augenrules[1439]: No rules
Feb 13 20:21:34.130942 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Feb 13 20:21:34.143185 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Feb 13 20:21:34.145798 systemd[1]: audit-rules.service: Deactivated successfully.
Feb 13 20:21:34.146207 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Feb 13 20:21:34.148230 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Feb 13 20:21:34.322567 systemd-resolved[1402]: Positive Trust Anchors:
Feb 13 20:21:34.322591 systemd-resolved[1402]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Feb 13 20:21:34.322637 systemd-resolved[1402]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Feb 13 20:21:34.335134 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Feb 13 20:21:34.345806 systemd-resolved[1402]: Using system hostname 'srv-xiw8u.gb1.brightbox.com'.
Feb 13 20:21:34.358200 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Feb 13 20:21:34.361116 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Feb 13 20:21:34.364137 systemd-networkd[1449]: lo: Link UP
Feb 13 20:21:34.364150 systemd-networkd[1449]: lo: Gained carrier
Feb 13 20:21:34.365619 systemd-networkd[1449]: Enumeration completed
Feb 13 20:21:34.365743 systemd[1]: Started systemd-networkd.service - Network Configuration.
Feb 13 20:21:34.371172 systemd[1]: Reached target network.target - Network.
Feb 13 20:21:34.382198 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Feb 13 20:21:34.393124 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Feb 13 20:21:34.416632 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Feb 13 20:21:34.418890 systemd-networkd[1449]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 20:21:34.418951 systemd-networkd[1449]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Feb 13 20:21:34.419063 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Feb 13 20:21:34.422059 systemd-networkd[1449]: eth0: Link UP
Feb 13 20:21:34.422074 systemd-networkd[1449]: eth0: Gained carrier
Feb 13 20:21:34.422097 systemd-networkd[1449]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 20:21:34.423206 systemd[1]: Reached target time-set.target - System Time Set.
Feb 13 20:21:34.442006 systemd-networkd[1449]: eth0: DHCPv4 address 10.230.12.214/30, gateway 10.230.12.213 acquired from 10.230.12.213
Feb 13 20:21:34.444388 systemd-timesyncd[1422]: Network configuration changed, trying to establish connection.
Feb 13 20:21:34.488006 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (1451)
Feb 13 20:21:34.569967 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
Feb 13 20:21:34.588865 kernel: ACPI: button: Power Button [PWRF]
Feb 13 20:21:34.608955 kernel: mousedev: PS/2 mouse device common for all mice
Feb 13 20:21:34.617384 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Feb 13 20:21:34.626201 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Feb 13 20:21:34.649867 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Feb 13 20:21:34.669937 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Feb 13 20:21:34.678760 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Feb 13 20:21:34.679523 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Feb 13 20:21:34.695173 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4
Feb 13 20:21:34.792972 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 20:21:35.401659 systemd-resolved[1402]: Clock change detected. Flushing caches.
Feb 13 20:21:35.402357 systemd-timesyncd[1422]: Contacted time server 85.199.214.98:123 (0.flatcar.pool.ntp.org).
Feb 13 20:21:35.402733 systemd-timesyncd[1422]: Initial clock synchronization to Thu 2025-02-13 20:21:35.401573 UTC.
Feb 13 20:21:35.416457 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Feb 13 20:21:35.426809 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Feb 13 20:21:35.501371 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 20:21:35.520590 lvm[1491]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Feb 13 20:21:35.558692 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Feb 13 20:21:35.559994 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Feb 13 20:21:35.560820 systemd[1]: Reached target sysinit.target - System Initialization.
Feb 13 20:21:35.561894 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Feb 13 20:21:35.562808 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Feb 13 20:21:35.564001 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Feb 13 20:21:35.564892 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Feb 13 20:21:35.565717 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Feb 13 20:21:35.566464 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Feb 13 20:21:35.566533 systemd[1]: Reached target paths.target - Path Units.
Feb 13 20:21:35.567171 systemd[1]: Reached target timers.target - Timer Units.
Feb 13 20:21:35.569764 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Feb 13 20:21:35.572820 systemd[1]: Starting docker.socket - Docker Socket for the API...
Feb 13 20:21:35.578376 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Feb 13 20:21:35.579450 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Feb 13 20:21:35.580233 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Feb 13 20:21:35.593422 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Feb 13 20:21:35.594811 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Feb 13 20:21:35.597588 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Feb 13 20:21:35.599111 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Feb 13 20:21:35.600108 systemd[1]: Reached target sockets.target - Socket Units.
Feb 13 20:21:35.600814 systemd[1]: Reached target basic.target - Basic System.
Feb 13 20:21:35.607893 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Feb 13 20:21:35.607954 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Feb 13 20:21:35.623752 systemd[1]: Starting containerd.service - containerd container runtime...
Feb 13 20:21:35.628726 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Feb 13 20:21:35.632335 lvm[1496]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Feb 13 20:21:35.637385 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Feb 13 20:21:35.649662 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Feb 13 20:21:35.663375 jq[1500]: false
Feb 13 20:21:35.663797 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Feb 13 20:21:35.664710 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Feb 13 20:21:35.671702 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Feb 13 20:21:35.679738 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Feb 13 20:21:35.684868 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Feb 13 20:21:35.695730 systemd[1]: Starting systemd-logind.service - User Login Management...
Feb 13 20:21:35.699164 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Feb 13 20:21:35.700132 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Feb 13 20:21:35.705746 systemd[1]: Starting update-engine.service - Update Engine...
Feb 13 20:21:35.710695 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Feb 13 20:21:35.715331 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Feb 13 20:21:35.722166 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Feb 13 20:21:35.722854 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Feb 13 20:21:35.735954 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Feb 13 20:21:35.736378 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Feb 13 20:21:35.739103 dbus-daemon[1499]: [system] SELinux support is enabled
Feb 13 20:21:35.739387 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Feb 13 20:21:35.758589 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Feb 13 20:21:35.758652 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Feb 13 20:21:35.760658 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Feb 13 20:21:35.760692 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Feb 13 20:21:35.768391 (ntainerd)[1523]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Feb 13 20:21:35.771009 dbus-daemon[1499]: [system] Activating via systemd: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.2' (uid=244 pid=1449 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0")
Feb 13 20:21:35.781527 extend-filesystems[1503]: Found loop4
Feb 13 20:21:35.781527 extend-filesystems[1503]: Found loop5
Feb 13 20:21:35.781527 extend-filesystems[1503]: Found loop6
Feb 13 20:21:35.781527 extend-filesystems[1503]: Found loop7
Feb 13 20:21:35.781527 extend-filesystems[1503]: Found vda
Feb 13 20:21:35.781527 extend-filesystems[1503]: Found vda1
Feb 13 20:21:35.781527 extend-filesystems[1503]: Found vda2
Feb 13 20:21:35.781527 extend-filesystems[1503]: Found vda3
Feb 13 20:21:35.781527 extend-filesystems[1503]: Found usr
Feb 13 20:21:35.781527 extend-filesystems[1503]: Found vda4
Feb 13 20:21:35.781527 extend-filesystems[1503]: Found vda6
Feb 13 20:21:35.781527 extend-filesystems[1503]: Found vda7
Feb 13 20:21:35.781527 extend-filesystems[1503]: Found vda9
Feb 13 20:21:35.781527 extend-filesystems[1503]: Checking size of /dev/vda9
Feb 13 20:21:35.822189 jq[1510]: true
Feb 13 20:21:35.785840 systemd[1]: Starting systemd-hostnamed.service - Hostname Service...
Feb 13 20:21:35.799034 systemd[1]: motdgen.service: Deactivated successfully.
Feb 13 20:21:35.799734 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Feb 13 20:21:35.831000 update_engine[1508]: I20250213 20:21:35.827922 1508 main.cc:92] Flatcar Update Engine starting
Feb 13 20:21:35.842941 systemd[1]: Started update-engine.service - Update Engine.
Feb 13 20:21:35.865680 update_engine[1508]: I20250213 20:21:35.847792 1508 update_check_scheduler.cc:74] Next update check in 2m11s Feb 13 20:21:35.865766 extend-filesystems[1503]: Resized partition /dev/vda9 Feb 13 20:21:35.866796 extend-filesystems[1535]: resize2fs 1.47.1 (20-May-2024) Feb 13 20:21:35.871708 systemd[1]: Started locksmithd.service - Cluster reboot manager. Feb 13 20:21:35.879965 jq[1528]: true Feb 13 20:21:35.882830 kernel: EXT4-fs (vda9): resizing filesystem from 1617920 to 15121403 blocks Feb 13 20:21:36.015107 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (1451) Feb 13 20:21:36.075219 systemd-networkd[1449]: eth0: Gained IPv6LL Feb 13 20:21:36.090192 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Feb 13 20:21:36.093589 systemd[1]: Reached target network-online.target - Network is Online. Feb 13 20:21:36.103700 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 20:21:36.107359 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Feb 13 20:21:36.172348 systemd-logind[1507]: Watching system buttons on /dev/input/event2 (Power Button) Feb 13 20:21:36.172397 systemd-logind[1507]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Feb 13 20:21:36.184375 bash[1556]: Updated "/home/core/.ssh/authorized_keys" Feb 13 20:21:36.178659 systemd-logind[1507]: New seat seat0. Feb 13 20:21:36.186838 systemd[1]: Started systemd-logind.service - User Login Management. Feb 13 20:21:36.191729 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Feb 13 20:21:36.206749 systemd[1]: Starting sshkeys.service... 
Feb 13 20:21:36.250796 kernel: EXT4-fs (vda9): resized filesystem to 15121403 Feb 13 20:21:36.274586 extend-filesystems[1535]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Feb 13 20:21:36.274586 extend-filesystems[1535]: old_desc_blocks = 1, new_desc_blocks = 8 Feb 13 20:21:36.274586 extend-filesystems[1535]: The filesystem on /dev/vda9 is now 15121403 (4k) blocks long. Feb 13 20:21:36.295691 extend-filesystems[1503]: Resized filesystem in /dev/vda9 Feb 13 20:21:36.276058 systemd[1]: extend-filesystems.service: Deactivated successfully. Feb 13 20:21:36.276565 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Feb 13 20:21:36.307393 sshd_keygen[1530]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Feb 13 20:21:36.311193 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Feb 13 20:21:36.331874 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Feb 13 20:21:36.338515 containerd[1523]: time="2025-02-13T20:21:36.336003654Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 Feb 13 20:21:36.345205 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Feb 13 20:21:36.372095 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Feb 13 20:21:36.375270 dbus-daemon[1499]: [system] Successfully activated service 'org.freedesktop.hostname1' Feb 13 20:21:36.382655 dbus-daemon[1499]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.7' (uid=0 pid=1526 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Feb 13 20:21:36.392850 locksmithd[1536]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Feb 13 20:21:36.395439 systemd[1]: Starting polkit.service - Authorization Manager... 
Feb 13 20:21:36.434291 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Feb 13 20:21:36.440953 polkitd[1588]: Started polkitd version 121 Feb 13 20:21:36.446115 systemd[1]: Starting issuegen.service - Generate /run/issue... Feb 13 20:21:36.456708 polkitd[1588]: Loading rules from directory /etc/polkit-1/rules.d Feb 13 20:21:36.457068 polkitd[1588]: Loading rules from directory /usr/share/polkit-1/rules.d Feb 13 20:21:36.459522 polkitd[1588]: Finished loading, compiling and executing 2 rules Feb 13 20:21:36.461132 dbus-daemon[1499]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Feb 13 20:21:36.462692 systemd[1]: Started polkit.service - Authorization Manager. Feb 13 20:21:36.465412 polkitd[1588]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Feb 13 20:21:36.473842 containerd[1523]: time="2025-02-13T20:21:36.472756599Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Feb 13 20:21:36.475660 systemd[1]: issuegen.service: Deactivated successfully. Feb 13 20:21:36.476665 systemd[1]: Finished issuegen.service - Generate /run/issue. Feb 13 20:21:36.483871 containerd[1523]: time="2025-02-13T20:21:36.483792689Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.74-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Feb 13 20:21:36.483871 containerd[1523]: time="2025-02-13T20:21:36.483839507Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Feb 13 20:21:36.484788 containerd[1523]: time="2025-02-13T20:21:36.483886728Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." 
type=io.containerd.internal.v1 Feb 13 20:21:36.484788 containerd[1523]: time="2025-02-13T20:21:36.484305456Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Feb 13 20:21:36.484788 containerd[1523]: time="2025-02-13T20:21:36.484342881Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Feb 13 20:21:36.484788 containerd[1523]: time="2025-02-13T20:21:36.484529316Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 20:21:36.484788 containerd[1523]: time="2025-02-13T20:21:36.484585587Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Feb 13 20:21:36.486427 containerd[1523]: time="2025-02-13T20:21:36.486118387Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 20:21:36.486427 containerd[1523]: time="2025-02-13T20:21:36.486186256Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Feb 13 20:21:36.486427 containerd[1523]: time="2025-02-13T20:21:36.486214880Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 20:21:36.486427 containerd[1523]: time="2025-02-13T20:21:36.486233780Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Feb 13 20:21:36.488737 containerd[1523]: time="2025-02-13T20:21:36.487076774Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." 
type=io.containerd.snapshotter.v1 Feb 13 20:21:36.488737 containerd[1523]: time="2025-02-13T20:21:36.487702003Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Feb 13 20:21:36.488737 containerd[1523]: time="2025-02-13T20:21:36.488659377Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 20:21:36.488737 containerd[1523]: time="2025-02-13T20:21:36.488688665Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Feb 13 20:21:36.489608 containerd[1523]: time="2025-02-13T20:21:36.488926205Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Feb 13 20:21:36.489608 containerd[1523]: time="2025-02-13T20:21:36.489057987Z" level=info msg="metadata content store policy set" policy=shared Feb 13 20:21:36.491792 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Feb 13 20:21:36.499306 containerd[1523]: time="2025-02-13T20:21:36.499256074Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Feb 13 20:21:36.499423 containerd[1523]: time="2025-02-13T20:21:36.499377578Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Feb 13 20:21:36.499423 containerd[1523]: time="2025-02-13T20:21:36.499407311Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Feb 13 20:21:36.499588 containerd[1523]: time="2025-02-13T20:21:36.499431771Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." 
type=io.containerd.streaming.v1 Feb 13 20:21:36.499588 containerd[1523]: time="2025-02-13T20:21:36.499456866Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Feb 13 20:21:36.501610 containerd[1523]: time="2025-02-13T20:21:36.499722428Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Feb 13 20:21:36.501610 containerd[1523]: time="2025-02-13T20:21:36.500053068Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Feb 13 20:21:36.501610 containerd[1523]: time="2025-02-13T20:21:36.500339386Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Feb 13 20:21:36.501610 containerd[1523]: time="2025-02-13T20:21:36.500374733Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Feb 13 20:21:36.501610 containerd[1523]: time="2025-02-13T20:21:36.500404903Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Feb 13 20:21:36.501610 containerd[1523]: time="2025-02-13T20:21:36.500429417Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Feb 13 20:21:36.501610 containerd[1523]: time="2025-02-13T20:21:36.500462227Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Feb 13 20:21:36.501610 containerd[1523]: time="2025-02-13T20:21:36.500482277Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Feb 13 20:21:36.501610 containerd[1523]: time="2025-02-13T20:21:36.500556701Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." 
type=io.containerd.service.v1 Feb 13 20:21:36.501610 containerd[1523]: time="2025-02-13T20:21:36.500586861Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Feb 13 20:21:36.501610 containerd[1523]: time="2025-02-13T20:21:36.500620009Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Feb 13 20:21:36.501610 containerd[1523]: time="2025-02-13T20:21:36.500643134Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Feb 13 20:21:36.501610 containerd[1523]: time="2025-02-13T20:21:36.500661537Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Feb 13 20:21:36.501610 containerd[1523]: time="2025-02-13T20:21:36.500711633Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Feb 13 20:21:36.502840 containerd[1523]: time="2025-02-13T20:21:36.500738411Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Feb 13 20:21:36.502840 containerd[1523]: time="2025-02-13T20:21:36.500758588Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Feb 13 20:21:36.502840 containerd[1523]: time="2025-02-13T20:21:36.500779885Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Feb 13 20:21:36.502840 containerd[1523]: time="2025-02-13T20:21:36.500800200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Feb 13 20:21:36.502840 containerd[1523]: time="2025-02-13T20:21:36.500822321Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Feb 13 20:21:36.502840 containerd[1523]: time="2025-02-13T20:21:36.500843459Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." 
type=io.containerd.grpc.v1 Feb 13 20:21:36.502840 containerd[1523]: time="2025-02-13T20:21:36.500872214Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Feb 13 20:21:36.502840 containerd[1523]: time="2025-02-13T20:21:36.500894354Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Feb 13 20:21:36.502840 containerd[1523]: time="2025-02-13T20:21:36.500917583Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Feb 13 20:21:36.502840 containerd[1523]: time="2025-02-13T20:21:36.500937170Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Feb 13 20:21:36.502840 containerd[1523]: time="2025-02-13T20:21:36.500956704Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Feb 13 20:21:36.502840 containerd[1523]: time="2025-02-13T20:21:36.500975749Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Feb 13 20:21:36.502840 containerd[1523]: time="2025-02-13T20:21:36.500998269Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Feb 13 20:21:36.502840 containerd[1523]: time="2025-02-13T20:21:36.501028986Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Feb 13 20:21:36.502840 containerd[1523]: time="2025-02-13T20:21:36.501050779Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Feb 13 20:21:36.501871 systemd-hostnamed[1526]: Hostname set to (static) Feb 13 20:21:36.511778 containerd[1523]: time="2025-02-13T20:21:36.501069280Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." 
type=io.containerd.internal.v1 Feb 13 20:21:36.511778 containerd[1523]: time="2025-02-13T20:21:36.501144456Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Feb 13 20:21:36.511778 containerd[1523]: time="2025-02-13T20:21:36.501220018Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Feb 13 20:21:36.511778 containerd[1523]: time="2025-02-13T20:21:36.501243949Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Feb 13 20:21:36.511778 containerd[1523]: time="2025-02-13T20:21:36.501265299Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Feb 13 20:21:36.511778 containerd[1523]: time="2025-02-13T20:21:36.501282122Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Feb 13 20:21:36.511778 containerd[1523]: time="2025-02-13T20:21:36.501326762Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Feb 13 20:21:36.511778 containerd[1523]: time="2025-02-13T20:21:36.501368664Z" level=info msg="NRI interface is disabled by configuration." Feb 13 20:21:36.511778 containerd[1523]: time="2025-02-13T20:21:36.501389644Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Feb 13 20:21:36.510757 systemd[1]: Started containerd.service - containerd container runtime. 
Feb 13 20:21:36.512405 containerd[1523]: time="2025-02-13T20:21:36.502432860Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: 
TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Feb 13 20:21:36.512405 containerd[1523]: time="2025-02-13T20:21:36.503966016Z" level=info msg="Connect containerd service" Feb 13 20:21:36.512405 containerd[1523]: time="2025-02-13T20:21:36.504237175Z" level=info msg="using legacy CRI server" Feb 13 20:21:36.512405 containerd[1523]: time="2025-02-13T20:21:36.505229313Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Feb 13 20:21:36.512405 containerd[1523]: time="2025-02-13T20:21:36.505943585Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Feb 13 20:21:36.512405 containerd[1523]: time="2025-02-13T20:21:36.508370569Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 13 20:21:36.512405 containerd[1523]: time="2025-02-13T20:21:36.509096358Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Feb 13 20:21:36.512405 containerd[1523]: time="2025-02-13T20:21:36.509193946Z" level=info msg=serving... 
address=/run/containerd/containerd.sock Feb 13 20:21:36.512405 containerd[1523]: time="2025-02-13T20:21:36.509289074Z" level=info msg="Start subscribing containerd event" Feb 13 20:21:36.512405 containerd[1523]: time="2025-02-13T20:21:36.509347483Z" level=info msg="Start recovering state" Feb 13 20:21:36.512405 containerd[1523]: time="2025-02-13T20:21:36.509500427Z" level=info msg="Start event monitor" Feb 13 20:21:36.512405 containerd[1523]: time="2025-02-13T20:21:36.509562485Z" level=info msg="Start snapshots syncer" Feb 13 20:21:36.512405 containerd[1523]: time="2025-02-13T20:21:36.509585005Z" level=info msg="Start cni network conf syncer for default" Feb 13 20:21:36.512405 containerd[1523]: time="2025-02-13T20:21:36.509598883Z" level=info msg="Start streaming server" Feb 13 20:21:36.512405 containerd[1523]: time="2025-02-13T20:21:36.509703874Z" level=info msg="containerd successfully booted in 0.190655s" Feb 13 20:21:36.547332 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Feb 13 20:21:36.556037 systemd[1]: Started getty@tty1.service - Getty on tty1. Feb 13 20:21:36.560293 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Feb 13 20:21:36.561403 systemd[1]: Reached target getty.target - Login Prompts. Feb 13 20:21:37.301829 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 20:21:37.314294 (kubelet)[1616]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 20:21:37.582690 systemd-networkd[1449]: eth0: Ignoring DHCPv6 address 2a02:1348:179:8335:24:19ff:fee6:cd6/128 (valid for 59min 59s, preferred for 59min 59s) which conflicts with 2a02:1348:179:8335:24:19ff:fee6:cd6/64 assigned by NDisc. Feb 13 20:21:37.582707 systemd-networkd[1449]: eth0: Hint: use IPv6Token= setting to change the address generated by NDisc or set UseAutonomousPrefix=no. 
Feb 13 20:21:37.917328 kubelet[1616]: E0213 20:21:37.916637 1616 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 20:21:37.919085 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 20:21:37.919377 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 20:21:37.920281 systemd[1]: kubelet.service: Consumed 1.055s CPU time, 254.9M memory peak. Feb 13 20:21:40.196044 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Feb 13 20:21:40.201924 systemd[1]: Started sshd@0-10.230.12.214:22-139.178.89.65:58352.service - OpenSSH per-connection server daemon (139.178.89.65:58352). Feb 13 20:21:41.116623 sshd[1628]: Accepted publickey for core from 139.178.89.65 port 58352 ssh2: RSA SHA256:1d/NPWzJh4p1csN6rw9jx6l57+TZuIaUuHeQZhkXldk Feb 13 20:21:41.119922 sshd-session[1628]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:21:41.140404 systemd-logind[1507]: New session 1 of user core. Feb 13 20:21:41.143608 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Feb 13 20:21:41.157943 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Feb 13 20:21:41.195765 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Feb 13 20:21:41.205026 systemd[1]: Starting user@500.service - User Manager for UID 500... Feb 13 20:21:41.219400 (systemd)[1632]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Feb 13 20:21:41.224362 systemd-logind[1507]: New session c1 of user core. Feb 13 20:21:41.420527 systemd[1632]: Queued start job for default target default.target. 
Feb 13 20:21:41.434797 systemd[1632]: Created slice app.slice - User Application Slice. Feb 13 20:21:41.434847 systemd[1632]: Reached target paths.target - Paths. Feb 13 20:21:41.434927 systemd[1632]: Reached target timers.target - Timers. Feb 13 20:21:41.437213 systemd[1632]: Starting dbus.socket - D-Bus User Message Bus Socket... Feb 13 20:21:41.452973 systemd[1632]: Listening on dbus.socket - D-Bus User Message Bus Socket. Feb 13 20:21:41.454231 systemd[1632]: Reached target sockets.target - Sockets. Feb 13 20:21:41.454455 systemd[1632]: Reached target basic.target - Basic System. Feb 13 20:21:41.454696 systemd[1632]: Reached target default.target - Main User Target. Feb 13 20:21:41.454752 systemd[1]: Started user@500.service - User Manager for UID 500. Feb 13 20:21:41.454963 systemd[1632]: Startup finished in 218ms. Feb 13 20:21:41.461987 systemd[1]: Started session-1.scope - Session 1 of User core. Feb 13 20:21:41.692820 login[1609]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Feb 13 20:21:41.698768 login[1608]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Feb 13 20:21:41.702263 systemd-logind[1507]: New session 2 of user core. Feb 13 20:21:41.708757 systemd[1]: Started session-2.scope - Session 2 of User core. Feb 13 20:21:41.713021 systemd-logind[1507]: New session 3 of user core. Feb 13 20:21:41.723809 systemd[1]: Started session-3.scope - Session 3 of User core. Feb 13 20:21:42.099058 systemd[1]: Started sshd@1-10.230.12.214:22-139.178.89.65:58368.service - OpenSSH per-connection server daemon (139.178.89.65:58368). 
Feb 13 20:21:42.756040 coreos-metadata[1498]: Feb 13 20:21:42.755 WARN failed to locate config-drive, using the metadata service API instead Feb 13 20:21:42.782456 coreos-metadata[1498]: Feb 13 20:21:42.782 INFO Fetching http://169.254.169.254/openstack/2012-08-10/meta_data.json: Attempt #1 Feb 13 20:21:42.788668 coreos-metadata[1498]: Feb 13 20:21:42.788 INFO Fetch failed with 404: resource not found Feb 13 20:21:42.788668 coreos-metadata[1498]: Feb 13 20:21:42.788 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1 Feb 13 20:21:42.789903 coreos-metadata[1498]: Feb 13 20:21:42.789 INFO Fetch successful Feb 13 20:21:42.789903 coreos-metadata[1498]: Feb 13 20:21:42.789 INFO Fetching http://169.254.169.254/latest/meta-data/instance-id: Attempt #1 Feb 13 20:21:42.802133 coreos-metadata[1498]: Feb 13 20:21:42.802 INFO Fetch successful Feb 13 20:21:42.802133 coreos-metadata[1498]: Feb 13 20:21:42.802 INFO Fetching http://169.254.169.254/latest/meta-data/instance-type: Attempt #1 Feb 13 20:21:42.816918 coreos-metadata[1498]: Feb 13 20:21:42.816 INFO Fetch successful Feb 13 20:21:42.816918 coreos-metadata[1498]: Feb 13 20:21:42.816 INFO Fetching http://169.254.169.254/latest/meta-data/local-ipv4: Attempt #1 Feb 13 20:21:42.832801 coreos-metadata[1498]: Feb 13 20:21:42.832 INFO Fetch successful Feb 13 20:21:42.832801 coreos-metadata[1498]: Feb 13 20:21:42.832 INFO Fetching http://169.254.169.254/latest/meta-data/public-ipv4: Attempt #1 Feb 13 20:21:42.854555 coreos-metadata[1498]: Feb 13 20:21:42.853 INFO Fetch successful Feb 13 20:21:42.876419 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Feb 13 20:21:42.879083 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. 
Feb 13 20:21:42.983951 sshd[1669]: Accepted publickey for core from 139.178.89.65 port 58368 ssh2: RSA SHA256:1d/NPWzJh4p1csN6rw9jx6l57+TZuIaUuHeQZhkXldk Feb 13 20:21:42.986088 sshd-session[1669]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:21:42.994013 systemd-logind[1507]: New session 4 of user core. Feb 13 20:21:43.002873 systemd[1]: Started session-4.scope - Session 4 of User core. Feb 13 20:21:43.462056 coreos-metadata[1585]: Feb 13 20:21:43.461 WARN failed to locate config-drive, using the metadata service API instead Feb 13 20:21:43.484460 coreos-metadata[1585]: Feb 13 20:21:43.484 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys: Attempt #1 Feb 13 20:21:43.509261 coreos-metadata[1585]: Feb 13 20:21:43.509 INFO Fetch successful Feb 13 20:21:43.509534 coreos-metadata[1585]: Feb 13 20:21:43.509 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key: Attempt #1 Feb 13 20:21:43.541683 coreos-metadata[1585]: Feb 13 20:21:43.541 INFO Fetch successful Feb 13 20:21:43.543676 unknown[1585]: wrote ssh authorized keys file for user: core Feb 13 20:21:43.567124 update-ssh-keys[1683]: Updated "/home/core/.ssh/authorized_keys" Feb 13 20:21:43.567906 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Feb 13 20:21:43.570371 systemd[1]: Finished sshkeys.service. Feb 13 20:21:43.573704 systemd[1]: Reached target multi-user.target - Multi-User System. Feb 13 20:21:43.574589 systemd[1]: Startup finished in 1.428s (kernel) + 15.962s (initrd) + 12.413s (userspace) = 29.804s. Feb 13 20:21:43.598557 sshd[1678]: Connection closed by 139.178.89.65 port 58368 Feb 13 20:21:43.599562 sshd-session[1669]: pam_unix(sshd:session): session closed for user core Feb 13 20:21:43.603852 systemd[1]: sshd@1-10.230.12.214:22-139.178.89.65:58368.service: Deactivated successfully. Feb 13 20:21:43.606597 systemd[1]: session-4.scope: Deactivated successfully. 
Feb 13 20:21:43.608825 systemd-logind[1507]: Session 4 logged out. Waiting for processes to exit. Feb 13 20:21:43.610753 systemd-logind[1507]: Removed session 4. Feb 13 20:21:43.764920 systemd[1]: Started sshd@2-10.230.12.214:22-139.178.89.65:58376.service - OpenSSH per-connection server daemon (139.178.89.65:58376). Feb 13 20:21:44.650649 sshd[1690]: Accepted publickey for core from 139.178.89.65 port 58376 ssh2: RSA SHA256:1d/NPWzJh4p1csN6rw9jx6l57+TZuIaUuHeQZhkXldk Feb 13 20:21:44.652879 sshd-session[1690]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:21:44.661181 systemd-logind[1507]: New session 5 of user core. Feb 13 20:21:44.667728 systemd[1]: Started session-5.scope - Session 5 of User core. Feb 13 20:21:45.266345 sshd[1692]: Connection closed by 139.178.89.65 port 58376 Feb 13 20:21:45.267347 sshd-session[1690]: pam_unix(sshd:session): session closed for user core Feb 13 20:21:45.273454 systemd-logind[1507]: Session 5 logged out. Waiting for processes to exit. Feb 13 20:21:45.274444 systemd[1]: sshd@2-10.230.12.214:22-139.178.89.65:58376.service: Deactivated successfully. Feb 13 20:21:45.277086 systemd[1]: session-5.scope: Deactivated successfully. Feb 13 20:21:45.278777 systemd-logind[1507]: Removed session 5. Feb 13 20:21:47.999678 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Feb 13 20:21:48.010793 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 20:21:48.195712 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Feb 13 20:21:48.197330 (kubelet)[1705]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 20:21:48.280443 kubelet[1705]: E0213 20:21:48.280105 1705 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 20:21:48.285715 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 20:21:48.285993 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 20:21:48.286924 systemd[1]: kubelet.service: Consumed 235ms CPU time, 103.4M memory peak. Feb 13 20:21:55.428940 systemd[1]: Started sshd@3-10.230.12.214:22-139.178.89.65:33428.service - OpenSSH per-connection server daemon (139.178.89.65:33428). Feb 13 20:21:56.323319 sshd[1713]: Accepted publickey for core from 139.178.89.65 port 33428 ssh2: RSA SHA256:1d/NPWzJh4p1csN6rw9jx6l57+TZuIaUuHeQZhkXldk Feb 13 20:21:56.325599 sshd-session[1713]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 20:21:56.336197 systemd-logind[1507]: New session 6 of user core. Feb 13 20:21:56.340811 systemd[1]: Started session-6.scope - Session 6 of User core. Feb 13 20:21:56.942726 sshd[1715]: Connection closed by 139.178.89.65 port 33428 Feb 13 20:21:56.944549 sshd-session[1713]: pam_unix(sshd:session): session closed for user core Feb 13 20:21:56.950342 systemd[1]: sshd@3-10.230.12.214:22-139.178.89.65:33428.service: Deactivated successfully. Feb 13 20:21:56.952883 systemd[1]: session-6.scope: Deactivated successfully. Feb 13 20:21:56.954030 systemd-logind[1507]: Session 6 logged out. Waiting for processes to exit. Feb 13 20:21:56.955889 systemd-logind[1507]: Removed session 6. 
Feb 13 20:21:57.108926 systemd[1]: Started sshd@4-10.230.12.214:22-139.178.89.65:33444.service - OpenSSH per-connection server daemon (139.178.89.65:33444).
Feb 13 20:21:58.006138 sshd[1721]: Accepted publickey for core from 139.178.89.65 port 33444 ssh2: RSA SHA256:1d/NPWzJh4p1csN6rw9jx6l57+TZuIaUuHeQZhkXldk
Feb 13 20:21:58.008467 sshd-session[1721]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:21:58.018641 systemd-logind[1507]: New session 7 of user core.
Feb 13 20:21:58.021798 systemd[1]: Started session-7.scope - Session 7 of User core.
Feb 13 20:21:58.499620 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Feb 13 20:21:58.508824 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Feb 13 20:21:58.623426 sshd[1723]: Connection closed by 139.178.89.65 port 33444
Feb 13 20:21:58.624451 sshd-session[1721]: pam_unix(sshd:session): session closed for user core
Feb 13 20:21:58.628919 systemd[1]: sshd@4-10.230.12.214:22-139.178.89.65:33444.service: Deactivated successfully.
Feb 13 20:21:58.632730 systemd[1]: session-7.scope: Deactivated successfully.
Feb 13 20:21:58.635726 systemd-logind[1507]: Session 7 logged out. Waiting for processes to exit.
Feb 13 20:21:58.643571 systemd-logind[1507]: Removed session 7.
Feb 13 20:21:58.662610 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Feb 13 20:21:58.677227 (kubelet)[1736]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Feb 13 20:21:58.759581 kubelet[1736]: E0213 20:21:58.757245 1736 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Feb 13 20:21:58.760061 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Feb 13 20:21:58.760317 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Feb 13 20:21:58.760985 systemd[1]: kubelet.service: Consumed 205ms CPU time, 103.2M memory peak.
Feb 13 20:21:58.785979 systemd[1]: Started sshd@5-10.230.12.214:22-139.178.89.65:33458.service - OpenSSH per-connection server daemon (139.178.89.65:33458).
Feb 13 20:21:59.675104 sshd[1744]: Accepted publickey for core from 139.178.89.65 port 33458 ssh2: RSA SHA256:1d/NPWzJh4p1csN6rw9jx6l57+TZuIaUuHeQZhkXldk
Feb 13 20:21:59.677008 sshd-session[1744]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:21:59.684732 systemd-logind[1507]: New session 8 of user core.
Feb 13 20:21:59.695722 systemd[1]: Started session-8.scope - Session 8 of User core.
Feb 13 20:22:00.291945 sshd[1746]: Connection closed by 139.178.89.65 port 33458
Feb 13 20:22:00.292861 sshd-session[1744]: pam_unix(sshd:session): session closed for user core
Feb 13 20:22:00.296723 systemd[1]: sshd@5-10.230.12.214:22-139.178.89.65:33458.service: Deactivated successfully.
Feb 13 20:22:00.299120 systemd[1]: session-8.scope: Deactivated successfully.
Feb 13 20:22:00.301273 systemd-logind[1507]: Session 8 logged out. Waiting for processes to exit.
Feb 13 20:22:00.302858 systemd-logind[1507]: Removed session 8.
Feb 13 20:22:00.451285 systemd[1]: Started sshd@6-10.230.12.214:22-139.178.89.65:33468.service - OpenSSH per-connection server daemon (139.178.89.65:33468).
Feb 13 20:22:01.351513 sshd[1752]: Accepted publickey for core from 139.178.89.65 port 33468 ssh2: RSA SHA256:1d/NPWzJh4p1csN6rw9jx6l57+TZuIaUuHeQZhkXldk
Feb 13 20:22:01.353586 sshd-session[1752]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:22:01.362385 systemd-logind[1507]: New session 9 of user core.
Feb 13 20:22:01.369752 systemd[1]: Started session-9.scope - Session 9 of User core.
Feb 13 20:22:01.858090 sudo[1755]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Feb 13 20:22:01.858606 sudo[1755]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Feb 13 20:22:01.873278 sudo[1755]: pam_unix(sudo:session): session closed for user root
Feb 13 20:22:02.016593 sshd[1754]: Connection closed by 139.178.89.65 port 33468
Feb 13 20:22:02.017713 sshd-session[1752]: pam_unix(sshd:session): session closed for user core
Feb 13 20:22:02.022610 systemd[1]: sshd@6-10.230.12.214:22-139.178.89.65:33468.service: Deactivated successfully.
Feb 13 20:22:02.025287 systemd[1]: session-9.scope: Deactivated successfully.
Feb 13 20:22:02.027364 systemd-logind[1507]: Session 9 logged out. Waiting for processes to exit.
Feb 13 20:22:02.028828 systemd-logind[1507]: Removed session 9.
Feb 13 20:22:02.186885 systemd[1]: Started sshd@7-10.230.12.214:22-139.178.89.65:33482.service - OpenSSH per-connection server daemon (139.178.89.65:33482).
Feb 13 20:22:03.079522 sshd[1761]: Accepted publickey for core from 139.178.89.65 port 33482 ssh2: RSA SHA256:1d/NPWzJh4p1csN6rw9jx6l57+TZuIaUuHeQZhkXldk
Feb 13 20:22:03.081440 sshd-session[1761]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:22:03.088949 systemd-logind[1507]: New session 10 of user core.
Feb 13 20:22:03.102707 systemd[1]: Started session-10.scope - Session 10 of User core.
Feb 13 20:22:03.557883 sudo[1765]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Feb 13 20:22:03.558338 sudo[1765]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Feb 13 20:22:03.563522 sudo[1765]: pam_unix(sudo:session): session closed for user root
Feb 13 20:22:03.571547 sudo[1764]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules
Feb 13 20:22:03.572382 sudo[1764]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Feb 13 20:22:03.592036 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Feb 13 20:22:03.630735 augenrules[1787]: No rules
Feb 13 20:22:03.631879 systemd[1]: audit-rules.service: Deactivated successfully.
Feb 13 20:22:03.632243 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Feb 13 20:22:03.633845 sudo[1764]: pam_unix(sudo:session): session closed for user root
Feb 13 20:22:03.777946 sshd[1763]: Connection closed by 139.178.89.65 port 33482
Feb 13 20:22:03.778960 sshd-session[1761]: pam_unix(sshd:session): session closed for user core
Feb 13 20:22:03.783622 systemd[1]: sshd@7-10.230.12.214:22-139.178.89.65:33482.service: Deactivated successfully.
Feb 13 20:22:03.787011 systemd[1]: session-10.scope: Deactivated successfully.
Feb 13 20:22:03.789739 systemd-logind[1507]: Session 10 logged out. Waiting for processes to exit.
Feb 13 20:22:03.791640 systemd-logind[1507]: Removed session 10.
Feb 13 20:22:03.943938 systemd[1]: Started sshd@8-10.230.12.214:22-139.178.89.65:33488.service - OpenSSH per-connection server daemon (139.178.89.65:33488).
Feb 13 20:22:04.839745 sshd[1796]: Accepted publickey for core from 139.178.89.65 port 33488 ssh2: RSA SHA256:1d/NPWzJh4p1csN6rw9jx6l57+TZuIaUuHeQZhkXldk
Feb 13 20:22:04.841937 sshd-session[1796]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:22:04.849240 systemd-logind[1507]: New session 11 of user core.
Feb 13 20:22:04.855746 systemd[1]: Started session-11.scope - Session 11 of User core.
Feb 13 20:22:05.319495 sudo[1799]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Feb 13 20:22:05.320024 sudo[1799]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Feb 13 20:22:06.024373 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Feb 13 20:22:06.024785 systemd[1]: kubelet.service: Consumed 205ms CPU time, 103.2M memory peak.
Feb 13 20:22:06.035840 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Feb 13 20:22:06.071690 systemd[1]: Reload requested from client PID 1832 ('systemctl') (unit session-11.scope)...
Feb 13 20:22:06.071748 systemd[1]: Reloading...
Feb 13 20:22:06.240554 zram_generator::config[1881]: No configuration found.
Feb 13 20:22:06.421719 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb 13 20:22:06.576133 systemd[1]: Reloading finished in 503 ms.
Feb 13 20:22:06.645279 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Feb 13 20:22:06.654924 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Feb 13 20:22:06.656159 systemd[1]: kubelet.service: Deactivated successfully.
Feb 13 20:22:06.656608 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Feb 13 20:22:06.656678 systemd[1]: kubelet.service: Consumed 121ms CPU time, 91.6M memory peak.
Feb 13 20:22:06.659112 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Feb 13 20:22:06.802702 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Feb 13 20:22:06.814193 (kubelet)[1947]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Feb 13 20:22:06.888517 kubelet[1947]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 13 20:22:06.888517 kubelet[1947]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Feb 13 20:22:06.888517 kubelet[1947]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 13 20:22:06.888517 kubelet[1947]: I0213 20:22:06.887707 1947 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Feb 13 20:22:07.606512 kubelet[1947]: I0213 20:22:07.606436 1947 server.go:520] "Kubelet version" kubeletVersion="v1.32.0"
Feb 13 20:22:07.608515 kubelet[1947]: I0213 20:22:07.606739 1947 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Feb 13 20:22:07.608515 kubelet[1947]: I0213 20:22:07.607153 1947 server.go:954] "Client rotation is on, will bootstrap in background"
Feb 13 20:22:07.628154 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Feb 13 20:22:07.640068 kubelet[1947]: I0213 20:22:07.640028 1947 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Feb 13 20:22:07.666286 kubelet[1947]: E0213 20:22:07.666213 1947 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Feb 13 20:22:07.666286 kubelet[1947]: I0213 20:22:07.666272 1947 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Feb 13 20:22:07.672975 kubelet[1947]: I0213 20:22:07.672936 1947 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Feb 13 20:22:07.673477 kubelet[1947]: I0213 20:22:07.673417 1947 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Feb 13 20:22:07.673776 kubelet[1947]: I0213 20:22:07.673474 1947 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"10.230.12.214","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Feb 13 20:22:07.674023 kubelet[1947]: I0213 20:22:07.673797 1947 topology_manager.go:138] "Creating topology manager with none policy"
Feb 13 20:22:07.674023 kubelet[1947]: I0213 20:22:07.673814 1947 container_manager_linux.go:304] "Creating device plugin manager"
Feb 13 20:22:07.674137 kubelet[1947]: I0213 20:22:07.674046 1947 state_mem.go:36] "Initialized new in-memory state store"
Feb 13 20:22:07.678988 kubelet[1947]: I0213 20:22:07.678951 1947 kubelet.go:446] "Attempting to sync node with API server"
Feb 13 20:22:07.678988 kubelet[1947]: I0213 20:22:07.678983 1947 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests"
Feb 13 20:22:07.679121 kubelet[1947]: I0213 20:22:07.679023 1947 kubelet.go:352] "Adding apiserver pod source"
Feb 13 20:22:07.679121 kubelet[1947]: I0213 20:22:07.679044 1947 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Feb 13 20:22:07.680338 kubelet[1947]: E0213 20:22:07.679681 1947 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 20:22:07.680338 kubelet[1947]: E0213 20:22:07.680038 1947 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 20:22:07.682295 kubelet[1947]: I0213 20:22:07.682218 1947 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1"
Feb 13 20:22:07.684052 kubelet[1947]: I0213 20:22:07.683091 1947 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Feb 13 20:22:07.684052 kubelet[1947]: W0213 20:22:07.683200 1947 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Feb 13 20:22:07.686125 kubelet[1947]: I0213 20:22:07.686094 1947 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Feb 13 20:22:07.686273 kubelet[1947]: I0213 20:22:07.686254 1947 server.go:1287] "Started kubelet"
Feb 13 20:22:07.689698 kubelet[1947]: I0213 20:22:07.689079 1947 server.go:169] "Starting to listen" address="0.0.0.0" port=10250
Feb 13 20:22:07.691049 kubelet[1947]: I0213 20:22:07.690703 1947 server.go:490] "Adding debug handlers to kubelet server"
Feb 13 20:22:07.691297 kubelet[1947]: I0213 20:22:07.691201 1947 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Feb 13 20:22:07.691972 kubelet[1947]: I0213 20:22:07.691946 1947 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Feb 13 20:22:07.694189 kubelet[1947]: I0213 20:22:07.694164 1947 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Feb 13 20:22:07.695614 kubelet[1947]: I0213 20:22:07.695589 1947 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Feb 13 20:22:07.703402 kubelet[1947]: E0213 20:22:07.701902 1947 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{10.230.12.214.1823de24174c7b26 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:10.230.12.214,UID:10.230.12.214,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:10.230.12.214,},FirstTimestamp:2025-02-13 20:22:07.686220582 +0000 UTC m=+0.867034231,LastTimestamp:2025-02-13 20:22:07.686220582 +0000 UTC m=+0.867034231,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:10.230.12.214,}"
Feb 13 20:22:07.703839 kubelet[1947]: W0213 20:22:07.703806 1947 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes "10.230.12.214" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope
Feb 13 20:22:07.704017 kubelet[1947]: E0213 20:22:07.703987 1947 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes \"10.230.12.214\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
Feb 13 20:22:07.704275 kubelet[1947]: W0213 20:22:07.704248 1947 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope
Feb 13 20:22:07.704475 kubelet[1947]: E0213 20:22:07.704377 1947 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
Feb 13 20:22:07.704766 kubelet[1947]: I0213 20:22:07.704737 1947 volume_manager.go:297] "Starting Kubelet Volume Manager"
Feb 13 20:22:07.705038 kubelet[1947]: E0213 20:22:07.705004 1947 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"10.230.12.214\" not found"
Feb 13 20:22:07.705137 kubelet[1947]: I0213 20:22:07.705117 1947 desired_state_of_world_populator.go:149] "Desired state populator starts to run"
Feb 13 20:22:07.705230 kubelet[1947]: I0213 20:22:07.705208 1947 reconciler.go:26] "Reconciler: start to sync state"
Feb 13 20:22:07.706948 kubelet[1947]: E0213 20:22:07.706320 1947 kubelet.go:1561] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Feb 13 20:22:07.706948 kubelet[1947]: I0213 20:22:07.706324 1947 factory.go:221] Registration of the systemd container factory successfully
Feb 13 20:22:07.706948 kubelet[1947]: I0213 20:22:07.706770 1947 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Feb 13 20:22:07.708899 kubelet[1947]: I0213 20:22:07.708868 1947 factory.go:221] Registration of the containerd container factory successfully
Feb 13 20:22:07.710788 kubelet[1947]: E0213 20:22:07.709945 1947 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"10.230.12.214\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="200ms"
Feb 13 20:22:07.711666 kubelet[1947]: W0213 20:22:07.710407 1947 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
Feb 13 20:22:07.719508 kubelet[1947]: E0213 20:22:07.718552 1947 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
Feb 13 20:22:07.735030 kubelet[1947]: E0213 20:22:07.734354 1947 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{10.230.12.214.1823de24187eecb0 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:10.230.12.214,UID:10.230.12.214,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:10.230.12.214,},FirstTimestamp:2025-02-13 20:22:07.706303664 +0000 UTC m=+0.887117303,LastTimestamp:2025-02-13 20:22:07.706303664 +0000 UTC m=+0.887117303,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:10.230.12.214,}"
Feb 13 20:22:07.743127 kubelet[1947]: I0213 20:22:07.743100 1947 cpu_manager.go:221] "Starting CPU manager" policy="none"
Feb 13 20:22:07.743289 kubelet[1947]: I0213 20:22:07.743266 1947 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Feb 13 20:22:07.743445 kubelet[1947]: I0213 20:22:07.743427 1947 state_mem.go:36] "Initialized new in-memory state store"
Feb 13 20:22:07.747838 kubelet[1947]: I0213 20:22:07.747810 1947 policy_none.go:49] "None policy: Start"
Feb 13 20:22:07.748638 kubelet[1947]: I0213 20:22:07.748613 1947 memory_manager.go:186] "Starting memorymanager" policy="None"
Feb 13 20:22:07.748810 kubelet[1947]: I0213 20:22:07.748790 1947 state_mem.go:35] "Initializing new in-memory state store"
Feb 13 20:22:07.764286 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Feb 13 20:22:07.778083 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Feb 13 20:22:07.786165 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Feb 13 20:22:07.794408 kubelet[1947]: I0213 20:22:07.794361 1947 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Feb 13 20:22:07.794770 kubelet[1947]: I0213 20:22:07.794745 1947 eviction_manager.go:189] "Eviction manager: starting control loop"
Feb 13 20:22:07.794865 kubelet[1947]: I0213 20:22:07.794783 1947 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Feb 13 20:22:07.796190 kubelet[1947]: I0213 20:22:07.796070 1947 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Feb 13 20:22:07.799083 kubelet[1947]: E0213 20:22:07.798929 1947 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Feb 13 20:22:07.799083 kubelet[1947]: E0213 20:22:07.799008 1947 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"10.230.12.214\" not found"
Feb 13 20:22:07.803923 kubelet[1947]: I0213 20:22:07.803827 1947 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Feb 13 20:22:07.805501 kubelet[1947]: I0213 20:22:07.805455 1947 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Feb 13 20:22:07.805593 kubelet[1947]: I0213 20:22:07.805541 1947 status_manager.go:227] "Starting to sync pod status with apiserver"
Feb 13 20:22:07.805593 kubelet[1947]: I0213 20:22:07.805587 1947 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Feb 13 20:22:07.805711 kubelet[1947]: I0213 20:22:07.805605 1947 kubelet.go:2388] "Starting kubelet main sync loop"
Feb 13 20:22:07.805959 kubelet[1947]: E0213 20:22:07.805784 1947 kubelet.go:2412] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful"
Feb 13 20:22:07.896760 kubelet[1947]: I0213 20:22:07.895996 1947 kubelet_node_status.go:76] "Attempting to register node" node="10.230.12.214"
Feb 13 20:22:07.902428 kubelet[1947]: I0213 20:22:07.902229 1947 kubelet_node_status.go:79] "Successfully registered node" node="10.230.12.214"
Feb 13 20:22:07.902428 kubelet[1947]: E0213 20:22:07.902279 1947 kubelet_node_status.go:549] "Error updating node status, will retry" err="error getting node \"10.230.12.214\": node \"10.230.12.214\" not found"
Feb 13 20:22:07.908719 kubelet[1947]: E0213 20:22:07.908681 1947 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"10.230.12.214\" not found"
Feb 13 20:22:08.009138 kubelet[1947]: E0213 20:22:08.009081 1947 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"10.230.12.214\" not found"
Feb 13 20:22:08.109888 kubelet[1947]: E0213 20:22:08.109837 1947 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"10.230.12.214\" not found"
Feb 13 20:22:08.210346 kubelet[1947]: E0213 20:22:08.210160 1947 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"10.230.12.214\" not found"
Feb 13 20:22:08.311361 kubelet[1947]: E0213 20:22:08.311284 1947 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"10.230.12.214\" not found"
Feb 13 20:22:08.340180 sudo[1799]: pam_unix(sudo:session): session closed for user root
Feb 13 20:22:08.412412 kubelet[1947]: E0213 20:22:08.412337 1947 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"10.230.12.214\" not found"
Feb 13 20:22:08.485618 sshd[1798]: Connection closed by 139.178.89.65 port 33488
Feb 13 20:22:08.484814 sshd-session[1796]: pam_unix(sshd:session): session closed for user core
Feb 13 20:22:08.489202 systemd[1]: sshd@8-10.230.12.214:22-139.178.89.65:33488.service: Deactivated successfully.
Feb 13 20:22:08.492473 systemd[1]: session-11.scope: Deactivated successfully.
Feb 13 20:22:08.493022 systemd[1]: session-11.scope: Consumed 573ms CPU time, 73M memory peak.
Feb 13 20:22:08.496433 systemd-logind[1507]: Session 11 logged out. Waiting for processes to exit.
Feb 13 20:22:08.498189 systemd-logind[1507]: Removed session 11.
Feb 13 20:22:08.513029 kubelet[1947]: E0213 20:22:08.512995 1947 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"10.230.12.214\" not found"
Feb 13 20:22:08.609901 kubelet[1947]: I0213 20:22:08.609834 1947 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials"
Feb 13 20:22:08.610156 kubelet[1947]: W0213 20:22:08.610055 1947 reflector.go:492] k8s.io/client-go/informers/factory.go:160: watch of *v1.Node ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received
Feb 13 20:22:08.610156 kubelet[1947]: W0213 20:22:08.610119 1947 reflector.go:492] k8s.io/client-go/informers/factory.go:160: watch of *v1.RuntimeClass ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received
Feb 13 20:22:08.615239 kubelet[1947]: I0213 20:22:08.615194 1947 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24"
Feb 13 20:22:08.615895 containerd[1523]: time="2025-02-13T20:22:08.615732960Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Feb 13 20:22:08.617130 kubelet[1947]: I0213 20:22:08.616570 1947 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24"
Feb 13 20:22:08.680896 kubelet[1947]: E0213 20:22:08.680808 1947 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 20:22:08.680896 kubelet[1947]: I0213 20:22:08.680906 1947 apiserver.go:52] "Watching apiserver"
Feb 13 20:22:08.703200 kubelet[1947]: E0213 20:22:08.702810 1947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-spcjx" podUID="7bcc9fed-b4b7-4b0f-9dc0-2e0587f01618"
Feb 13 20:22:08.705772 kubelet[1947]: I0213 20:22:08.705741 1947 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
Feb 13 20:22:08.710411 kubelet[1947]: I0213 20:22:08.710377 1947 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/7bcc9fed-b4b7-4b0f-9dc0-2e0587f01618-registration-dir\") pod \"csi-node-driver-spcjx\" (UID: \"7bcc9fed-b4b7-4b0f-9dc0-2e0587f01618\") " pod="calico-system/csi-node-driver-spcjx"
Feb 13 20:22:08.710474 kubelet[1947]: I0213 20:22:08.710444 1947 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xx7tf\" (UniqueName: \"kubernetes.io/projected/c74a45a7-3961-42ed-8b56-d3a38d3e997e-kube-api-access-xx7tf\") pod \"kube-proxy-gznc2\" (UID: \"c74a45a7-3961-42ed-8b56-d3a38d3e997e\") " pod="kube-system/kube-proxy-gznc2"
Feb 13 20:22:08.710862 kubelet[1947]: I0213 20:22:08.710479 1947 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4d94cc22-e7b9-4e3c-911a-510d58f8f32d-tigera-ca-bundle\") pod \"calico-typha-7774b65485-bhgpv\" (UID: \"4d94cc22-e7b9-4e3c-911a-510d58f8f32d\") " pod="calico-system/calico-typha-7774b65485-bhgpv"
Feb 13 20:22:08.710862 kubelet[1947]: I0213 20:22:08.710712 1947 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/2dbad755-0d77-4a3b-bab3-66599b578d72-var-run-calico\") pod \"calico-node-gsq5d\" (UID: \"2dbad755-0d77-4a3b-bab3-66599b578d72\") " pod="calico-system/calico-node-gsq5d"
Feb 13 20:22:08.710862 kubelet[1947]: I0213 20:22:08.710803 1947 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/2dbad755-0d77-4a3b-bab3-66599b578d72-var-lib-calico\") pod \"calico-node-gsq5d\" (UID: \"2dbad755-0d77-4a3b-bab3-66599b578d72\") " pod="calico-system/calico-node-gsq5d"
Feb 13 20:22:08.711021 kubelet[1947]: I0213 20:22:08.710882 1947 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/2dbad755-0d77-4a3b-bab3-66599b578d72-flexvol-driver-host\") pod \"calico-node-gsq5d\" (UID: \"2dbad755-0d77-4a3b-bab3-66599b578d72\") " pod="calico-system/calico-node-gsq5d"
Feb 13 20:22:08.711021 kubelet[1947]: I0213 20:22:08.710912 1947 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/7bcc9fed-b4b7-4b0f-9dc0-2e0587f01618-varrun\") pod \"csi-node-driver-spcjx\" (UID: \"7bcc9fed-b4b7-4b0f-9dc0-2e0587f01618\") " pod="calico-system/csi-node-driver-spcjx"
Feb 13 20:22:08.711021 kubelet[1947]: I0213 20:22:08.711003 1947 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/7bcc9fed-b4b7-4b0f-9dc0-2e0587f01618-socket-dir\") pod \"csi-node-driver-spcjx\" (UID: \"7bcc9fed-b4b7-4b0f-9dc0-2e0587f01618\") " pod="calico-system/csi-node-driver-spcjx"
Feb 13 20:22:08.711945 kubelet[1947]: I0213 20:22:08.711115 1947 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6c7vr\" (UniqueName: \"kubernetes.io/projected/2dbad755-0d77-4a3b-bab3-66599b578d72-kube-api-access-6c7vr\") pod \"calico-node-gsq5d\" (UID: \"2dbad755-0d77-4a3b-bab3-66599b578d72\") " pod="calico-system/calico-node-gsq5d"
Feb 13 20:22:08.711945 kubelet[1947]: I0213 20:22:08.711207 1947 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/2dbad755-0d77-4a3b-bab3-66599b578d72-cni-net-dir\") pod \"calico-node-gsq5d\" (UID: \"2dbad755-0d77-4a3b-bab3-66599b578d72\") " pod="calico-system/calico-node-gsq5d"
Feb 13 20:22:08.711945 kubelet[1947]: I0213 20:22:08.711273 1947 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fp67s\" (UniqueName: \"kubernetes.io/projected/4d94cc22-e7b9-4e3c-911a-510d58f8f32d-kube-api-access-fp67s\") pod \"calico-typha-7774b65485-bhgpv\" (UID: \"4d94cc22-e7b9-4e3c-911a-510d58f8f32d\") " pod="calico-system/calico-typha-7774b65485-bhgpv"
Feb 13 20:22:08.711945 kubelet[1947]: I0213 20:22:08.711305 1947 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2dbad755-0d77-4a3b-bab3-66599b578d72-xtables-lock\") pod \"calico-node-gsq5d\" (UID: \"2dbad755-0d77-4a3b-bab3-66599b578d72\") " pod="calico-system/calico-node-gsq5d"
Feb 13 20:22:08.711945 kubelet[1947]: I0213 20:22:08.711370 1947 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c74a45a7-3961-42ed-8b56-d3a38d3e997e-lib-modules\") pod \"kube-proxy-gznc2\" (UID: \"c74a45a7-3961-42ed-8b56-d3a38d3e997e\") " pod="kube-system/kube-proxy-gznc2"
Feb 13 20:22:08.714755 kubelet[1947]: I0213 20:22:08.711447 1947 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/4d94cc22-e7b9-4e3c-911a-510d58f8f32d-typha-certs\") pod \"calico-typha-7774b65485-bhgpv\" (UID: \"4d94cc22-e7b9-4e3c-911a-510d58f8f32d\") " pod="calico-system/calico-typha-7774b65485-bhgpv"
Feb 13 20:22:08.714755 kubelet[1947]: I0213 20:22:08.711531 1947 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/2dbad755-0d77-4a3b-bab3-66599b578d72-policysync\") pod \"calico-node-gsq5d\" (UID: \"2dbad755-0d77-4a3b-bab3-66599b578d72\") " pod="calico-system/calico-node-gsq5d"
Feb 13 20:22:08.714755 kubelet[1947]: I0213 20:22:08.711576 1947 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2dbad755-0d77-4a3b-bab3-66599b578d72-tigera-ca-bundle\") pod \"calico-node-gsq5d\" (UID: \"2dbad755-0d77-4a3b-bab3-66599b578d72\") " pod="calico-system/calico-node-gsq5d"
Feb 13 20:22:08.714755 kubelet[1947]: I0213 20:22:08.711659 1947 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/2dbad755-0d77-4a3b-bab3-66599b578d72-node-certs\") pod \"calico-node-gsq5d\" (UID: \"2dbad755-0d77-4a3b-bab3-66599b578d72\") " pod="calico-system/calico-node-gsq5d"
Feb 13 20:22:08.714755 kubelet[1947]: I0213 20:22:08.711756 1947 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: 
\"kubernetes.io/host-path/7bcc9fed-b4b7-4b0f-9dc0-2e0587f01618-kubelet-dir\") pod \"csi-node-driver-spcjx\" (UID: \"7bcc9fed-b4b7-4b0f-9dc0-2e0587f01618\") " pod="calico-system/csi-node-driver-spcjx" Feb 13 20:22:08.712613 systemd[1]: Created slice kubepods-besteffort-pod2dbad755_0d77_4a3b_bab3_66599b578d72.slice - libcontainer container kubepods-besteffort-pod2dbad755_0d77_4a3b_bab3_66599b578d72.slice. Feb 13 20:22:08.715286 kubelet[1947]: I0213 20:22:08.711785 1947 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/c74a45a7-3961-42ed-8b56-d3a38d3e997e-kube-proxy\") pod \"kube-proxy-gznc2\" (UID: \"c74a45a7-3961-42ed-8b56-d3a38d3e997e\") " pod="kube-system/kube-proxy-gznc2" Feb 13 20:22:08.715286 kubelet[1947]: I0213 20:22:08.711932 1947 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2dbad755-0d77-4a3b-bab3-66599b578d72-lib-modules\") pod \"calico-node-gsq5d\" (UID: \"2dbad755-0d77-4a3b-bab3-66599b578d72\") " pod="calico-system/calico-node-gsq5d" Feb 13 20:22:08.715286 kubelet[1947]: I0213 20:22:08.711995 1947 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/2dbad755-0d77-4a3b-bab3-66599b578d72-cni-bin-dir\") pod \"calico-node-gsq5d\" (UID: \"2dbad755-0d77-4a3b-bab3-66599b578d72\") " pod="calico-system/calico-node-gsq5d" Feb 13 20:22:08.715286 kubelet[1947]: I0213 20:22:08.712024 1947 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/2dbad755-0d77-4a3b-bab3-66599b578d72-cni-log-dir\") pod \"calico-node-gsq5d\" (UID: \"2dbad755-0d77-4a3b-bab3-66599b578d72\") " pod="calico-system/calico-node-gsq5d" Feb 13 20:22:08.715286 kubelet[1947]: I0213 20:22:08.712064 1947 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v7zvm\" (UniqueName: \"kubernetes.io/projected/7bcc9fed-b4b7-4b0f-9dc0-2e0587f01618-kube-api-access-v7zvm\") pod \"csi-node-driver-spcjx\" (UID: \"7bcc9fed-b4b7-4b0f-9dc0-2e0587f01618\") " pod="calico-system/csi-node-driver-spcjx" Feb 13 20:22:08.715539 kubelet[1947]: I0213 20:22:08.712121 1947 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c74a45a7-3961-42ed-8b56-d3a38d3e997e-xtables-lock\") pod \"kube-proxy-gznc2\" (UID: \"c74a45a7-3961-42ed-8b56-d3a38d3e997e\") " pod="kube-system/kube-proxy-gznc2" Feb 13 20:22:08.733932 systemd[1]: Created slice kubepods-besteffort-pod4d94cc22_e7b9_4e3c_911a_510d58f8f32d.slice - libcontainer container kubepods-besteffort-pod4d94cc22_e7b9_4e3c_911a_510d58f8f32d.slice. Feb 13 20:22:08.753237 systemd[1]: Created slice kubepods-besteffort-podc74a45a7_3961_42ed_8b56_d3a38d3e997e.slice - libcontainer container kubepods-besteffort-podc74a45a7_3961_42ed_8b56_d3a38d3e997e.slice. Feb 13 20:22:08.822064 kubelet[1947]: E0213 20:22:08.822027 1947 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:22:08.822064 kubelet[1947]: W0213 20:22:08.822056 1947 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:22:08.822248 kubelet[1947]: E0213 20:22:08.822088 1947 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 20:22:08.822363 kubelet[1947]: E0213 20:22:08.822338 1947 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:22:08.822363 kubelet[1947]: W0213 20:22:08.822359 1947 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:22:08.822509 kubelet[1947]: E0213 20:22:08.822375 1947 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:22:08.822652 kubelet[1947]: E0213 20:22:08.822629 1947 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:22:08.822652 kubelet[1947]: W0213 20:22:08.822649 1947 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:22:08.822802 kubelet[1947]: E0213 20:22:08.822665 1947 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 20:22:08.823010 kubelet[1947]: E0213 20:22:08.822982 1947 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:22:08.823010 kubelet[1947]: W0213 20:22:08.823003 1947 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:22:08.823135 kubelet[1947]: E0213 20:22:08.823020 1947 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:22:08.824521 kubelet[1947]: E0213 20:22:08.823288 1947 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:22:08.824521 kubelet[1947]: W0213 20:22:08.823309 1947 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:22:08.824521 kubelet[1947]: E0213 20:22:08.823325 1947 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 20:22:08.824521 kubelet[1947]: E0213 20:22:08.823613 1947 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:22:08.824521 kubelet[1947]: W0213 20:22:08.823626 1947 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:22:08.824521 kubelet[1947]: E0213 20:22:08.823641 1947 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:22:08.824521 kubelet[1947]: E0213 20:22:08.823900 1947 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:22:08.824521 kubelet[1947]: W0213 20:22:08.823914 1947 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:22:08.824521 kubelet[1947]: E0213 20:22:08.823929 1947 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 20:22:08.824521 kubelet[1947]: E0213 20:22:08.824195 1947 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:22:08.824930 kubelet[1947]: W0213 20:22:08.824210 1947 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:22:08.824930 kubelet[1947]: E0213 20:22:08.824225 1947 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:22:08.824930 kubelet[1947]: E0213 20:22:08.824680 1947 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:22:08.824930 kubelet[1947]: W0213 20:22:08.824733 1947 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:22:08.824930 kubelet[1947]: E0213 20:22:08.824753 1947 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 20:22:08.827738 kubelet[1947]: E0213 20:22:08.825019 1947 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:22:08.827738 kubelet[1947]: W0213 20:22:08.825033 1947 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:22:08.827738 kubelet[1947]: E0213 20:22:08.825048 1947 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:22:08.850772 kubelet[1947]: E0213 20:22:08.850629 1947 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:22:08.850772 kubelet[1947]: W0213 20:22:08.850665 1947 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:22:08.850772 kubelet[1947]: E0213 20:22:08.850689 1947 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 20:22:08.858534 kubelet[1947]: E0213 20:22:08.856728 1947 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:22:08.858534 kubelet[1947]: W0213 20:22:08.856751 1947 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:22:08.858534 kubelet[1947]: E0213 20:22:08.856769 1947 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:22:08.866659 kubelet[1947]: E0213 20:22:08.865845 1947 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:22:08.866659 kubelet[1947]: W0213 20:22:08.865866 1947 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:22:08.866659 kubelet[1947]: E0213 20:22:08.865885 1947 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 20:22:08.870399 kubelet[1947]: E0213 20:22:08.870372 1947 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:22:08.870399 kubelet[1947]: W0213 20:22:08.870395 1947 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:22:08.870558 kubelet[1947]: E0213 20:22:08.870412 1947 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:22:09.032829 containerd[1523]: time="2025-02-13T20:22:09.032005893Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-gsq5d,Uid:2dbad755-0d77-4a3b-bab3-66599b578d72,Namespace:calico-system,Attempt:0,}" Feb 13 20:22:09.042549 containerd[1523]: time="2025-02-13T20:22:09.042029708Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7774b65485-bhgpv,Uid:4d94cc22-e7b9-4e3c-911a-510d58f8f32d,Namespace:calico-system,Attempt:0,}" Feb 13 20:22:09.056989 containerd[1523]: time="2025-02-13T20:22:09.056940512Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-gznc2,Uid:c74a45a7-3961-42ed-8b56-d3a38d3e997e,Namespace:kube-system,Attempt:0,}" Feb 13 20:22:09.681804 kubelet[1947]: E0213 20:22:09.681516 1947 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:22:09.896195 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2009107000.mount: Deactivated successfully. 
Feb 13 20:22:09.912969 containerd[1523]: time="2025-02-13T20:22:09.911607606Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 20:22:09.913704 containerd[1523]: time="2025-02-13T20:22:09.913659035Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 20:22:09.915794 containerd[1523]: time="2025-02-13T20:22:09.915745053Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312064" Feb 13 20:22:09.916445 containerd[1523]: time="2025-02-13T20:22:09.916412564Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Feb 13 20:22:09.918630 containerd[1523]: time="2025-02-13T20:22:09.918582952Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 20:22:09.919970 containerd[1523]: time="2025-02-13T20:22:09.919885310Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Feb 13 20:22:09.920849 containerd[1523]: time="2025-02-13T20:22:09.920462060Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 20:22:09.926536 containerd[1523]: time="2025-02-13T20:22:09.926412906Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Feb 13 20:22:09.928753 
containerd[1523]: time="2025-02-13T20:22:09.927778207Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 885.610583ms" Feb 13 20:22:09.930418 containerd[1523]: time="2025-02-13T20:22:09.930341114Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 873.309462ms" Feb 13 20:22:09.935025 containerd[1523]: time="2025-02-13T20:22:09.934702757Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 902.451235ms" Feb 13 20:22:10.142796 containerd[1523]: time="2025-02-13T20:22:10.142634075Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 20:22:10.143175 containerd[1523]: time="2025-02-13T20:22:10.143050287Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 20:22:10.143175 containerd[1523]: time="2025-02-13T20:22:10.143125139Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:22:10.143567 containerd[1523]: time="2025-02-13T20:22:10.143503482Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:22:10.145882 containerd[1523]: time="2025-02-13T20:22:10.139762627Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 20:22:10.146017 containerd[1523]: time="2025-02-13T20:22:10.145866729Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 20:22:10.146017 containerd[1523]: time="2025-02-13T20:22:10.145898543Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:22:10.146623 containerd[1523]: time="2025-02-13T20:22:10.146273177Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 20:22:10.146623 containerd[1523]: time="2025-02-13T20:22:10.146336767Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 20:22:10.146623 containerd[1523]: time="2025-02-13T20:22:10.146356528Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:22:10.146623 containerd[1523]: time="2025-02-13T20:22:10.146499317Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:22:10.146880 containerd[1523]: time="2025-02-13T20:22:10.146139272Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:22:10.256178 systemd[1]: Started cri-containerd-4e9e09e2d1bb36c2951578b118e480327dba04814e71fbdb5b5dec7bf6e9a0a2.scope - libcontainer container 4e9e09e2d1bb36c2951578b118e480327dba04814e71fbdb5b5dec7bf6e9a0a2. 
Feb 13 20:22:10.266683 systemd[1]: Started cri-containerd-6b959bb4126aa7de338b5361c036011acfde99abdfe21e79a899052da386df40.scope - libcontainer container 6b959bb4126aa7de338b5361c036011acfde99abdfe21e79a899052da386df40. Feb 13 20:22:10.269856 systemd[1]: Started cri-containerd-ff1e695dd4d8f0e0549a55639e4d3a3b6d051cbd4637f24d247d02f43dcceba9.scope - libcontainer container ff1e695dd4d8f0e0549a55639e4d3a3b6d051cbd4637f24d247d02f43dcceba9. Feb 13 20:22:10.317734 containerd[1523]: time="2025-02-13T20:22:10.317577352Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-gsq5d,Uid:2dbad755-0d77-4a3b-bab3-66599b578d72,Namespace:calico-system,Attempt:0,} returns sandbox id \"4e9e09e2d1bb36c2951578b118e480327dba04814e71fbdb5b5dec7bf6e9a0a2\"" Feb 13 20:22:10.325836 containerd[1523]: time="2025-02-13T20:22:10.325215721Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\"" Feb 13 20:22:10.344318 containerd[1523]: time="2025-02-13T20:22:10.343836546Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-gznc2,Uid:c74a45a7-3961-42ed-8b56-d3a38d3e997e,Namespace:kube-system,Attempt:0,} returns sandbox id \"ff1e695dd4d8f0e0549a55639e4d3a3b6d051cbd4637f24d247d02f43dcceba9\"" Feb 13 20:22:10.363920 containerd[1523]: time="2025-02-13T20:22:10.363871344Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7774b65485-bhgpv,Uid:4d94cc22-e7b9-4e3c-911a-510d58f8f32d,Namespace:calico-system,Attempt:0,} returns sandbox id \"6b959bb4126aa7de338b5361c036011acfde99abdfe21e79a899052da386df40\"" Feb 13 20:22:10.682413 kubelet[1947]: E0213 20:22:10.681951 1947 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:22:10.806865 kubelet[1947]: E0213 20:22:10.806799 1947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network 
plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-spcjx" podUID="7bcc9fed-b4b7-4b0f-9dc0-2e0587f01618" Feb 13 20:22:11.682741 kubelet[1947]: E0213 20:22:11.682676 1947 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:22:11.691525 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2726819364.mount: Deactivated successfully. Feb 13 20:22:11.850694 containerd[1523]: time="2025-02-13T20:22:11.850634959Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:22:11.851828 containerd[1523]: time="2025-02-13T20:22:11.851777838Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1: active requests=0, bytes read=6855343" Feb 13 20:22:11.852798 containerd[1523]: time="2025-02-13T20:22:11.852714865Z" level=info msg="ImageCreate event name:\"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:22:11.855599 containerd[1523]: time="2025-02-13T20:22:11.855560048Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:22:11.857329 containerd[1523]: time="2025-02-13T20:22:11.856639218Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" with image id \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\", size \"6855165\" in 1.531353585s" Feb 13 20:22:11.857329 containerd[1523]: time="2025-02-13T20:22:11.856684034Z" 
level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" returns image reference \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\"" Feb 13 20:22:11.858569 containerd[1523]: time="2025-02-13T20:22:11.858530514Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.2\"" Feb 13 20:22:11.859913 containerd[1523]: time="2025-02-13T20:22:11.859863284Z" level=info msg="CreateContainer within sandbox \"4e9e09e2d1bb36c2951578b118e480327dba04814e71fbdb5b5dec7bf6e9a0a2\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Feb 13 20:22:11.892824 containerd[1523]: time="2025-02-13T20:22:11.892736272Z" level=info msg="CreateContainer within sandbox \"4e9e09e2d1bb36c2951578b118e480327dba04814e71fbdb5b5dec7bf6e9a0a2\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"471f5c0749a50a956a38a56bba9402b08f87b86cee26418025633fda00e79936\"" Feb 13 20:22:11.894109 containerd[1523]: time="2025-02-13T20:22:11.893958594Z" level=info msg="StartContainer for \"471f5c0749a50a956a38a56bba9402b08f87b86cee26418025633fda00e79936\"" Feb 13 20:22:11.948765 systemd[1]: Started cri-containerd-471f5c0749a50a956a38a56bba9402b08f87b86cee26418025633fda00e79936.scope - libcontainer container 471f5c0749a50a956a38a56bba9402b08f87b86cee26418025633fda00e79936. Feb 13 20:22:11.993361 containerd[1523]: time="2025-02-13T20:22:11.993268500Z" level=info msg="StartContainer for \"471f5c0749a50a956a38a56bba9402b08f87b86cee26418025633fda00e79936\" returns successfully" Feb 13 20:22:12.011236 systemd[1]: cri-containerd-471f5c0749a50a956a38a56bba9402b08f87b86cee26418025633fda00e79936.scope: Deactivated successfully. Feb 13 20:22:12.046518 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-471f5c0749a50a956a38a56bba9402b08f87b86cee26418025633fda00e79936-rootfs.mount: Deactivated successfully. 
Feb 13 20:22:12.107420 containerd[1523]: time="2025-02-13T20:22:12.107297729Z" level=info msg="shim disconnected" id=471f5c0749a50a956a38a56bba9402b08f87b86cee26418025633fda00e79936 namespace=k8s.io Feb 13 20:22:12.107420 containerd[1523]: time="2025-02-13T20:22:12.107382412Z" level=warning msg="cleaning up after shim disconnected" id=471f5c0749a50a956a38a56bba9402b08f87b86cee26418025633fda00e79936 namespace=k8s.io Feb 13 20:22:12.107420 containerd[1523]: time="2025-02-13T20:22:12.107400455Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 20:22:12.683837 kubelet[1947]: E0213 20:22:12.683762 1947 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:22:12.806856 kubelet[1947]: E0213 20:22:12.806778 1947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-spcjx" podUID="7bcc9fed-b4b7-4b0f-9dc0-2e0587f01618" Feb 13 20:22:13.547672 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount889549390.mount: Deactivated successfully. 
Feb 13 20:22:13.684648 kubelet[1947]: E0213 20:22:13.684583 1947 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:22:14.297041 containerd[1523]: time="2025-02-13T20:22:14.296979249Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:22:14.298241 containerd[1523]: time="2025-02-13T20:22:14.298180790Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.2: active requests=0, bytes read=30908847" Feb 13 20:22:14.299128 containerd[1523]: time="2025-02-13T20:22:14.299070350Z" level=info msg="ImageCreate event name:\"sha256:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:22:14.302752 containerd[1523]: time="2025-02-13T20:22:14.302688777Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:83c025f0faa6799fab6645102a98138e39a9a7db2be3bc792c79d72659b1805d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:22:14.303957 containerd[1523]: time="2025-02-13T20:22:14.303803678Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.2\" with image id \"sha256:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5\", repo tag \"registry.k8s.io/kube-proxy:v1.32.2\", repo digest \"registry.k8s.io/kube-proxy@sha256:83c025f0faa6799fab6645102a98138e39a9a7db2be3bc792c79d72659b1805d\", size \"30907858\" in 2.444665358s" Feb 13 20:22:14.303957 containerd[1523]: time="2025-02-13T20:22:14.303846632Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.2\" returns image reference \"sha256:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5\"" Feb 13 20:22:14.306272 containerd[1523]: time="2025-02-13T20:22:14.306150170Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\"" Feb 13 20:22:14.307297 containerd[1523]: 
time="2025-02-13T20:22:14.307061485Z" level=info msg="CreateContainer within sandbox \"ff1e695dd4d8f0e0549a55639e4d3a3b6d051cbd4637f24d247d02f43dcceba9\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Feb 13 20:22:14.326158 containerd[1523]: time="2025-02-13T20:22:14.326030461Z" level=info msg="CreateContainer within sandbox \"ff1e695dd4d8f0e0549a55639e4d3a3b6d051cbd4637f24d247d02f43dcceba9\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"8b966cc8326beba7a507b127e379908d32d156d8786ecd1da0b09c8fbf967199\"" Feb 13 20:22:14.327007 containerd[1523]: time="2025-02-13T20:22:14.326909977Z" level=info msg="StartContainer for \"8b966cc8326beba7a507b127e379908d32d156d8786ecd1da0b09c8fbf967199\"" Feb 13 20:22:14.368130 systemd[1]: run-containerd-runc-k8s.io-8b966cc8326beba7a507b127e379908d32d156d8786ecd1da0b09c8fbf967199-runc.3yyn3w.mount: Deactivated successfully. Feb 13 20:22:14.375669 systemd[1]: Started cri-containerd-8b966cc8326beba7a507b127e379908d32d156d8786ecd1da0b09c8fbf967199.scope - libcontainer container 8b966cc8326beba7a507b127e379908d32d156d8786ecd1da0b09c8fbf967199. 
Feb 13 20:22:14.417629 containerd[1523]: time="2025-02-13T20:22:14.417437196Z" level=info msg="StartContainer for \"8b966cc8326beba7a507b127e379908d32d156d8786ecd1da0b09c8fbf967199\" returns successfully" Feb 13 20:22:14.685305 kubelet[1947]: E0213 20:22:14.685126 1947 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:22:14.806800 kubelet[1947]: E0213 20:22:14.806174 1947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-spcjx" podUID="7bcc9fed-b4b7-4b0f-9dc0-2e0587f01618" Feb 13 20:22:15.685461 kubelet[1947]: E0213 20:22:15.685407 1947 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:22:16.703972 kubelet[1947]: E0213 20:22:16.703701 1947 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:22:16.709604 containerd[1523]: time="2025-02-13T20:22:16.709550568Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:22:16.710642 containerd[1523]: time="2025-02-13T20:22:16.710566376Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.1: active requests=0, bytes read=29850141" Feb 13 20:22:16.711521 containerd[1523]: time="2025-02-13T20:22:16.711443798Z" level=info msg="ImageCreate event name:\"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:22:16.714519 containerd[1523]: time="2025-02-13T20:22:16.714339527Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:22:16.716125 containerd[1523]: time="2025-02-13T20:22:16.715415645Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.1\" with image id \"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\", size \"31343217\" in 2.409222708s" Feb 13 20:22:16.716125 containerd[1523]: time="2025-02-13T20:22:16.715457101Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\" returns image reference \"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\"" Feb 13 20:22:16.716665 containerd[1523]: time="2025-02-13T20:22:16.716635678Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\"" Feb 13 20:22:16.731816 containerd[1523]: time="2025-02-13T20:22:16.730473155Z" level=info msg="CreateContainer within sandbox \"6b959bb4126aa7de338b5361c036011acfde99abdfe21e79a899052da386df40\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Feb 13 20:22:16.750660 containerd[1523]: time="2025-02-13T20:22:16.750619815Z" level=info msg="CreateContainer within sandbox \"6b959bb4126aa7de338b5361c036011acfde99abdfe21e79a899052da386df40\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"f89563aa1ff3346224f10249f601cdd3d938427399b21a8c2cec34ac742b275d\"" Feb 13 20:22:16.751308 containerd[1523]: time="2025-02-13T20:22:16.751247521Z" level=info msg="StartContainer for \"f89563aa1ff3346224f10249f601cdd3d938427399b21a8c2cec34ac742b275d\"" Feb 13 20:22:16.791799 systemd[1]: Started cri-containerd-f89563aa1ff3346224f10249f601cdd3d938427399b21a8c2cec34ac742b275d.scope - libcontainer container 
f89563aa1ff3346224f10249f601cdd3d938427399b21a8c2cec34ac742b275d. Feb 13 20:22:16.807933 kubelet[1947]: E0213 20:22:16.807037 1947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-spcjx" podUID="7bcc9fed-b4b7-4b0f-9dc0-2e0587f01618" Feb 13 20:22:16.850431 containerd[1523]: time="2025-02-13T20:22:16.850384628Z" level=info msg="StartContainer for \"f89563aa1ff3346224f10249f601cdd3d938427399b21a8c2cec34ac742b275d\" returns successfully" Feb 13 20:22:17.704332 kubelet[1947]: E0213 20:22:17.704209 1947 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:22:17.880102 kubelet[1947]: I0213 20:22:17.880001 1947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-7774b65485-bhgpv" podStartSLOduration=7.529277711 podStartE2EDuration="13.87996028s" podCreationTimestamp="2025-02-13 20:22:04 +0000 UTC" firstStartedPulling="2025-02-13 20:22:10.365764537 +0000 UTC m=+3.546578168" lastFinishedPulling="2025-02-13 20:22:16.716447098 +0000 UTC m=+9.897260737" observedRunningTime="2025-02-13 20:22:17.878252525 +0000 UTC m=+11.059066206" watchObservedRunningTime="2025-02-13 20:22:17.87996028 +0000 UTC m=+11.060773925" Feb 13 20:22:17.880563 kubelet[1947]: I0213 20:22:17.880210 1947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-gznc2" podStartSLOduration=6.921897141 podStartE2EDuration="10.880201457s" podCreationTimestamp="2025-02-13 20:22:07 +0000 UTC" firstStartedPulling="2025-02-13 20:22:10.347050666 +0000 UTC m=+3.527864292" lastFinishedPulling="2025-02-13 20:22:14.30535497 +0000 UTC m=+7.486168608" observedRunningTime="2025-02-13 20:22:14.844577119 +0000 UTC m=+8.025390769" 
watchObservedRunningTime="2025-02-13 20:22:17.880201457 +0000 UTC m=+11.061015108" Feb 13 20:22:18.706529 kubelet[1947]: E0213 20:22:18.704650 1947 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:22:18.806986 kubelet[1947]: E0213 20:22:18.806932 1947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-spcjx" podUID="7bcc9fed-b4b7-4b0f-9dc0-2e0587f01618" Feb 13 20:22:18.851981 kubelet[1947]: I0213 20:22:18.851943 1947 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 13 20:22:19.705252 kubelet[1947]: E0213 20:22:19.705200 1947 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:22:19.953780 systemd[1]: Started sshd@9-10.230.12.214:22-92.255.85.189:31946.service - OpenSSH per-connection server daemon (92.255.85.189:31946). Feb 13 20:22:20.432034 sshd[2434]: Invalid user nutanix from 92.255.85.189 port 31946 Feb 13 20:22:20.500681 sshd[2434]: Connection closed by invalid user nutanix 92.255.85.189 port 31946 [preauth] Feb 13 20:22:20.503747 systemd[1]: sshd@9-10.230.12.214:22-92.255.85.189:31946.service: Deactivated successfully. Feb 13 20:22:20.706960 kubelet[1947]: E0213 20:22:20.705981 1947 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:22:20.772001 update_engine[1508]: I20250213 20:22:20.771304 1508 update_attempter.cc:509] Updating boot flags... 
Feb 13 20:22:20.807079 kubelet[1947]: E0213 20:22:20.806164 1947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-spcjx" podUID="7bcc9fed-b4b7-4b0f-9dc0-2e0587f01618" Feb 13 20:22:20.868526 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (2447) Feb 13 20:22:21.008815 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 39 scanned by (udev-worker) (2445) Feb 13 20:22:21.707457 kubelet[1947]: E0213 20:22:21.707407 1947 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:22:21.851220 containerd[1523]: time="2025-02-13T20:22:21.849290555Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:22:21.851220 containerd[1523]: time="2025-02-13T20:22:21.850746888Z" level=info msg="ImageCreate event name:\"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:22:21.851220 containerd[1523]: time="2025-02-13T20:22:21.850807065Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.1: active requests=0, bytes read=96154154" Feb 13 20:22:21.853766 containerd[1523]: time="2025-02-13T20:22:21.853729069Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:22:21.855237 containerd[1523]: time="2025-02-13T20:22:21.855189050Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.1\" with image id \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\", 
repo tag \"ghcr.io/flatcar/calico/cni:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\", size \"97647238\" in 5.137566147s" Feb 13 20:22:21.855316 containerd[1523]: time="2025-02-13T20:22:21.855249440Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\" returns image reference \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\"" Feb 13 20:22:21.859038 containerd[1523]: time="2025-02-13T20:22:21.858998138Z" level=info msg="CreateContainer within sandbox \"4e9e09e2d1bb36c2951578b118e480327dba04814e71fbdb5b5dec7bf6e9a0a2\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Feb 13 20:22:21.876691 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3066769793.mount: Deactivated successfully. Feb 13 20:22:21.881512 containerd[1523]: time="2025-02-13T20:22:21.881310160Z" level=info msg="CreateContainer within sandbox \"4e9e09e2d1bb36c2951578b118e480327dba04814e71fbdb5b5dec7bf6e9a0a2\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"1f84060427c29f211891e3419729ed572c1282aef8742bf7d047d12da550f72e\"" Feb 13 20:22:21.882223 containerd[1523]: time="2025-02-13T20:22:21.882194013Z" level=info msg="StartContainer for \"1f84060427c29f211891e3419729ed572c1282aef8742bf7d047d12da550f72e\"" Feb 13 20:22:21.924279 systemd[1]: run-containerd-runc-k8s.io-1f84060427c29f211891e3419729ed572c1282aef8742bf7d047d12da550f72e-runc.wIwjb6.mount: Deactivated successfully. Feb 13 20:22:21.941731 systemd[1]: Started cri-containerd-1f84060427c29f211891e3419729ed572c1282aef8742bf7d047d12da550f72e.scope - libcontainer container 1f84060427c29f211891e3419729ed572c1282aef8742bf7d047d12da550f72e. 
Feb 13 20:22:21.984183 containerd[1523]: time="2025-02-13T20:22:21.983615445Z" level=info msg="StartContainer for \"1f84060427c29f211891e3419729ed572c1282aef8742bf7d047d12da550f72e\" returns successfully" Feb 13 20:22:21.996862 kubelet[1947]: I0213 20:22:21.996173 1947 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Feb 13 20:22:22.707828 kubelet[1947]: E0213 20:22:22.707757 1947 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:22:22.806399 kubelet[1947]: E0213 20:22:22.805920 1947 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-spcjx" podUID="7bcc9fed-b4b7-4b0f-9dc0-2e0587f01618" Feb 13 20:22:22.854636 systemd[1]: cri-containerd-1f84060427c29f211891e3419729ed572c1282aef8742bf7d047d12da550f72e.scope: Deactivated successfully. Feb 13 20:22:22.855707 systemd[1]: cri-containerd-1f84060427c29f211891e3419729ed572c1282aef8742bf7d047d12da550f72e.scope: Consumed 602ms CPU time, 168.1M memory peak, 151M written to disk. Feb 13 20:22:22.887553 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1f84060427c29f211891e3419729ed572c1282aef8742bf7d047d12da550f72e-rootfs.mount: Deactivated successfully. 
Feb 13 20:22:22.926861 kubelet[1947]: I0213 20:22:22.926810 1947 kubelet_node_status.go:502] "Fast updating node status as it just became ready" Feb 13 20:22:23.054937 containerd[1523]: time="2025-02-13T20:22:23.054587612Z" level=info msg="shim disconnected" id=1f84060427c29f211891e3419729ed572c1282aef8742bf7d047d12da550f72e namespace=k8s.io Feb 13 20:22:23.054937 containerd[1523]: time="2025-02-13T20:22:23.054698675Z" level=warning msg="cleaning up after shim disconnected" id=1f84060427c29f211891e3419729ed572c1282aef8742bf7d047d12da550f72e namespace=k8s.io Feb 13 20:22:23.054937 containerd[1523]: time="2025-02-13T20:22:23.054717332Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 20:22:23.708755 kubelet[1947]: E0213 20:22:23.708647 1947 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:22:23.874827 containerd[1523]: time="2025-02-13T20:22:23.874774844Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\"" Feb 13 20:22:24.709922 kubelet[1947]: E0213 20:22:24.709805 1947 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:22:24.814842 systemd[1]: Created slice kubepods-besteffort-pod7bcc9fed_b4b7_4b0f_9dc0_2e0587f01618.slice - libcontainer container kubepods-besteffort-pod7bcc9fed_b4b7_4b0f_9dc0_2e0587f01618.slice. 
Feb 13 20:22:24.819348 containerd[1523]: time="2025-02-13T20:22:24.818735879Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-spcjx,Uid:7bcc9fed-b4b7-4b0f-9dc0-2e0587f01618,Namespace:calico-system,Attempt:0,}" Feb 13 20:22:24.912850 containerd[1523]: time="2025-02-13T20:22:24.912786129Z" level=error msg="Failed to destroy network for sandbox \"d500c08499b34bb9523b313ff7cc26fb3c87fd92efc4db58167552a68d6feeeb\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:22:24.914113 containerd[1523]: time="2025-02-13T20:22:24.914068440Z" level=error msg="encountered an error cleaning up failed sandbox \"d500c08499b34bb9523b313ff7cc26fb3c87fd92efc4db58167552a68d6feeeb\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:22:24.916094 containerd[1523]: time="2025-02-13T20:22:24.914271949Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-spcjx,Uid:7bcc9fed-b4b7-4b0f-9dc0-2e0587f01618,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"d500c08499b34bb9523b313ff7cc26fb3c87fd92efc4db58167552a68d6feeeb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:22:24.915689 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-d500c08499b34bb9523b313ff7cc26fb3c87fd92efc4db58167552a68d6feeeb-shm.mount: Deactivated successfully. 
Feb 13 20:22:24.917045 kubelet[1947]: E0213 20:22:24.916728 1947 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d500c08499b34bb9523b313ff7cc26fb3c87fd92efc4db58167552a68d6feeeb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:22:24.917152 kubelet[1947]: E0213 20:22:24.917109 1947 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d500c08499b34bb9523b313ff7cc26fb3c87fd92efc4db58167552a68d6feeeb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-spcjx" Feb 13 20:22:24.917217 kubelet[1947]: E0213 20:22:24.917159 1947 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d500c08499b34bb9523b313ff7cc26fb3c87fd92efc4db58167552a68d6feeeb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-spcjx" Feb 13 20:22:24.917709 kubelet[1947]: E0213 20:22:24.917281 1947 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-spcjx_calico-system(7bcc9fed-b4b7-4b0f-9dc0-2e0587f01618)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-spcjx_calico-system(7bcc9fed-b4b7-4b0f-9dc0-2e0587f01618)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d500c08499b34bb9523b313ff7cc26fb3c87fd92efc4db58167552a68d6feeeb\\\": plugin type=\\\"calico\\\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-spcjx" podUID="7bcc9fed-b4b7-4b0f-9dc0-2e0587f01618" Feb 13 20:22:25.711070 kubelet[1947]: E0213 20:22:25.710978 1947 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:22:25.879670 kubelet[1947]: I0213 20:22:25.879601 1947 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d500c08499b34bb9523b313ff7cc26fb3c87fd92efc4db58167552a68d6feeeb" Feb 13 20:22:25.883610 containerd[1523]: time="2025-02-13T20:22:25.880677416Z" level=info msg="StopPodSandbox for \"d500c08499b34bb9523b313ff7cc26fb3c87fd92efc4db58167552a68d6feeeb\"" Feb 13 20:22:25.883610 containerd[1523]: time="2025-02-13T20:22:25.881120448Z" level=info msg="Ensure that sandbox d500c08499b34bb9523b313ff7cc26fb3c87fd92efc4db58167552a68d6feeeb in task-service has been cleanup successfully" Feb 13 20:22:25.884412 containerd[1523]: time="2025-02-13T20:22:25.884272226Z" level=info msg="TearDown network for sandbox \"d500c08499b34bb9523b313ff7cc26fb3c87fd92efc4db58167552a68d6feeeb\" successfully" Feb 13 20:22:25.884648 containerd[1523]: time="2025-02-13T20:22:25.884516367Z" level=info msg="StopPodSandbox for \"d500c08499b34bb9523b313ff7cc26fb3c87fd92efc4db58167552a68d6feeeb\" returns successfully" Feb 13 20:22:25.885430 systemd[1]: run-netns-cni\x2d604c862b\x2d56b1\x2dca08\x2d7b15\x2d1941aa9fa6ea.mount: Deactivated successfully. 
Feb 13 20:22:25.888248 containerd[1523]: time="2025-02-13T20:22:25.887837827Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-spcjx,Uid:7bcc9fed-b4b7-4b0f-9dc0-2e0587f01618,Namespace:calico-system,Attempt:1,}" Feb 13 20:22:26.008206 containerd[1523]: time="2025-02-13T20:22:26.008134021Z" level=error msg="Failed to destroy network for sandbox \"08cd905086cade080606a39b76e4d3e8cd195419a2e46a03fcad12097c2925a7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:22:26.010214 containerd[1523]: time="2025-02-13T20:22:26.008889327Z" level=error msg="encountered an error cleaning up failed sandbox \"08cd905086cade080606a39b76e4d3e8cd195419a2e46a03fcad12097c2925a7\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:22:26.010214 containerd[1523]: time="2025-02-13T20:22:26.008983226Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-spcjx,Uid:7bcc9fed-b4b7-4b0f-9dc0-2e0587f01618,Namespace:calico-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"08cd905086cade080606a39b76e4d3e8cd195419a2e46a03fcad12097c2925a7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:22:26.011563 kubelet[1947]: E0213 20:22:26.010727 1947 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"08cd905086cade080606a39b76e4d3e8cd195419a2e46a03fcad12097c2925a7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:22:26.011563 kubelet[1947]: E0213 20:22:26.010825 1947 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"08cd905086cade080606a39b76e4d3e8cd195419a2e46a03fcad12097c2925a7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-spcjx" Feb 13 20:22:26.011563 kubelet[1947]: E0213 20:22:26.010862 1947 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"08cd905086cade080606a39b76e4d3e8cd195419a2e46a03fcad12097c2925a7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-spcjx" Feb 13 20:22:26.011952 kubelet[1947]: E0213 20:22:26.010918 1947 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-spcjx_calico-system(7bcc9fed-b4b7-4b0f-9dc0-2e0587f01618)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-spcjx_calico-system(7bcc9fed-b4b7-4b0f-9dc0-2e0587f01618)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"08cd905086cade080606a39b76e4d3e8cd195419a2e46a03fcad12097c2925a7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-spcjx" podUID="7bcc9fed-b4b7-4b0f-9dc0-2e0587f01618" Feb 13 20:22:26.012990 systemd[1]: 
run-containerd-io.containerd.grpc.v1.cri-sandboxes-08cd905086cade080606a39b76e4d3e8cd195419a2e46a03fcad12097c2925a7-shm.mount: Deactivated successfully. Feb 13 20:22:26.711708 kubelet[1947]: E0213 20:22:26.711455 1947 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:22:26.887654 kubelet[1947]: I0213 20:22:26.885831 1947 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="08cd905086cade080606a39b76e4d3e8cd195419a2e46a03fcad12097c2925a7" Feb 13 20:22:26.888826 containerd[1523]: time="2025-02-13T20:22:26.887235026Z" level=info msg="StopPodSandbox for \"08cd905086cade080606a39b76e4d3e8cd195419a2e46a03fcad12097c2925a7\"" Feb 13 20:22:26.888826 containerd[1523]: time="2025-02-13T20:22:26.887662400Z" level=info msg="Ensure that sandbox 08cd905086cade080606a39b76e4d3e8cd195419a2e46a03fcad12097c2925a7 in task-service has been cleanup successfully" Feb 13 20:22:26.888826 containerd[1523]: time="2025-02-13T20:22:26.888632004Z" level=info msg="TearDown network for sandbox \"08cd905086cade080606a39b76e4d3e8cd195419a2e46a03fcad12097c2925a7\" successfully" Feb 13 20:22:26.888826 containerd[1523]: time="2025-02-13T20:22:26.888679119Z" level=info msg="StopPodSandbox for \"08cd905086cade080606a39b76e4d3e8cd195419a2e46a03fcad12097c2925a7\" returns successfully" Feb 13 20:22:26.892182 containerd[1523]: time="2025-02-13T20:22:26.890668334Z" level=info msg="StopPodSandbox for \"d500c08499b34bb9523b313ff7cc26fb3c87fd92efc4db58167552a68d6feeeb\"" Feb 13 20:22:26.892182 containerd[1523]: time="2025-02-13T20:22:26.890788251Z" level=info msg="TearDown network for sandbox \"d500c08499b34bb9523b313ff7cc26fb3c87fd92efc4db58167552a68d6feeeb\" successfully" Feb 13 20:22:26.892182 containerd[1523]: time="2025-02-13T20:22:26.890808794Z" level=info msg="StopPodSandbox for \"d500c08499b34bb9523b313ff7cc26fb3c87fd92efc4db58167552a68d6feeeb\" returns successfully" Feb 13 20:22:26.892088 
systemd[1]: run-netns-cni\x2d770bb405\x2df56f\x2d84af\x2d3374\x2d5e45c35b6b44.mount: Deactivated successfully. Feb 13 20:22:26.893408 containerd[1523]: time="2025-02-13T20:22:26.892780434Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-spcjx,Uid:7bcc9fed-b4b7-4b0f-9dc0-2e0587f01618,Namespace:calico-system,Attempt:2,}" Feb 13 20:22:27.004042 containerd[1523]: time="2025-02-13T20:22:27.003972534Z" level=error msg="Failed to destroy network for sandbox \"04213f4804c0b80dbef662296c36a2cd9161452ca505185295b7cff5ddbb8c13\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:22:27.007947 containerd[1523]: time="2025-02-13T20:22:27.005358536Z" level=error msg="encountered an error cleaning up failed sandbox \"04213f4804c0b80dbef662296c36a2cd9161452ca505185295b7cff5ddbb8c13\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:22:27.007947 containerd[1523]: time="2025-02-13T20:22:27.005468671Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-spcjx,Uid:7bcc9fed-b4b7-4b0f-9dc0-2e0587f01618,Namespace:calico-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"04213f4804c0b80dbef662296c36a2cd9161452ca505185295b7cff5ddbb8c13\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:22:27.007168 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-04213f4804c0b80dbef662296c36a2cd9161452ca505185295b7cff5ddbb8c13-shm.mount: Deactivated successfully. 
Feb 13 20:22:27.008862 kubelet[1947]: E0213 20:22:27.008473 1947 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"04213f4804c0b80dbef662296c36a2cd9161452ca505185295b7cff5ddbb8c13\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:22:27.009262 kubelet[1947]: E0213 20:22:27.009078 1947 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"04213f4804c0b80dbef662296c36a2cd9161452ca505185295b7cff5ddbb8c13\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-spcjx" Feb 13 20:22:27.009560 kubelet[1947]: E0213 20:22:27.009518 1947 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"04213f4804c0b80dbef662296c36a2cd9161452ca505185295b7cff5ddbb8c13\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-spcjx" Feb 13 20:22:27.010236 kubelet[1947]: E0213 20:22:27.009826 1947 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-spcjx_calico-system(7bcc9fed-b4b7-4b0f-9dc0-2e0587f01618)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-spcjx_calico-system(7bcc9fed-b4b7-4b0f-9dc0-2e0587f01618)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"04213f4804c0b80dbef662296c36a2cd9161452ca505185295b7cff5ddbb8c13\\\": plugin type=\\\"calico\\\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-spcjx" podUID="7bcc9fed-b4b7-4b0f-9dc0-2e0587f01618" Feb 13 20:22:27.066047 systemd[1]: Created slice kubepods-besteffort-podaeec0dc4_c55c_4364_849e_c8dfcadc1aa1.slice - libcontainer container kubepods-besteffort-podaeec0dc4_c55c_4364_849e_c8dfcadc1aa1.slice. Feb 13 20:22:27.135768 kubelet[1947]: I0213 20:22:27.135672 1947 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j6gd4\" (UniqueName: \"kubernetes.io/projected/aeec0dc4-c55c-4364-849e-c8dfcadc1aa1-kube-api-access-j6gd4\") pod \"nginx-deployment-7fcdb87857-rq72x\" (UID: \"aeec0dc4-c55c-4364-849e-c8dfcadc1aa1\") " pod="default/nginx-deployment-7fcdb87857-rq72x" Feb 13 20:22:27.371534 containerd[1523]: time="2025-02-13T20:22:27.371252182Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-rq72x,Uid:aeec0dc4-c55c-4364-849e-c8dfcadc1aa1,Namespace:default,Attempt:0,}" Feb 13 20:22:27.461466 containerd[1523]: time="2025-02-13T20:22:27.461380630Z" level=error msg="Failed to destroy network for sandbox \"3be3d8f70bb8a4944db627b34b0e3907b1317dd2e182b513a51eff96e17e8fe4\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:22:27.462930 containerd[1523]: time="2025-02-13T20:22:27.462318810Z" level=error msg="encountered an error cleaning up failed sandbox \"3be3d8f70bb8a4944db627b34b0e3907b1317dd2e182b513a51eff96e17e8fe4\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:22:27.462930 containerd[1523]: 
time="2025-02-13T20:22:27.462433022Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-rq72x,Uid:aeec0dc4-c55c-4364-849e-c8dfcadc1aa1,Namespace:default,Attempt:0,} failed, error" error="failed to setup network for sandbox \"3be3d8f70bb8a4944db627b34b0e3907b1317dd2e182b513a51eff96e17e8fe4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:22:27.463174 kubelet[1947]: E0213 20:22:27.462990 1947 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3be3d8f70bb8a4944db627b34b0e3907b1317dd2e182b513a51eff96e17e8fe4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:22:27.463174 kubelet[1947]: E0213 20:22:27.463109 1947 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3be3d8f70bb8a4944db627b34b0e3907b1317dd2e182b513a51eff96e17e8fe4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-7fcdb87857-rq72x" Feb 13 20:22:27.463174 kubelet[1947]: E0213 20:22:27.463155 1947 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3be3d8f70bb8a4944db627b34b0e3907b1317dd2e182b513a51eff96e17e8fe4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-7fcdb87857-rq72x" Feb 13 20:22:27.463347 kubelet[1947]: E0213 
20:22:27.463241 1947 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-7fcdb87857-rq72x_default(aeec0dc4-c55c-4364-849e-c8dfcadc1aa1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-7fcdb87857-rq72x_default(aeec0dc4-c55c-4364-849e-c8dfcadc1aa1)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3be3d8f70bb8a4944db627b34b0e3907b1317dd2e182b513a51eff96e17e8fe4\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-7fcdb87857-rq72x" podUID="aeec0dc4-c55c-4364-849e-c8dfcadc1aa1" Feb 13 20:22:27.680903 kubelet[1947]: E0213 20:22:27.680189 1947 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:22:27.711951 kubelet[1947]: E0213 20:22:27.711871 1947 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:22:27.895643 kubelet[1947]: I0213 20:22:27.894556 1947 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="04213f4804c0b80dbef662296c36a2cd9161452ca505185295b7cff5ddbb8c13" Feb 13 20:22:27.896007 containerd[1523]: time="2025-02-13T20:22:27.895945594Z" level=info msg="StopPodSandbox for \"04213f4804c0b80dbef662296c36a2cd9161452ca505185295b7cff5ddbb8c13\"" Feb 13 20:22:27.896677 containerd[1523]: time="2025-02-13T20:22:27.896309654Z" level=info msg="Ensure that sandbox 04213f4804c0b80dbef662296c36a2cd9161452ca505185295b7cff5ddbb8c13 in task-service has been cleanup successfully" Feb 13 20:22:27.899844 kubelet[1947]: I0213 20:22:27.899392 1947 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3be3d8f70bb8a4944db627b34b0e3907b1317dd2e182b513a51eff96e17e8fe4" Feb 13 20:22:27.899962 
containerd[1523]: time="2025-02-13T20:22:27.899552055Z" level=info msg="TearDown network for sandbox \"04213f4804c0b80dbef662296c36a2cd9161452ca505185295b7cff5ddbb8c13\" successfully" Feb 13 20:22:27.899962 containerd[1523]: time="2025-02-13T20:22:27.899602217Z" level=info msg="StopPodSandbox for \"04213f4804c0b80dbef662296c36a2cd9161452ca505185295b7cff5ddbb8c13\" returns successfully" Feb 13 20:22:27.900300 systemd[1]: run-netns-cni\x2d046d1b96\x2d6a8f\x2dd017\x2dae1a\x2de28566f56906.mount: Deactivated successfully. Feb 13 20:22:27.901860 containerd[1523]: time="2025-02-13T20:22:27.901419287Z" level=info msg="StopPodSandbox for \"3be3d8f70bb8a4944db627b34b0e3907b1317dd2e182b513a51eff96e17e8fe4\"" Feb 13 20:22:27.901860 containerd[1523]: time="2025-02-13T20:22:27.901693710Z" level=info msg="Ensure that sandbox 3be3d8f70bb8a4944db627b34b0e3907b1317dd2e182b513a51eff96e17e8fe4 in task-service has been cleanup successfully" Feb 13 20:22:27.902882 containerd[1523]: time="2025-02-13T20:22:27.902231971Z" level=info msg="StopPodSandbox for \"08cd905086cade080606a39b76e4d3e8cd195419a2e46a03fcad12097c2925a7\"" Feb 13 20:22:27.902882 containerd[1523]: time="2025-02-13T20:22:27.902333123Z" level=info msg="TearDown network for sandbox \"08cd905086cade080606a39b76e4d3e8cd195419a2e46a03fcad12097c2925a7\" successfully" Feb 13 20:22:27.902882 containerd[1523]: time="2025-02-13T20:22:27.902352530Z" level=info msg="StopPodSandbox for \"08cd905086cade080606a39b76e4d3e8cd195419a2e46a03fcad12097c2925a7\" returns successfully" Feb 13 20:22:27.903036 containerd[1523]: time="2025-02-13T20:22:27.902969526Z" level=info msg="StopPodSandbox for \"d500c08499b34bb9523b313ff7cc26fb3c87fd92efc4db58167552a68d6feeeb\"" Feb 13 20:22:27.904523 containerd[1523]: time="2025-02-13T20:22:27.903081268Z" level=info msg="TearDown network for sandbox \"d500c08499b34bb9523b313ff7cc26fb3c87fd92efc4db58167552a68d6feeeb\" successfully" Feb 13 20:22:27.904523 containerd[1523]: time="2025-02-13T20:22:27.903108091Z" 
level=info msg="StopPodSandbox for \"d500c08499b34bb9523b313ff7cc26fb3c87fd92efc4db58167552a68d6feeeb\" returns successfully" Feb 13 20:22:27.904523 containerd[1523]: time="2025-02-13T20:22:27.903582295Z" level=info msg="TearDown network for sandbox \"3be3d8f70bb8a4944db627b34b0e3907b1317dd2e182b513a51eff96e17e8fe4\" successfully" Feb 13 20:22:27.904523 containerd[1523]: time="2025-02-13T20:22:27.903603683Z" level=info msg="StopPodSandbox for \"3be3d8f70bb8a4944db627b34b0e3907b1317dd2e182b513a51eff96e17e8fe4\" returns successfully" Feb 13 20:22:27.904523 containerd[1523]: time="2025-02-13T20:22:27.904092482Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-spcjx,Uid:7bcc9fed-b4b7-4b0f-9dc0-2e0587f01618,Namespace:calico-system,Attempt:3,}" Feb 13 20:22:27.906571 systemd[1]: run-netns-cni\x2d15bdb49d\x2d2120\x2d6d1e\x2d4360\x2da514659438c6.mount: Deactivated successfully. Feb 13 20:22:27.907337 containerd[1523]: time="2025-02-13T20:22:27.906937799Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-rq72x,Uid:aeec0dc4-c55c-4364-849e-c8dfcadc1aa1,Namespace:default,Attempt:1,}" Feb 13 20:22:28.324451 containerd[1523]: time="2025-02-13T20:22:28.324337445Z" level=error msg="Failed to destroy network for sandbox \"45a19fc16cf13d7e5c6b2ced71d847b5d9d935fc975aae735b9c26b075b1f2c9\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:22:28.325548 containerd[1523]: time="2025-02-13T20:22:28.324940487Z" level=error msg="encountered an error cleaning up failed sandbox \"45a19fc16cf13d7e5c6b2ced71d847b5d9d935fc975aae735b9c26b075b1f2c9\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 
20:22:28.325548 containerd[1523]: time="2025-02-13T20:22:28.325039613Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-spcjx,Uid:7bcc9fed-b4b7-4b0f-9dc0-2e0587f01618,Namespace:calico-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"45a19fc16cf13d7e5c6b2ced71d847b5d9d935fc975aae735b9c26b075b1f2c9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:22:28.326265 kubelet[1947]: E0213 20:22:28.325461 1947 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"45a19fc16cf13d7e5c6b2ced71d847b5d9d935fc975aae735b9c26b075b1f2c9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:22:28.326265 kubelet[1947]: E0213 20:22:28.326034 1947 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"45a19fc16cf13d7e5c6b2ced71d847b5d9d935fc975aae735b9c26b075b1f2c9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-spcjx" Feb 13 20:22:28.326265 kubelet[1947]: E0213 20:22:28.326077 1947 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"45a19fc16cf13d7e5c6b2ced71d847b5d9d935fc975aae735b9c26b075b1f2c9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-spcjx" Feb 13 20:22:28.326549 kubelet[1947]: 
E0213 20:22:28.326156 1947 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-spcjx_calico-system(7bcc9fed-b4b7-4b0f-9dc0-2e0587f01618)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-spcjx_calico-system(7bcc9fed-b4b7-4b0f-9dc0-2e0587f01618)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"45a19fc16cf13d7e5c6b2ced71d847b5d9d935fc975aae735b9c26b075b1f2c9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-spcjx" podUID="7bcc9fed-b4b7-4b0f-9dc0-2e0587f01618" Feb 13 20:22:28.331730 containerd[1523]: time="2025-02-13T20:22:28.331352227Z" level=error msg="Failed to destroy network for sandbox \"51e9615480ba3997a5813ddf5d7ed0ce5949007afdbcf35b99a53fb5c03f46a7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:22:28.332173 containerd[1523]: time="2025-02-13T20:22:28.331993818Z" level=error msg="encountered an error cleaning up failed sandbox \"51e9615480ba3997a5813ddf5d7ed0ce5949007afdbcf35b99a53fb5c03f46a7\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:22:28.332173 containerd[1523]: time="2025-02-13T20:22:28.332086444Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-rq72x,Uid:aeec0dc4-c55c-4364-849e-c8dfcadc1aa1,Namespace:default,Attempt:1,} failed, error" error="failed to setup network for sandbox \"51e9615480ba3997a5813ddf5d7ed0ce5949007afdbcf35b99a53fb5c03f46a7\": plugin type=\"calico\" failed (add): 
stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:22:28.332451 kubelet[1947]: E0213 20:22:28.332378 1947 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"51e9615480ba3997a5813ddf5d7ed0ce5949007afdbcf35b99a53fb5c03f46a7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:22:28.332905 kubelet[1947]: E0213 20:22:28.332458 1947 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"51e9615480ba3997a5813ddf5d7ed0ce5949007afdbcf35b99a53fb5c03f46a7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-7fcdb87857-rq72x" Feb 13 20:22:28.332905 kubelet[1947]: E0213 20:22:28.332657 1947 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"51e9615480ba3997a5813ddf5d7ed0ce5949007afdbcf35b99a53fb5c03f46a7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-7fcdb87857-rq72x" Feb 13 20:22:28.332905 kubelet[1947]: E0213 20:22:28.332742 1947 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-7fcdb87857-rq72x_default(aeec0dc4-c55c-4364-849e-c8dfcadc1aa1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-7fcdb87857-rq72x_default(aeec0dc4-c55c-4364-849e-c8dfcadc1aa1)\\\": rpc error: code = Unknown desc = 
failed to setup network for sandbox \\\"51e9615480ba3997a5813ddf5d7ed0ce5949007afdbcf35b99a53fb5c03f46a7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-7fcdb87857-rq72x" podUID="aeec0dc4-c55c-4364-849e-c8dfcadc1aa1" Feb 13 20:22:28.713877 kubelet[1947]: E0213 20:22:28.712771 1947 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:22:28.892758 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-45a19fc16cf13d7e5c6b2ced71d847b5d9d935fc975aae735b9c26b075b1f2c9-shm.mount: Deactivated successfully. Feb 13 20:22:28.907474 kubelet[1947]: I0213 20:22:28.907193 1947 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="45a19fc16cf13d7e5c6b2ced71d847b5d9d935fc975aae735b9c26b075b1f2c9" Feb 13 20:22:28.910005 kubelet[1947]: I0213 20:22:28.909974 1947 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="51e9615480ba3997a5813ddf5d7ed0ce5949007afdbcf35b99a53fb5c03f46a7" Feb 13 20:22:28.910302 containerd[1523]: time="2025-02-13T20:22:28.910107496Z" level=info msg="StopPodSandbox for \"45a19fc16cf13d7e5c6b2ced71d847b5d9d935fc975aae735b9c26b075b1f2c9\"" Feb 13 20:22:28.912333 containerd[1523]: time="2025-02-13T20:22:28.912302445Z" level=info msg="Ensure that sandbox 45a19fc16cf13d7e5c6b2ced71d847b5d9d935fc975aae735b9c26b075b1f2c9 in task-service has been cleanup successfully" Feb 13 20:22:28.917751 containerd[1523]: time="2025-02-13T20:22:28.917292819Z" level=info msg="StopPodSandbox for \"51e9615480ba3997a5813ddf5d7ed0ce5949007afdbcf35b99a53fb5c03f46a7\"" Feb 13 20:22:28.917751 containerd[1523]: time="2025-02-13T20:22:28.917542630Z" level=info msg="Ensure that sandbox 51e9615480ba3997a5813ddf5d7ed0ce5949007afdbcf35b99a53fb5c03f46a7 in task-service has been cleanup 
successfully" Feb 13 20:22:28.920069 systemd[1]: run-netns-cni\x2dd540e24f\x2ddc7f\x2da312\x2d2d83\x2dd8a28cb27e73.mount: Deactivated successfully. Feb 13 20:22:28.921323 containerd[1523]: time="2025-02-13T20:22:28.921281125Z" level=info msg="TearDown network for sandbox \"45a19fc16cf13d7e5c6b2ced71d847b5d9d935fc975aae735b9c26b075b1f2c9\" successfully" Feb 13 20:22:28.921410 containerd[1523]: time="2025-02-13T20:22:28.921319820Z" level=info msg="StopPodSandbox for \"45a19fc16cf13d7e5c6b2ced71d847b5d9d935fc975aae735b9c26b075b1f2c9\" returns successfully" Feb 13 20:22:28.922553 containerd[1523]: time="2025-02-13T20:22:28.922522781Z" level=info msg="TearDown network for sandbox \"51e9615480ba3997a5813ddf5d7ed0ce5949007afdbcf35b99a53fb5c03f46a7\" successfully" Feb 13 20:22:28.922928 containerd[1523]: time="2025-02-13T20:22:28.922810339Z" level=info msg="StopPodSandbox for \"51e9615480ba3997a5813ddf5d7ed0ce5949007afdbcf35b99a53fb5c03f46a7\" returns successfully" Feb 13 20:22:28.924517 containerd[1523]: time="2025-02-13T20:22:28.924058041Z" level=info msg="StopPodSandbox for \"3be3d8f70bb8a4944db627b34b0e3907b1317dd2e182b513a51eff96e17e8fe4\"" Feb 13 20:22:28.924517 containerd[1523]: time="2025-02-13T20:22:28.924180575Z" level=info msg="TearDown network for sandbox \"3be3d8f70bb8a4944db627b34b0e3907b1317dd2e182b513a51eff96e17e8fe4\" successfully" Feb 13 20:22:28.924517 containerd[1523]: time="2025-02-13T20:22:28.924202722Z" level=info msg="StopPodSandbox for \"3be3d8f70bb8a4944db627b34b0e3907b1317dd2e182b513a51eff96e17e8fe4\" returns successfully" Feb 13 20:22:28.924517 containerd[1523]: time="2025-02-13T20:22:28.924271354Z" level=info msg="StopPodSandbox for \"04213f4804c0b80dbef662296c36a2cd9161452ca505185295b7cff5ddbb8c13\"" Feb 13 20:22:28.924517 containerd[1523]: time="2025-02-13T20:22:28.924381893Z" level=info msg="TearDown network for sandbox \"04213f4804c0b80dbef662296c36a2cd9161452ca505185295b7cff5ddbb8c13\" successfully" Feb 13 20:22:28.924517 containerd[1523]: 
time="2025-02-13T20:22:28.924400710Z" level=info msg="StopPodSandbox for \"04213f4804c0b80dbef662296c36a2cd9161452ca505185295b7cff5ddbb8c13\" returns successfully" Feb 13 20:22:28.926155 systemd[1]: run-netns-cni\x2d555967b7\x2d0a2e\x2d4a72\x2d0ab5\x2d27bbe827f8c7.mount: Deactivated successfully. Feb 13 20:22:28.927320 containerd[1523]: time="2025-02-13T20:22:28.926464062Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-rq72x,Uid:aeec0dc4-c55c-4364-849e-c8dfcadc1aa1,Namespace:default,Attempt:2,}" Feb 13 20:22:28.935955 containerd[1523]: time="2025-02-13T20:22:28.935908390Z" level=info msg="StopPodSandbox for \"08cd905086cade080606a39b76e4d3e8cd195419a2e46a03fcad12097c2925a7\"" Feb 13 20:22:28.937803 containerd[1523]: time="2025-02-13T20:22:28.936046714Z" level=info msg="TearDown network for sandbox \"08cd905086cade080606a39b76e4d3e8cd195419a2e46a03fcad12097c2925a7\" successfully" Feb 13 20:22:28.937803 containerd[1523]: time="2025-02-13T20:22:28.936066826Z" level=info msg="StopPodSandbox for \"08cd905086cade080606a39b76e4d3e8cd195419a2e46a03fcad12097c2925a7\" returns successfully" Feb 13 20:22:28.937803 containerd[1523]: time="2025-02-13T20:22:28.936826572Z" level=info msg="StopPodSandbox for \"d500c08499b34bb9523b313ff7cc26fb3c87fd92efc4db58167552a68d6feeeb\"" Feb 13 20:22:28.937803 containerd[1523]: time="2025-02-13T20:22:28.936956362Z" level=info msg="TearDown network for sandbox \"d500c08499b34bb9523b313ff7cc26fb3c87fd92efc4db58167552a68d6feeeb\" successfully" Feb 13 20:22:28.937803 containerd[1523]: time="2025-02-13T20:22:28.936980166Z" level=info msg="StopPodSandbox for \"d500c08499b34bb9523b313ff7cc26fb3c87fd92efc4db58167552a68d6feeeb\" returns successfully" Feb 13 20:22:28.943878 containerd[1523]: time="2025-02-13T20:22:28.943818007Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-spcjx,Uid:7bcc9fed-b4b7-4b0f-9dc0-2e0587f01618,Namespace:calico-system,Attempt:4,}" Feb 13 20:22:29.095719 
containerd[1523]: time="2025-02-13T20:22:29.095647854Z" level=error msg="Failed to destroy network for sandbox \"3e29726058133f8aa6ffe75c90401b5adf759d960911b847817980277205afb2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:22:29.096991 containerd[1523]: time="2025-02-13T20:22:29.096944088Z" level=error msg="encountered an error cleaning up failed sandbox \"3e29726058133f8aa6ffe75c90401b5adf759d960911b847817980277205afb2\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:22:29.097683 containerd[1523]: time="2025-02-13T20:22:29.097637124Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-rq72x,Uid:aeec0dc4-c55c-4364-849e-c8dfcadc1aa1,Namespace:default,Attempt:2,} failed, error" error="failed to setup network for sandbox \"3e29726058133f8aa6ffe75c90401b5adf759d960911b847817980277205afb2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:22:29.098762 kubelet[1947]: E0213 20:22:29.098080 1947 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3e29726058133f8aa6ffe75c90401b5adf759d960911b847817980277205afb2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:22:29.098762 kubelet[1947]: E0213 20:22:29.098189 1947 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for 
sandbox \"3e29726058133f8aa6ffe75c90401b5adf759d960911b847817980277205afb2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-7fcdb87857-rq72x" Feb 13 20:22:29.098762 kubelet[1947]: E0213 20:22:29.098228 1947 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3e29726058133f8aa6ffe75c90401b5adf759d960911b847817980277205afb2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-7fcdb87857-rq72x" Feb 13 20:22:29.099009 kubelet[1947]: E0213 20:22:29.098296 1947 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-7fcdb87857-rq72x_default(aeec0dc4-c55c-4364-849e-c8dfcadc1aa1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-7fcdb87857-rq72x_default(aeec0dc4-c55c-4364-849e-c8dfcadc1aa1)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3e29726058133f8aa6ffe75c90401b5adf759d960911b847817980277205afb2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-7fcdb87857-rq72x" podUID="aeec0dc4-c55c-4364-849e-c8dfcadc1aa1" Feb 13 20:22:29.137647 containerd[1523]: time="2025-02-13T20:22:29.137558849Z" level=error msg="Failed to destroy network for sandbox \"d29ac56b21941840b70ed278b8bc31d6ae73a7106ac02fb2924e6970fba607c6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" Feb 13 20:22:29.138379 containerd[1523]: time="2025-02-13T20:22:29.138343946Z" level=error msg="encountered an error cleaning up failed sandbox \"d29ac56b21941840b70ed278b8bc31d6ae73a7106ac02fb2924e6970fba607c6\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:22:29.138679 containerd[1523]: time="2025-02-13T20:22:29.138558086Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-spcjx,Uid:7bcc9fed-b4b7-4b0f-9dc0-2e0587f01618,Namespace:calico-system,Attempt:4,} failed, error" error="failed to setup network for sandbox \"d29ac56b21941840b70ed278b8bc31d6ae73a7106ac02fb2924e6970fba607c6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:22:29.139175 kubelet[1947]: E0213 20:22:29.139116 1947 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d29ac56b21941840b70ed278b8bc31d6ae73a7106ac02fb2924e6970fba607c6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:22:29.139473 kubelet[1947]: E0213 20:22:29.139408 1947 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d29ac56b21941840b70ed278b8bc31d6ae73a7106ac02fb2924e6970fba607c6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-spcjx" Feb 13 20:22:29.141276 kubelet[1947]: E0213 20:22:29.140440 1947 
kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d29ac56b21941840b70ed278b8bc31d6ae73a7106ac02fb2924e6970fba607c6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-spcjx" Feb 13 20:22:29.141276 kubelet[1947]: E0213 20:22:29.140664 1947 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-spcjx_calico-system(7bcc9fed-b4b7-4b0f-9dc0-2e0587f01618)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-spcjx_calico-system(7bcc9fed-b4b7-4b0f-9dc0-2e0587f01618)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d29ac56b21941840b70ed278b8bc31d6ae73a7106ac02fb2924e6970fba607c6\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-spcjx" podUID="7bcc9fed-b4b7-4b0f-9dc0-2e0587f01618" Feb 13 20:22:29.713752 kubelet[1947]: E0213 20:22:29.713638 1947 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:22:29.892327 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-3e29726058133f8aa6ffe75c90401b5adf759d960911b847817980277205afb2-shm.mount: Deactivated successfully. 
Feb 13 20:22:29.923322 kubelet[1947]: I0213 20:22:29.923051 1947 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d29ac56b21941840b70ed278b8bc31d6ae73a7106ac02fb2924e6970fba607c6" Feb 13 20:22:29.925380 containerd[1523]: time="2025-02-13T20:22:29.924904609Z" level=info msg="StopPodSandbox for \"d29ac56b21941840b70ed278b8bc31d6ae73a7106ac02fb2924e6970fba607c6\"" Feb 13 20:22:29.926467 containerd[1523]: time="2025-02-13T20:22:29.925843025Z" level=info msg="Ensure that sandbox d29ac56b21941840b70ed278b8bc31d6ae73a7106ac02fb2924e6970fba607c6 in task-service has been cleanup successfully" Feb 13 20:22:29.929008 containerd[1523]: time="2025-02-13T20:22:29.928957082Z" level=info msg="TearDown network for sandbox \"d29ac56b21941840b70ed278b8bc31d6ae73a7106ac02fb2924e6970fba607c6\" successfully" Feb 13 20:22:29.929008 containerd[1523]: time="2025-02-13T20:22:29.929005864Z" level=info msg="StopPodSandbox for \"d29ac56b21941840b70ed278b8bc31d6ae73a7106ac02fb2924e6970fba607c6\" returns successfully" Feb 13 20:22:29.930185 containerd[1523]: time="2025-02-13T20:22:29.929864390Z" level=info msg="StopPodSandbox for \"45a19fc16cf13d7e5c6b2ced71d847b5d9d935fc975aae735b9c26b075b1f2c9\"" Feb 13 20:22:29.930471 systemd[1]: run-netns-cni\x2d84656589\x2d7bea\x2de188\x2d609e\x2df31a0a0b9656.mount: Deactivated successfully. 
Feb 13 20:22:29.931197 containerd[1523]: time="2025-02-13T20:22:29.930740176Z" level=info msg="TearDown network for sandbox \"45a19fc16cf13d7e5c6b2ced71d847b5d9d935fc975aae735b9c26b075b1f2c9\" successfully" Feb 13 20:22:29.931197 containerd[1523]: time="2025-02-13T20:22:29.930761636Z" level=info msg="StopPodSandbox for \"45a19fc16cf13d7e5c6b2ced71d847b5d9d935fc975aae735b9c26b075b1f2c9\" returns successfully" Feb 13 20:22:29.933802 containerd[1523]: time="2025-02-13T20:22:29.931917276Z" level=info msg="StopPodSandbox for \"04213f4804c0b80dbef662296c36a2cd9161452ca505185295b7cff5ddbb8c13\"" Feb 13 20:22:29.933802 containerd[1523]: time="2025-02-13T20:22:29.932071097Z" level=info msg="TearDown network for sandbox \"04213f4804c0b80dbef662296c36a2cd9161452ca505185295b7cff5ddbb8c13\" successfully" Feb 13 20:22:29.933802 containerd[1523]: time="2025-02-13T20:22:29.932129027Z" level=info msg="StopPodSandbox for \"04213f4804c0b80dbef662296c36a2cd9161452ca505185295b7cff5ddbb8c13\" returns successfully" Feb 13 20:22:29.934003 containerd[1523]: time="2025-02-13T20:22:29.933824396Z" level=info msg="StopPodSandbox for \"08cd905086cade080606a39b76e4d3e8cd195419a2e46a03fcad12097c2925a7\"" Feb 13 20:22:29.934003 containerd[1523]: time="2025-02-13T20:22:29.933925387Z" level=info msg="TearDown network for sandbox \"08cd905086cade080606a39b76e4d3e8cd195419a2e46a03fcad12097c2925a7\" successfully" Feb 13 20:22:29.934003 containerd[1523]: time="2025-02-13T20:22:29.933995950Z" level=info msg="StopPodSandbox for \"08cd905086cade080606a39b76e4d3e8cd195419a2e46a03fcad12097c2925a7\" returns successfully" Feb 13 20:22:29.936026 containerd[1523]: time="2025-02-13T20:22:29.935984693Z" level=info msg="StopPodSandbox for \"d500c08499b34bb9523b313ff7cc26fb3c87fd92efc4db58167552a68d6feeeb\"" Feb 13 20:22:29.936128 containerd[1523]: time="2025-02-13T20:22:29.936112736Z" level=info msg="TearDown network for sandbox \"d500c08499b34bb9523b313ff7cc26fb3c87fd92efc4db58167552a68d6feeeb\" successfully" Feb 
13 20:22:29.936182 containerd[1523]: time="2025-02-13T20:22:29.936135563Z" level=info msg="StopPodSandbox for \"d500c08499b34bb9523b313ff7cc26fb3c87fd92efc4db58167552a68d6feeeb\" returns successfully" Feb 13 20:22:29.938604 kubelet[1947]: I0213 20:22:29.938328 1947 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3e29726058133f8aa6ffe75c90401b5adf759d960911b847817980277205afb2" Feb 13 20:22:29.938722 containerd[1523]: time="2025-02-13T20:22:29.938460150Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-spcjx,Uid:7bcc9fed-b4b7-4b0f-9dc0-2e0587f01618,Namespace:calico-system,Attempt:5,}" Feb 13 20:22:29.941265 containerd[1523]: time="2025-02-13T20:22:29.940785467Z" level=info msg="StopPodSandbox for \"3e29726058133f8aa6ffe75c90401b5adf759d960911b847817980277205afb2\"" Feb 13 20:22:29.941265 containerd[1523]: time="2025-02-13T20:22:29.941062466Z" level=info msg="Ensure that sandbox 3e29726058133f8aa6ffe75c90401b5adf759d960911b847817980277205afb2 in task-service has been cleanup successfully" Feb 13 20:22:29.943029 containerd[1523]: time="2025-02-13T20:22:29.942905503Z" level=info msg="TearDown network for sandbox \"3e29726058133f8aa6ffe75c90401b5adf759d960911b847817980277205afb2\" successfully" Feb 13 20:22:29.943029 containerd[1523]: time="2025-02-13T20:22:29.942954522Z" level=info msg="StopPodSandbox for \"3e29726058133f8aa6ffe75c90401b5adf759d960911b847817980277205afb2\" returns successfully" Feb 13 20:22:29.946052 systemd[1]: run-netns-cni\x2d8dbbc0ae\x2d5546\x2dc603\x2df95e\x2d145abd902d63.mount: Deactivated successfully. 
Feb 13 20:22:29.948418 containerd[1523]: time="2025-02-13T20:22:29.946347770Z" level=info msg="StopPodSandbox for \"51e9615480ba3997a5813ddf5d7ed0ce5949007afdbcf35b99a53fb5c03f46a7\"" Feb 13 20:22:29.948418 containerd[1523]: time="2025-02-13T20:22:29.946524219Z" level=info msg="TearDown network for sandbox \"51e9615480ba3997a5813ddf5d7ed0ce5949007afdbcf35b99a53fb5c03f46a7\" successfully" Feb 13 20:22:29.948418 containerd[1523]: time="2025-02-13T20:22:29.946615377Z" level=info msg="StopPodSandbox for \"51e9615480ba3997a5813ddf5d7ed0ce5949007afdbcf35b99a53fb5c03f46a7\" returns successfully" Feb 13 20:22:29.950744 containerd[1523]: time="2025-02-13T20:22:29.950712303Z" level=info msg="StopPodSandbox for \"3be3d8f70bb8a4944db627b34b0e3907b1317dd2e182b513a51eff96e17e8fe4\"" Feb 13 20:22:29.950875 containerd[1523]: time="2025-02-13T20:22:29.950849794Z" level=info msg="TearDown network for sandbox \"3be3d8f70bb8a4944db627b34b0e3907b1317dd2e182b513a51eff96e17e8fe4\" successfully" Feb 13 20:22:29.950955 containerd[1523]: time="2025-02-13T20:22:29.950877284Z" level=info msg="StopPodSandbox for \"3be3d8f70bb8a4944db627b34b0e3907b1317dd2e182b513a51eff96e17e8fe4\" returns successfully" Feb 13 20:22:29.952536 containerd[1523]: time="2025-02-13T20:22:29.952504048Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-rq72x,Uid:aeec0dc4-c55c-4364-849e-c8dfcadc1aa1,Namespace:default,Attempt:3,}" Feb 13 20:22:30.114323 containerd[1523]: time="2025-02-13T20:22:30.113904558Z" level=error msg="Failed to destroy network for sandbox \"f44ab2bf4a1364ad62de858f8303064c58068f98cfae29f32725d97dfba85a84\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:22:30.115615 containerd[1523]: time="2025-02-13T20:22:30.115405504Z" level=error msg="encountered an error cleaning up failed sandbox 
\"f44ab2bf4a1364ad62de858f8303064c58068f98cfae29f32725d97dfba85a84\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:22:30.117340 containerd[1523]: time="2025-02-13T20:22:30.117298526Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-spcjx,Uid:7bcc9fed-b4b7-4b0f-9dc0-2e0587f01618,Namespace:calico-system,Attempt:5,} failed, error" error="failed to setup network for sandbox \"f44ab2bf4a1364ad62de858f8303064c58068f98cfae29f32725d97dfba85a84\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:22:30.118584 kubelet[1947]: E0213 20:22:30.117805 1947 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f44ab2bf4a1364ad62de858f8303064c58068f98cfae29f32725d97dfba85a84\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:22:30.118584 kubelet[1947]: E0213 20:22:30.117943 1947 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f44ab2bf4a1364ad62de858f8303064c58068f98cfae29f32725d97dfba85a84\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-spcjx" Feb 13 20:22:30.118584 kubelet[1947]: E0213 20:22:30.117982 1947 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"f44ab2bf4a1364ad62de858f8303064c58068f98cfae29f32725d97dfba85a84\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-spcjx" Feb 13 20:22:30.118819 kubelet[1947]: E0213 20:22:30.118082 1947 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-spcjx_calico-system(7bcc9fed-b4b7-4b0f-9dc0-2e0587f01618)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-spcjx_calico-system(7bcc9fed-b4b7-4b0f-9dc0-2e0587f01618)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f44ab2bf4a1364ad62de858f8303064c58068f98cfae29f32725d97dfba85a84\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-spcjx" podUID="7bcc9fed-b4b7-4b0f-9dc0-2e0587f01618" Feb 13 20:22:30.156663 containerd[1523]: time="2025-02-13T20:22:30.156388103Z" level=error msg="Failed to destroy network for sandbox \"a4da0121fee5a142aa83efddc5c5b071d610647f390c8f6ca41a280e45b26224\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:22:30.158901 containerd[1523]: time="2025-02-13T20:22:30.158666027Z" level=error msg="encountered an error cleaning up failed sandbox \"a4da0121fee5a142aa83efddc5c5b071d610647f390c8f6ca41a280e45b26224\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:22:30.158901 containerd[1523]: time="2025-02-13T20:22:30.158756926Z" 
level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-rq72x,Uid:aeec0dc4-c55c-4364-849e-c8dfcadc1aa1,Namespace:default,Attempt:3,} failed, error" error="failed to setup network for sandbox \"a4da0121fee5a142aa83efddc5c5b071d610647f390c8f6ca41a280e45b26224\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:22:30.160186 kubelet[1947]: E0213 20:22:30.159700 1947 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a4da0121fee5a142aa83efddc5c5b071d610647f390c8f6ca41a280e45b26224\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:22:30.160186 kubelet[1947]: E0213 20:22:30.159811 1947 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a4da0121fee5a142aa83efddc5c5b071d610647f390c8f6ca41a280e45b26224\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-7fcdb87857-rq72x" Feb 13 20:22:30.160186 kubelet[1947]: E0213 20:22:30.159852 1947 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a4da0121fee5a142aa83efddc5c5b071d610647f390c8f6ca41a280e45b26224\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-7fcdb87857-rq72x" Feb 13 20:22:30.161762 kubelet[1947]: E0213 20:22:30.160445 1947 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-7fcdb87857-rq72x_default(aeec0dc4-c55c-4364-849e-c8dfcadc1aa1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-7fcdb87857-rq72x_default(aeec0dc4-c55c-4364-849e-c8dfcadc1aa1)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a4da0121fee5a142aa83efddc5c5b071d610647f390c8f6ca41a280e45b26224\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-7fcdb87857-rq72x" podUID="aeec0dc4-c55c-4364-849e-c8dfcadc1aa1" Feb 13 20:22:30.714670 kubelet[1947]: E0213 20:22:30.714339 1947 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:22:30.893865 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-f44ab2bf4a1364ad62de858f8303064c58068f98cfae29f32725d97dfba85a84-shm.mount: Deactivated successfully. 
Feb 13 20:22:30.947696 kubelet[1947]: I0213 20:22:30.947643 1947 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f44ab2bf4a1364ad62de858f8303064c58068f98cfae29f32725d97dfba85a84" Feb 13 20:22:30.949944 containerd[1523]: time="2025-02-13T20:22:30.949410667Z" level=info msg="StopPodSandbox for \"f44ab2bf4a1364ad62de858f8303064c58068f98cfae29f32725d97dfba85a84\"" Feb 13 20:22:30.949944 containerd[1523]: time="2025-02-13T20:22:30.949766005Z" level=info msg="Ensure that sandbox f44ab2bf4a1364ad62de858f8303064c58068f98cfae29f32725d97dfba85a84 in task-service has been cleanup successfully" Feb 13 20:22:30.950667 containerd[1523]: time="2025-02-13T20:22:30.950637400Z" level=info msg="TearDown network for sandbox \"f44ab2bf4a1364ad62de858f8303064c58068f98cfae29f32725d97dfba85a84\" successfully" Feb 13 20:22:30.952288 containerd[1523]: time="2025-02-13T20:22:30.950768093Z" level=info msg="StopPodSandbox for \"f44ab2bf4a1364ad62de858f8303064c58068f98cfae29f32725d97dfba85a84\" returns successfully" Feb 13 20:22:30.953180 containerd[1523]: time="2025-02-13T20:22:30.953149051Z" level=info msg="StopPodSandbox for \"d29ac56b21941840b70ed278b8bc31d6ae73a7106ac02fb2924e6970fba607c6\"" Feb 13 20:22:30.954157 containerd[1523]: time="2025-02-13T20:22:30.953362735Z" level=info msg="TearDown network for sandbox \"d29ac56b21941840b70ed278b8bc31d6ae73a7106ac02fb2924e6970fba607c6\" successfully" Feb 13 20:22:30.954076 systemd[1]: run-netns-cni\x2d54540896\x2d80ec\x2d3ca2\x2d11ce\x2d882171ae331c.mount: Deactivated successfully. 
Feb 13 20:22:30.954827 containerd[1523]: time="2025-02-13T20:22:30.954718958Z" level=info msg="StopPodSandbox for \"d29ac56b21941840b70ed278b8bc31d6ae73a7106ac02fb2924e6970fba607c6\" returns successfully" Feb 13 20:22:30.955450 containerd[1523]: time="2025-02-13T20:22:30.955105583Z" level=info msg="StopPodSandbox for \"45a19fc16cf13d7e5c6b2ced71d847b5d9d935fc975aae735b9c26b075b1f2c9\"" Feb 13 20:22:30.955450 containerd[1523]: time="2025-02-13T20:22:30.955247332Z" level=info msg="TearDown network for sandbox \"45a19fc16cf13d7e5c6b2ced71d847b5d9d935fc975aae735b9c26b075b1f2c9\" successfully" Feb 13 20:22:30.955450 containerd[1523]: time="2025-02-13T20:22:30.955267695Z" level=info msg="StopPodSandbox for \"45a19fc16cf13d7e5c6b2ced71d847b5d9d935fc975aae735b9c26b075b1f2c9\" returns successfully" Feb 13 20:22:30.957567 containerd[1523]: time="2025-02-13T20:22:30.957400463Z" level=info msg="StopPodSandbox for \"04213f4804c0b80dbef662296c36a2cd9161452ca505185295b7cff5ddbb8c13\"" Feb 13 20:22:30.957849 containerd[1523]: time="2025-02-13T20:22:30.957795542Z" level=info msg="TearDown network for sandbox \"04213f4804c0b80dbef662296c36a2cd9161452ca505185295b7cff5ddbb8c13\" successfully" Feb 13 20:22:30.959237 containerd[1523]: time="2025-02-13T20:22:30.957949586Z" level=info msg="StopPodSandbox for \"04213f4804c0b80dbef662296c36a2cd9161452ca505185295b7cff5ddbb8c13\" returns successfully" Feb 13 20:22:30.959237 containerd[1523]: time="2025-02-13T20:22:30.958697198Z" level=info msg="StopPodSandbox for \"08cd905086cade080606a39b76e4d3e8cd195419a2e46a03fcad12097c2925a7\"" Feb 13 20:22:30.959237 containerd[1523]: time="2025-02-13T20:22:30.958806403Z" level=info msg="TearDown network for sandbox \"08cd905086cade080606a39b76e4d3e8cd195419a2e46a03fcad12097c2925a7\" successfully" Feb 13 20:22:30.959237 containerd[1523]: time="2025-02-13T20:22:30.958826599Z" level=info msg="StopPodSandbox for \"08cd905086cade080606a39b76e4d3e8cd195419a2e46a03fcad12097c2925a7\" returns successfully" Feb 13 
20:22:30.959445 containerd[1523]: time="2025-02-13T20:22:30.959403713Z" level=info msg="StopPodSandbox for \"d500c08499b34bb9523b313ff7cc26fb3c87fd92efc4db58167552a68d6feeeb\"" Feb 13 20:22:30.959563 containerd[1523]: time="2025-02-13T20:22:30.959538515Z" level=info msg="TearDown network for sandbox \"d500c08499b34bb9523b313ff7cc26fb3c87fd92efc4db58167552a68d6feeeb\" successfully" Feb 13 20:22:30.959692 containerd[1523]: time="2025-02-13T20:22:30.959566689Z" level=info msg="StopPodSandbox for \"d500c08499b34bb9523b313ff7cc26fb3c87fd92efc4db58167552a68d6feeeb\" returns successfully" Feb 13 20:22:30.961403 containerd[1523]: time="2025-02-13T20:22:30.961358996Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-spcjx,Uid:7bcc9fed-b4b7-4b0f-9dc0-2e0587f01618,Namespace:calico-system,Attempt:6,}" Feb 13 20:22:30.965165 kubelet[1947]: I0213 20:22:30.965014 1947 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a4da0121fee5a142aa83efddc5c5b071d610647f390c8f6ca41a280e45b26224" Feb 13 20:22:30.971657 containerd[1523]: time="2025-02-13T20:22:30.971592479Z" level=info msg="StopPodSandbox for \"a4da0121fee5a142aa83efddc5c5b071d610647f390c8f6ca41a280e45b26224\"" Feb 13 20:22:30.974752 containerd[1523]: time="2025-02-13T20:22:30.971867422Z" level=info msg="Ensure that sandbox a4da0121fee5a142aa83efddc5c5b071d610647f390c8f6ca41a280e45b26224 in task-service has been cleanup successfully" Feb 13 20:22:30.974752 containerd[1523]: time="2025-02-13T20:22:30.972089221Z" level=info msg="TearDown network for sandbox \"a4da0121fee5a142aa83efddc5c5b071d610647f390c8f6ca41a280e45b26224\" successfully" Feb 13 20:22:30.974752 containerd[1523]: time="2025-02-13T20:22:30.972110392Z" level=info msg="StopPodSandbox for \"a4da0121fee5a142aa83efddc5c5b071d610647f390c8f6ca41a280e45b26224\" returns successfully" Feb 13 20:22:30.974752 containerd[1523]: time="2025-02-13T20:22:30.974397822Z" level=info msg="StopPodSandbox for 
\"3e29726058133f8aa6ffe75c90401b5adf759d960911b847817980277205afb2\"" Feb 13 20:22:30.974752 containerd[1523]: time="2025-02-13T20:22:30.974514111Z" level=info msg="TearDown network for sandbox \"3e29726058133f8aa6ffe75c90401b5adf759d960911b847817980277205afb2\" successfully" Feb 13 20:22:30.974752 containerd[1523]: time="2025-02-13T20:22:30.974534866Z" level=info msg="StopPodSandbox for \"3e29726058133f8aa6ffe75c90401b5adf759d960911b847817980277205afb2\" returns successfully" Feb 13 20:22:30.975107 systemd[1]: run-netns-cni\x2ddb23c4a1\x2da283\x2d5152\x2d5d98\x2d2f45c2d8c506.mount: Deactivated successfully. Feb 13 20:22:30.977062 containerd[1523]: time="2025-02-13T20:22:30.976508581Z" level=info msg="StopPodSandbox for \"51e9615480ba3997a5813ddf5d7ed0ce5949007afdbcf35b99a53fb5c03f46a7\"" Feb 13 20:22:30.977062 containerd[1523]: time="2025-02-13T20:22:30.976626184Z" level=info msg="TearDown network for sandbox \"51e9615480ba3997a5813ddf5d7ed0ce5949007afdbcf35b99a53fb5c03f46a7\" successfully" Feb 13 20:22:30.977062 containerd[1523]: time="2025-02-13T20:22:30.976646207Z" level=info msg="StopPodSandbox for \"51e9615480ba3997a5813ddf5d7ed0ce5949007afdbcf35b99a53fb5c03f46a7\" returns successfully" Feb 13 20:22:30.982510 containerd[1523]: time="2025-02-13T20:22:30.979646260Z" level=info msg="StopPodSandbox for \"3be3d8f70bb8a4944db627b34b0e3907b1317dd2e182b513a51eff96e17e8fe4\"" Feb 13 20:22:30.982510 containerd[1523]: time="2025-02-13T20:22:30.979755362Z" level=info msg="TearDown network for sandbox \"3be3d8f70bb8a4944db627b34b0e3907b1317dd2e182b513a51eff96e17e8fe4\" successfully" Feb 13 20:22:30.982510 containerd[1523]: time="2025-02-13T20:22:30.979776984Z" level=info msg="StopPodSandbox for \"3be3d8f70bb8a4944db627b34b0e3907b1317dd2e182b513a51eff96e17e8fe4\" returns successfully" Feb 13 20:22:30.982510 containerd[1523]: time="2025-02-13T20:22:30.981523719Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-rq72x,Uid:aeec0dc4-c55c-4364-849e-c8dfcadc1aa1,Namespace:default,Attempt:4,}" Feb 13 20:22:31.197264 containerd[1523]: time="2025-02-13T20:22:31.197130870Z" level=error msg="Failed to destroy network for sandbox \"ff23240127f5d7b31d327d03ea94725ef042acc5773aea3c5e6c9ad086d0ecc4\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:22:31.201211 containerd[1523]: time="2025-02-13T20:22:31.201102704Z" level=error msg="encountered an error cleaning up failed sandbox \"ff23240127f5d7b31d327d03ea94725ef042acc5773aea3c5e6c9ad086d0ecc4\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:22:31.201630 containerd[1523]: time="2025-02-13T20:22:31.201581320Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-rq72x,Uid:aeec0dc4-c55c-4364-849e-c8dfcadc1aa1,Namespace:default,Attempt:4,} failed, error" error="failed to setup network for sandbox \"ff23240127f5d7b31d327d03ea94725ef042acc5773aea3c5e6c9ad086d0ecc4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:22:31.202818 kubelet[1947]: E0213 20:22:31.202746 1947 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ff23240127f5d7b31d327d03ea94725ef042acc5773aea3c5e6c9ad086d0ecc4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:22:31.202913 kubelet[1947]: E0213 
20:22:31.202852 1947 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ff23240127f5d7b31d327d03ea94725ef042acc5773aea3c5e6c9ad086d0ecc4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-7fcdb87857-rq72x" Feb 13 20:22:31.202913 kubelet[1947]: E0213 20:22:31.202890 1947 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ff23240127f5d7b31d327d03ea94725ef042acc5773aea3c5e6c9ad086d0ecc4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-7fcdb87857-rq72x" Feb 13 20:22:31.203197 kubelet[1947]: E0213 20:22:31.202966 1947 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-7fcdb87857-rq72x_default(aeec0dc4-c55c-4364-849e-c8dfcadc1aa1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-7fcdb87857-rq72x_default(aeec0dc4-c55c-4364-849e-c8dfcadc1aa1)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ff23240127f5d7b31d327d03ea94725ef042acc5773aea3c5e6c9ad086d0ecc4\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-7fcdb87857-rq72x" podUID="aeec0dc4-c55c-4364-849e-c8dfcadc1aa1" Feb 13 20:22:31.216346 containerd[1523]: time="2025-02-13T20:22:31.216089465Z" level=error msg="Failed to destroy network for sandbox \"7a27b8002290a7e84e5fb99f2b924226eb2269ad954b5e3d86e6cc35ebb66637\"" error="plugin type=\"calico\" 
failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:22:31.217144 containerd[1523]: time="2025-02-13T20:22:31.216882194Z" level=error msg="encountered an error cleaning up failed sandbox \"7a27b8002290a7e84e5fb99f2b924226eb2269ad954b5e3d86e6cc35ebb66637\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:22:31.217144 containerd[1523]: time="2025-02-13T20:22:31.217001881Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-spcjx,Uid:7bcc9fed-b4b7-4b0f-9dc0-2e0587f01618,Namespace:calico-system,Attempt:6,} failed, error" error="failed to setup network for sandbox \"7a27b8002290a7e84e5fb99f2b924226eb2269ad954b5e3d86e6cc35ebb66637\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:22:31.217630 kubelet[1947]: E0213 20:22:31.217473 1947 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7a27b8002290a7e84e5fb99f2b924226eb2269ad954b5e3d86e6cc35ebb66637\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:22:31.217630 kubelet[1947]: E0213 20:22:31.217602 1947 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7a27b8002290a7e84e5fb99f2b924226eb2269ad954b5e3d86e6cc35ebb66637\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running 
and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-spcjx" Feb 13 20:22:31.218219 kubelet[1947]: E0213 20:22:31.217637 1947 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7a27b8002290a7e84e5fb99f2b924226eb2269ad954b5e3d86e6cc35ebb66637\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-spcjx" Feb 13 20:22:31.218219 kubelet[1947]: E0213 20:22:31.217713 1947 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-spcjx_calico-system(7bcc9fed-b4b7-4b0f-9dc0-2e0587f01618)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-spcjx_calico-system(7bcc9fed-b4b7-4b0f-9dc0-2e0587f01618)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7a27b8002290a7e84e5fb99f2b924226eb2269ad954b5e3d86e6cc35ebb66637\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-spcjx" podUID="7bcc9fed-b4b7-4b0f-9dc0-2e0587f01618" Feb 13 20:22:31.462411 systemd[1]: Started sshd@10-10.230.12.214:22-137.184.188.240:35926.service - OpenSSH per-connection server daemon (137.184.188.240:35926). Feb 13 20:22:31.714995 kubelet[1947]: E0213 20:22:31.714923 1947 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:22:31.893055 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-ff23240127f5d7b31d327d03ea94725ef042acc5773aea3c5e6c9ad086d0ecc4-shm.mount: Deactivated successfully. 
Feb 13 20:22:31.893526 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-7a27b8002290a7e84e5fb99f2b924226eb2269ad954b5e3d86e6cc35ebb66637-shm.mount: Deactivated successfully. Feb 13 20:22:31.972650 kubelet[1947]: I0213 20:22:31.972329 1947 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ff23240127f5d7b31d327d03ea94725ef042acc5773aea3c5e6c9ad086d0ecc4" Feb 13 20:22:31.976234 containerd[1523]: time="2025-02-13T20:22:31.975097128Z" level=info msg="StopPodSandbox for \"ff23240127f5d7b31d327d03ea94725ef042acc5773aea3c5e6c9ad086d0ecc4\"" Feb 13 20:22:31.976234 containerd[1523]: time="2025-02-13T20:22:31.976034082Z" level=info msg="Ensure that sandbox ff23240127f5d7b31d327d03ea94725ef042acc5773aea3c5e6c9ad086d0ecc4 in task-service has been cleanup successfully" Feb 13 20:22:31.978064 containerd[1523]: time="2025-02-13T20:22:31.978024021Z" level=info msg="TearDown network for sandbox \"ff23240127f5d7b31d327d03ea94725ef042acc5773aea3c5e6c9ad086d0ecc4\" successfully" Feb 13 20:22:31.980186 containerd[1523]: time="2025-02-13T20:22:31.979546304Z" level=info msg="StopPodSandbox for \"ff23240127f5d7b31d327d03ea94725ef042acc5773aea3c5e6c9ad086d0ecc4\" returns successfully" Feb 13 20:22:31.979920 systemd[1]: run-netns-cni\x2db4a96a4a\x2dadcc\x2d3b42\x2d2800\x2d365aa772f224.mount: Deactivated successfully. 
Feb 13 20:22:31.984137 containerd[1523]: time="2025-02-13T20:22:31.983709633Z" level=info msg="StopPodSandbox for \"a4da0121fee5a142aa83efddc5c5b071d610647f390c8f6ca41a280e45b26224\"" Feb 13 20:22:31.984137 containerd[1523]: time="2025-02-13T20:22:31.983819485Z" level=info msg="TearDown network for sandbox \"a4da0121fee5a142aa83efddc5c5b071d610647f390c8f6ca41a280e45b26224\" successfully" Feb 13 20:22:31.984137 containerd[1523]: time="2025-02-13T20:22:31.983850866Z" level=info msg="StopPodSandbox for \"a4da0121fee5a142aa83efddc5c5b071d610647f390c8f6ca41a280e45b26224\" returns successfully" Feb 13 20:22:31.985233 containerd[1523]: time="2025-02-13T20:22:31.984992170Z" level=info msg="StopPodSandbox for \"3e29726058133f8aa6ffe75c90401b5adf759d960911b847817980277205afb2\"" Feb 13 20:22:31.985233 containerd[1523]: time="2025-02-13T20:22:31.985123696Z" level=info msg="TearDown network for sandbox \"3e29726058133f8aa6ffe75c90401b5adf759d960911b847817980277205afb2\" successfully" Feb 13 20:22:31.985233 containerd[1523]: time="2025-02-13T20:22:31.985161032Z" level=info msg="StopPodSandbox for \"3e29726058133f8aa6ffe75c90401b5adf759d960911b847817980277205afb2\" returns successfully" Feb 13 20:22:31.986257 containerd[1523]: time="2025-02-13T20:22:31.986211948Z" level=info msg="StopPodSandbox for \"51e9615480ba3997a5813ddf5d7ed0ce5949007afdbcf35b99a53fb5c03f46a7\"" Feb 13 20:22:31.986421 containerd[1523]: time="2025-02-13T20:22:31.986339764Z" level=info msg="TearDown network for sandbox \"51e9615480ba3997a5813ddf5d7ed0ce5949007afdbcf35b99a53fb5c03f46a7\" successfully" Feb 13 20:22:31.986421 containerd[1523]: time="2025-02-13T20:22:31.986366421Z" level=info msg="StopPodSandbox for \"51e9615480ba3997a5813ddf5d7ed0ce5949007afdbcf35b99a53fb5c03f46a7\" returns successfully" Feb 13 20:22:31.987113 containerd[1523]: time="2025-02-13T20:22:31.986937553Z" level=info msg="StopPodSandbox for \"3be3d8f70bb8a4944db627b34b0e3907b1317dd2e182b513a51eff96e17e8fe4\"" Feb 13 20:22:31.987113 
containerd[1523]: time="2025-02-13T20:22:31.987066762Z" level=info msg="TearDown network for sandbox \"3be3d8f70bb8a4944db627b34b0e3907b1317dd2e182b513a51eff96e17e8fe4\" successfully" Feb 13 20:22:31.987113 containerd[1523]: time="2025-02-13T20:22:31.987088285Z" level=info msg="StopPodSandbox for \"3be3d8f70bb8a4944db627b34b0e3907b1317dd2e182b513a51eff96e17e8fe4\" returns successfully" Feb 13 20:22:31.988337 containerd[1523]: time="2025-02-13T20:22:31.987818178Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-rq72x,Uid:aeec0dc4-c55c-4364-849e-c8dfcadc1aa1,Namespace:default,Attempt:5,}" Feb 13 20:22:31.999102 kubelet[1947]: I0213 20:22:31.999067 1947 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7a27b8002290a7e84e5fb99f2b924226eb2269ad954b5e3d86e6cc35ebb66637" Feb 13 20:22:32.003114 containerd[1523]: time="2025-02-13T20:22:32.003030031Z" level=info msg="StopPodSandbox for \"7a27b8002290a7e84e5fb99f2b924226eb2269ad954b5e3d86e6cc35ebb66637\"" Feb 13 20:22:32.003423 containerd[1523]: time="2025-02-13T20:22:32.003381318Z" level=info msg="Ensure that sandbox 7a27b8002290a7e84e5fb99f2b924226eb2269ad954b5e3d86e6cc35ebb66637 in task-service has been cleanup successfully" Feb 13 20:22:32.005531 containerd[1523]: time="2025-02-13T20:22:32.003762788Z" level=info msg="TearDown network for sandbox \"7a27b8002290a7e84e5fb99f2b924226eb2269ad954b5e3d86e6cc35ebb66637\" successfully" Feb 13 20:22:32.005531 containerd[1523]: time="2025-02-13T20:22:32.003790638Z" level=info msg="StopPodSandbox for \"7a27b8002290a7e84e5fb99f2b924226eb2269ad954b5e3d86e6cc35ebb66637\" returns successfully" Feb 13 20:22:32.006813 systemd[1]: run-netns-cni\x2db915b8dd\x2d243f\x2d0dc2\x2d0074\x2d3a70eaf9bece.mount: Deactivated successfully. 
Feb 13 20:22:32.007414 containerd[1523]: time="2025-02-13T20:22:32.007374808Z" level=info msg="StopPodSandbox for \"f44ab2bf4a1364ad62de858f8303064c58068f98cfae29f32725d97dfba85a84\"" Feb 13 20:22:32.008172 containerd[1523]: time="2025-02-13T20:22:32.007527639Z" level=info msg="TearDown network for sandbox \"f44ab2bf4a1364ad62de858f8303064c58068f98cfae29f32725d97dfba85a84\" successfully" Feb 13 20:22:32.008172 containerd[1523]: time="2025-02-13T20:22:32.007798036Z" level=info msg="StopPodSandbox for \"f44ab2bf4a1364ad62de858f8303064c58068f98cfae29f32725d97dfba85a84\" returns successfully" Feb 13 20:22:32.008323 containerd[1523]: time="2025-02-13T20:22:32.008168081Z" level=info msg="StopPodSandbox for \"d29ac56b21941840b70ed278b8bc31d6ae73a7106ac02fb2924e6970fba607c6\"" Feb 13 20:22:32.008323 containerd[1523]: time="2025-02-13T20:22:32.008280068Z" level=info msg="TearDown network for sandbox \"d29ac56b21941840b70ed278b8bc31d6ae73a7106ac02fb2924e6970fba607c6\" successfully" Feb 13 20:22:32.008323 containerd[1523]: time="2025-02-13T20:22:32.008298919Z" level=info msg="StopPodSandbox for \"d29ac56b21941840b70ed278b8bc31d6ae73a7106ac02fb2924e6970fba607c6\" returns successfully" Feb 13 20:22:32.010296 containerd[1523]: time="2025-02-13T20:22:32.008935721Z" level=info msg="StopPodSandbox for \"45a19fc16cf13d7e5c6b2ced71d847b5d9d935fc975aae735b9c26b075b1f2c9\"" Feb 13 20:22:32.010296 containerd[1523]: time="2025-02-13T20:22:32.009049110Z" level=info msg="TearDown network for sandbox \"45a19fc16cf13d7e5c6b2ced71d847b5d9d935fc975aae735b9c26b075b1f2c9\" successfully" Feb 13 20:22:32.010296 containerd[1523]: time="2025-02-13T20:22:32.009070964Z" level=info msg="StopPodSandbox for \"45a19fc16cf13d7e5c6b2ced71d847b5d9d935fc975aae735b9c26b075b1f2c9\" returns successfully" Feb 13 20:22:32.010296 containerd[1523]: time="2025-02-13T20:22:32.009876056Z" level=info msg="StopPodSandbox for \"04213f4804c0b80dbef662296c36a2cd9161452ca505185295b7cff5ddbb8c13\"" Feb 13 20:22:32.010848 
containerd[1523]: time="2025-02-13T20:22:32.010706282Z" level=info msg="TearDown network for sandbox \"04213f4804c0b80dbef662296c36a2cd9161452ca505185295b7cff5ddbb8c13\" successfully" Feb 13 20:22:32.010848 containerd[1523]: time="2025-02-13T20:22:32.010728610Z" level=info msg="StopPodSandbox for \"04213f4804c0b80dbef662296c36a2cd9161452ca505185295b7cff5ddbb8c13\" returns successfully" Feb 13 20:22:32.011449 containerd[1523]: time="2025-02-13T20:22:32.011410885Z" level=info msg="StopPodSandbox for \"08cd905086cade080606a39b76e4d3e8cd195419a2e46a03fcad12097c2925a7\"" Feb 13 20:22:32.011648 containerd[1523]: time="2025-02-13T20:22:32.011617066Z" level=info msg="TearDown network for sandbox \"08cd905086cade080606a39b76e4d3e8cd195419a2e46a03fcad12097c2925a7\" successfully" Feb 13 20:22:32.011835 containerd[1523]: time="2025-02-13T20:22:32.011649148Z" level=info msg="StopPodSandbox for \"08cd905086cade080606a39b76e4d3e8cd195419a2e46a03fcad12097c2925a7\" returns successfully" Feb 13 20:22:32.012288 containerd[1523]: time="2025-02-13T20:22:32.012244193Z" level=info msg="StopPodSandbox for \"d500c08499b34bb9523b313ff7cc26fb3c87fd92efc4db58167552a68d6feeeb\"" Feb 13 20:22:32.012384 containerd[1523]: time="2025-02-13T20:22:32.012363442Z" level=info msg="TearDown network for sandbox \"d500c08499b34bb9523b313ff7cc26fb3c87fd92efc4db58167552a68d6feeeb\" successfully" Feb 13 20:22:32.012617 containerd[1523]: time="2025-02-13T20:22:32.012383329Z" level=info msg="StopPodSandbox for \"d500c08499b34bb9523b313ff7cc26fb3c87fd92efc4db58167552a68d6feeeb\" returns successfully" Feb 13 20:22:32.014519 containerd[1523]: time="2025-02-13T20:22:32.013806002Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-spcjx,Uid:7bcc9fed-b4b7-4b0f-9dc0-2e0587f01618,Namespace:calico-system,Attempt:7,}" Feb 13 20:22:32.243576 containerd[1523]: time="2025-02-13T20:22:32.243338324Z" level=error msg="Failed to destroy network for sandbox 
\"eae7b820293b2c19210e91b0208764938ce9fef81a97f0e6dbb8829a9e453bef\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:22:32.245209 containerd[1523]: time="2025-02-13T20:22:32.245170250Z" level=error msg="encountered an error cleaning up failed sandbox \"eae7b820293b2c19210e91b0208764938ce9fef81a97f0e6dbb8829a9e453bef\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:22:32.245419 containerd[1523]: time="2025-02-13T20:22:32.245265620Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-rq72x,Uid:aeec0dc4-c55c-4364-849e-c8dfcadc1aa1,Namespace:default,Attempt:5,} failed, error" error="failed to setup network for sandbox \"eae7b820293b2c19210e91b0208764938ce9fef81a97f0e6dbb8829a9e453bef\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:22:32.246441 kubelet[1947]: E0213 20:22:32.245695 1947 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"eae7b820293b2c19210e91b0208764938ce9fef81a97f0e6dbb8829a9e453bef\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:22:32.246441 kubelet[1947]: E0213 20:22:32.245825 1947 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"eae7b820293b2c19210e91b0208764938ce9fef81a97f0e6dbb8829a9e453bef\": plugin type=\"calico\" failed (add): 
stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-7fcdb87857-rq72x" Feb 13 20:22:32.246441 kubelet[1947]: E0213 20:22:32.245876 1947 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"eae7b820293b2c19210e91b0208764938ce9fef81a97f0e6dbb8829a9e453bef\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-7fcdb87857-rq72x" Feb 13 20:22:32.246678 kubelet[1947]: E0213 20:22:32.245961 1947 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-7fcdb87857-rq72x_default(aeec0dc4-c55c-4364-849e-c8dfcadc1aa1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-7fcdb87857-rq72x_default(aeec0dc4-c55c-4364-849e-c8dfcadc1aa1)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"eae7b820293b2c19210e91b0208764938ce9fef81a97f0e6dbb8829a9e453bef\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-7fcdb87857-rq72x" podUID="aeec0dc4-c55c-4364-849e-c8dfcadc1aa1" Feb 13 20:22:32.262353 containerd[1523]: time="2025-02-13T20:22:32.262254238Z" level=error msg="Failed to destroy network for sandbox \"f8110cd13dfca8eabefce5d7e0c386bde821567802eded0be7a5116666d1d02f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:22:32.263112 containerd[1523]: time="2025-02-13T20:22:32.262716240Z" level=error msg="encountered an 
error cleaning up failed sandbox \"f8110cd13dfca8eabefce5d7e0c386bde821567802eded0be7a5116666d1d02f\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:22:32.263112 containerd[1523]: time="2025-02-13T20:22:32.262789444Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-spcjx,Uid:7bcc9fed-b4b7-4b0f-9dc0-2e0587f01618,Namespace:calico-system,Attempt:7,} failed, error" error="failed to setup network for sandbox \"f8110cd13dfca8eabefce5d7e0c386bde821567802eded0be7a5116666d1d02f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:22:32.263277 kubelet[1947]: E0213 20:22:32.263173 1947 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f8110cd13dfca8eabefce5d7e0c386bde821567802eded0be7a5116666d1d02f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:22:32.263277 kubelet[1947]: E0213 20:22:32.263247 1947 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f8110cd13dfca8eabefce5d7e0c386bde821567802eded0be7a5116666d1d02f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-spcjx" Feb 13 20:22:32.263373 kubelet[1947]: E0213 20:22:32.263278 1947 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for 
sandbox \"f8110cd13dfca8eabefce5d7e0c386bde821567802eded0be7a5116666d1d02f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-spcjx" Feb 13 20:22:32.263373 kubelet[1947]: E0213 20:22:32.263328 1947 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-spcjx_calico-system(7bcc9fed-b4b7-4b0f-9dc0-2e0587f01618)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-spcjx_calico-system(7bcc9fed-b4b7-4b0f-9dc0-2e0587f01618)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f8110cd13dfca8eabefce5d7e0c386bde821567802eded0be7a5116666d1d02f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-spcjx" podUID="7bcc9fed-b4b7-4b0f-9dc0-2e0587f01618" Feb 13 20:22:32.313787 sshd[2895]: Invalid user trung from 137.184.188.240 port 35926 Feb 13 20:22:32.468602 sshd[2895]: Received disconnect from 137.184.188.240 port 35926:11: Bye Bye [preauth] Feb 13 20:22:32.468602 sshd[2895]: Disconnected from invalid user trung 137.184.188.240 port 35926 [preauth] Feb 13 20:22:32.470896 systemd[1]: sshd@10-10.230.12.214:22-137.184.188.240:35926.service: Deactivated successfully. Feb 13 20:22:32.715682 kubelet[1947]: E0213 20:22:32.715609 1947 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:22:32.893392 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-eae7b820293b2c19210e91b0208764938ce9fef81a97f0e6dbb8829a9e453bef-shm.mount: Deactivated successfully. 
Feb 13 20:22:33.006395 kubelet[1947]: I0213 20:22:33.006347 1947 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="eae7b820293b2c19210e91b0208764938ce9fef81a97f0e6dbb8829a9e453bef" Feb 13 20:22:33.008754 containerd[1523]: time="2025-02-13T20:22:33.008708859Z" level=info msg="StopPodSandbox for \"eae7b820293b2c19210e91b0208764938ce9fef81a97f0e6dbb8829a9e453bef\"" Feb 13 20:22:33.012747 containerd[1523]: time="2025-02-13T20:22:33.009842851Z" level=info msg="Ensure that sandbox eae7b820293b2c19210e91b0208764938ce9fef81a97f0e6dbb8829a9e453bef in task-service has been cleanup successfully" Feb 13 20:22:33.012747 containerd[1523]: time="2025-02-13T20:22:33.012594021Z" level=info msg="TearDown network for sandbox \"eae7b820293b2c19210e91b0208764938ce9fef81a97f0e6dbb8829a9e453bef\" successfully" Feb 13 20:22:33.012747 containerd[1523]: time="2025-02-13T20:22:33.012618469Z" level=info msg="StopPodSandbox for \"eae7b820293b2c19210e91b0208764938ce9fef81a97f0e6dbb8829a9e453bef\" returns successfully" Feb 13 20:22:33.013392 containerd[1523]: time="2025-02-13T20:22:33.013085247Z" level=info msg="StopPodSandbox for \"ff23240127f5d7b31d327d03ea94725ef042acc5773aea3c5e6c9ad086d0ecc4\"" Feb 13 20:22:33.013392 containerd[1523]: time="2025-02-13T20:22:33.013192511Z" level=info msg="TearDown network for sandbox \"ff23240127f5d7b31d327d03ea94725ef042acc5773aea3c5e6c9ad086d0ecc4\" successfully" Feb 13 20:22:33.013392 containerd[1523]: time="2025-02-13T20:22:33.013211076Z" level=info msg="StopPodSandbox for \"ff23240127f5d7b31d327d03ea94725ef042acc5773aea3c5e6c9ad086d0ecc4\" returns successfully" Feb 13 20:22:33.013735 systemd[1]: run-netns-cni\x2de26fff78\x2deda4\x2d1559\x2d7c36\x2d643d3c4dbfbb.mount: Deactivated successfully. 
Feb 13 20:22:33.014627 containerd[1523]: time="2025-02-13T20:22:33.013732316Z" level=info msg="StopPodSandbox for \"a4da0121fee5a142aa83efddc5c5b071d610647f390c8f6ca41a280e45b26224\"" Feb 13 20:22:33.014627 containerd[1523]: time="2025-02-13T20:22:33.014384249Z" level=info msg="TearDown network for sandbox \"a4da0121fee5a142aa83efddc5c5b071d610647f390c8f6ca41a280e45b26224\" successfully" Feb 13 20:22:33.014627 containerd[1523]: time="2025-02-13T20:22:33.014404411Z" level=info msg="StopPodSandbox for \"a4da0121fee5a142aa83efddc5c5b071d610647f390c8f6ca41a280e45b26224\" returns successfully" Feb 13 20:22:33.015564 containerd[1523]: time="2025-02-13T20:22:33.015523238Z" level=info msg="StopPodSandbox for \"3e29726058133f8aa6ffe75c90401b5adf759d960911b847817980277205afb2\"" Feb 13 20:22:33.015992 containerd[1523]: time="2025-02-13T20:22:33.015937659Z" level=info msg="TearDown network for sandbox \"3e29726058133f8aa6ffe75c90401b5adf759d960911b847817980277205afb2\" successfully" Feb 13 20:22:33.016436 containerd[1523]: time="2025-02-13T20:22:33.015967123Z" level=info msg="StopPodSandbox for \"3e29726058133f8aa6ffe75c90401b5adf759d960911b847817980277205afb2\" returns successfully" Feb 13 20:22:33.017074 containerd[1523]: time="2025-02-13T20:22:33.016847805Z" level=info msg="StopPodSandbox for \"51e9615480ba3997a5813ddf5d7ed0ce5949007afdbcf35b99a53fb5c03f46a7\"" Feb 13 20:22:33.017074 containerd[1523]: time="2025-02-13T20:22:33.016954572Z" level=info msg="TearDown network for sandbox \"51e9615480ba3997a5813ddf5d7ed0ce5949007afdbcf35b99a53fb5c03f46a7\" successfully" Feb 13 20:22:33.017074 containerd[1523]: time="2025-02-13T20:22:33.016993226Z" level=info msg="StopPodSandbox for \"51e9615480ba3997a5813ddf5d7ed0ce5949007afdbcf35b99a53fb5c03f46a7\" returns successfully" Feb 13 20:22:33.017672 containerd[1523]: time="2025-02-13T20:22:33.017643601Z" level=info msg="StopPodSandbox for \"3be3d8f70bb8a4944db627b34b0e3907b1317dd2e182b513a51eff96e17e8fe4\"" Feb 13 20:22:33.018075 
containerd[1523]: time="2025-02-13T20:22:33.018046557Z" level=info msg="TearDown network for sandbox \"3be3d8f70bb8a4944db627b34b0e3907b1317dd2e182b513a51eff96e17e8fe4\" successfully" Feb 13 20:22:33.018201 containerd[1523]: time="2025-02-13T20:22:33.018170165Z" level=info msg="StopPodSandbox for \"3be3d8f70bb8a4944db627b34b0e3907b1317dd2e182b513a51eff96e17e8fe4\" returns successfully" Feb 13 20:22:33.019177 containerd[1523]: time="2025-02-13T20:22:33.019122840Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-rq72x,Uid:aeec0dc4-c55c-4364-849e-c8dfcadc1aa1,Namespace:default,Attempt:6,}" Feb 13 20:22:33.020095 kubelet[1947]: I0213 20:22:33.019330 1947 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f8110cd13dfca8eabefce5d7e0c386bde821567802eded0be7a5116666d1d02f" Feb 13 20:22:33.026869 containerd[1523]: time="2025-02-13T20:22:33.026810200Z" level=info msg="StopPodSandbox for \"f8110cd13dfca8eabefce5d7e0c386bde821567802eded0be7a5116666d1d02f\"" Feb 13 20:22:33.027133 containerd[1523]: time="2025-02-13T20:22:33.027097231Z" level=info msg="Ensure that sandbox f8110cd13dfca8eabefce5d7e0c386bde821567802eded0be7a5116666d1d02f in task-service has been cleanup successfully" Feb 13 20:22:33.029102 containerd[1523]: time="2025-02-13T20:22:33.029033808Z" level=info msg="TearDown network for sandbox \"f8110cd13dfca8eabefce5d7e0c386bde821567802eded0be7a5116666d1d02f\" successfully" Feb 13 20:22:33.029102 containerd[1523]: time="2025-02-13T20:22:33.029064208Z" level=info msg="StopPodSandbox for \"f8110cd13dfca8eabefce5d7e0c386bde821567802eded0be7a5116666d1d02f\" returns successfully" Feb 13 20:22:33.030362 systemd[1]: run-netns-cni\x2d468ba4cc\x2d56d8\x2d77d9\x2d710d\x2de5a4d16cb28a.mount: Deactivated successfully. 
Feb 13 20:22:33.047916 containerd[1523]: time="2025-02-13T20:22:33.047731943Z" level=info msg="StopPodSandbox for \"7a27b8002290a7e84e5fb99f2b924226eb2269ad954b5e3d86e6cc35ebb66637\"" Feb 13 20:22:33.047916 containerd[1523]: time="2025-02-13T20:22:33.047872985Z" level=info msg="TearDown network for sandbox \"7a27b8002290a7e84e5fb99f2b924226eb2269ad954b5e3d86e6cc35ebb66637\" successfully" Feb 13 20:22:33.047916 containerd[1523]: time="2025-02-13T20:22:33.047893742Z" level=info msg="StopPodSandbox for \"7a27b8002290a7e84e5fb99f2b924226eb2269ad954b5e3d86e6cc35ebb66637\" returns successfully" Feb 13 20:22:33.049292 containerd[1523]: time="2025-02-13T20:22:33.049109703Z" level=info msg="StopPodSandbox for \"f44ab2bf4a1364ad62de858f8303064c58068f98cfae29f32725d97dfba85a84\"" Feb 13 20:22:33.055987 containerd[1523]: time="2025-02-13T20:22:33.054612523Z" level=info msg="TearDown network for sandbox \"f44ab2bf4a1364ad62de858f8303064c58068f98cfae29f32725d97dfba85a84\" successfully" Feb 13 20:22:33.055987 containerd[1523]: time="2025-02-13T20:22:33.054643401Z" level=info msg="StopPodSandbox for \"f44ab2bf4a1364ad62de858f8303064c58068f98cfae29f32725d97dfba85a84\" returns successfully" Feb 13 20:22:33.056611 containerd[1523]: time="2025-02-13T20:22:33.056580671Z" level=info msg="StopPodSandbox for \"d29ac56b21941840b70ed278b8bc31d6ae73a7106ac02fb2924e6970fba607c6\"" Feb 13 20:22:33.056816 containerd[1523]: time="2025-02-13T20:22:33.056789139Z" level=info msg="TearDown network for sandbox \"d29ac56b21941840b70ed278b8bc31d6ae73a7106ac02fb2924e6970fba607c6\" successfully" Feb 13 20:22:33.056940 containerd[1523]: time="2025-02-13T20:22:33.056917158Z" level=info msg="StopPodSandbox for \"d29ac56b21941840b70ed278b8bc31d6ae73a7106ac02fb2924e6970fba607c6\" returns successfully" Feb 13 20:22:33.057778 containerd[1523]: time="2025-02-13T20:22:33.057748680Z" level=info msg="StopPodSandbox for \"45a19fc16cf13d7e5c6b2ced71d847b5d9d935fc975aae735b9c26b075b1f2c9\"" Feb 13 20:22:33.058118 
containerd[1523]: time="2025-02-13T20:22:33.058092424Z" level=info msg="TearDown network for sandbox \"45a19fc16cf13d7e5c6b2ced71d847b5d9d935fc975aae735b9c26b075b1f2c9\" successfully" Feb 13 20:22:33.058304 containerd[1523]: time="2025-02-13T20:22:33.058238239Z" level=info msg="StopPodSandbox for \"45a19fc16cf13d7e5c6b2ced71d847b5d9d935fc975aae735b9c26b075b1f2c9\" returns successfully" Feb 13 20:22:33.059131 containerd[1523]: time="2025-02-13T20:22:33.059103595Z" level=info msg="StopPodSandbox for \"04213f4804c0b80dbef662296c36a2cd9161452ca505185295b7cff5ddbb8c13\"" Feb 13 20:22:33.059373 containerd[1523]: time="2025-02-13T20:22:33.059339450Z" level=info msg="TearDown network for sandbox \"04213f4804c0b80dbef662296c36a2cd9161452ca505185295b7cff5ddbb8c13\" successfully" Feb 13 20:22:33.059528 containerd[1523]: time="2025-02-13T20:22:33.059476240Z" level=info msg="StopPodSandbox for \"04213f4804c0b80dbef662296c36a2cd9161452ca505185295b7cff5ddbb8c13\" returns successfully" Feb 13 20:22:33.061142 containerd[1523]: time="2025-02-13T20:22:33.061114718Z" level=info msg="StopPodSandbox for \"08cd905086cade080606a39b76e4d3e8cd195419a2e46a03fcad12097c2925a7\"" Feb 13 20:22:33.061621 containerd[1523]: time="2025-02-13T20:22:33.061594017Z" level=info msg="TearDown network for sandbox \"08cd905086cade080606a39b76e4d3e8cd195419a2e46a03fcad12097c2925a7\" successfully" Feb 13 20:22:33.061756 containerd[1523]: time="2025-02-13T20:22:33.061729712Z" level=info msg="StopPodSandbox for \"08cd905086cade080606a39b76e4d3e8cd195419a2e46a03fcad12097c2925a7\" returns successfully" Feb 13 20:22:33.062986 containerd[1523]: time="2025-02-13T20:22:33.062949516Z" level=info msg="StopPodSandbox for \"d500c08499b34bb9523b313ff7cc26fb3c87fd92efc4db58167552a68d6feeeb\"" Feb 13 20:22:33.063193 containerd[1523]: time="2025-02-13T20:22:33.063166913Z" level=info msg="TearDown network for sandbox \"d500c08499b34bb9523b313ff7cc26fb3c87fd92efc4db58167552a68d6feeeb\" successfully" Feb 13 20:22:33.063299 
containerd[1523]: time="2025-02-13T20:22:33.063275036Z" level=info msg="StopPodSandbox for \"d500c08499b34bb9523b313ff7cc26fb3c87fd92efc4db58167552a68d6feeeb\" returns successfully" Feb 13 20:22:33.064079 containerd[1523]: time="2025-02-13T20:22:33.064048293Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-spcjx,Uid:7bcc9fed-b4b7-4b0f-9dc0-2e0587f01618,Namespace:calico-system,Attempt:8,}" Feb 13 20:22:33.226256 containerd[1523]: time="2025-02-13T20:22:33.225938765Z" level=error msg="Failed to destroy network for sandbox \"558a5734dfc1f14372e07fc935e491a1b134a46790ece74765b109996159ad06\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:22:33.227768 containerd[1523]: time="2025-02-13T20:22:33.227512759Z" level=error msg="encountered an error cleaning up failed sandbox \"558a5734dfc1f14372e07fc935e491a1b134a46790ece74765b109996159ad06\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:22:33.227768 containerd[1523]: time="2025-02-13T20:22:33.227619031Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-spcjx,Uid:7bcc9fed-b4b7-4b0f-9dc0-2e0587f01618,Namespace:calico-system,Attempt:8,} failed, error" error="failed to setup network for sandbox \"558a5734dfc1f14372e07fc935e491a1b134a46790ece74765b109996159ad06\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:22:33.228088 kubelet[1947]: E0213 20:22:33.227926 1947 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"558a5734dfc1f14372e07fc935e491a1b134a46790ece74765b109996159ad06\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:22:33.228088 kubelet[1947]: E0213 20:22:33.228006 1947 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"558a5734dfc1f14372e07fc935e491a1b134a46790ece74765b109996159ad06\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-spcjx" Feb 13 20:22:33.228088 kubelet[1947]: E0213 20:22:33.228040 1947 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"558a5734dfc1f14372e07fc935e491a1b134a46790ece74765b109996159ad06\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-spcjx" Feb 13 20:22:33.228286 kubelet[1947]: E0213 20:22:33.228108 1947 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-spcjx_calico-system(7bcc9fed-b4b7-4b0f-9dc0-2e0587f01618)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-spcjx_calico-system(7bcc9fed-b4b7-4b0f-9dc0-2e0587f01618)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"558a5734dfc1f14372e07fc935e491a1b134a46790ece74765b109996159ad06\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-spcjx" 
podUID="7bcc9fed-b4b7-4b0f-9dc0-2e0587f01618" Feb 13 20:22:33.229908 containerd[1523]: time="2025-02-13T20:22:33.229611775Z" level=error msg="Failed to destroy network for sandbox \"c6ea882897146da0c5a4577ee5c9bbf663f1a975bcaf72df08789b03f8b8cbc4\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:22:33.230188 containerd[1523]: time="2025-02-13T20:22:33.230043324Z" level=error msg="encountered an error cleaning up failed sandbox \"c6ea882897146da0c5a4577ee5c9bbf663f1a975bcaf72df08789b03f8b8cbc4\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:22:33.230188 containerd[1523]: time="2025-02-13T20:22:33.230135645Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-rq72x,Uid:aeec0dc4-c55c-4364-849e-c8dfcadc1aa1,Namespace:default,Attempt:6,} failed, error" error="failed to setup network for sandbox \"c6ea882897146da0c5a4577ee5c9bbf663f1a975bcaf72df08789b03f8b8cbc4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:22:33.230582 kubelet[1947]: E0213 20:22:33.230353 1947 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c6ea882897146da0c5a4577ee5c9bbf663f1a975bcaf72df08789b03f8b8cbc4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:22:33.230582 kubelet[1947]: E0213 20:22:33.230466 1947 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" 
err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c6ea882897146da0c5a4577ee5c9bbf663f1a975bcaf72df08789b03f8b8cbc4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-7fcdb87857-rq72x" Feb 13 20:22:33.230582 kubelet[1947]: E0213 20:22:33.230559 1947 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c6ea882897146da0c5a4577ee5c9bbf663f1a975bcaf72df08789b03f8b8cbc4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-7fcdb87857-rq72x" Feb 13 20:22:33.230949 kubelet[1947]: E0213 20:22:33.230631 1947 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-7fcdb87857-rq72x_default(aeec0dc4-c55c-4364-849e-c8dfcadc1aa1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-7fcdb87857-rq72x_default(aeec0dc4-c55c-4364-849e-c8dfcadc1aa1)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c6ea882897146da0c5a4577ee5c9bbf663f1a975bcaf72df08789b03f8b8cbc4\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-7fcdb87857-rq72x" podUID="aeec0dc4-c55c-4364-849e-c8dfcadc1aa1" Feb 13 20:22:33.716404 kubelet[1947]: E0213 20:22:33.716322 1947 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:22:33.892568 systemd[1]: 
run-containerd-io.containerd.grpc.v1.cri-sandboxes-c6ea882897146da0c5a4577ee5c9bbf663f1a975bcaf72df08789b03f8b8cbc4-shm.mount: Deactivated successfully. Feb 13 20:22:34.029513 kubelet[1947]: I0213 20:22:34.029384 1947 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="558a5734dfc1f14372e07fc935e491a1b134a46790ece74765b109996159ad06" Feb 13 20:22:34.030619 containerd[1523]: time="2025-02-13T20:22:34.030567631Z" level=info msg="StopPodSandbox for \"558a5734dfc1f14372e07fc935e491a1b134a46790ece74765b109996159ad06\"" Feb 13 20:22:34.031264 containerd[1523]: time="2025-02-13T20:22:34.031228706Z" level=info msg="Ensure that sandbox 558a5734dfc1f14372e07fc935e491a1b134a46790ece74765b109996159ad06 in task-service has been cleanup successfully" Feb 13 20:22:34.033635 containerd[1523]: time="2025-02-13T20:22:34.033582921Z" level=info msg="TearDown network for sandbox \"558a5734dfc1f14372e07fc935e491a1b134a46790ece74765b109996159ad06\" successfully" Feb 13 20:22:34.034011 containerd[1523]: time="2025-02-13T20:22:34.033740644Z" level=info msg="StopPodSandbox for \"558a5734dfc1f14372e07fc935e491a1b134a46790ece74765b109996159ad06\" returns successfully" Feb 13 20:22:34.034457 systemd[1]: run-netns-cni\x2d8fcbb30f\x2d1a57\x2d9498\x2dea4a\x2d2d7390bf8728.mount: Deactivated successfully. 
Feb 13 20:22:34.036297 containerd[1523]: time="2025-02-13T20:22:34.035752050Z" level=info msg="StopPodSandbox for \"f8110cd13dfca8eabefce5d7e0c386bde821567802eded0be7a5116666d1d02f\"" Feb 13 20:22:34.036297 containerd[1523]: time="2025-02-13T20:22:34.035862684Z" level=info msg="TearDown network for sandbox \"f8110cd13dfca8eabefce5d7e0c386bde821567802eded0be7a5116666d1d02f\" successfully" Feb 13 20:22:34.036297 containerd[1523]: time="2025-02-13T20:22:34.035882485Z" level=info msg="StopPodSandbox for \"f8110cd13dfca8eabefce5d7e0c386bde821567802eded0be7a5116666d1d02f\" returns successfully" Feb 13 20:22:34.038461 containerd[1523]: time="2025-02-13T20:22:34.038236852Z" level=info msg="StopPodSandbox for \"7a27b8002290a7e84e5fb99f2b924226eb2269ad954b5e3d86e6cc35ebb66637\"" Feb 13 20:22:34.038461 containerd[1523]: time="2025-02-13T20:22:34.038368949Z" level=info msg="TearDown network for sandbox \"7a27b8002290a7e84e5fb99f2b924226eb2269ad954b5e3d86e6cc35ebb66637\" successfully" Feb 13 20:22:34.038461 containerd[1523]: time="2025-02-13T20:22:34.038388663Z" level=info msg="StopPodSandbox for \"7a27b8002290a7e84e5fb99f2b924226eb2269ad954b5e3d86e6cc35ebb66637\" returns successfully" Feb 13 20:22:34.040037 containerd[1523]: time="2025-02-13T20:22:34.039757819Z" level=info msg="StopPodSandbox for \"f44ab2bf4a1364ad62de858f8303064c58068f98cfae29f32725d97dfba85a84\"" Feb 13 20:22:34.040037 containerd[1523]: time="2025-02-13T20:22:34.039870333Z" level=info msg="TearDown network for sandbox \"f44ab2bf4a1364ad62de858f8303064c58068f98cfae29f32725d97dfba85a84\" successfully" Feb 13 20:22:34.040037 containerd[1523]: time="2025-02-13T20:22:34.039925517Z" level=info msg="StopPodSandbox for \"f44ab2bf4a1364ad62de858f8303064c58068f98cfae29f32725d97dfba85a84\" returns successfully" Feb 13 20:22:34.041972 containerd[1523]: time="2025-02-13T20:22:34.041939924Z" level=info msg="StopPodSandbox for \"d29ac56b21941840b70ed278b8bc31d6ae73a7106ac02fb2924e6970fba607c6\"" Feb 13 20:22:34.042072 
containerd[1523]: time="2025-02-13T20:22:34.042044085Z" level=info msg="TearDown network for sandbox \"d29ac56b21941840b70ed278b8bc31d6ae73a7106ac02fb2924e6970fba607c6\" successfully" Feb 13 20:22:34.042072 containerd[1523]: time="2025-02-13T20:22:34.042068625Z" level=info msg="StopPodSandbox for \"d29ac56b21941840b70ed278b8bc31d6ae73a7106ac02fb2924e6970fba607c6\" returns successfully" Feb 13 20:22:34.042572 containerd[1523]: time="2025-02-13T20:22:34.042531299Z" level=info msg="StopPodSandbox for \"45a19fc16cf13d7e5c6b2ced71d847b5d9d935fc975aae735b9c26b075b1f2c9\"" Feb 13 20:22:34.042708 containerd[1523]: time="2025-02-13T20:22:34.042681942Z" level=info msg="TearDown network for sandbox \"45a19fc16cf13d7e5c6b2ced71d847b5d9d935fc975aae735b9c26b075b1f2c9\" successfully" Feb 13 20:22:34.042758 containerd[1523]: time="2025-02-13T20:22:34.042707908Z" level=info msg="StopPodSandbox for \"45a19fc16cf13d7e5c6b2ced71d847b5d9d935fc975aae735b9c26b075b1f2c9\" returns successfully" Feb 13 20:22:34.043703 containerd[1523]: time="2025-02-13T20:22:34.043670332Z" level=info msg="StopPodSandbox for \"04213f4804c0b80dbef662296c36a2cd9161452ca505185295b7cff5ddbb8c13\"" Feb 13 20:22:34.043879 containerd[1523]: time="2025-02-13T20:22:34.043774072Z" level=info msg="TearDown network for sandbox \"04213f4804c0b80dbef662296c36a2cd9161452ca505185295b7cff5ddbb8c13\" successfully" Feb 13 20:22:34.043879 containerd[1523]: time="2025-02-13T20:22:34.043810393Z" level=info msg="StopPodSandbox for \"04213f4804c0b80dbef662296c36a2cd9161452ca505185295b7cff5ddbb8c13\" returns successfully" Feb 13 20:22:34.044788 kubelet[1947]: I0213 20:22:34.044060 1947 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c6ea882897146da0c5a4577ee5c9bbf663f1a975bcaf72df08789b03f8b8cbc4" Feb 13 20:22:34.044850 containerd[1523]: time="2025-02-13T20:22:34.044831845Z" level=info msg="StopPodSandbox for \"c6ea882897146da0c5a4577ee5c9bbf663f1a975bcaf72df08789b03f8b8cbc4\"" Feb 13 
20:22:34.045239 containerd[1523]: time="2025-02-13T20:22:34.044976768Z" level=info msg="StopPodSandbox for \"08cd905086cade080606a39b76e4d3e8cd195419a2e46a03fcad12097c2925a7\"" Feb 13 20:22:34.045239 containerd[1523]: time="2025-02-13T20:22:34.045093571Z" level=info msg="Ensure that sandbox c6ea882897146da0c5a4577ee5c9bbf663f1a975bcaf72df08789b03f8b8cbc4 in task-service has been cleanup successfully" Feb 13 20:22:34.045239 containerd[1523]: time="2025-02-13T20:22:34.045111605Z" level=info msg="TearDown network for sandbox \"08cd905086cade080606a39b76e4d3e8cd195419a2e46a03fcad12097c2925a7\" successfully" Feb 13 20:22:34.045239 containerd[1523]: time="2025-02-13T20:22:34.045163602Z" level=info msg="StopPodSandbox for \"08cd905086cade080606a39b76e4d3e8cd195419a2e46a03fcad12097c2925a7\" returns successfully" Feb 13 20:22:34.045584 containerd[1523]: time="2025-02-13T20:22:34.045463194Z" level=info msg="TearDown network for sandbox \"c6ea882897146da0c5a4577ee5c9bbf663f1a975bcaf72df08789b03f8b8cbc4\" successfully" Feb 13 20:22:34.045655 containerd[1523]: time="2025-02-13T20:22:34.045586743Z" level=info msg="StopPodSandbox for \"c6ea882897146da0c5a4577ee5c9bbf663f1a975bcaf72df08789b03f8b8cbc4\" returns successfully" Feb 13 20:22:34.048493 systemd[1]: run-netns-cni\x2d24a3fe09\x2d6c62\x2dab8a\x2dce97\x2d8d9b625fdfbb.mount: Deactivated successfully. 
Feb 13 20:22:34.048984 containerd[1523]: time="2025-02-13T20:22:34.048697761Z" level=info msg="StopPodSandbox for \"eae7b820293b2c19210e91b0208764938ce9fef81a97f0e6dbb8829a9e453bef\"" Feb 13 20:22:34.048984 containerd[1523]: time="2025-02-13T20:22:34.048799175Z" level=info msg="TearDown network for sandbox \"eae7b820293b2c19210e91b0208764938ce9fef81a97f0e6dbb8829a9e453bef\" successfully" Feb 13 20:22:34.048984 containerd[1523]: time="2025-02-13T20:22:34.048818655Z" level=info msg="StopPodSandbox for \"eae7b820293b2c19210e91b0208764938ce9fef81a97f0e6dbb8829a9e453bef\" returns successfully" Feb 13 20:22:34.048984 containerd[1523]: time="2025-02-13T20:22:34.048891935Z" level=info msg="StopPodSandbox for \"d500c08499b34bb9523b313ff7cc26fb3c87fd92efc4db58167552a68d6feeeb\"" Feb 13 20:22:34.049168 containerd[1523]: time="2025-02-13T20:22:34.048983494Z" level=info msg="TearDown network for sandbox \"d500c08499b34bb9523b313ff7cc26fb3c87fd92efc4db58167552a68d6feeeb\" successfully" Feb 13 20:22:34.049168 containerd[1523]: time="2025-02-13T20:22:34.049001056Z" level=info msg="StopPodSandbox for \"d500c08499b34bb9523b313ff7cc26fb3c87fd92efc4db58167552a68d6feeeb\" returns successfully" Feb 13 20:22:34.051957 containerd[1523]: time="2025-02-13T20:22:34.050018853Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-spcjx,Uid:7bcc9fed-b4b7-4b0f-9dc0-2e0587f01618,Namespace:calico-system,Attempt:9,}" Feb 13 20:22:34.051957 containerd[1523]: time="2025-02-13T20:22:34.050456952Z" level=info msg="StopPodSandbox for \"ff23240127f5d7b31d327d03ea94725ef042acc5773aea3c5e6c9ad086d0ecc4\"" Feb 13 20:22:34.051957 containerd[1523]: time="2025-02-13T20:22:34.050609960Z" level=info msg="TearDown network for sandbox \"ff23240127f5d7b31d327d03ea94725ef042acc5773aea3c5e6c9ad086d0ecc4\" successfully" Feb 13 20:22:34.051957 containerd[1523]: time="2025-02-13T20:22:34.050629317Z" level=info msg="StopPodSandbox for 
\"ff23240127f5d7b31d327d03ea94725ef042acc5773aea3c5e6c9ad086d0ecc4\" returns successfully" Feb 13 20:22:34.051957 containerd[1523]: time="2025-02-13T20:22:34.051891822Z" level=info msg="StopPodSandbox for \"a4da0121fee5a142aa83efddc5c5b071d610647f390c8f6ca41a280e45b26224\"" Feb 13 20:22:34.052184 containerd[1523]: time="2025-02-13T20:22:34.052010830Z" level=info msg="TearDown network for sandbox \"a4da0121fee5a142aa83efddc5c5b071d610647f390c8f6ca41a280e45b26224\" successfully" Feb 13 20:22:34.052184 containerd[1523]: time="2025-02-13T20:22:34.052029190Z" level=info msg="StopPodSandbox for \"a4da0121fee5a142aa83efddc5c5b071d610647f390c8f6ca41a280e45b26224\" returns successfully" Feb 13 20:22:34.052980 containerd[1523]: time="2025-02-13T20:22:34.052755333Z" level=info msg="StopPodSandbox for \"3e29726058133f8aa6ffe75c90401b5adf759d960911b847817980277205afb2\"" Feb 13 20:22:34.052980 containerd[1523]: time="2025-02-13T20:22:34.052918277Z" level=info msg="TearDown network for sandbox \"3e29726058133f8aa6ffe75c90401b5adf759d960911b847817980277205afb2\" successfully" Feb 13 20:22:34.052980 containerd[1523]: time="2025-02-13T20:22:34.052937404Z" level=info msg="StopPodSandbox for \"3e29726058133f8aa6ffe75c90401b5adf759d960911b847817980277205afb2\" returns successfully" Feb 13 20:22:34.057009 containerd[1523]: time="2025-02-13T20:22:34.056824421Z" level=info msg="StopPodSandbox for \"51e9615480ba3997a5813ddf5d7ed0ce5949007afdbcf35b99a53fb5c03f46a7\"" Feb 13 20:22:34.057009 containerd[1523]: time="2025-02-13T20:22:34.056980544Z" level=info msg="TearDown network for sandbox \"51e9615480ba3997a5813ddf5d7ed0ce5949007afdbcf35b99a53fb5c03f46a7\" successfully" Feb 13 20:22:34.057009 containerd[1523]: time="2025-02-13T20:22:34.056999489Z" level=info msg="StopPodSandbox for \"51e9615480ba3997a5813ddf5d7ed0ce5949007afdbcf35b99a53fb5c03f46a7\" returns successfully" Feb 13 20:22:34.059398 containerd[1523]: time="2025-02-13T20:22:34.059148142Z" level=info msg="StopPodSandbox for 
\"3be3d8f70bb8a4944db627b34b0e3907b1317dd2e182b513a51eff96e17e8fe4\"" Feb 13 20:22:34.059398 containerd[1523]: time="2025-02-13T20:22:34.059303989Z" level=info msg="TearDown network for sandbox \"3be3d8f70bb8a4944db627b34b0e3907b1317dd2e182b513a51eff96e17e8fe4\" successfully" Feb 13 20:22:34.059398 containerd[1523]: time="2025-02-13T20:22:34.059322970Z" level=info msg="StopPodSandbox for \"3be3d8f70bb8a4944db627b34b0e3907b1317dd2e182b513a51eff96e17e8fe4\" returns successfully" Feb 13 20:22:34.062710 containerd[1523]: time="2025-02-13T20:22:34.062678996Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-rq72x,Uid:aeec0dc4-c55c-4364-849e-c8dfcadc1aa1,Namespace:default,Attempt:7,}" Feb 13 20:22:34.244178 containerd[1523]: time="2025-02-13T20:22:34.244097219Z" level=error msg="Failed to destroy network for sandbox \"10890ed60743e8281e24b5c1930e340d0597df2f267536362a721ba1ab1f7cc4\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:22:34.245533 containerd[1523]: time="2025-02-13T20:22:34.244611675Z" level=error msg="encountered an error cleaning up failed sandbox \"10890ed60743e8281e24b5c1930e340d0597df2f267536362a721ba1ab1f7cc4\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:22:34.245533 containerd[1523]: time="2025-02-13T20:22:34.244691198Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-rq72x,Uid:aeec0dc4-c55c-4364-849e-c8dfcadc1aa1,Namespace:default,Attempt:7,} failed, error" error="failed to setup network for sandbox \"10890ed60743e8281e24b5c1930e340d0597df2f267536362a721ba1ab1f7cc4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: 
no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:22:34.246460 kubelet[1947]: E0213 20:22:34.244968 1947 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"10890ed60743e8281e24b5c1930e340d0597df2f267536362a721ba1ab1f7cc4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:22:34.246460 kubelet[1947]: E0213 20:22:34.245039 1947 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"10890ed60743e8281e24b5c1930e340d0597df2f267536362a721ba1ab1f7cc4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-7fcdb87857-rq72x" Feb 13 20:22:34.246460 kubelet[1947]: E0213 20:22:34.245072 1947 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"10890ed60743e8281e24b5c1930e340d0597df2f267536362a721ba1ab1f7cc4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-7fcdb87857-rq72x" Feb 13 20:22:34.246757 kubelet[1947]: E0213 20:22:34.245139 1947 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-7fcdb87857-rq72x_default(aeec0dc4-c55c-4364-849e-c8dfcadc1aa1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-7fcdb87857-rq72x_default(aeec0dc4-c55c-4364-849e-c8dfcadc1aa1)\\\": rpc error: code = Unknown desc = failed to setup network for 
sandbox \\\"10890ed60743e8281e24b5c1930e340d0597df2f267536362a721ba1ab1f7cc4\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-7fcdb87857-rq72x" podUID="aeec0dc4-c55c-4364-849e-c8dfcadc1aa1" Feb 13 20:22:34.247653 containerd[1523]: time="2025-02-13T20:22:34.247612099Z" level=error msg="Failed to destroy network for sandbox \"e4bdb695e659bb0cb3f3c66355dc4bd2eb2badc40db9ab257bceae2706bf2cb4\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:22:34.248013 containerd[1523]: time="2025-02-13T20:22:34.247966906Z" level=error msg="encountered an error cleaning up failed sandbox \"e4bdb695e659bb0cb3f3c66355dc4bd2eb2badc40db9ab257bceae2706bf2cb4\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:22:34.248087 containerd[1523]: time="2025-02-13T20:22:34.248029686Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-spcjx,Uid:7bcc9fed-b4b7-4b0f-9dc0-2e0587f01618,Namespace:calico-system,Attempt:9,} failed, error" error="failed to setup network for sandbox \"e4bdb695e659bb0cb3f3c66355dc4bd2eb2badc40db9ab257bceae2706bf2cb4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:22:34.248374 kubelet[1947]: E0213 20:22:34.248306 1947 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e4bdb695e659bb0cb3f3c66355dc4bd2eb2badc40db9ab257bceae2706bf2cb4\": 
plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:22:34.248652 kubelet[1947]: E0213 20:22:34.248352 1947 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e4bdb695e659bb0cb3f3c66355dc4bd2eb2badc40db9ab257bceae2706bf2cb4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-spcjx" Feb 13 20:22:34.248652 kubelet[1947]: E0213 20:22:34.248583 1947 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e4bdb695e659bb0cb3f3c66355dc4bd2eb2badc40db9ab257bceae2706bf2cb4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-spcjx" Feb 13 20:22:34.248910 kubelet[1947]: E0213 20:22:34.248839 1947 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-spcjx_calico-system(7bcc9fed-b4b7-4b0f-9dc0-2e0587f01618)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-spcjx_calico-system(7bcc9fed-b4b7-4b0f-9dc0-2e0587f01618)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e4bdb695e659bb0cb3f3c66355dc4bd2eb2badc40db9ab257bceae2706bf2cb4\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-spcjx" podUID="7bcc9fed-b4b7-4b0f-9dc0-2e0587f01618" Feb 13 20:22:34.312316 containerd[1523]: 
time="2025-02-13T20:22:34.311281180Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:22:34.313593 containerd[1523]: time="2025-02-13T20:22:34.313554025Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.1: active requests=0, bytes read=142742010" Feb 13 20:22:34.315554 containerd[1523]: time="2025-02-13T20:22:34.314468778Z" level=info msg="ImageCreate event name:\"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:22:34.317292 containerd[1523]: time="2025-02-13T20:22:34.317232276Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:22:34.318778 containerd[1523]: time="2025-02-13T20:22:34.318306856Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.1\" with image id \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\", size \"142741872\" in 10.443475266s" Feb 13 20:22:34.318778 containerd[1523]: time="2025-02-13T20:22:34.318354116Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\" returns image reference \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\"" Feb 13 20:22:34.332214 containerd[1523]: time="2025-02-13T20:22:34.332063241Z" level=info msg="CreateContainer within sandbox \"4e9e09e2d1bb36c2951578b118e480327dba04814e71fbdb5b5dec7bf6e9a0a2\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Feb 13 20:22:34.346631 containerd[1523]: time="2025-02-13T20:22:34.346574093Z" level=info msg="CreateContainer within sandbox 
\"4e9e09e2d1bb36c2951578b118e480327dba04814e71fbdb5b5dec7bf6e9a0a2\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"695c5ff4d18f8402de9063ce0da7f317c972b2268869c67f8e046e916e8df37d\"" Feb 13 20:22:34.347573 containerd[1523]: time="2025-02-13T20:22:34.347207048Z" level=info msg="StartContainer for \"695c5ff4d18f8402de9063ce0da7f317c972b2268869c67f8e046e916e8df37d\"" Feb 13 20:22:34.439812 systemd[1]: Started cri-containerd-695c5ff4d18f8402de9063ce0da7f317c972b2268869c67f8e046e916e8df37d.scope - libcontainer container 695c5ff4d18f8402de9063ce0da7f317c972b2268869c67f8e046e916e8df37d. Feb 13 20:22:34.488429 containerd[1523]: time="2025-02-13T20:22:34.488373139Z" level=info msg="StartContainer for \"695c5ff4d18f8402de9063ce0da7f317c972b2268869c67f8e046e916e8df37d\" returns successfully" Feb 13 20:22:34.574667 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Feb 13 20:22:34.574821 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Feb 13 20:22:34.717150 kubelet[1947]: E0213 20:22:34.717065 1947 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:22:34.896067 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-10890ed60743e8281e24b5c1930e340d0597df2f267536362a721ba1ab1f7cc4-shm.mount: Deactivated successfully. Feb 13 20:22:34.896232 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1848974864.mount: Deactivated successfully. 
Feb 13 20:22:35.062125 kubelet[1947]: I0213 20:22:35.061300 1947 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e4bdb695e659bb0cb3f3c66355dc4bd2eb2badc40db9ab257bceae2706bf2cb4" Feb 13 20:22:35.062764 containerd[1523]: time="2025-02-13T20:22:35.062576649Z" level=info msg="StopPodSandbox for \"e4bdb695e659bb0cb3f3c66355dc4bd2eb2badc40db9ab257bceae2706bf2cb4\"" Feb 13 20:22:35.063138 containerd[1523]: time="2025-02-13T20:22:35.062829021Z" level=info msg="Ensure that sandbox e4bdb695e659bb0cb3f3c66355dc4bd2eb2badc40db9ab257bceae2706bf2cb4 in task-service has been cleanup successfully" Feb 13 20:22:35.067061 containerd[1523]: time="2025-02-13T20:22:35.065917690Z" level=info msg="TearDown network for sandbox \"e4bdb695e659bb0cb3f3c66355dc4bd2eb2badc40db9ab257bceae2706bf2cb4\" successfully" Feb 13 20:22:35.067061 containerd[1523]: time="2025-02-13T20:22:35.066061008Z" level=info msg="StopPodSandbox for \"e4bdb695e659bb0cb3f3c66355dc4bd2eb2badc40db9ab257bceae2706bf2cb4\" returns successfully" Feb 13 20:22:35.067061 containerd[1523]: time="2025-02-13T20:22:35.066892600Z" level=info msg="StopPodSandbox for \"558a5734dfc1f14372e07fc935e491a1b134a46790ece74765b109996159ad06\"" Feb 13 20:22:35.067061 containerd[1523]: time="2025-02-13T20:22:35.067003717Z" level=info msg="TearDown network for sandbox \"558a5734dfc1f14372e07fc935e491a1b134a46790ece74765b109996159ad06\" successfully" Feb 13 20:22:35.067621 containerd[1523]: time="2025-02-13T20:22:35.067023074Z" level=info msg="StopPodSandbox for \"558a5734dfc1f14372e07fc935e491a1b134a46790ece74765b109996159ad06\" returns successfully" Feb 13 20:22:35.067974 systemd[1]: run-netns-cni\x2d1d22d239\x2d99de\x2df50d\x2d2423\x2de55efe74d152.mount: Deactivated successfully. 
Feb 13 20:22:35.070731 containerd[1523]: time="2025-02-13T20:22:35.070699906Z" level=info msg="StopPodSandbox for \"f8110cd13dfca8eabefce5d7e0c386bde821567802eded0be7a5116666d1d02f\"" Feb 13 20:22:35.070841 containerd[1523]: time="2025-02-13T20:22:35.070814794Z" level=info msg="TearDown network for sandbox \"f8110cd13dfca8eabefce5d7e0c386bde821567802eded0be7a5116666d1d02f\" successfully" Feb 13 20:22:35.070922 containerd[1523]: time="2025-02-13T20:22:35.070841466Z" level=info msg="StopPodSandbox for \"f8110cd13dfca8eabefce5d7e0c386bde821567802eded0be7a5116666d1d02f\" returns successfully" Feb 13 20:22:35.071255 containerd[1523]: time="2025-02-13T20:22:35.071216589Z" level=info msg="StopPodSandbox for \"7a27b8002290a7e84e5fb99f2b924226eb2269ad954b5e3d86e6cc35ebb66637\"" Feb 13 20:22:35.071467 containerd[1523]: time="2025-02-13T20:22:35.071356928Z" level=info msg="TearDown network for sandbox \"7a27b8002290a7e84e5fb99f2b924226eb2269ad954b5e3d86e6cc35ebb66637\" successfully" Feb 13 20:22:35.071467 containerd[1523]: time="2025-02-13T20:22:35.071414263Z" level=info msg="StopPodSandbox for \"7a27b8002290a7e84e5fb99f2b924226eb2269ad954b5e3d86e6cc35ebb66637\" returns successfully" Feb 13 20:22:35.073437 containerd[1523]: time="2025-02-13T20:22:35.073408376Z" level=info msg="StopPodSandbox for \"f44ab2bf4a1364ad62de858f8303064c58068f98cfae29f32725d97dfba85a84\"" Feb 13 20:22:35.075240 kubelet[1947]: I0213 20:22:35.074025 1947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-gsq5d" podStartSLOduration=4.077816627 podStartE2EDuration="28.073982474s" podCreationTimestamp="2025-02-13 20:22:07 +0000 UTC" firstStartedPulling="2025-02-13 20:22:10.323597044 +0000 UTC m=+3.504410677" lastFinishedPulling="2025-02-13 20:22:34.319762844 +0000 UTC m=+27.500576524" observedRunningTime="2025-02-13 20:22:35.072182195 +0000 UTC m=+28.252995840" watchObservedRunningTime="2025-02-13 20:22:35.073982474 +0000 UTC m=+28.254796101" Feb 13 
20:22:35.075848 containerd[1523]: time="2025-02-13T20:22:35.075569109Z" level=info msg="TearDown network for sandbox \"f44ab2bf4a1364ad62de858f8303064c58068f98cfae29f32725d97dfba85a84\" successfully" Feb 13 20:22:35.076001 containerd[1523]: time="2025-02-13T20:22:35.075973120Z" level=info msg="StopPodSandbox for \"f44ab2bf4a1364ad62de858f8303064c58068f98cfae29f32725d97dfba85a84\" returns successfully" Feb 13 20:22:35.077769 containerd[1523]: time="2025-02-13T20:22:35.077739808Z" level=info msg="StopPodSandbox for \"d29ac56b21941840b70ed278b8bc31d6ae73a7106ac02fb2924e6970fba607c6\"" Feb 13 20:22:35.078116 containerd[1523]: time="2025-02-13T20:22:35.078087672Z" level=info msg="TearDown network for sandbox \"d29ac56b21941840b70ed278b8bc31d6ae73a7106ac02fb2924e6970fba607c6\" successfully" Feb 13 20:22:35.078323 containerd[1523]: time="2025-02-13T20:22:35.078296923Z" level=info msg="StopPodSandbox for \"d29ac56b21941840b70ed278b8bc31d6ae73a7106ac02fb2924e6970fba607c6\" returns successfully" Feb 13 20:22:35.078798 containerd[1523]: time="2025-02-13T20:22:35.078766961Z" level=info msg="StopPodSandbox for \"45a19fc16cf13d7e5c6b2ced71d847b5d9d935fc975aae735b9c26b075b1f2c9\"" Feb 13 20:22:35.079632 containerd[1523]: time="2025-02-13T20:22:35.079604586Z" level=info msg="TearDown network for sandbox \"45a19fc16cf13d7e5c6b2ced71d847b5d9d935fc975aae735b9c26b075b1f2c9\" successfully" Feb 13 20:22:35.079960 containerd[1523]: time="2025-02-13T20:22:35.079897565Z" level=info msg="StopPodSandbox for \"45a19fc16cf13d7e5c6b2ced71d847b5d9d935fc975aae735b9c26b075b1f2c9\" returns successfully" Feb 13 20:22:35.081422 kubelet[1947]: I0213 20:22:35.080588 1947 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="10890ed60743e8281e24b5c1930e340d0597df2f267536362a721ba1ab1f7cc4" Feb 13 20:22:35.081693 containerd[1523]: time="2025-02-13T20:22:35.081663366Z" level=info msg="StopPodSandbox for \"10890ed60743e8281e24b5c1930e340d0597df2f267536362a721ba1ab1f7cc4\"" Feb 
13 20:22:35.082185 containerd[1523]: time="2025-02-13T20:22:35.081905898Z" level=info msg="StopPodSandbox for \"04213f4804c0b80dbef662296c36a2cd9161452ca505185295b7cff5ddbb8c13\"" Feb 13 20:22:35.082387 containerd[1523]: time="2025-02-13T20:22:35.082360557Z" level=info msg="TearDown network for sandbox \"04213f4804c0b80dbef662296c36a2cd9161452ca505185295b7cff5ddbb8c13\" successfully" Feb 13 20:22:35.082687 containerd[1523]: time="2025-02-13T20:22:35.082661079Z" level=info msg="StopPodSandbox for \"04213f4804c0b80dbef662296c36a2cd9161452ca505185295b7cff5ddbb8c13\" returns successfully" Feb 13 20:22:35.083086 containerd[1523]: time="2025-02-13T20:22:35.082968991Z" level=info msg="Ensure that sandbox 10890ed60743e8281e24b5c1930e340d0597df2f267536362a721ba1ab1f7cc4 in task-service has been cleanup successfully" Feb 13 20:22:35.085648 containerd[1523]: time="2025-02-13T20:22:35.085585983Z" level=info msg="TearDown network for sandbox \"10890ed60743e8281e24b5c1930e340d0597df2f267536362a721ba1ab1f7cc4\" successfully" Feb 13 20:22:35.085840 containerd[1523]: time="2025-02-13T20:22:35.085778679Z" level=info msg="StopPodSandbox for \"10890ed60743e8281e24b5c1930e340d0597df2f267536362a721ba1ab1f7cc4\" returns successfully" Feb 13 20:22:35.088055 containerd[1523]: time="2025-02-13T20:22:35.087855036Z" level=info msg="StopPodSandbox for \"08cd905086cade080606a39b76e4d3e8cd195419a2e46a03fcad12097c2925a7\"" Feb 13 20:22:35.088055 containerd[1523]: time="2025-02-13T20:22:35.087967542Z" level=info msg="TearDown network for sandbox \"08cd905086cade080606a39b76e4d3e8cd195419a2e46a03fcad12097c2925a7\" successfully" Feb 13 20:22:35.088055 containerd[1523]: time="2025-02-13T20:22:35.087987147Z" level=info msg="StopPodSandbox for \"08cd905086cade080606a39b76e4d3e8cd195419a2e46a03fcad12097c2925a7\" returns successfully" Feb 13 20:22:35.089172 containerd[1523]: time="2025-02-13T20:22:35.089049600Z" level=info msg="StopPodSandbox for 
\"c6ea882897146da0c5a4577ee5c9bbf663f1a975bcaf72df08789b03f8b8cbc4\"" Feb 13 20:22:35.089560 containerd[1523]: time="2025-02-13T20:22:35.089462702Z" level=info msg="TearDown network for sandbox \"c6ea882897146da0c5a4577ee5c9bbf663f1a975bcaf72df08789b03f8b8cbc4\" successfully" Feb 13 20:22:35.091997 containerd[1523]: time="2025-02-13T20:22:35.091620985Z" level=info msg="StopPodSandbox for \"c6ea882897146da0c5a4577ee5c9bbf663f1a975bcaf72df08789b03f8b8cbc4\" returns successfully" Feb 13 20:22:35.091997 containerd[1523]: time="2025-02-13T20:22:35.091594552Z" level=info msg="StopPodSandbox for \"d500c08499b34bb9523b313ff7cc26fb3c87fd92efc4db58167552a68d6feeeb\"" Feb 13 20:22:35.091997 containerd[1523]: time="2025-02-13T20:22:35.091951312Z" level=info msg="TearDown network for sandbox \"d500c08499b34bb9523b313ff7cc26fb3c87fd92efc4db58167552a68d6feeeb\" successfully" Feb 13 20:22:35.091480 systemd[1]: run-netns-cni\x2d33655899\x2de056\x2de717\x2d0aed\x2d0dd22161d227.mount: Deactivated successfully. 
Feb 13 20:22:35.092437 containerd[1523]: time="2025-02-13T20:22:35.091972178Z" level=info msg="StopPodSandbox for \"d500c08499b34bb9523b313ff7cc26fb3c87fd92efc4db58167552a68d6feeeb\" returns successfully" Feb 13 20:22:35.093986 containerd[1523]: time="2025-02-13T20:22:35.093957176Z" level=info msg="StopPodSandbox for \"eae7b820293b2c19210e91b0208764938ce9fef81a97f0e6dbb8829a9e453bef\"" Feb 13 20:22:35.094563 containerd[1523]: time="2025-02-13T20:22:35.094203413Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-spcjx,Uid:7bcc9fed-b4b7-4b0f-9dc0-2e0587f01618,Namespace:calico-system,Attempt:10,}" Feb 13 20:22:35.094940 containerd[1523]: time="2025-02-13T20:22:35.094903702Z" level=info msg="TearDown network for sandbox \"eae7b820293b2c19210e91b0208764938ce9fef81a97f0e6dbb8829a9e453bef\" successfully" Feb 13 20:22:35.094940 containerd[1523]: time="2025-02-13T20:22:35.094932386Z" level=info msg="StopPodSandbox for \"eae7b820293b2c19210e91b0208764938ce9fef81a97f0e6dbb8829a9e453bef\" returns successfully" Feb 13 20:22:35.095331 containerd[1523]: time="2025-02-13T20:22:35.095300192Z" level=info msg="StopPodSandbox for \"ff23240127f5d7b31d327d03ea94725ef042acc5773aea3c5e6c9ad086d0ecc4\"" Feb 13 20:22:35.095447 containerd[1523]: time="2025-02-13T20:22:35.095414023Z" level=info msg="TearDown network for sandbox \"ff23240127f5d7b31d327d03ea94725ef042acc5773aea3c5e6c9ad086d0ecc4\" successfully" Feb 13 20:22:35.095573 containerd[1523]: time="2025-02-13T20:22:35.095447856Z" level=info msg="StopPodSandbox for \"ff23240127f5d7b31d327d03ea94725ef042acc5773aea3c5e6c9ad086d0ecc4\" returns successfully" Feb 13 20:22:35.096042 containerd[1523]: time="2025-02-13T20:22:35.096012302Z" level=info msg="StopPodSandbox for \"a4da0121fee5a142aa83efddc5c5b071d610647f390c8f6ca41a280e45b26224\"" Feb 13 20:22:35.096142 containerd[1523]: time="2025-02-13T20:22:35.096112936Z" level=info msg="TearDown network for sandbox 
\"a4da0121fee5a142aa83efddc5c5b071d610647f390c8f6ca41a280e45b26224\" successfully" Feb 13 20:22:35.096142 containerd[1523]: time="2025-02-13T20:22:35.096134008Z" level=info msg="StopPodSandbox for \"a4da0121fee5a142aa83efddc5c5b071d610647f390c8f6ca41a280e45b26224\" returns successfully" Feb 13 20:22:35.097604 containerd[1523]: time="2025-02-13T20:22:35.097567228Z" level=info msg="StopPodSandbox for \"3e29726058133f8aa6ffe75c90401b5adf759d960911b847817980277205afb2\"" Feb 13 20:22:35.098725 containerd[1523]: time="2025-02-13T20:22:35.097684649Z" level=info msg="TearDown network for sandbox \"3e29726058133f8aa6ffe75c90401b5adf759d960911b847817980277205afb2\" successfully" Feb 13 20:22:35.098725 containerd[1523]: time="2025-02-13T20:22:35.097712044Z" level=info msg="StopPodSandbox for \"3e29726058133f8aa6ffe75c90401b5adf759d960911b847817980277205afb2\" returns successfully" Feb 13 20:22:35.100745 containerd[1523]: time="2025-02-13T20:22:35.100713749Z" level=info msg="StopPodSandbox for \"51e9615480ba3997a5813ddf5d7ed0ce5949007afdbcf35b99a53fb5c03f46a7\"" Feb 13 20:22:35.101112 containerd[1523]: time="2025-02-13T20:22:35.101077248Z" level=info msg="TearDown network for sandbox \"51e9615480ba3997a5813ddf5d7ed0ce5949007afdbcf35b99a53fb5c03f46a7\" successfully" Feb 13 20:22:35.101604 containerd[1523]: time="2025-02-13T20:22:35.101567761Z" level=info msg="StopPodSandbox for \"51e9615480ba3997a5813ddf5d7ed0ce5949007afdbcf35b99a53fb5c03f46a7\" returns successfully" Feb 13 20:22:35.102673 containerd[1523]: time="2025-02-13T20:22:35.102526661Z" level=info msg="StopPodSandbox for \"3be3d8f70bb8a4944db627b34b0e3907b1317dd2e182b513a51eff96e17e8fe4\"" Feb 13 20:22:35.103072 containerd[1523]: time="2025-02-13T20:22:35.103045799Z" level=info msg="TearDown network for sandbox \"3be3d8f70bb8a4944db627b34b0e3907b1317dd2e182b513a51eff96e17e8fe4\" successfully" Feb 13 20:22:35.104270 containerd[1523]: time="2025-02-13T20:22:35.104231481Z" level=info msg="StopPodSandbox for 
\"3be3d8f70bb8a4944db627b34b0e3907b1317dd2e182b513a51eff96e17e8fe4\" returns successfully" Feb 13 20:22:35.105341 containerd[1523]: time="2025-02-13T20:22:35.105214246Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-rq72x,Uid:aeec0dc4-c55c-4364-849e-c8dfcadc1aa1,Namespace:default,Attempt:8,}" Feb 13 20:22:35.445349 systemd-networkd[1449]: cali1b8b5a2919f: Link UP Feb 13 20:22:35.449645 systemd-networkd[1449]: cali1b8b5a2919f: Gained carrier Feb 13 20:22:35.467130 containerd[1523]: 2025-02-13 20:22:35.202 [INFO][3167] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Feb 13 20:22:35.467130 containerd[1523]: 2025-02-13 20:22:35.260 [INFO][3167] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.230.12.214-k8s-csi--node--driver--spcjx-eth0 csi-node-driver- calico-system 7bcc9fed-b4b7-4b0f-9dc0-2e0587f01618 1280 0 2025-02-13 20:22:07 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:84cddb44f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s 10.230.12.214 csi-node-driver-spcjx eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali1b8b5a2919f [] []}} ContainerID="2e7b1e40a33b37328fe331b3def13f988f13b0a53a06560252d08cd242872d8c" Namespace="calico-system" Pod="csi-node-driver-spcjx" WorkloadEndpoint="10.230.12.214-k8s-csi--node--driver--spcjx-" Feb 13 20:22:35.467130 containerd[1523]: 2025-02-13 20:22:35.260 [INFO][3167] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="2e7b1e40a33b37328fe331b3def13f988f13b0a53a06560252d08cd242872d8c" Namespace="calico-system" Pod="csi-node-driver-spcjx" WorkloadEndpoint="10.230.12.214-k8s-csi--node--driver--spcjx-eth0" Feb 13 20:22:35.467130 containerd[1523]: 2025-02-13 20:22:35.367 
[INFO][3192] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2e7b1e40a33b37328fe331b3def13f988f13b0a53a06560252d08cd242872d8c" HandleID="k8s-pod-network.2e7b1e40a33b37328fe331b3def13f988f13b0a53a06560252d08cd242872d8c" Workload="10.230.12.214-k8s-csi--node--driver--spcjx-eth0" Feb 13 20:22:35.467130 containerd[1523]: 2025-02-13 20:22:35.386 [INFO][3192] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="2e7b1e40a33b37328fe331b3def13f988f13b0a53a06560252d08cd242872d8c" HandleID="k8s-pod-network.2e7b1e40a33b37328fe331b3def13f988f13b0a53a06560252d08cd242872d8c" Workload="10.230.12.214-k8s-csi--node--driver--spcjx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00030c780), Attrs:map[string]string{"namespace":"calico-system", "node":"10.230.12.214", "pod":"csi-node-driver-spcjx", "timestamp":"2025-02-13 20:22:35.367134393 +0000 UTC"}, Hostname:"10.230.12.214", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 13 20:22:35.467130 containerd[1523]: 2025-02-13 20:22:35.386 [INFO][3192] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 20:22:35.467130 containerd[1523]: 2025-02-13 20:22:35.387 [INFO][3192] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Feb 13 20:22:35.467130 containerd[1523]: 2025-02-13 20:22:35.387 [INFO][3192] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.230.12.214' Feb 13 20:22:35.467130 containerd[1523]: 2025-02-13 20:22:35.390 [INFO][3192] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.2e7b1e40a33b37328fe331b3def13f988f13b0a53a06560252d08cd242872d8c" host="10.230.12.214" Feb 13 20:22:35.467130 containerd[1523]: 2025-02-13 20:22:35.397 [INFO][3192] ipam/ipam.go 372: Looking up existing affinities for host host="10.230.12.214" Feb 13 20:22:35.467130 containerd[1523]: 2025-02-13 20:22:35.403 [INFO][3192] ipam/ipam.go 489: Trying affinity for 192.168.113.128/26 host="10.230.12.214" Feb 13 20:22:35.467130 containerd[1523]: 2025-02-13 20:22:35.406 [INFO][3192] ipam/ipam.go 155: Attempting to load block cidr=192.168.113.128/26 host="10.230.12.214" Feb 13 20:22:35.467130 containerd[1523]: 2025-02-13 20:22:35.410 [INFO][3192] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.113.128/26 host="10.230.12.214" Feb 13 20:22:35.467130 containerd[1523]: 2025-02-13 20:22:35.410 [INFO][3192] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.113.128/26 handle="k8s-pod-network.2e7b1e40a33b37328fe331b3def13f988f13b0a53a06560252d08cd242872d8c" host="10.230.12.214" Feb 13 20:22:35.467130 containerd[1523]: 2025-02-13 20:22:35.412 [INFO][3192] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.2e7b1e40a33b37328fe331b3def13f988f13b0a53a06560252d08cd242872d8c Feb 13 20:22:35.467130 containerd[1523]: 2025-02-13 20:22:35.418 [INFO][3192] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.113.128/26 handle="k8s-pod-network.2e7b1e40a33b37328fe331b3def13f988f13b0a53a06560252d08cd242872d8c" host="10.230.12.214" Feb 13 20:22:35.467130 containerd[1523]: 2025-02-13 20:22:35.426 [INFO][3192] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.113.129/26] block=192.168.113.128/26 
handle="k8s-pod-network.2e7b1e40a33b37328fe331b3def13f988f13b0a53a06560252d08cd242872d8c" host="10.230.12.214" Feb 13 20:22:35.467130 containerd[1523]: 2025-02-13 20:22:35.426 [INFO][3192] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.113.129/26] handle="k8s-pod-network.2e7b1e40a33b37328fe331b3def13f988f13b0a53a06560252d08cd242872d8c" host="10.230.12.214" Feb 13 20:22:35.467130 containerd[1523]: 2025-02-13 20:22:35.426 [INFO][3192] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 20:22:35.467130 containerd[1523]: 2025-02-13 20:22:35.426 [INFO][3192] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.113.129/26] IPv6=[] ContainerID="2e7b1e40a33b37328fe331b3def13f988f13b0a53a06560252d08cd242872d8c" HandleID="k8s-pod-network.2e7b1e40a33b37328fe331b3def13f988f13b0a53a06560252d08cd242872d8c" Workload="10.230.12.214-k8s-csi--node--driver--spcjx-eth0" Feb 13 20:22:35.468874 containerd[1523]: 2025-02-13 20:22:35.430 [INFO][3167] cni-plugin/k8s.go 386: Populated endpoint ContainerID="2e7b1e40a33b37328fe331b3def13f988f13b0a53a06560252d08cd242872d8c" Namespace="calico-system" Pod="csi-node-driver-spcjx" WorkloadEndpoint="10.230.12.214-k8s-csi--node--driver--spcjx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.230.12.214-k8s-csi--node--driver--spcjx-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"7bcc9fed-b4b7-4b0f-9dc0-2e0587f01618", ResourceVersion:"1280", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 22, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"84cddb44f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.230.12.214", ContainerID:"", Pod:"csi-node-driver-spcjx", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.113.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali1b8b5a2919f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 20:22:35.468874 containerd[1523]: 2025-02-13 20:22:35.430 [INFO][3167] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.113.129/32] ContainerID="2e7b1e40a33b37328fe331b3def13f988f13b0a53a06560252d08cd242872d8c" Namespace="calico-system" Pod="csi-node-driver-spcjx" WorkloadEndpoint="10.230.12.214-k8s-csi--node--driver--spcjx-eth0" Feb 13 20:22:35.468874 containerd[1523]: 2025-02-13 20:22:35.431 [INFO][3167] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1b8b5a2919f ContainerID="2e7b1e40a33b37328fe331b3def13f988f13b0a53a06560252d08cd242872d8c" Namespace="calico-system" Pod="csi-node-driver-spcjx" WorkloadEndpoint="10.230.12.214-k8s-csi--node--driver--spcjx-eth0" Feb 13 20:22:35.468874 containerd[1523]: 2025-02-13 20:22:35.446 [INFO][3167] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="2e7b1e40a33b37328fe331b3def13f988f13b0a53a06560252d08cd242872d8c" Namespace="calico-system" Pod="csi-node-driver-spcjx" WorkloadEndpoint="10.230.12.214-k8s-csi--node--driver--spcjx-eth0" Feb 13 20:22:35.468874 containerd[1523]: 2025-02-13 20:22:35.446 [INFO][3167] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="2e7b1e40a33b37328fe331b3def13f988f13b0a53a06560252d08cd242872d8c" 
Namespace="calico-system" Pod="csi-node-driver-spcjx" WorkloadEndpoint="10.230.12.214-k8s-csi--node--driver--spcjx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.230.12.214-k8s-csi--node--driver--spcjx-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"7bcc9fed-b4b7-4b0f-9dc0-2e0587f01618", ResourceVersion:"1280", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 22, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"84cddb44f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.230.12.214", ContainerID:"2e7b1e40a33b37328fe331b3def13f988f13b0a53a06560252d08cd242872d8c", Pod:"csi-node-driver-spcjx", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.113.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali1b8b5a2919f", MAC:"d2:b3:5c:91:e0:0f", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 20:22:35.468874 containerd[1523]: 2025-02-13 20:22:35.464 [INFO][3167] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="2e7b1e40a33b37328fe331b3def13f988f13b0a53a06560252d08cd242872d8c" Namespace="calico-system" Pod="csi-node-driver-spcjx" WorkloadEndpoint="10.230.12.214-k8s-csi--node--driver--spcjx-eth0" 
Feb 13 20:22:35.508131 containerd[1523]: time="2025-02-13T20:22:35.507924524Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 20:22:35.508131 containerd[1523]: time="2025-02-13T20:22:35.508071989Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 20:22:35.508859 containerd[1523]: time="2025-02-13T20:22:35.508101231Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:22:35.509516 containerd[1523]: time="2025-02-13T20:22:35.509420724Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:22:35.536718 systemd[1]: Started cri-containerd-2e7b1e40a33b37328fe331b3def13f988f13b0a53a06560252d08cd242872d8c.scope - libcontainer container 2e7b1e40a33b37328fe331b3def13f988f13b0a53a06560252d08cd242872d8c. 
Feb 13 20:22:35.544300 systemd-networkd[1449]: cali9b06b599da0: Link UP Feb 13 20:22:35.549171 systemd-networkd[1449]: cali9b06b599da0: Gained carrier Feb 13 20:22:35.561770 containerd[1523]: 2025-02-13 20:22:35.211 [INFO][3176] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Feb 13 20:22:35.561770 containerd[1523]: 2025-02-13 20:22:35.259 [INFO][3176] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.230.12.214-k8s-nginx--deployment--7fcdb87857--rq72x-eth0 nginx-deployment-7fcdb87857- default aeec0dc4-c55c-4364-849e-c8dfcadc1aa1 1420 0 2025-02-13 20:22:27 +0000 UTC map[app:nginx pod-template-hash:7fcdb87857 projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s 10.230.12.214 nginx-deployment-7fcdb87857-rq72x eth0 default [] [] [kns.default ksa.default.default] cali9b06b599da0 [] []}} ContainerID="a586b352fc543b4b8ac6cfeba1d639c4d70c121a4d19841340861caffbf3fab2" Namespace="default" Pod="nginx-deployment-7fcdb87857-rq72x" WorkloadEndpoint="10.230.12.214-k8s-nginx--deployment--7fcdb87857--rq72x-" Feb 13 20:22:35.561770 containerd[1523]: 2025-02-13 20:22:35.260 [INFO][3176] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="a586b352fc543b4b8ac6cfeba1d639c4d70c121a4d19841340861caffbf3fab2" Namespace="default" Pod="nginx-deployment-7fcdb87857-rq72x" WorkloadEndpoint="10.230.12.214-k8s-nginx--deployment--7fcdb87857--rq72x-eth0" Feb 13 20:22:35.561770 containerd[1523]: 2025-02-13 20:22:35.366 [INFO][3191] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a586b352fc543b4b8ac6cfeba1d639c4d70c121a4d19841340861caffbf3fab2" HandleID="k8s-pod-network.a586b352fc543b4b8ac6cfeba1d639c4d70c121a4d19841340861caffbf3fab2" Workload="10.230.12.214-k8s-nginx--deployment--7fcdb87857--rq72x-eth0" Feb 13 20:22:35.561770 containerd[1523]: 2025-02-13 20:22:35.387 [INFO][3191] 
ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="a586b352fc543b4b8ac6cfeba1d639c4d70c121a4d19841340861caffbf3fab2" HandleID="k8s-pod-network.a586b352fc543b4b8ac6cfeba1d639c4d70c121a4d19841340861caffbf3fab2" Workload="10.230.12.214-k8s-nginx--deployment--7fcdb87857--rq72x-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000428d90), Attrs:map[string]string{"namespace":"default", "node":"10.230.12.214", "pod":"nginx-deployment-7fcdb87857-rq72x", "timestamp":"2025-02-13 20:22:35.366206761 +0000 UTC"}, Hostname:"10.230.12.214", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 13 20:22:35.561770 containerd[1523]: 2025-02-13 20:22:35.387 [INFO][3191] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 20:22:35.561770 containerd[1523]: 2025-02-13 20:22:35.427 [INFO][3191] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Feb 13 20:22:35.561770 containerd[1523]: 2025-02-13 20:22:35.427 [INFO][3191] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.230.12.214' Feb 13 20:22:35.561770 containerd[1523]: 2025-02-13 20:22:35.492 [INFO][3191] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.a586b352fc543b4b8ac6cfeba1d639c4d70c121a4d19841340861caffbf3fab2" host="10.230.12.214" Feb 13 20:22:35.561770 containerd[1523]: 2025-02-13 20:22:35.498 [INFO][3191] ipam/ipam.go 372: Looking up existing affinities for host host="10.230.12.214" Feb 13 20:22:35.561770 containerd[1523]: 2025-02-13 20:22:35.509 [INFO][3191] ipam/ipam.go 489: Trying affinity for 192.168.113.128/26 host="10.230.12.214" Feb 13 20:22:35.561770 containerd[1523]: 2025-02-13 20:22:35.514 [INFO][3191] ipam/ipam.go 155: Attempting to load block cidr=192.168.113.128/26 host="10.230.12.214" Feb 13 20:22:35.561770 containerd[1523]: 2025-02-13 20:22:35.518 [INFO][3191] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.113.128/26 host="10.230.12.214" Feb 13 20:22:35.561770 containerd[1523]: 2025-02-13 20:22:35.518 [INFO][3191] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.113.128/26 handle="k8s-pod-network.a586b352fc543b4b8ac6cfeba1d639c4d70c121a4d19841340861caffbf3fab2" host="10.230.12.214" Feb 13 20:22:35.561770 containerd[1523]: 2025-02-13 20:22:35.520 [INFO][3191] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.a586b352fc543b4b8ac6cfeba1d639c4d70c121a4d19841340861caffbf3fab2 Feb 13 20:22:35.561770 containerd[1523]: 2025-02-13 20:22:35.525 [INFO][3191] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.113.128/26 handle="k8s-pod-network.a586b352fc543b4b8ac6cfeba1d639c4d70c121a4d19841340861caffbf3fab2" host="10.230.12.214" Feb 13 20:22:35.561770 containerd[1523]: 2025-02-13 20:22:35.535 [INFO][3191] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.113.130/26] block=192.168.113.128/26 
handle="k8s-pod-network.a586b352fc543b4b8ac6cfeba1d639c4d70c121a4d19841340861caffbf3fab2" host="10.230.12.214" Feb 13 20:22:35.561770 containerd[1523]: 2025-02-13 20:22:35.535 [INFO][3191] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.113.130/26] handle="k8s-pod-network.a586b352fc543b4b8ac6cfeba1d639c4d70c121a4d19841340861caffbf3fab2" host="10.230.12.214" Feb 13 20:22:35.561770 containerd[1523]: 2025-02-13 20:22:35.535 [INFO][3191] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 20:22:35.561770 containerd[1523]: 2025-02-13 20:22:35.535 [INFO][3191] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.113.130/26] IPv6=[] ContainerID="a586b352fc543b4b8ac6cfeba1d639c4d70c121a4d19841340861caffbf3fab2" HandleID="k8s-pod-network.a586b352fc543b4b8ac6cfeba1d639c4d70c121a4d19841340861caffbf3fab2" Workload="10.230.12.214-k8s-nginx--deployment--7fcdb87857--rq72x-eth0" Feb 13 20:22:35.562919 containerd[1523]: 2025-02-13 20:22:35.539 [INFO][3176] cni-plugin/k8s.go 386: Populated endpoint ContainerID="a586b352fc543b4b8ac6cfeba1d639c4d70c121a4d19841340861caffbf3fab2" Namespace="default" Pod="nginx-deployment-7fcdb87857-rq72x" WorkloadEndpoint="10.230.12.214-k8s-nginx--deployment--7fcdb87857--rq72x-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.230.12.214-k8s-nginx--deployment--7fcdb87857--rq72x-eth0", GenerateName:"nginx-deployment-7fcdb87857-", Namespace:"default", SelfLink:"", UID:"aeec0dc4-c55c-4364-849e-c8dfcadc1aa1", ResourceVersion:"1420", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 22, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"7fcdb87857", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, 
Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.230.12.214", ContainerID:"", Pod:"nginx-deployment-7fcdb87857-rq72x", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.113.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali9b06b599da0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 20:22:35.562919 containerd[1523]: 2025-02-13 20:22:35.539 [INFO][3176] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.113.130/32] ContainerID="a586b352fc543b4b8ac6cfeba1d639c4d70c121a4d19841340861caffbf3fab2" Namespace="default" Pod="nginx-deployment-7fcdb87857-rq72x" WorkloadEndpoint="10.230.12.214-k8s-nginx--deployment--7fcdb87857--rq72x-eth0" Feb 13 20:22:35.562919 containerd[1523]: 2025-02-13 20:22:35.540 [INFO][3176] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali9b06b599da0 ContainerID="a586b352fc543b4b8ac6cfeba1d639c4d70c121a4d19841340861caffbf3fab2" Namespace="default" Pod="nginx-deployment-7fcdb87857-rq72x" WorkloadEndpoint="10.230.12.214-k8s-nginx--deployment--7fcdb87857--rq72x-eth0" Feb 13 20:22:35.562919 containerd[1523]: 2025-02-13 20:22:35.546 [INFO][3176] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a586b352fc543b4b8ac6cfeba1d639c4d70c121a4d19841340861caffbf3fab2" Namespace="default" Pod="nginx-deployment-7fcdb87857-rq72x" WorkloadEndpoint="10.230.12.214-k8s-nginx--deployment--7fcdb87857--rq72x-eth0" Feb 13 20:22:35.562919 containerd[1523]: 2025-02-13 20:22:35.546 [INFO][3176] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="a586b352fc543b4b8ac6cfeba1d639c4d70c121a4d19841340861caffbf3fab2" Namespace="default" 
Pod="nginx-deployment-7fcdb87857-rq72x" WorkloadEndpoint="10.230.12.214-k8s-nginx--deployment--7fcdb87857--rq72x-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.230.12.214-k8s-nginx--deployment--7fcdb87857--rq72x-eth0", GenerateName:"nginx-deployment-7fcdb87857-", Namespace:"default", SelfLink:"", UID:"aeec0dc4-c55c-4364-849e-c8dfcadc1aa1", ResourceVersion:"1420", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 22, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"7fcdb87857", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.230.12.214", ContainerID:"a586b352fc543b4b8ac6cfeba1d639c4d70c121a4d19841340861caffbf3fab2", Pod:"nginx-deployment-7fcdb87857-rq72x", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.113.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali9b06b599da0", MAC:"e2:03:f0:09:3b:5a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 20:22:35.562919 containerd[1523]: 2025-02-13 20:22:35.555 [INFO][3176] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="a586b352fc543b4b8ac6cfeba1d639c4d70c121a4d19841340861caffbf3fab2" Namespace="default" Pod="nginx-deployment-7fcdb87857-rq72x" WorkloadEndpoint="10.230.12.214-k8s-nginx--deployment--7fcdb87857--rq72x-eth0" Feb 13 20:22:35.596911 containerd[1523]: time="2025-02-13T20:22:35.596578965Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:csi-node-driver-spcjx,Uid:7bcc9fed-b4b7-4b0f-9dc0-2e0587f01618,Namespace:calico-system,Attempt:10,} returns sandbox id \"2e7b1e40a33b37328fe331b3def13f988f13b0a53a06560252d08cd242872d8c\"" Feb 13 20:22:35.599404 containerd[1523]: time="2025-02-13T20:22:35.599373970Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\"" Feb 13 20:22:35.605830 containerd[1523]: time="2025-02-13T20:22:35.604741886Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 20:22:35.605830 containerd[1523]: time="2025-02-13T20:22:35.605424113Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 20:22:35.605830 containerd[1523]: time="2025-02-13T20:22:35.605454117Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:22:35.605830 containerd[1523]: time="2025-02-13T20:22:35.605606872Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:22:35.629722 systemd[1]: Started cri-containerd-a586b352fc543b4b8ac6cfeba1d639c4d70c121a4d19841340861caffbf3fab2.scope - libcontainer container a586b352fc543b4b8ac6cfeba1d639c4d70c121a4d19841340861caffbf3fab2. 
Feb 13 20:22:35.688069 containerd[1523]: time="2025-02-13T20:22:35.688012121Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-rq72x,Uid:aeec0dc4-c55c-4364-849e-c8dfcadc1aa1,Namespace:default,Attempt:8,} returns sandbox id \"a586b352fc543b4b8ac6cfeba1d639c4d70c121a4d19841340861caffbf3fab2\"" Feb 13 20:22:35.718005 kubelet[1947]: E0213 20:22:35.717860 1947 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:22:36.346518 kernel: bpftool[3445]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Feb 13 20:22:36.643854 systemd-networkd[1449]: vxlan.calico: Link UP Feb 13 20:22:36.643870 systemd-networkd[1449]: vxlan.calico: Gained carrier Feb 13 20:22:36.682645 systemd-networkd[1449]: cali9b06b599da0: Gained IPv6LL Feb 13 20:22:36.719500 kubelet[1947]: E0213 20:22:36.718588 1947 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:22:37.132273 systemd-networkd[1449]: cali1b8b5a2919f: Gained IPv6LL Feb 13 20:22:37.345219 containerd[1523]: time="2025-02-13T20:22:37.345164123Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:22:37.347681 containerd[1523]: time="2025-02-13T20:22:37.347021003Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.1: active requests=0, bytes read=7902632" Feb 13 20:22:37.363673 containerd[1523]: time="2025-02-13T20:22:37.363619671Z" level=info msg="ImageCreate event name:\"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:22:37.367591 containerd[1523]: time="2025-02-13T20:22:37.367522739Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:22:37.368782 containerd[1523]: time="2025-02-13T20:22:37.368630512Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.1\" with image id \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\", size \"9395716\" in 1.76904258s" Feb 13 20:22:37.368782 containerd[1523]: time="2025-02-13T20:22:37.368672124Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\" returns image reference \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\"" Feb 13 20:22:37.370831 containerd[1523]: time="2025-02-13T20:22:37.370580998Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Feb 13 20:22:37.372939 containerd[1523]: time="2025-02-13T20:22:37.372899479Z" level=info msg="CreateContainer within sandbox \"2e7b1e40a33b37328fe331b3def13f988f13b0a53a06560252d08cd242872d8c\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Feb 13 20:22:37.399438 containerd[1523]: time="2025-02-13T20:22:37.398682398Z" level=info msg="CreateContainer within sandbox \"2e7b1e40a33b37328fe331b3def13f988f13b0a53a06560252d08cd242872d8c\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"b9c47d73e3e8ac70a8c878e4b003034715de454933583c2b0b56e0da7c1bef53\"" Feb 13 20:22:37.401355 containerd[1523]: time="2025-02-13T20:22:37.399782699Z" level=info msg="StartContainer for \"b9c47d73e3e8ac70a8c878e4b003034715de454933583c2b0b56e0da7c1bef53\"" Feb 13 20:22:37.437141 systemd[1]: run-containerd-runc-k8s.io-b9c47d73e3e8ac70a8c878e4b003034715de454933583c2b0b56e0da7c1bef53-runc.yErKOe.mount: Deactivated successfully. 
Feb 13 20:22:37.449725 systemd[1]: Started cri-containerd-b9c47d73e3e8ac70a8c878e4b003034715de454933583c2b0b56e0da7c1bef53.scope - libcontainer container b9c47d73e3e8ac70a8c878e4b003034715de454933583c2b0b56e0da7c1bef53. Feb 13 20:22:37.493922 containerd[1523]: time="2025-02-13T20:22:37.493870440Z" level=info msg="StartContainer for \"b9c47d73e3e8ac70a8c878e4b003034715de454933583c2b0b56e0da7c1bef53\" returns successfully" Feb 13 20:22:37.719860 kubelet[1947]: E0213 20:22:37.719344 1947 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:22:38.282869 systemd-networkd[1449]: vxlan.calico: Gained IPv6LL Feb 13 20:22:38.719710 kubelet[1947]: E0213 20:22:38.719547 1947 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:22:39.720728 kubelet[1947]: E0213 20:22:39.720666 1947 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:22:40.723028 kubelet[1947]: E0213 20:22:40.722549 1947 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:22:41.418870 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3937371496.mount: Deactivated successfully. 
Feb 13 20:22:41.723061 kubelet[1947]: E0213 20:22:41.722868 1947 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:22:42.724050 kubelet[1947]: E0213 20:22:42.723969 1947 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:22:43.344779 containerd[1523]: time="2025-02-13T20:22:43.344710574Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:22:43.346384 containerd[1523]: time="2025-02-13T20:22:43.346126790Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=73054493" Feb 13 20:22:43.347526 containerd[1523]: time="2025-02-13T20:22:43.347134449Z" level=info msg="ImageCreate event name:\"sha256:fe94eb5f0c9c8d0ca277aa8cd5940f1faf5970175bf373932babc578545deda8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:22:43.354335 containerd[1523]: time="2025-02-13T20:22:43.354265762Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx@sha256:d9bc3da999da9f147f1277c7b18292486847e8f39f95fcf81d914d0c22815faf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:22:43.358857 containerd[1523]: time="2025-02-13T20:22:43.358818885Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:fe94eb5f0c9c8d0ca277aa8cd5940f1faf5970175bf373932babc578545deda8\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:d9bc3da999da9f147f1277c7b18292486847e8f39f95fcf81d914d0c22815faf\", size \"73054371\" in 5.988199814s" Feb 13 20:22:43.359028 containerd[1523]: time="2025-02-13T20:22:43.358999267Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:fe94eb5f0c9c8d0ca277aa8cd5940f1faf5970175bf373932babc578545deda8\"" Feb 13 20:22:43.360897 containerd[1523]: 
time="2025-02-13T20:22:43.360663982Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\"" Feb 13 20:22:43.371610 containerd[1523]: time="2025-02-13T20:22:43.371539350Z" level=info msg="CreateContainer within sandbox \"a586b352fc543b4b8ac6cfeba1d639c4d70c121a4d19841340861caffbf3fab2\" for container &ContainerMetadata{Name:nginx,Attempt:0,}" Feb 13 20:22:43.390536 containerd[1523]: time="2025-02-13T20:22:43.390499379Z" level=info msg="CreateContainer within sandbox \"a586b352fc543b4b8ac6cfeba1d639c4d70c121a4d19841340861caffbf3fab2\" for &ContainerMetadata{Name:nginx,Attempt:0,} returns container id \"babdcd5d01e83603293f5835acc6328c16be8fed45e03d0df4c3b1f1676d9663\"" Feb 13 20:22:43.391132 containerd[1523]: time="2025-02-13T20:22:43.391018362Z" level=info msg="StartContainer for \"babdcd5d01e83603293f5835acc6328c16be8fed45e03d0df4c3b1f1676d9663\"" Feb 13 20:22:43.459730 systemd[1]: Started cri-containerd-babdcd5d01e83603293f5835acc6328c16be8fed45e03d0df4c3b1f1676d9663.scope - libcontainer container babdcd5d01e83603293f5835acc6328c16be8fed45e03d0df4c3b1f1676d9663. 
Feb 13 20:22:43.497904 containerd[1523]: time="2025-02-13T20:22:43.497756033Z" level=info msg="StartContainer for \"babdcd5d01e83603293f5835acc6328c16be8fed45e03d0df4c3b1f1676d9663\" returns successfully" Feb 13 20:22:43.724900 kubelet[1947]: E0213 20:22:43.724255 1947 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:22:44.156201 kubelet[1947]: I0213 20:22:44.156127 1947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nginx-deployment-7fcdb87857-rq72x" podStartSLOduration=9.485359172 podStartE2EDuration="17.156109559s" podCreationTimestamp="2025-02-13 20:22:27 +0000 UTC" firstStartedPulling="2025-02-13 20:22:35.68939106 +0000 UTC m=+28.870204692" lastFinishedPulling="2025-02-13 20:22:43.360141447 +0000 UTC m=+36.540955079" observedRunningTime="2025-02-13 20:22:44.155550846 +0000 UTC m=+37.336364497" watchObservedRunningTime="2025-02-13 20:22:44.156109559 +0000 UTC m=+37.336923199" Feb 13 20:22:44.724624 kubelet[1947]: E0213 20:22:44.724465 1947 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:22:45.035774 containerd[1523]: time="2025-02-13T20:22:45.035719146Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:22:45.037464 containerd[1523]: time="2025-02-13T20:22:45.037111351Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1: active requests=0, bytes read=10501081" Feb 13 20:22:45.037464 containerd[1523]: time="2025-02-13T20:22:45.037406890Z" level=info msg="ImageCreate event name:\"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:22:45.040322 containerd[1523]: time="2025-02-13T20:22:45.040226474Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:22:45.041693 containerd[1523]: time="2025-02-13T20:22:45.041541615Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" with image id \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\", size \"11994117\" in 1.680641093s" Feb 13 20:22:45.041693 containerd[1523]: time="2025-02-13T20:22:45.041583817Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" returns image reference \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\"" Feb 13 20:22:45.044789 containerd[1523]: time="2025-02-13T20:22:45.044607714Z" level=info msg="CreateContainer within sandbox \"2e7b1e40a33b37328fe331b3def13f988f13b0a53a06560252d08cd242872d8c\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Feb 13 20:22:45.099635 containerd[1523]: time="2025-02-13T20:22:45.099590645Z" level=info msg="CreateContainer within sandbox \"2e7b1e40a33b37328fe331b3def13f988f13b0a53a06560252d08cd242872d8c\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"0c7567206776aadad83e5c1a6040076829dc493701d522d8a3acbb7ffb199281\"" Feb 13 20:22:45.100710 containerd[1523]: time="2025-02-13T20:22:45.100661976Z" level=info msg="StartContainer for \"0c7567206776aadad83e5c1a6040076829dc493701d522d8a3acbb7ffb199281\"" Feb 13 20:22:45.156697 systemd[1]: Started cri-containerd-0c7567206776aadad83e5c1a6040076829dc493701d522d8a3acbb7ffb199281.scope - libcontainer container 0c7567206776aadad83e5c1a6040076829dc493701d522d8a3acbb7ffb199281. 
Feb 13 20:22:45.224786 containerd[1523]: time="2025-02-13T20:22:45.224597683Z" level=info msg="StartContainer for \"0c7567206776aadad83e5c1a6040076829dc493701d522d8a3acbb7ffb199281\" returns successfully" Feb 13 20:22:45.725479 kubelet[1947]: E0213 20:22:45.725409 1947 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:22:45.838388 kubelet[1947]: I0213 20:22:45.838327 1947 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Feb 13 20:22:45.838388 kubelet[1947]: I0213 20:22:45.838401 1947 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Feb 13 20:22:46.219593 kubelet[1947]: I0213 20:22:46.219463 1947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-spcjx" podStartSLOduration=29.775849515 podStartE2EDuration="39.219414619s" podCreationTimestamp="2025-02-13 20:22:07 +0000 UTC" firstStartedPulling="2025-02-13 20:22:35.599063841 +0000 UTC m=+28.779877467" lastFinishedPulling="2025-02-13 20:22:45.042628944 +0000 UTC m=+38.223442571" observedRunningTime="2025-02-13 20:22:46.218572287 +0000 UTC m=+39.399385933" watchObservedRunningTime="2025-02-13 20:22:46.219414619 +0000 UTC m=+39.400228259" Feb 13 20:22:46.726132 kubelet[1947]: E0213 20:22:46.726041 1947 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:22:47.679510 kubelet[1947]: E0213 20:22:47.679438 1947 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:22:47.726322 kubelet[1947]: E0213 20:22:47.726235 1947 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 
20:22:48.726967 kubelet[1947]: E0213 20:22:48.726887 1947 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:22:49.728184 kubelet[1947]: E0213 20:22:49.728099 1947 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:22:50.729415 kubelet[1947]: E0213 20:22:50.729340 1947 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:22:51.730575 kubelet[1947]: E0213 20:22:51.730475 1947 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:22:52.731331 kubelet[1947]: E0213 20:22:52.731262 1947 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:22:53.731709 kubelet[1947]: E0213 20:22:53.731622 1947 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:22:54.726683 systemd[1]: Created slice kubepods-besteffort-pod00e464df_fb55_4ef5_ad01_8dbfb7164e98.slice - libcontainer container kubepods-besteffort-pod00e464df_fb55_4ef5_ad01_8dbfb7164e98.slice. 
Feb 13 20:22:54.731842 kubelet[1947]: E0213 20:22:54.731787 1947 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 20:22:54.817811 kubelet[1947]: I0213 20:22:54.817671 1947 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/00e464df-fb55-4ef5-ad01-8dbfb7164e98-data\") pod \"nfs-server-provisioner-0\" (UID: \"00e464df-fb55-4ef5-ad01-8dbfb7164e98\") " pod="default/nfs-server-provisioner-0"
Feb 13 20:22:54.817811 kubelet[1947]: I0213 20:22:54.817766 1947 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gmvk6\" (UniqueName: \"kubernetes.io/projected/00e464df-fb55-4ef5-ad01-8dbfb7164e98-kube-api-access-gmvk6\") pod \"nfs-server-provisioner-0\" (UID: \"00e464df-fb55-4ef5-ad01-8dbfb7164e98\") " pod="default/nfs-server-provisioner-0"
Feb 13 20:22:55.032346 containerd[1523]: time="2025-02-13T20:22:55.031816671Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:00e464df-fb55-4ef5-ad01-8dbfb7164e98,Namespace:default,Attempt:0,}"
Feb 13 20:22:55.214423 systemd-networkd[1449]: cali60e51b789ff: Link UP
Feb 13 20:22:55.218652 systemd-networkd[1449]: cali60e51b789ff: Gained carrier
Feb 13 20:22:55.237826 containerd[1523]: 2025-02-13 20:22:55.111 [INFO][3740] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.230.12.214-k8s-nfs--server--provisioner--0-eth0 nfs-server-provisioner- default 00e464df-fb55-4ef5-ad01-8dbfb7164e98 1576 0 2025-02-13 20:22:54 +0000 UTC map[app:nfs-server-provisioner apps.kubernetes.io/pod-index:0 chart:nfs-server-provisioner-1.8.0 controller-revision-hash:nfs-server-provisioner-d5cbb7f57 heritage:Helm projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:nfs-server-provisioner release:nfs-server-provisioner statefulset.kubernetes.io/pod-name:nfs-server-provisioner-0] map[] [] [] []} {k8s 10.230.12.214 nfs-server-provisioner-0 eth0 nfs-server-provisioner [] [] [kns.default ksa.default.nfs-server-provisioner] cali60e51b789ff [{nfs TCP 2049 0 } {nfs-udp UDP 2049 0 } {nlockmgr TCP 32803 0 } {nlockmgr-udp UDP 32803 0 } {mountd TCP 20048 0 } {mountd-udp UDP 20048 0 } {rquotad TCP 875 0 } {rquotad-udp UDP 875 0 } {rpcbind TCP 111 0 } {rpcbind-udp UDP 111 0 } {statd TCP 662 0 } {statd-udp UDP 662 0 }] []}} ContainerID="7c5003749d6ca8bb07e894afedd4321067f37853b0a2a44e5523a106267e2fca" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.230.12.214-k8s-nfs--server--provisioner--0-"
Feb 13 20:22:55.237826 containerd[1523]: 2025-02-13 20:22:55.111 [INFO][3740] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="7c5003749d6ca8bb07e894afedd4321067f37853b0a2a44e5523a106267e2fca" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.230.12.214-k8s-nfs--server--provisioner--0-eth0"
Feb 13 20:22:55.237826 containerd[1523]: 2025-02-13 20:22:55.155 [INFO][3751] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7c5003749d6ca8bb07e894afedd4321067f37853b0a2a44e5523a106267e2fca" HandleID="k8s-pod-network.7c5003749d6ca8bb07e894afedd4321067f37853b0a2a44e5523a106267e2fca" Workload="10.230.12.214-k8s-nfs--server--provisioner--0-eth0"
Feb 13 20:22:55.237826 containerd[1523]: 2025-02-13 20:22:55.168 [INFO][3751] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="7c5003749d6ca8bb07e894afedd4321067f37853b0a2a44e5523a106267e2fca" HandleID="k8s-pod-network.7c5003749d6ca8bb07e894afedd4321067f37853b0a2a44e5523a106267e2fca" Workload="10.230.12.214-k8s-nfs--server--provisioner--0-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000319830), Attrs:map[string]string{"namespace":"default", "node":"10.230.12.214", "pod":"nfs-server-provisioner-0", "timestamp":"2025-02-13 20:22:55.155463591 +0000 UTC"}, Hostname:"10.230.12.214", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Feb 13 20:22:55.237826 containerd[1523]: 2025-02-13 20:22:55.168 [INFO][3751] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Feb 13 20:22:55.237826 containerd[1523]: 2025-02-13 20:22:55.169 [INFO][3751] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Feb 13 20:22:55.237826 containerd[1523]: 2025-02-13 20:22:55.169 [INFO][3751] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.230.12.214'
Feb 13 20:22:55.237826 containerd[1523]: 2025-02-13 20:22:55.172 [INFO][3751] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.7c5003749d6ca8bb07e894afedd4321067f37853b0a2a44e5523a106267e2fca" host="10.230.12.214"
Feb 13 20:22:55.237826 containerd[1523]: 2025-02-13 20:22:55.178 [INFO][3751] ipam/ipam.go 372: Looking up existing affinities for host host="10.230.12.214"
Feb 13 20:22:55.237826 containerd[1523]: 2025-02-13 20:22:55.185 [INFO][3751] ipam/ipam.go 489: Trying affinity for 192.168.113.128/26 host="10.230.12.214"
Feb 13 20:22:55.237826 containerd[1523]: 2025-02-13 20:22:55.188 [INFO][3751] ipam/ipam.go 155: Attempting to load block cidr=192.168.113.128/26 host="10.230.12.214"
Feb 13 20:22:55.237826 containerd[1523]: 2025-02-13 20:22:55.191 [INFO][3751] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.113.128/26 host="10.230.12.214"
Feb 13 20:22:55.237826 containerd[1523]: 2025-02-13 20:22:55.191 [INFO][3751] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.113.128/26 handle="k8s-pod-network.7c5003749d6ca8bb07e894afedd4321067f37853b0a2a44e5523a106267e2fca" host="10.230.12.214"
Feb 13 20:22:55.237826 containerd[1523]: 2025-02-13 20:22:55.194 [INFO][3751] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.7c5003749d6ca8bb07e894afedd4321067f37853b0a2a44e5523a106267e2fca
Feb 13 20:22:55.237826 containerd[1523]: 2025-02-13 20:22:55.200 [INFO][3751] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.113.128/26 handle="k8s-pod-network.7c5003749d6ca8bb07e894afedd4321067f37853b0a2a44e5523a106267e2fca" host="10.230.12.214"
Feb 13 20:22:55.237826 containerd[1523]: 2025-02-13 20:22:55.207 [INFO][3751] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.113.131/26] block=192.168.113.128/26 handle="k8s-pod-network.7c5003749d6ca8bb07e894afedd4321067f37853b0a2a44e5523a106267e2fca" host="10.230.12.214"
Feb 13 20:22:55.237826 containerd[1523]: 2025-02-13 20:22:55.207 [INFO][3751] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.113.131/26] handle="k8s-pod-network.7c5003749d6ca8bb07e894afedd4321067f37853b0a2a44e5523a106267e2fca" host="10.230.12.214"
Feb 13 20:22:55.237826 containerd[1523]: 2025-02-13 20:22:55.207 [INFO][3751] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Feb 13 20:22:55.237826 containerd[1523]: 2025-02-13 20:22:55.207 [INFO][3751] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.113.131/26] IPv6=[] ContainerID="7c5003749d6ca8bb07e894afedd4321067f37853b0a2a44e5523a106267e2fca" HandleID="k8s-pod-network.7c5003749d6ca8bb07e894afedd4321067f37853b0a2a44e5523a106267e2fca" Workload="10.230.12.214-k8s-nfs--server--provisioner--0-eth0"
Feb 13 20:22:55.238931 containerd[1523]: 2025-02-13 20:22:55.209 [INFO][3740] cni-plugin/k8s.go 386: Populated endpoint ContainerID="7c5003749d6ca8bb07e894afedd4321067f37853b0a2a44e5523a106267e2fca" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.230.12.214-k8s-nfs--server--provisioner--0-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.230.12.214-k8s-nfs--server--provisioner--0-eth0", GenerateName:"nfs-server-provisioner-", Namespace:"default", SelfLink:"", UID:"00e464df-fb55-4ef5-ad01-8dbfb7164e98", ResourceVersion:"1576", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 22, 54, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nfs-server-provisioner", "apps.kubernetes.io/pod-index":"0", "chart":"nfs-server-provisioner-1.8.0", "controller-revision-hash":"nfs-server-provisioner-d5cbb7f57", "heritage":"Helm", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"nfs-server-provisioner", "release":"nfs-server-provisioner", "statefulset.kubernetes.io/pod-name":"nfs-server-provisioner-0"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.230.12.214", ContainerID:"", Pod:"nfs-server-provisioner-0", Endpoint:"eth0", ServiceAccountName:"nfs-server-provisioner", IPNetworks:[]string{"192.168.113.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.nfs-server-provisioner"}, InterfaceName:"cali60e51b789ff", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"nfs", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nfs-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x296, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x296, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}}
Feb 13 20:22:55.238931 containerd[1523]: 2025-02-13 20:22:55.210 [INFO][3740] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.113.131/32] ContainerID="7c5003749d6ca8bb07e894afedd4321067f37853b0a2a44e5523a106267e2fca" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.230.12.214-k8s-nfs--server--provisioner--0-eth0"
Feb 13 20:22:55.238931 containerd[1523]: 2025-02-13 20:22:55.210 [INFO][3740] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali60e51b789ff ContainerID="7c5003749d6ca8bb07e894afedd4321067f37853b0a2a44e5523a106267e2fca" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.230.12.214-k8s-nfs--server--provisioner--0-eth0"
Feb 13 20:22:55.238931 containerd[1523]: 2025-02-13 20:22:55.219 [INFO][3740] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="7c5003749d6ca8bb07e894afedd4321067f37853b0a2a44e5523a106267e2fca" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.230.12.214-k8s-nfs--server--provisioner--0-eth0"
Feb 13 20:22:55.239232 containerd[1523]: 2025-02-13 20:22:55.220 [INFO][3740] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="7c5003749d6ca8bb07e894afedd4321067f37853b0a2a44e5523a106267e2fca" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.230.12.214-k8s-nfs--server--provisioner--0-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.230.12.214-k8s-nfs--server--provisioner--0-eth0", GenerateName:"nfs-server-provisioner-", Namespace:"default", SelfLink:"", UID:"00e464df-fb55-4ef5-ad01-8dbfb7164e98", ResourceVersion:"1576", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 22, 54, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nfs-server-provisioner", "apps.kubernetes.io/pod-index":"0", "chart":"nfs-server-provisioner-1.8.0", "controller-revision-hash":"nfs-server-provisioner-d5cbb7f57", "heritage":"Helm", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"nfs-server-provisioner", "release":"nfs-server-provisioner", "statefulset.kubernetes.io/pod-name":"nfs-server-provisioner-0"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.230.12.214", ContainerID:"7c5003749d6ca8bb07e894afedd4321067f37853b0a2a44e5523a106267e2fca", Pod:"nfs-server-provisioner-0", Endpoint:"eth0", ServiceAccountName:"nfs-server-provisioner", IPNetworks:[]string{"192.168.113.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.nfs-server-provisioner"}, InterfaceName:"cali60e51b789ff", MAC:"a2:32:bd:18:08:c0", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"nfs", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nfs-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x296, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x296, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}}
Feb 13 20:22:55.239232 containerd[1523]: 2025-02-13 20:22:55.231 [INFO][3740] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="7c5003749d6ca8bb07e894afedd4321067f37853b0a2a44e5523a106267e2fca" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.230.12.214-k8s-nfs--server--provisioner--0-eth0"
Feb 13 20:22:55.280921 containerd[1523]: time="2025-02-13T20:22:55.279804191Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 13 20:22:55.280921 containerd[1523]: time="2025-02-13T20:22:55.279923903Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 13 20:22:55.280921 containerd[1523]: time="2025-02-13T20:22:55.279959193Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 20:22:55.280921 containerd[1523]: time="2025-02-13T20:22:55.280077176Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 20:22:55.313710 systemd[1]: Started cri-containerd-7c5003749d6ca8bb07e894afedd4321067f37853b0a2a44e5523a106267e2fca.scope - libcontainer container 7c5003749d6ca8bb07e894afedd4321067f37853b0a2a44e5523a106267e2fca.
Feb 13 20:22:55.375737 containerd[1523]: time="2025-02-13T20:22:55.375647332Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:00e464df-fb55-4ef5-ad01-8dbfb7164e98,Namespace:default,Attempt:0,} returns sandbox id \"7c5003749d6ca8bb07e894afedd4321067f37853b0a2a44e5523a106267e2fca\""
Feb 13 20:22:55.378742 containerd[1523]: time="2025-02-13T20:22:55.378713019Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\""
Feb 13 20:22:55.732359 kubelet[1947]: E0213 20:22:55.732151 1947 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 20:22:56.523827 systemd-networkd[1449]: cali60e51b789ff: Gained IPv6LL
Feb 13 20:22:56.733106 kubelet[1947]: E0213 20:22:56.733022 1947 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 20:22:57.734284 kubelet[1947]: E0213 20:22:57.734175 1947 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 20:22:58.525901 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount592442740.mount: Deactivated successfully.
Feb 13 20:22:58.735278 kubelet[1947]: E0213 20:22:58.735207 1947 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 20:22:59.736495 kubelet[1947]: E0213 20:22:59.736409 1947 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 20:23:00.740478 kubelet[1947]: E0213 20:23:00.740402 1947 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 20:23:01.470791 containerd[1523]: time="2025-02-13T20:23:01.469277922Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 20:23:01.470791 containerd[1523]: time="2025-02-13T20:23:01.470676146Z" level=info msg="stop pulling image registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8: active requests=0, bytes read=91039414"
Feb 13 20:23:01.470791 containerd[1523]: time="2025-02-13T20:23:01.470722398Z" level=info msg="ImageCreate event name:\"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 20:23:01.474430 containerd[1523]: time="2025-02-13T20:23:01.474390412Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 20:23:01.476191 containerd[1523]: time="2025-02-13T20:23:01.476138119Z" level=info msg="Pulled image \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" with image id \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\", repo tag \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\", repo digest \"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\", size \"91036984\" in 6.097345231s"
Feb 13 20:23:01.476285 containerd[1523]: time="2025-02-13T20:23:01.476194103Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" returns image reference \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\""
Feb 13 20:23:01.480031 containerd[1523]: time="2025-02-13T20:23:01.479996086Z" level=info msg="CreateContainer within sandbox \"7c5003749d6ca8bb07e894afedd4321067f37853b0a2a44e5523a106267e2fca\" for container &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,}"
Feb 13 20:23:01.498653 containerd[1523]: time="2025-02-13T20:23:01.498573732Z" level=info msg="CreateContainer within sandbox \"7c5003749d6ca8bb07e894afedd4321067f37853b0a2a44e5523a106267e2fca\" for &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,} returns container id \"8b012f32191c2fcb3c225ff7b7238aff21372440ea7dfebb8e4613856663529d\""
Feb 13 20:23:01.499989 containerd[1523]: time="2025-02-13T20:23:01.499944561Z" level=info msg="StartContainer for \"8b012f32191c2fcb3c225ff7b7238aff21372440ea7dfebb8e4613856663529d\""
Feb 13 20:23:01.592753 systemd[1]: Started cri-containerd-8b012f32191c2fcb3c225ff7b7238aff21372440ea7dfebb8e4613856663529d.scope - libcontainer container 8b012f32191c2fcb3c225ff7b7238aff21372440ea7dfebb8e4613856663529d.
Feb 13 20:23:01.630709 containerd[1523]: time="2025-02-13T20:23:01.630353205Z" level=info msg="StartContainer for \"8b012f32191c2fcb3c225ff7b7238aff21372440ea7dfebb8e4613856663529d\" returns successfully"
Feb 13 20:23:01.741235 kubelet[1947]: E0213 20:23:01.741065 1947 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 20:23:02.273357 kubelet[1947]: I0213 20:23:02.273271 1947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nfs-server-provisioner-0" podStartSLOduration=2.173117861 podStartE2EDuration="8.273239226s" podCreationTimestamp="2025-02-13 20:22:54 +0000 UTC" firstStartedPulling="2025-02-13 20:22:55.377978406 +0000 UTC m=+48.558792032" lastFinishedPulling="2025-02-13 20:23:01.478099764 +0000 UTC m=+54.658913397" observedRunningTime="2025-02-13 20:23:02.27280339 +0000 UTC m=+55.453617051" watchObservedRunningTime="2025-02-13 20:23:02.273239226 +0000 UTC m=+55.454052866"
Feb 13 20:23:02.742267 kubelet[1947]: E0213 20:23:02.741743 1947 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 20:23:03.742034 kubelet[1947]: E0213 20:23:03.741953 1947 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 20:23:04.742962 kubelet[1947]: E0213 20:23:04.742880 1947 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 20:23:05.743890 kubelet[1947]: E0213 20:23:05.743824 1947 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 20:23:06.744634 kubelet[1947]: E0213 20:23:06.744550 1947 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 20:23:07.679358 kubelet[1947]: E0213 20:23:07.679267 1947 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 20:23:07.708059 containerd[1523]: time="2025-02-13T20:23:07.707920703Z" level=info msg="StopPodSandbox for \"d500c08499b34bb9523b313ff7cc26fb3c87fd92efc4db58167552a68d6feeeb\""
Feb 13 20:23:07.708895 containerd[1523]: time="2025-02-13T20:23:07.708802236Z" level=info msg="TearDown network for sandbox \"d500c08499b34bb9523b313ff7cc26fb3c87fd92efc4db58167552a68d6feeeb\" successfully"
Feb 13 20:23:07.708895 containerd[1523]: time="2025-02-13T20:23:07.708833907Z" level=info msg="StopPodSandbox for \"d500c08499b34bb9523b313ff7cc26fb3c87fd92efc4db58167552a68d6feeeb\" returns successfully"
Feb 13 20:23:07.725043 containerd[1523]: time="2025-02-13T20:23:07.724832008Z" level=info msg="RemovePodSandbox for \"d500c08499b34bb9523b313ff7cc26fb3c87fd92efc4db58167552a68d6feeeb\""
Feb 13 20:23:07.733154 containerd[1523]: time="2025-02-13T20:23:07.732872081Z" level=info msg="Forcibly stopping sandbox \"d500c08499b34bb9523b313ff7cc26fb3c87fd92efc4db58167552a68d6feeeb\""
Feb 13 20:23:07.742242 containerd[1523]: time="2025-02-13T20:23:07.733029941Z" level=info msg="TearDown network for sandbox \"d500c08499b34bb9523b313ff7cc26fb3c87fd92efc4db58167552a68d6feeeb\" successfully"
Feb 13 20:23:07.745034 kubelet[1947]: E0213 20:23:07.744965 1947 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 20:23:07.756941 containerd[1523]: time="2025-02-13T20:23:07.756839463Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d500c08499b34bb9523b313ff7cc26fb3c87fd92efc4db58167552a68d6feeeb\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Feb 13 20:23:07.757149 containerd[1523]: time="2025-02-13T20:23:07.756968300Z" level=info msg="RemovePodSandbox \"d500c08499b34bb9523b313ff7cc26fb3c87fd92efc4db58167552a68d6feeeb\" returns successfully"
Feb 13 20:23:07.758184 containerd[1523]: time="2025-02-13T20:23:07.758109810Z" level=info msg="StopPodSandbox for \"08cd905086cade080606a39b76e4d3e8cd195419a2e46a03fcad12097c2925a7\""
Feb 13 20:23:07.758300 containerd[1523]: time="2025-02-13T20:23:07.758268525Z" level=info msg="TearDown network for sandbox \"08cd905086cade080606a39b76e4d3e8cd195419a2e46a03fcad12097c2925a7\" successfully"
Feb 13 20:23:07.758300 containerd[1523]: time="2025-02-13T20:23:07.758289631Z" level=info msg="StopPodSandbox for \"08cd905086cade080606a39b76e4d3e8cd195419a2e46a03fcad12097c2925a7\" returns successfully"
Feb 13 20:23:07.759286 containerd[1523]: time="2025-02-13T20:23:07.758985704Z" level=info msg="RemovePodSandbox for \"08cd905086cade080606a39b76e4d3e8cd195419a2e46a03fcad12097c2925a7\""
Feb 13 20:23:07.759286 containerd[1523]: time="2025-02-13T20:23:07.759024590Z" level=info msg="Forcibly stopping sandbox \"08cd905086cade080606a39b76e4d3e8cd195419a2e46a03fcad12097c2925a7\""
Feb 13 20:23:07.759286 containerd[1523]: time="2025-02-13T20:23:07.759127376Z" level=info msg="TearDown network for sandbox \"08cd905086cade080606a39b76e4d3e8cd195419a2e46a03fcad12097c2925a7\" successfully"
Feb 13 20:23:07.762439 containerd[1523]: time="2025-02-13T20:23:07.762284986Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"08cd905086cade080606a39b76e4d3e8cd195419a2e46a03fcad12097c2925a7\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Feb 13 20:23:07.762439 containerd[1523]: time="2025-02-13T20:23:07.762341300Z" level=info msg="RemovePodSandbox \"08cd905086cade080606a39b76e4d3e8cd195419a2e46a03fcad12097c2925a7\" returns successfully"
Feb 13 20:23:07.762823 containerd[1523]: time="2025-02-13T20:23:07.762747304Z" level=info msg="StopPodSandbox for \"04213f4804c0b80dbef662296c36a2cd9161452ca505185295b7cff5ddbb8c13\""
Feb 13 20:23:07.763090 containerd[1523]: time="2025-02-13T20:23:07.762923089Z" level=info msg="TearDown network for sandbox \"04213f4804c0b80dbef662296c36a2cd9161452ca505185295b7cff5ddbb8c13\" successfully"
Feb 13 20:23:07.763090 containerd[1523]: time="2025-02-13T20:23:07.762947403Z" level=info msg="StopPodSandbox for \"04213f4804c0b80dbef662296c36a2cd9161452ca505185295b7cff5ddbb8c13\" returns successfully"
Feb 13 20:23:07.763923 containerd[1523]: time="2025-02-13T20:23:07.763836400Z" level=info msg="RemovePodSandbox for \"04213f4804c0b80dbef662296c36a2cd9161452ca505185295b7cff5ddbb8c13\""
Feb 13 20:23:07.764000 containerd[1523]: time="2025-02-13T20:23:07.763953621Z" level=info msg="Forcibly stopping sandbox \"04213f4804c0b80dbef662296c36a2cd9161452ca505185295b7cff5ddbb8c13\""
Feb 13 20:23:07.764109 containerd[1523]: time="2025-02-13T20:23:07.764063360Z" level=info msg="TearDown network for sandbox \"04213f4804c0b80dbef662296c36a2cd9161452ca505185295b7cff5ddbb8c13\" successfully"
Feb 13 20:23:07.766696 containerd[1523]: time="2025-02-13T20:23:07.766596426Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"04213f4804c0b80dbef662296c36a2cd9161452ca505185295b7cff5ddbb8c13\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Feb 13 20:23:07.766696 containerd[1523]: time="2025-02-13T20:23:07.766662831Z" level=info msg="RemovePodSandbox \"04213f4804c0b80dbef662296c36a2cd9161452ca505185295b7cff5ddbb8c13\" returns successfully"
Feb 13 20:23:07.767411 containerd[1523]: time="2025-02-13T20:23:07.767192372Z" level=info msg="StopPodSandbox for \"45a19fc16cf13d7e5c6b2ced71d847b5d9d935fc975aae735b9c26b075b1f2c9\""
Feb 13 20:23:07.767411 containerd[1523]: time="2025-02-13T20:23:07.767309812Z" level=info msg="TearDown network for sandbox \"45a19fc16cf13d7e5c6b2ced71d847b5d9d935fc975aae735b9c26b075b1f2c9\" successfully"
Feb 13 20:23:07.767411 containerd[1523]: time="2025-02-13T20:23:07.767329962Z" level=info msg="StopPodSandbox for \"45a19fc16cf13d7e5c6b2ced71d847b5d9d935fc975aae735b9c26b075b1f2c9\" returns successfully"
Feb 13 20:23:07.768106 containerd[1523]: time="2025-02-13T20:23:07.768073034Z" level=info msg="RemovePodSandbox for \"45a19fc16cf13d7e5c6b2ced71d847b5d9d935fc975aae735b9c26b075b1f2c9\""
Feb 13 20:23:07.768175 containerd[1523]: time="2025-02-13T20:23:07.768111046Z" level=info msg="Forcibly stopping sandbox \"45a19fc16cf13d7e5c6b2ced71d847b5d9d935fc975aae735b9c26b075b1f2c9\""
Feb 13 20:23:07.768300 containerd[1523]: time="2025-02-13T20:23:07.768253629Z" level=info msg="TearDown network for sandbox \"45a19fc16cf13d7e5c6b2ced71d847b5d9d935fc975aae735b9c26b075b1f2c9\" successfully"
Feb 13 20:23:07.772746 containerd[1523]: time="2025-02-13T20:23:07.772685490Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"45a19fc16cf13d7e5c6b2ced71d847b5d9d935fc975aae735b9c26b075b1f2c9\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Feb 13 20:23:07.772868 containerd[1523]: time="2025-02-13T20:23:07.772759197Z" level=info msg="RemovePodSandbox \"45a19fc16cf13d7e5c6b2ced71d847b5d9d935fc975aae735b9c26b075b1f2c9\" returns successfully"
Feb 13 20:23:07.773238 containerd[1523]: time="2025-02-13T20:23:07.773154291Z" level=info msg="StopPodSandbox for \"d29ac56b21941840b70ed278b8bc31d6ae73a7106ac02fb2924e6970fba607c6\""
Feb 13 20:23:07.773757 containerd[1523]: time="2025-02-13T20:23:07.773407123Z" level=info msg="TearDown network for sandbox \"d29ac56b21941840b70ed278b8bc31d6ae73a7106ac02fb2924e6970fba607c6\" successfully"
Feb 13 20:23:07.773757 containerd[1523]: time="2025-02-13T20:23:07.773430228Z" level=info msg="StopPodSandbox for \"d29ac56b21941840b70ed278b8bc31d6ae73a7106ac02fb2924e6970fba607c6\" returns successfully"
Feb 13 20:23:07.773869 containerd[1523]: time="2025-02-13T20:23:07.773839018Z" level=info msg="RemovePodSandbox for \"d29ac56b21941840b70ed278b8bc31d6ae73a7106ac02fb2924e6970fba607c6\""
Feb 13 20:23:07.773920 containerd[1523]: time="2025-02-13T20:23:07.773868964Z" level=info msg="Forcibly stopping sandbox \"d29ac56b21941840b70ed278b8bc31d6ae73a7106ac02fb2924e6970fba607c6\""
Feb 13 20:23:07.774001 containerd[1523]: time="2025-02-13T20:23:07.773954653Z" level=info msg="TearDown network for sandbox \"d29ac56b21941840b70ed278b8bc31d6ae73a7106ac02fb2924e6970fba607c6\" successfully"
Feb 13 20:23:07.776642 containerd[1523]: time="2025-02-13T20:23:07.776591001Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d29ac56b21941840b70ed278b8bc31d6ae73a7106ac02fb2924e6970fba607c6\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Feb 13 20:23:07.776962 containerd[1523]: time="2025-02-13T20:23:07.776645106Z" level=info msg="RemovePodSandbox \"d29ac56b21941840b70ed278b8bc31d6ae73a7106ac02fb2924e6970fba607c6\" returns successfully"
Feb 13 20:23:07.777589 containerd[1523]: time="2025-02-13T20:23:07.777148581Z" level=info msg="StopPodSandbox for \"f44ab2bf4a1364ad62de858f8303064c58068f98cfae29f32725d97dfba85a84\""
Feb 13 20:23:07.777589 containerd[1523]: time="2025-02-13T20:23:07.777266160Z" level=info msg="TearDown network for sandbox \"f44ab2bf4a1364ad62de858f8303064c58068f98cfae29f32725d97dfba85a84\" successfully"
Feb 13 20:23:07.777589 containerd[1523]: time="2025-02-13T20:23:07.777285875Z" level=info msg="StopPodSandbox for \"f44ab2bf4a1364ad62de858f8303064c58068f98cfae29f32725d97dfba85a84\" returns successfully"
Feb 13 20:23:07.778550 containerd[1523]: time="2025-02-13T20:23:07.778005463Z" level=info msg="RemovePodSandbox for \"f44ab2bf4a1364ad62de858f8303064c58068f98cfae29f32725d97dfba85a84\""
Feb 13 20:23:07.778550 containerd[1523]: time="2025-02-13T20:23:07.778123423Z" level=info msg="Forcibly stopping sandbox \"f44ab2bf4a1364ad62de858f8303064c58068f98cfae29f32725d97dfba85a84\""
Feb 13 20:23:07.778550 containerd[1523]: time="2025-02-13T20:23:07.778221475Z" level=info msg="TearDown network for sandbox \"f44ab2bf4a1364ad62de858f8303064c58068f98cfae29f32725d97dfba85a84\" successfully"
Feb 13 20:23:07.781369 containerd[1523]: time="2025-02-13T20:23:07.781320117Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"f44ab2bf4a1364ad62de858f8303064c58068f98cfae29f32725d97dfba85a84\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Feb 13 20:23:07.781585 containerd[1523]: time="2025-02-13T20:23:07.781381937Z" level=info msg="RemovePodSandbox \"f44ab2bf4a1364ad62de858f8303064c58068f98cfae29f32725d97dfba85a84\" returns successfully"
Feb 13 20:23:07.782300 containerd[1523]: time="2025-02-13T20:23:07.782193309Z" level=info msg="StopPodSandbox for \"7a27b8002290a7e84e5fb99f2b924226eb2269ad954b5e3d86e6cc35ebb66637\""
Feb 13 20:23:07.782398 containerd[1523]: time="2025-02-13T20:23:07.782318854Z" level=info msg="TearDown network for sandbox \"7a27b8002290a7e84e5fb99f2b924226eb2269ad954b5e3d86e6cc35ebb66637\" successfully"
Feb 13 20:23:07.782398 containerd[1523]: time="2025-02-13T20:23:07.782339736Z" level=info msg="StopPodSandbox for \"7a27b8002290a7e84e5fb99f2b924226eb2269ad954b5e3d86e6cc35ebb66637\" returns successfully"
Feb 13 20:23:07.783513 containerd[1523]: time="2025-02-13T20:23:07.782694375Z" level=info msg="RemovePodSandbox for \"7a27b8002290a7e84e5fb99f2b924226eb2269ad954b5e3d86e6cc35ebb66637\""
Feb 13 20:23:07.783513 containerd[1523]: time="2025-02-13T20:23:07.782741199Z" level=info msg="Forcibly stopping sandbox \"7a27b8002290a7e84e5fb99f2b924226eb2269ad954b5e3d86e6cc35ebb66637\""
Feb 13 20:23:07.783513 containerd[1523]: time="2025-02-13T20:23:07.782840666Z" level=info msg="TearDown network for sandbox \"7a27b8002290a7e84e5fb99f2b924226eb2269ad954b5e3d86e6cc35ebb66637\" successfully"
Feb 13 20:23:07.786055 containerd[1523]: time="2025-02-13T20:23:07.785993507Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"7a27b8002290a7e84e5fb99f2b924226eb2269ad954b5e3d86e6cc35ebb66637\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Feb 13 20:23:07.786261 containerd[1523]: time="2025-02-13T20:23:07.786059742Z" level=info msg="RemovePodSandbox \"7a27b8002290a7e84e5fb99f2b924226eb2269ad954b5e3d86e6cc35ebb66637\" returns successfully" Feb 13 20:23:07.787236 containerd[1523]: time="2025-02-13T20:23:07.786784315Z" level=info msg="StopPodSandbox for \"f8110cd13dfca8eabefce5d7e0c386bde821567802eded0be7a5116666d1d02f\"" Feb 13 20:23:07.787236 containerd[1523]: time="2025-02-13T20:23:07.786908249Z" level=info msg="TearDown network for sandbox \"f8110cd13dfca8eabefce5d7e0c386bde821567802eded0be7a5116666d1d02f\" successfully" Feb 13 20:23:07.787236 containerd[1523]: time="2025-02-13T20:23:07.786929024Z" level=info msg="StopPodSandbox for \"f8110cd13dfca8eabefce5d7e0c386bde821567802eded0be7a5116666d1d02f\" returns successfully" Feb 13 20:23:07.788040 containerd[1523]: time="2025-02-13T20:23:07.787816795Z" level=info msg="RemovePodSandbox for \"f8110cd13dfca8eabefce5d7e0c386bde821567802eded0be7a5116666d1d02f\"" Feb 13 20:23:07.788040 containerd[1523]: time="2025-02-13T20:23:07.787852923Z" level=info msg="Forcibly stopping sandbox \"f8110cd13dfca8eabefce5d7e0c386bde821567802eded0be7a5116666d1d02f\"" Feb 13 20:23:07.788040 containerd[1523]: time="2025-02-13T20:23:07.787963134Z" level=info msg="TearDown network for sandbox \"f8110cd13dfca8eabefce5d7e0c386bde821567802eded0be7a5116666d1d02f\" successfully" Feb 13 20:23:07.791590 containerd[1523]: time="2025-02-13T20:23:07.791477232Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"f8110cd13dfca8eabefce5d7e0c386bde821567802eded0be7a5116666d1d02f\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 20:23:07.791590 containerd[1523]: time="2025-02-13T20:23:07.791557597Z" level=info msg="RemovePodSandbox \"f8110cd13dfca8eabefce5d7e0c386bde821567802eded0be7a5116666d1d02f\" returns successfully" Feb 13 20:23:07.792131 containerd[1523]: time="2025-02-13T20:23:07.792068429Z" level=info msg="StopPodSandbox for \"558a5734dfc1f14372e07fc935e491a1b134a46790ece74765b109996159ad06\"" Feb 13 20:23:07.792370 containerd[1523]: time="2025-02-13T20:23:07.792327679Z" level=info msg="TearDown network for sandbox \"558a5734dfc1f14372e07fc935e491a1b134a46790ece74765b109996159ad06\" successfully" Feb 13 20:23:07.792370 containerd[1523]: time="2025-02-13T20:23:07.792359591Z" level=info msg="StopPodSandbox for \"558a5734dfc1f14372e07fc935e491a1b134a46790ece74765b109996159ad06\" returns successfully" Feb 13 20:23:07.792882 containerd[1523]: time="2025-02-13T20:23:07.792830577Z" level=info msg="RemovePodSandbox for \"558a5734dfc1f14372e07fc935e491a1b134a46790ece74765b109996159ad06\"" Feb 13 20:23:07.793051 containerd[1523]: time="2025-02-13T20:23:07.792869613Z" level=info msg="Forcibly stopping sandbox \"558a5734dfc1f14372e07fc935e491a1b134a46790ece74765b109996159ad06\"" Feb 13 20:23:07.793051 containerd[1523]: time="2025-02-13T20:23:07.793013087Z" level=info msg="TearDown network for sandbox \"558a5734dfc1f14372e07fc935e491a1b134a46790ece74765b109996159ad06\" successfully" Feb 13 20:23:07.795723 containerd[1523]: time="2025-02-13T20:23:07.795668771Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"558a5734dfc1f14372e07fc935e491a1b134a46790ece74765b109996159ad06\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 20:23:07.795817 containerd[1523]: time="2025-02-13T20:23:07.795732165Z" level=info msg="RemovePodSandbox \"558a5734dfc1f14372e07fc935e491a1b134a46790ece74765b109996159ad06\" returns successfully" Feb 13 20:23:07.796429 containerd[1523]: time="2025-02-13T20:23:07.796358877Z" level=info msg="StopPodSandbox for \"e4bdb695e659bb0cb3f3c66355dc4bd2eb2badc40db9ab257bceae2706bf2cb4\"" Feb 13 20:23:07.796681 containerd[1523]: time="2025-02-13T20:23:07.796647964Z" level=info msg="TearDown network for sandbox \"e4bdb695e659bb0cb3f3c66355dc4bd2eb2badc40db9ab257bceae2706bf2cb4\" successfully" Feb 13 20:23:07.796681 containerd[1523]: time="2025-02-13T20:23:07.796677672Z" level=info msg="StopPodSandbox for \"e4bdb695e659bb0cb3f3c66355dc4bd2eb2badc40db9ab257bceae2706bf2cb4\" returns successfully" Feb 13 20:23:07.797844 containerd[1523]: time="2025-02-13T20:23:07.797148712Z" level=info msg="RemovePodSandbox for \"e4bdb695e659bb0cb3f3c66355dc4bd2eb2badc40db9ab257bceae2706bf2cb4\"" Feb 13 20:23:07.797844 containerd[1523]: time="2025-02-13T20:23:07.797206704Z" level=info msg="Forcibly stopping sandbox \"e4bdb695e659bb0cb3f3c66355dc4bd2eb2badc40db9ab257bceae2706bf2cb4\"" Feb 13 20:23:07.797844 containerd[1523]: time="2025-02-13T20:23:07.797322694Z" level=info msg="TearDown network for sandbox \"e4bdb695e659bb0cb3f3c66355dc4bd2eb2badc40db9ab257bceae2706bf2cb4\" successfully" Feb 13 20:23:07.800411 containerd[1523]: time="2025-02-13T20:23:07.800353923Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"e4bdb695e659bb0cb3f3c66355dc4bd2eb2badc40db9ab257bceae2706bf2cb4\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 20:23:07.800634 containerd[1523]: time="2025-02-13T20:23:07.800425119Z" level=info msg="RemovePodSandbox \"e4bdb695e659bb0cb3f3c66355dc4bd2eb2badc40db9ab257bceae2706bf2cb4\" returns successfully" Feb 13 20:23:07.801688 containerd[1523]: time="2025-02-13T20:23:07.801386449Z" level=info msg="StopPodSandbox for \"3be3d8f70bb8a4944db627b34b0e3907b1317dd2e182b513a51eff96e17e8fe4\"" Feb 13 20:23:07.801688 containerd[1523]: time="2025-02-13T20:23:07.801530658Z" level=info msg="TearDown network for sandbox \"3be3d8f70bb8a4944db627b34b0e3907b1317dd2e182b513a51eff96e17e8fe4\" successfully" Feb 13 20:23:07.801688 containerd[1523]: time="2025-02-13T20:23:07.801553587Z" level=info msg="StopPodSandbox for \"3be3d8f70bb8a4944db627b34b0e3907b1317dd2e182b513a51eff96e17e8fe4\" returns successfully" Feb 13 20:23:07.803300 containerd[1523]: time="2025-02-13T20:23:07.802094580Z" level=info msg="RemovePodSandbox for \"3be3d8f70bb8a4944db627b34b0e3907b1317dd2e182b513a51eff96e17e8fe4\"" Feb 13 20:23:07.803300 containerd[1523]: time="2025-02-13T20:23:07.802138412Z" level=info msg="Forcibly stopping sandbox \"3be3d8f70bb8a4944db627b34b0e3907b1317dd2e182b513a51eff96e17e8fe4\"" Feb 13 20:23:07.803300 containerd[1523]: time="2025-02-13T20:23:07.802237315Z" level=info msg="TearDown network for sandbox \"3be3d8f70bb8a4944db627b34b0e3907b1317dd2e182b513a51eff96e17e8fe4\" successfully" Feb 13 20:23:07.805758 containerd[1523]: time="2025-02-13T20:23:07.805049422Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"3be3d8f70bb8a4944db627b34b0e3907b1317dd2e182b513a51eff96e17e8fe4\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 20:23:07.805758 containerd[1523]: time="2025-02-13T20:23:07.805116114Z" level=info msg="RemovePodSandbox \"3be3d8f70bb8a4944db627b34b0e3907b1317dd2e182b513a51eff96e17e8fe4\" returns successfully" Feb 13 20:23:07.806256 containerd[1523]: time="2025-02-13T20:23:07.806021557Z" level=info msg="StopPodSandbox for \"51e9615480ba3997a5813ddf5d7ed0ce5949007afdbcf35b99a53fb5c03f46a7\"" Feb 13 20:23:07.806256 containerd[1523]: time="2025-02-13T20:23:07.806154566Z" level=info msg="TearDown network for sandbox \"51e9615480ba3997a5813ddf5d7ed0ce5949007afdbcf35b99a53fb5c03f46a7\" successfully" Feb 13 20:23:07.806256 containerd[1523]: time="2025-02-13T20:23:07.806176308Z" level=info msg="StopPodSandbox for \"51e9615480ba3997a5813ddf5d7ed0ce5949007afdbcf35b99a53fb5c03f46a7\" returns successfully" Feb 13 20:23:07.809365 containerd[1523]: time="2025-02-13T20:23:07.809302509Z" level=info msg="RemovePodSandbox for \"51e9615480ba3997a5813ddf5d7ed0ce5949007afdbcf35b99a53fb5c03f46a7\"" Feb 13 20:23:07.809365 containerd[1523]: time="2025-02-13T20:23:07.809344740Z" level=info msg="Forcibly stopping sandbox \"51e9615480ba3997a5813ddf5d7ed0ce5949007afdbcf35b99a53fb5c03f46a7\"" Feb 13 20:23:07.809545 containerd[1523]: time="2025-02-13T20:23:07.809442537Z" level=info msg="TearDown network for sandbox \"51e9615480ba3997a5813ddf5d7ed0ce5949007afdbcf35b99a53fb5c03f46a7\" successfully" Feb 13 20:23:07.815060 containerd[1523]: time="2025-02-13T20:23:07.814880044Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"51e9615480ba3997a5813ddf5d7ed0ce5949007afdbcf35b99a53fb5c03f46a7\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 20:23:07.815060 containerd[1523]: time="2025-02-13T20:23:07.814934905Z" level=info msg="RemovePodSandbox \"51e9615480ba3997a5813ddf5d7ed0ce5949007afdbcf35b99a53fb5c03f46a7\" returns successfully" Feb 13 20:23:07.815685 containerd[1523]: time="2025-02-13T20:23:07.815651145Z" level=info msg="StopPodSandbox for \"3e29726058133f8aa6ffe75c90401b5adf759d960911b847817980277205afb2\"" Feb 13 20:23:07.817081 containerd[1523]: time="2025-02-13T20:23:07.816695504Z" level=info msg="TearDown network for sandbox \"3e29726058133f8aa6ffe75c90401b5adf759d960911b847817980277205afb2\" successfully" Feb 13 20:23:07.817081 containerd[1523]: time="2025-02-13T20:23:07.816737769Z" level=info msg="StopPodSandbox for \"3e29726058133f8aa6ffe75c90401b5adf759d960911b847817980277205afb2\" returns successfully" Feb 13 20:23:07.819525 containerd[1523]: time="2025-02-13T20:23:07.819398350Z" level=info msg="RemovePodSandbox for \"3e29726058133f8aa6ffe75c90401b5adf759d960911b847817980277205afb2\"" Feb 13 20:23:07.819525 containerd[1523]: time="2025-02-13T20:23:07.819435839Z" level=info msg="Forcibly stopping sandbox \"3e29726058133f8aa6ffe75c90401b5adf759d960911b847817980277205afb2\"" Feb 13 20:23:07.819670 containerd[1523]: time="2025-02-13T20:23:07.819608038Z" level=info msg="TearDown network for sandbox \"3e29726058133f8aa6ffe75c90401b5adf759d960911b847817980277205afb2\" successfully" Feb 13 20:23:07.824935 containerd[1523]: time="2025-02-13T20:23:07.824879700Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"3e29726058133f8aa6ffe75c90401b5adf759d960911b847817980277205afb2\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 20:23:07.825251 containerd[1523]: time="2025-02-13T20:23:07.824944131Z" level=info msg="RemovePodSandbox \"3e29726058133f8aa6ffe75c90401b5adf759d960911b847817980277205afb2\" returns successfully" Feb 13 20:23:07.825331 containerd[1523]: time="2025-02-13T20:23:07.825294251Z" level=info msg="StopPodSandbox for \"a4da0121fee5a142aa83efddc5c5b071d610647f390c8f6ca41a280e45b26224\"" Feb 13 20:23:07.825462 containerd[1523]: time="2025-02-13T20:23:07.825400766Z" level=info msg="TearDown network for sandbox \"a4da0121fee5a142aa83efddc5c5b071d610647f390c8f6ca41a280e45b26224\" successfully" Feb 13 20:23:07.825462 containerd[1523]: time="2025-02-13T20:23:07.825429109Z" level=info msg="StopPodSandbox for \"a4da0121fee5a142aa83efddc5c5b071d610647f390c8f6ca41a280e45b26224\" returns successfully" Feb 13 20:23:07.827478 containerd[1523]: time="2025-02-13T20:23:07.826044822Z" level=info msg="RemovePodSandbox for \"a4da0121fee5a142aa83efddc5c5b071d610647f390c8f6ca41a280e45b26224\"" Feb 13 20:23:07.827478 containerd[1523]: time="2025-02-13T20:23:07.826090160Z" level=info msg="Forcibly stopping sandbox \"a4da0121fee5a142aa83efddc5c5b071d610647f390c8f6ca41a280e45b26224\"" Feb 13 20:23:07.827478 containerd[1523]: time="2025-02-13T20:23:07.826201048Z" level=info msg="TearDown network for sandbox \"a4da0121fee5a142aa83efddc5c5b071d610647f390c8f6ca41a280e45b26224\" successfully" Feb 13 20:23:07.829439 containerd[1523]: time="2025-02-13T20:23:07.829404208Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a4da0121fee5a142aa83efddc5c5b071d610647f390c8f6ca41a280e45b26224\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 20:23:07.829653 containerd[1523]: time="2025-02-13T20:23:07.829622282Z" level=info msg="RemovePodSandbox \"a4da0121fee5a142aa83efddc5c5b071d610647f390c8f6ca41a280e45b26224\" returns successfully" Feb 13 20:23:07.830178 containerd[1523]: time="2025-02-13T20:23:07.830134359Z" level=info msg="StopPodSandbox for \"ff23240127f5d7b31d327d03ea94725ef042acc5773aea3c5e6c9ad086d0ecc4\"" Feb 13 20:23:07.830315 containerd[1523]: time="2025-02-13T20:23:07.830289335Z" level=info msg="TearDown network for sandbox \"ff23240127f5d7b31d327d03ea94725ef042acc5773aea3c5e6c9ad086d0ecc4\" successfully" Feb 13 20:23:07.830380 containerd[1523]: time="2025-02-13T20:23:07.830316427Z" level=info msg="StopPodSandbox for \"ff23240127f5d7b31d327d03ea94725ef042acc5773aea3c5e6c9ad086d0ecc4\" returns successfully" Feb 13 20:23:07.832052 containerd[1523]: time="2025-02-13T20:23:07.830867767Z" level=info msg="RemovePodSandbox for \"ff23240127f5d7b31d327d03ea94725ef042acc5773aea3c5e6c9ad086d0ecc4\"" Feb 13 20:23:07.832052 containerd[1523]: time="2025-02-13T20:23:07.830907924Z" level=info msg="Forcibly stopping sandbox \"ff23240127f5d7b31d327d03ea94725ef042acc5773aea3c5e6c9ad086d0ecc4\"" Feb 13 20:23:07.832052 containerd[1523]: time="2025-02-13T20:23:07.830999520Z" level=info msg="TearDown network for sandbox \"ff23240127f5d7b31d327d03ea94725ef042acc5773aea3c5e6c9ad086d0ecc4\" successfully" Feb 13 20:23:07.833662 containerd[1523]: time="2025-02-13T20:23:07.833626977Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"ff23240127f5d7b31d327d03ea94725ef042acc5773aea3c5e6c9ad086d0ecc4\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 20:23:07.833834 containerd[1523]: time="2025-02-13T20:23:07.833805666Z" level=info msg="RemovePodSandbox \"ff23240127f5d7b31d327d03ea94725ef042acc5773aea3c5e6c9ad086d0ecc4\" returns successfully" Feb 13 20:23:07.834462 containerd[1523]: time="2025-02-13T20:23:07.834407752Z" level=info msg="StopPodSandbox for \"eae7b820293b2c19210e91b0208764938ce9fef81a97f0e6dbb8829a9e453bef\"" Feb 13 20:23:07.834606 containerd[1523]: time="2025-02-13T20:23:07.834570563Z" level=info msg="TearDown network for sandbox \"eae7b820293b2c19210e91b0208764938ce9fef81a97f0e6dbb8829a9e453bef\" successfully" Feb 13 20:23:07.834678 containerd[1523]: time="2025-02-13T20:23:07.834606650Z" level=info msg="StopPodSandbox for \"eae7b820293b2c19210e91b0208764938ce9fef81a97f0e6dbb8829a9e453bef\" returns successfully" Feb 13 20:23:07.835079 containerd[1523]: time="2025-02-13T20:23:07.835046975Z" level=info msg="RemovePodSandbox for \"eae7b820293b2c19210e91b0208764938ce9fef81a97f0e6dbb8829a9e453bef\"" Feb 13 20:23:07.835192 containerd[1523]: time="2025-02-13T20:23:07.835083810Z" level=info msg="Forcibly stopping sandbox \"eae7b820293b2c19210e91b0208764938ce9fef81a97f0e6dbb8829a9e453bef\"" Feb 13 20:23:07.835249 containerd[1523]: time="2025-02-13T20:23:07.835168601Z" level=info msg="TearDown network for sandbox \"eae7b820293b2c19210e91b0208764938ce9fef81a97f0e6dbb8829a9e453bef\" successfully" Feb 13 20:23:07.837817 containerd[1523]: time="2025-02-13T20:23:07.837755570Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"eae7b820293b2c19210e91b0208764938ce9fef81a97f0e6dbb8829a9e453bef\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 20:23:07.837817 containerd[1523]: time="2025-02-13T20:23:07.837811432Z" level=info msg="RemovePodSandbox \"eae7b820293b2c19210e91b0208764938ce9fef81a97f0e6dbb8829a9e453bef\" returns successfully" Feb 13 20:23:07.838645 containerd[1523]: time="2025-02-13T20:23:07.838612737Z" level=info msg="StopPodSandbox for \"c6ea882897146da0c5a4577ee5c9bbf663f1a975bcaf72df08789b03f8b8cbc4\"" Feb 13 20:23:07.839183 containerd[1523]: time="2025-02-13T20:23:07.838892023Z" level=info msg="TearDown network for sandbox \"c6ea882897146da0c5a4577ee5c9bbf663f1a975bcaf72df08789b03f8b8cbc4\" successfully" Feb 13 20:23:07.839183 containerd[1523]: time="2025-02-13T20:23:07.838918578Z" level=info msg="StopPodSandbox for \"c6ea882897146da0c5a4577ee5c9bbf663f1a975bcaf72df08789b03f8b8cbc4\" returns successfully" Feb 13 20:23:07.839302 containerd[1523]: time="2025-02-13T20:23:07.839266138Z" level=info msg="RemovePodSandbox for \"c6ea882897146da0c5a4577ee5c9bbf663f1a975bcaf72df08789b03f8b8cbc4\"" Feb 13 20:23:07.839351 containerd[1523]: time="2025-02-13T20:23:07.839306965Z" level=info msg="Forcibly stopping sandbox \"c6ea882897146da0c5a4577ee5c9bbf663f1a975bcaf72df08789b03f8b8cbc4\"" Feb 13 20:23:07.839466 containerd[1523]: time="2025-02-13T20:23:07.839416737Z" level=info msg="TearDown network for sandbox \"c6ea882897146da0c5a4577ee5c9bbf663f1a975bcaf72df08789b03f8b8cbc4\" successfully" Feb 13 20:23:07.842319 containerd[1523]: time="2025-02-13T20:23:07.842276534Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c6ea882897146da0c5a4577ee5c9bbf663f1a975bcaf72df08789b03f8b8cbc4\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 20:23:07.842545 containerd[1523]: time="2025-02-13T20:23:07.842330898Z" level=info msg="RemovePodSandbox \"c6ea882897146da0c5a4577ee5c9bbf663f1a975bcaf72df08789b03f8b8cbc4\" returns successfully" Feb 13 20:23:07.843298 containerd[1523]: time="2025-02-13T20:23:07.843209151Z" level=info msg="StopPodSandbox for \"10890ed60743e8281e24b5c1930e340d0597df2f267536362a721ba1ab1f7cc4\"" Feb 13 20:23:07.843682 containerd[1523]: time="2025-02-13T20:23:07.843550606Z" level=info msg="TearDown network for sandbox \"10890ed60743e8281e24b5c1930e340d0597df2f267536362a721ba1ab1f7cc4\" successfully" Feb 13 20:23:07.843682 containerd[1523]: time="2025-02-13T20:23:07.843576272Z" level=info msg="StopPodSandbox for \"10890ed60743e8281e24b5c1930e340d0597df2f267536362a721ba1ab1f7cc4\" returns successfully" Feb 13 20:23:07.844822 containerd[1523]: time="2025-02-13T20:23:07.843912686Z" level=info msg="RemovePodSandbox for \"10890ed60743e8281e24b5c1930e340d0597df2f267536362a721ba1ab1f7cc4\"" Feb 13 20:23:07.844822 containerd[1523]: time="2025-02-13T20:23:07.843942310Z" level=info msg="Forcibly stopping sandbox \"10890ed60743e8281e24b5c1930e340d0597df2f267536362a721ba1ab1f7cc4\"" Feb 13 20:23:07.844822 containerd[1523]: time="2025-02-13T20:23:07.844024356Z" level=info msg="TearDown network for sandbox \"10890ed60743e8281e24b5c1930e340d0597df2f267536362a721ba1ab1f7cc4\" successfully" Feb 13 20:23:07.847031 containerd[1523]: time="2025-02-13T20:23:07.846979912Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"10890ed60743e8281e24b5c1930e340d0597df2f267536362a721ba1ab1f7cc4\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 20:23:07.847097 containerd[1523]: time="2025-02-13T20:23:07.847037642Z" level=info msg="RemovePodSandbox \"10890ed60743e8281e24b5c1930e340d0597df2f267536362a721ba1ab1f7cc4\" returns successfully" Feb 13 20:23:08.746082 kubelet[1947]: E0213 20:23:08.745980 1947 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:23:09.747040 kubelet[1947]: E0213 20:23:09.746913 1947 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:23:10.748390 kubelet[1947]: E0213 20:23:10.748228 1947 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:23:11.570100 systemd[1]: Created slice kubepods-besteffort-podde3bcc96_5221_4c49_9577_787e1dcd1948.slice - libcontainer container kubepods-besteffort-podde3bcc96_5221_4c49_9577_787e1dcd1948.slice. Feb 13 20:23:11.625246 kubelet[1947]: I0213 20:23:11.624872 1947 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-be19d203-43c0-4945-8fb0-55091ada4a46\" (UniqueName: \"kubernetes.io/nfs/de3bcc96-5221-4c49-9577-787e1dcd1948-pvc-be19d203-43c0-4945-8fb0-55091ada4a46\") pod \"test-pod-1\" (UID: \"de3bcc96-5221-4c49-9577-787e1dcd1948\") " pod="default/test-pod-1" Feb 13 20:23:11.625246 kubelet[1947]: I0213 20:23:11.625249 1947 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jcqvk\" (UniqueName: \"kubernetes.io/projected/de3bcc96-5221-4c49-9577-787e1dcd1948-kube-api-access-jcqvk\") pod \"test-pod-1\" (UID: \"de3bcc96-5221-4c49-9577-787e1dcd1948\") " pod="default/test-pod-1" Feb 13 20:23:11.749451 kubelet[1947]: E0213 20:23:11.749385 1947 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:23:11.778674 kernel: FS-Cache: Loaded Feb 13 20:23:11.857060 
kernel: RPC: Registered named UNIX socket transport module. Feb 13 20:23:11.857206 kernel: RPC: Registered udp transport module. Feb 13 20:23:11.858527 kernel: RPC: Registered tcp transport module. Feb 13 20:23:11.858643 kernel: RPC: Registered tcp-with-tls transport module. Feb 13 20:23:11.859718 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module. Feb 13 20:23:12.139163 kernel: NFS: Registering the id_resolver key type Feb 13 20:23:12.139466 kernel: Key type id_resolver registered Feb 13 20:23:12.139608 kernel: Key type id_legacy registered Feb 13 20:23:12.192512 nfsidmap[3961]: nss_getpwnam: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'gb1.brightbox.com' Feb 13 20:23:12.200692 nfsidmap[3964]: nss_name_to_gid: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'gb1.brightbox.com' Feb 13 20:23:12.475760 containerd[1523]: time="2025-02-13T20:23:12.475560329Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:de3bcc96-5221-4c49-9577-787e1dcd1948,Namespace:default,Attempt:0,}" Feb 13 20:23:12.655700 systemd-networkd[1449]: cali5ec59c6bf6e: Link UP Feb 13 20:23:12.656035 systemd-networkd[1449]: cali5ec59c6bf6e: Gained carrier Feb 13 20:23:12.672146 containerd[1523]: 2025-02-13 20:23:12.543 [INFO][3967] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.230.12.214-k8s-test--pod--1-eth0 default de3bcc96-5221-4c49-9577-787e1dcd1948 1644 0 2025-02-13 20:22:56 +0000 UTC map[projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s 10.230.12.214 test-pod-1 eth0 default [] [] [kns.default ksa.default.default] cali5ec59c6bf6e [] []}} ContainerID="221cd96dcc0187977a1ed0b871ea6d244a161830ddb823c70c6415874ce92113" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.230.12.214-k8s-test--pod--1-" Feb 13 20:23:12.672146 
containerd[1523]: 2025-02-13 20:23:12.543 [INFO][3967] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="221cd96dcc0187977a1ed0b871ea6d244a161830ddb823c70c6415874ce92113" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.230.12.214-k8s-test--pod--1-eth0" Feb 13 20:23:12.672146 containerd[1523]: 2025-02-13 20:23:12.588 [INFO][3980] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="221cd96dcc0187977a1ed0b871ea6d244a161830ddb823c70c6415874ce92113" HandleID="k8s-pod-network.221cd96dcc0187977a1ed0b871ea6d244a161830ddb823c70c6415874ce92113" Workload="10.230.12.214-k8s-test--pod--1-eth0" Feb 13 20:23:12.672146 containerd[1523]: 2025-02-13 20:23:12.603 [INFO][3980] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="221cd96dcc0187977a1ed0b871ea6d244a161830ddb823c70c6415874ce92113" HandleID="k8s-pod-network.221cd96dcc0187977a1ed0b871ea6d244a161830ddb823c70c6415874ce92113" Workload="10.230.12.214-k8s-test--pod--1-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000318b50), Attrs:map[string]string{"namespace":"default", "node":"10.230.12.214", "pod":"test-pod-1", "timestamp":"2025-02-13 20:23:12.588833459 +0000 UTC"}, Hostname:"10.230.12.214", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 13 20:23:12.672146 containerd[1523]: 2025-02-13 20:23:12.603 [INFO][3980] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 20:23:12.672146 containerd[1523]: 2025-02-13 20:23:12.603 [INFO][3980] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Feb 13 20:23:12.672146 containerd[1523]: 2025-02-13 20:23:12.603 [INFO][3980] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.230.12.214'
Feb 13 20:23:12.672146 containerd[1523]: 2025-02-13 20:23:12.608 [INFO][3980] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.221cd96dcc0187977a1ed0b871ea6d244a161830ddb823c70c6415874ce92113" host="10.230.12.214"
Feb 13 20:23:12.672146 containerd[1523]: 2025-02-13 20:23:12.613 [INFO][3980] ipam/ipam.go 372: Looking up existing affinities for host host="10.230.12.214"
Feb 13 20:23:12.672146 containerd[1523]: 2025-02-13 20:23:12.620 [INFO][3980] ipam/ipam.go 489: Trying affinity for 192.168.113.128/26 host="10.230.12.214"
Feb 13 20:23:12.672146 containerd[1523]: 2025-02-13 20:23:12.623 [INFO][3980] ipam/ipam.go 155: Attempting to load block cidr=192.168.113.128/26 host="10.230.12.214"
Feb 13 20:23:12.672146 containerd[1523]: 2025-02-13 20:23:12.626 [INFO][3980] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.113.128/26 host="10.230.12.214"
Feb 13 20:23:12.672146 containerd[1523]: 2025-02-13 20:23:12.626 [INFO][3980] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.113.128/26 handle="k8s-pod-network.221cd96dcc0187977a1ed0b871ea6d244a161830ddb823c70c6415874ce92113" host="10.230.12.214"
Feb 13 20:23:12.672146 containerd[1523]: 2025-02-13 20:23:12.628 [INFO][3980] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.221cd96dcc0187977a1ed0b871ea6d244a161830ddb823c70c6415874ce92113
Feb 13 20:23:12.672146 containerd[1523]: 2025-02-13 20:23:12.634 [INFO][3980] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.113.128/26 handle="k8s-pod-network.221cd96dcc0187977a1ed0b871ea6d244a161830ddb823c70c6415874ce92113" host="10.230.12.214"
Feb 13 20:23:12.672146 containerd[1523]: 2025-02-13 20:23:12.648 [INFO][3980] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.113.132/26] block=192.168.113.128/26 handle="k8s-pod-network.221cd96dcc0187977a1ed0b871ea6d244a161830ddb823c70c6415874ce92113" host="10.230.12.214"
Feb 13 20:23:12.672146 containerd[1523]: 2025-02-13 20:23:12.648 [INFO][3980] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.113.132/26] handle="k8s-pod-network.221cd96dcc0187977a1ed0b871ea6d244a161830ddb823c70c6415874ce92113" host="10.230.12.214"
Feb 13 20:23:12.672146 containerd[1523]: 2025-02-13 20:23:12.648 [INFO][3980] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Feb 13 20:23:12.672146 containerd[1523]: 2025-02-13 20:23:12.648 [INFO][3980] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.113.132/26] IPv6=[] ContainerID="221cd96dcc0187977a1ed0b871ea6d244a161830ddb823c70c6415874ce92113" HandleID="k8s-pod-network.221cd96dcc0187977a1ed0b871ea6d244a161830ddb823c70c6415874ce92113" Workload="10.230.12.214-k8s-test--pod--1-eth0"
Feb 13 20:23:12.672146 containerd[1523]: 2025-02-13 20:23:12.650 [INFO][3967] cni-plugin/k8s.go 386: Populated endpoint ContainerID="221cd96dcc0187977a1ed0b871ea6d244a161830ddb823c70c6415874ce92113" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.230.12.214-k8s-test--pod--1-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.230.12.214-k8s-test--pod--1-eth0", GenerateName:"", Namespace:"default", SelfLink:"", UID:"de3bcc96-5221-4c49-9577-787e1dcd1948", ResourceVersion:"1644", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 22, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.230.12.214", ContainerID:"", Pod:"test-pod-1", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.113.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali5ec59c6bf6e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Feb 13 20:23:12.675442 containerd[1523]: 2025-02-13 20:23:12.650 [INFO][3967] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.113.132/32] ContainerID="221cd96dcc0187977a1ed0b871ea6d244a161830ddb823c70c6415874ce92113" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.230.12.214-k8s-test--pod--1-eth0"
Feb 13 20:23:12.675442 containerd[1523]: 2025-02-13 20:23:12.650 [INFO][3967] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5ec59c6bf6e ContainerID="221cd96dcc0187977a1ed0b871ea6d244a161830ddb823c70c6415874ce92113" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.230.12.214-k8s-test--pod--1-eth0"
Feb 13 20:23:12.675442 containerd[1523]: 2025-02-13 20:23:12.656 [INFO][3967] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="221cd96dcc0187977a1ed0b871ea6d244a161830ddb823c70c6415874ce92113" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.230.12.214-k8s-test--pod--1-eth0"
Feb 13 20:23:12.675442 containerd[1523]: 2025-02-13 20:23:12.658 [INFO][3967] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="221cd96dcc0187977a1ed0b871ea6d244a161830ddb823c70c6415874ce92113" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.230.12.214-k8s-test--pod--1-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.230.12.214-k8s-test--pod--1-eth0", GenerateName:"", Namespace:"default", SelfLink:"", UID:"de3bcc96-5221-4c49-9577-787e1dcd1948", ResourceVersion:"1644", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 22, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.230.12.214", ContainerID:"221cd96dcc0187977a1ed0b871ea6d244a161830ddb823c70c6415874ce92113", Pod:"test-pod-1", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.113.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali5ec59c6bf6e", MAC:"d2:b7:86:b2:6c:18", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Feb 13 20:23:12.675442 containerd[1523]: 2025-02-13 20:23:12.668 [INFO][3967] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="221cd96dcc0187977a1ed0b871ea6d244a161830ddb823c70c6415874ce92113" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.230.12.214-k8s-test--pod--1-eth0"
Feb 13 20:23:12.705651 containerd[1523]: time="2025-02-13T20:23:12.704964937Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 13 20:23:12.705651 containerd[1523]: time="2025-02-13T20:23:12.705064920Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 13 20:23:12.705651 containerd[1523]: time="2025-02-13T20:23:12.705526887Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 20:23:12.706118 containerd[1523]: time="2025-02-13T20:23:12.705761048Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 20:23:12.731753 systemd[1]: Started cri-containerd-221cd96dcc0187977a1ed0b871ea6d244a161830ddb823c70c6415874ce92113.scope - libcontainer container 221cd96dcc0187977a1ed0b871ea6d244a161830ddb823c70c6415874ce92113.
Feb 13 20:23:12.750718 kubelet[1947]: E0213 20:23:12.749945 1947 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 20:23:12.798455 containerd[1523]: time="2025-02-13T20:23:12.798394398Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:de3bcc96-5221-4c49-9577-787e1dcd1948,Namespace:default,Attempt:0,} returns sandbox id \"221cd96dcc0187977a1ed0b871ea6d244a161830ddb823c70c6415874ce92113\""
Feb 13 20:23:12.800118 containerd[1523]: time="2025-02-13T20:23:12.800075276Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\""
Feb 13 20:23:13.205623 containerd[1523]: time="2025-02-13T20:23:13.205383852Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=61"
Feb 13 20:23:13.209847 containerd[1523]: time="2025-02-13T20:23:13.208738095Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:fe94eb5f0c9c8d0ca277aa8cd5940f1faf5970175bf373932babc578545deda8\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:d9bc3da999da9f147f1277c7b18292486847e8f39f95fcf81d914d0c22815faf\", size \"73054371\" in 408.620383ms"
Feb 13 20:23:13.209847 containerd[1523]: time="2025-02-13T20:23:13.208781671Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:fe94eb5f0c9c8d0ca277aa8cd5940f1faf5970175bf373932babc578545deda8\""
Feb 13 20:23:13.212776 containerd[1523]: time="2025-02-13T20:23:13.212741494Z" level=info msg="CreateContainer within sandbox \"221cd96dcc0187977a1ed0b871ea6d244a161830ddb823c70c6415874ce92113\" for container &ContainerMetadata{Name:test,Attempt:0,}"
Feb 13 20:23:13.213181 containerd[1523]: time="2025-02-13T20:23:13.213142093Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 20:23:13.230434 containerd[1523]: time="2025-02-13T20:23:13.230385559Z" level=info msg="CreateContainer within sandbox \"221cd96dcc0187977a1ed0b871ea6d244a161830ddb823c70c6415874ce92113\" for &ContainerMetadata{Name:test,Attempt:0,} returns container id \"b261fb17b827a9c6c15cabe098111aa39262baaece4ae8d92917ce67f8ab28a6\""
Feb 13 20:23:13.231071 containerd[1523]: time="2025-02-13T20:23:13.230958978Z" level=info msg="StartContainer for \"b261fb17b827a9c6c15cabe098111aa39262baaece4ae8d92917ce67f8ab28a6\""
Feb 13 20:23:13.274736 systemd[1]: Started cri-containerd-b261fb17b827a9c6c15cabe098111aa39262baaece4ae8d92917ce67f8ab28a6.scope - libcontainer container b261fb17b827a9c6c15cabe098111aa39262baaece4ae8d92917ce67f8ab28a6.
Feb 13 20:23:13.315336 containerd[1523]: time="2025-02-13T20:23:13.315274975Z" level=info msg="StartContainer for \"b261fb17b827a9c6c15cabe098111aa39262baaece4ae8d92917ce67f8ab28a6\" returns successfully"
Feb 13 20:23:13.750477 kubelet[1947]: E0213 20:23:13.750343 1947 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 20:23:14.326794 kubelet[1947]: I0213 20:23:14.326604 1947 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/test-pod-1" podStartSLOduration=17.916773831 podStartE2EDuration="18.326551133s" podCreationTimestamp="2025-02-13 20:22:56 +0000 UTC" firstStartedPulling="2025-02-13 20:23:12.799770632 +0000 UTC m=+65.980584264" lastFinishedPulling="2025-02-13 20:23:13.209547937 +0000 UTC m=+66.390361566" observedRunningTime="2025-02-13 20:23:14.325439273 +0000 UTC m=+67.506252939" watchObservedRunningTime="2025-02-13 20:23:14.326551133 +0000 UTC m=+67.507364773"
Feb 13 20:23:14.443279 systemd-networkd[1449]: cali5ec59c6bf6e: Gained IPv6LL
Feb 13 20:23:14.750755 kubelet[1947]: E0213 20:23:14.750687 1947 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 20:23:15.751277 kubelet[1947]: E0213 20:23:15.751114 1947 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 20:23:16.751965 kubelet[1947]: E0213 20:23:16.751852 1947 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 20:23:17.752798 kubelet[1947]: E0213 20:23:17.752632 1947 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 20:23:18.753932 kubelet[1947]: E0213 20:23:18.753808 1947 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 20:23:19.754837 kubelet[1947]: E0213 20:23:19.754761 1947 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 20:23:20.755568 kubelet[1947]: E0213 20:23:20.755432 1947 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 20:23:21.755875 kubelet[1947]: E0213 20:23:21.755750 1947 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"