Apr 30 13:50:14.051288 kernel: Linux version 6.6.88-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.43 p3) 2.43.1) #1 SMP PREEMPT_DYNAMIC Tue Apr 29 22:26:36 -00 2025 Apr 30 13:50:14.051339 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=95dd3de5eb34971546a976dc51c66bc73cf59b888896e27767c0cbf245cb98fe Apr 30 13:50:14.051355 kernel: BIOS-provided physical RAM map: Apr 30 13:50:14.051371 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable Apr 30 13:50:14.051382 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved Apr 30 13:50:14.051956 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved Apr 30 13:50:14.051972 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdbfff] usable Apr 30 13:50:14.051984 kernel: BIOS-e820: [mem 0x000000007ffdc000-0x000000007fffffff] reserved Apr 30 13:50:14.051995 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved Apr 30 13:50:14.052006 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved Apr 30 13:50:14.052017 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Apr 30 13:50:14.052028 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved Apr 30 13:50:14.052046 kernel: NX (Execute Disable) protection: active Apr 30 13:50:14.052057 kernel: APIC: Static calls initialized Apr 30 13:50:14.052071 kernel: SMBIOS 2.8 present. Apr 30 13:50:14.052083 kernel: DMI: Red Hat KVM/RHEL-AV, BIOS 1.13.0-2.module_el8.5.0+2608+72063365 04/01/2014 Apr 30 13:50:14.052096 kernel: Hypervisor detected: KVM Apr 30 13:50:14.052112 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Apr 30 13:50:14.052124 kernel: kvm-clock: using sched offset of 4782526139 cycles Apr 30 13:50:14.052137 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Apr 30 13:50:14.052149 kernel: tsc: Detected 2499.998 MHz processor Apr 30 13:50:14.052162 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Apr 30 13:50:14.052174 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Apr 30 13:50:14.052186 kernel: last_pfn = 0x7ffdc max_arch_pfn = 0x400000000 Apr 30 13:50:14.052199 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs Apr 30 13:50:14.052211 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Apr 30 13:50:14.052228 kernel: Using GB pages for direct mapping Apr 30 13:50:14.052240 kernel: ACPI: Early table checksum verification disabled Apr 30 13:50:14.052457 kernel: ACPI: RSDP 0x00000000000F5AA0 000014 (v00 BOCHS ) Apr 30 13:50:14.052472 kernel: ACPI: RSDT 0x000000007FFE47A5 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Apr 30 13:50:14.052484 kernel: ACPI: FACP 0x000000007FFE438D 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001) Apr 30 13:50:14.052497 kernel: ACPI: DSDT 0x000000007FFDFD80 00460D (v01 BOCHS BXPC 00000001 BXPC 00000001) Apr 30 13:50:14.052509 kernel: ACPI: FACS 0x000000007FFDFD40 000040 Apr 30 13:50:14.052521 kernel: ACPI: APIC 0x000000007FFE4481 0000F0 (v01 BOCHS BXPC 00000001 BXPC 00000001) Apr 30 13:50:14.052534 kernel: ACPI: SRAT 0x000000007FFE4571 0001D0 (v01 BOCHS BXPC 
00000001 BXPC 00000001) Apr 30 13:50:14.052553 kernel: ACPI: MCFG 0x000000007FFE4741 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Apr 30 13:50:14.052566 kernel: ACPI: WAET 0x000000007FFE477D 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Apr 30 13:50:14.052578 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe438d-0x7ffe4480] Apr 30 13:50:14.052590 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffdfd80-0x7ffe438c] Apr 30 13:50:14.052602 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffdfd40-0x7ffdfd7f] Apr 30 13:50:14.052621 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe4481-0x7ffe4570] Apr 30 13:50:14.052634 kernel: ACPI: Reserving SRAT table memory at [mem 0x7ffe4571-0x7ffe4740] Apr 30 13:50:14.052651 kernel: ACPI: Reserving MCFG table memory at [mem 0x7ffe4741-0x7ffe477c] Apr 30 13:50:14.052664 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe477d-0x7ffe47a4] Apr 30 13:50:14.052677 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0 Apr 30 13:50:14.052689 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0 Apr 30 13:50:14.052702 kernel: SRAT: PXM 0 -> APIC 0x02 -> Node 0 Apr 30 13:50:14.052714 kernel: SRAT: PXM 0 -> APIC 0x03 -> Node 0 Apr 30 13:50:14.052739 kernel: SRAT: PXM 0 -> APIC 0x04 -> Node 0 Apr 30 13:50:14.052753 kernel: SRAT: PXM 0 -> APIC 0x05 -> Node 0 Apr 30 13:50:14.052771 kernel: SRAT: PXM 0 -> APIC 0x06 -> Node 0 Apr 30 13:50:14.052784 kernel: SRAT: PXM 0 -> APIC 0x07 -> Node 0 Apr 30 13:50:14.052796 kernel: SRAT: PXM 0 -> APIC 0x08 -> Node 0 Apr 30 13:50:14.052809 kernel: SRAT: PXM 0 -> APIC 0x09 -> Node 0 Apr 30 13:50:14.052821 kernel: SRAT: PXM 0 -> APIC 0x0a -> Node 0 Apr 30 13:50:14.052834 kernel: SRAT: PXM 0 -> APIC 0x0b -> Node 0 Apr 30 13:50:14.052846 kernel: SRAT: PXM 0 -> APIC 0x0c -> Node 0 Apr 30 13:50:14.052858 kernel: SRAT: PXM 0 -> APIC 0x0d -> Node 0 Apr 30 13:50:14.052871 kernel: SRAT: PXM 0 -> APIC 0x0e -> Node 0 Apr 30 13:50:14.052888 kernel: SRAT: PXM 0 -> APIC 0x0f -> Node 0 Apr 30 13:50:14.052901 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff] Apr 30 13:50:14.052913 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff] Apr 30 13:50:14.052926 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x20800fffff] hotplug Apr 30 13:50:14.052939 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0x7ffdbfff] -> [mem 0x00000000-0x7ffdbfff] Apr 30 13:50:14.052952 kernel: NODE_DATA(0) allocated [mem 0x7ffd6000-0x7ffdbfff] Apr 30 13:50:14.052965 kernel: Zone ranges: Apr 30 13:50:14.052977 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Apr 30 13:50:14.052990 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffdbfff] Apr 30 13:50:14.053002 kernel: Normal empty Apr 30 13:50:14.053020 kernel: Movable zone start for each node Apr 30 13:50:14.053033 kernel: Early memory node ranges Apr 30 13:50:14.053045 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] Apr 30 13:50:14.053057 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdbfff] Apr 30 13:50:14.053070 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffdbfff] Apr 30 13:50:14.053083 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Apr 30 13:50:14.053095 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Apr 30 13:50:14.053108 kernel: On node 0, zone DMA32: 36 pages in unavailable ranges Apr 30 13:50:14.053121 kernel: ACPI: PM-Timer IO Port: 0x608 Apr 30 13:50:14.053138 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Apr 30 13:50:14.053151 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, 
GSI 0-23 Apr 30 13:50:14.053164 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Apr 30 13:50:14.053176 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Apr 30 13:50:14.053189 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Apr 30 13:50:14.053202 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Apr 30 13:50:14.053214 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Apr 30 13:50:14.053227 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Apr 30 13:50:14.053239 kernel: TSC deadline timer available Apr 30 13:50:14.053257 kernel: smpboot: Allowing 16 CPUs, 14 hotplug CPUs Apr 30 13:50:14.053269 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Apr 30 13:50:14.053282 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices Apr 30 13:50:14.053295 kernel: Booting paravirtualized kernel on KVM Apr 30 13:50:14.053307 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Apr 30 13:50:14.053320 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:16 nr_cpu_ids:16 nr_node_ids:1 Apr 30 13:50:14.053333 kernel: percpu: Embedded 58 pages/cpu s197096 r8192 d32280 u262144 Apr 30 13:50:14.053345 kernel: pcpu-alloc: s197096 r8192 d32280 u262144 alloc=1*2097152 Apr 30 13:50:14.053358 kernel: pcpu-alloc: [0] 00 01 02 03 04 05 06 07 [0] 08 09 10 11 12 13 14 15 Apr 30 13:50:14.053375 kernel: kvm-guest: PV spinlocks enabled Apr 30 13:50:14.053388 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Apr 30 13:50:14.055436 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=95dd3de5eb34971546a976dc51c66bc73cf59b888896e27767c0cbf245cb98fe Apr 30 13:50:14.055452 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Apr 30 13:50:14.055465 kernel: random: crng init done Apr 30 13:50:14.055478 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Apr 30 13:50:14.055491 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear) Apr 30 13:50:14.055504 kernel: Fallback order for Node 0: 0 Apr 30 13:50:14.055525 kernel: Built 1 zonelists, mobility grouping on. Total pages: 515804 Apr 30 13:50:14.055538 kernel: Policy zone: DMA32 Apr 30 13:50:14.055551 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Apr 30 13:50:14.055563 kernel: software IO TLB: area num 16. Apr 30 13:50:14.055577 kernel: Memory: 1899484K/2096616K available (14336K kernel code, 2295K rwdata, 22864K rodata, 43484K init, 1592K bss, 196872K reserved, 0K cma-reserved) Apr 30 13:50:14.055590 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=16, Nodes=1 Apr 30 13:50:14.055602 kernel: Kernel/User page tables isolation: enabled Apr 30 13:50:14.055615 kernel: ftrace: allocating 37918 entries in 149 pages Apr 30 13:50:14.055628 kernel: ftrace: allocated 149 pages with 4 groups Apr 30 13:50:14.055646 kernel: Dynamic Preempt: voluntary Apr 30 13:50:14.055659 kernel: rcu: Preemptible hierarchical RCU implementation. Apr 30 13:50:14.055672 kernel: rcu: RCU event tracing is enabled. 
Apr 30 13:50:14.055685 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=16. Apr 30 13:50:14.055699 kernel: Trampoline variant of Tasks RCU enabled. Apr 30 13:50:14.055736 kernel: Rude variant of Tasks RCU enabled. Apr 30 13:50:14.055757 kernel: Tracing variant of Tasks RCU enabled. Apr 30 13:50:14.055770 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Apr 30 13:50:14.055784 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=16 Apr 30 13:50:14.055797 kernel: NR_IRQS: 33024, nr_irqs: 552, preallocated irqs: 16 Apr 30 13:50:14.055810 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Apr 30 13:50:14.055823 kernel: Console: colour VGA+ 80x25 Apr 30 13:50:14.055842 kernel: printk: console [tty0] enabled Apr 30 13:50:14.055856 kernel: printk: console [ttyS0] enabled Apr 30 13:50:14.055869 kernel: ACPI: Core revision 20230628 Apr 30 13:50:14.055882 kernel: APIC: Switch to symmetric I/O mode setup Apr 30 13:50:14.055896 kernel: x2apic enabled Apr 30 13:50:14.055915 kernel: APIC: Switched APIC routing to: physical x2apic Apr 30 13:50:14.055928 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x240937b9988, max_idle_ns: 440795218083 ns Apr 30 13:50:14.055942 kernel: Calibrating delay loop (skipped) preset value.. 4999.99 BogoMIPS (lpj=2499998) Apr 30 13:50:14.055955 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Apr 30 13:50:14.055969 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0 Apr 30 13:50:14.055982 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0 Apr 30 13:50:14.055995 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Apr 30 13:50:14.056009 kernel: Spectre V2 : Mitigation: Retpolines Apr 30 13:50:14.056022 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Apr 30 13:50:14.056035 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT Apr 30 13:50:14.056054 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls Apr 30 13:50:14.056068 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Apr 30 13:50:14.056081 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Apr 30 13:50:14.056094 kernel: MDS: Mitigation: Clear CPU buffers Apr 30 13:50:14.056107 kernel: MMIO Stale Data: Unknown: No mitigations Apr 30 13:50:14.056120 kernel: SRBDS: Unknown: Dependent on hypervisor status Apr 30 13:50:14.056133 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Apr 30 13:50:14.056147 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Apr 30 13:50:14.056160 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Apr 30 13:50:14.056173 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Apr 30 13:50:14.056191 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format. Apr 30 13:50:14.056205 kernel: Freeing SMP alternatives memory: 32K Apr 30 13:50:14.056218 kernel: pid_max: default: 32768 minimum: 301 Apr 30 13:50:14.056231 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Apr 30 13:50:14.056244 kernel: landlock: Up and running. Apr 30 13:50:14.056257 kernel: SELinux: Initializing. 
Apr 30 13:50:14.056270 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Apr 30 13:50:14.056283 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Apr 30 13:50:14.056297 kernel: smpboot: CPU0: Intel Xeon E3-12xx v2 (Ivy Bridge, IBRS) (family: 0x6, model: 0x3a, stepping: 0x9) Apr 30 13:50:14.056310 kernel: RCU Tasks: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16. Apr 30 13:50:14.056323 kernel: RCU Tasks Rude: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16. Apr 30 13:50:14.056342 kernel: RCU Tasks Trace: Setting shift to 4 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=16. Apr 30 13:50:14.056356 kernel: Performance Events: unsupported p6 CPU model 58 no PMU driver, software events only. Apr 30 13:50:14.056369 kernel: signal: max sigframe size: 1776 Apr 30 13:50:14.056382 kernel: rcu: Hierarchical SRCU implementation. Apr 30 13:50:14.056416 kernel: rcu: Max phase no-delay instances is 400. Apr 30 13:50:14.056430 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Apr 30 13:50:14.056444 kernel: smp: Bringing up secondary CPUs ... Apr 30 13:50:14.056457 kernel: smpboot: x86: Booting SMP configuration: Apr 30 13:50:14.056470 kernel: .... node #0, CPUs: #1 Apr 30 13:50:14.056490 kernel: smpboot: CPU 1 Converting physical 0 to logical die 1 Apr 30 13:50:14.056504 kernel: smp: Brought up 1 node, 2 CPUs Apr 30 13:50:14.056517 kernel: smpboot: Max logical packages: 16 Apr 30 13:50:14.056531 kernel: smpboot: Total of 2 processors activated (9999.99 BogoMIPS) Apr 30 13:50:14.056544 kernel: devtmpfs: initialized Apr 30 13:50:14.056557 kernel: x86/mm: Memory block size: 128MB Apr 30 13:50:14.056570 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Apr 30 13:50:14.056584 kernel: futex hash table entries: 4096 (order: 6, 262144 bytes, linear) Apr 30 13:50:14.056597 kernel: pinctrl core: initialized pinctrl subsystem Apr 30 13:50:14.056616 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Apr 30 13:50:14.056629 kernel: audit: initializing netlink subsys (disabled) Apr 30 13:50:14.056643 kernel: audit: type=2000 audit(1746021013.227:1): state=initialized audit_enabled=0 res=1 Apr 30 13:50:14.056656 kernel: thermal_sys: Registered thermal governor 'step_wise' Apr 30 13:50:14.056669 kernel: thermal_sys: Registered thermal governor 'user_space' Apr 30 13:50:14.056682 kernel: cpuidle: using governor menu Apr 30 13:50:14.056696 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Apr 30 13:50:14.056709 kernel: dca service started, version 1.12.1 Apr 30 13:50:14.056732 kernel: PCI: MMCONFIG for domain 0000 [bus 00-ff] at [mem 0xb0000000-0xbfffffff] (base 0xb0000000) Apr 30 13:50:14.056753 kernel: PCI: MMCONFIG at [mem 0xb0000000-0xbfffffff] reserved as E820 entry Apr 30 13:50:14.056767 kernel: PCI: Using configuration type 1 for base access Apr 30 13:50:14.056780 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Apr 30 13:50:14.056794 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Apr 30 13:50:14.056807 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Apr 30 13:50:14.056820 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Apr 30 13:50:14.056834 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Apr 30 13:50:14.056847 kernel: ACPI: Added _OSI(Module Device) Apr 30 13:50:14.056860 kernel: ACPI: Added _OSI(Processor Device) Apr 30 13:50:14.056879 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Apr 30 13:50:14.056893 kernel: ACPI: Added _OSI(Processor Aggregator Device) Apr 30 13:50:14.056906 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Apr 30 13:50:14.056919 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Apr 30 13:50:14.056933 kernel: ACPI: Interpreter enabled Apr 30 13:50:14.056946 kernel: ACPI: PM: (supports S0 S5) Apr 30 13:50:14.056959 kernel: ACPI: Using IOAPIC for interrupt routing Apr 30 13:50:14.056973 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Apr 30 13:50:14.056986 kernel: PCI: Using E820 reservations for host bridge windows Apr 30 13:50:14.057005 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F Apr 30 13:50:14.057019 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Apr 30 13:50:14.057302 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Apr 30 13:50:14.057502 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR] Apr 30 13:50:14.057674 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability] Apr 30 13:50:14.057695 kernel: PCI host bridge to bus 0000:00 Apr 30 13:50:14.057902 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Apr 30 13:50:14.058072 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Apr 30 13:50:14.058231 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Apr 30 13:50:14.058387 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xafffffff window] Apr 30 13:50:14.060611 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] Apr 30 13:50:14.060794 kernel: pci_bus 0000:00: root bus resource [mem 0x20c0000000-0x28bfffffff window] Apr 30 13:50:14.060957 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Apr 30 13:50:14.061170 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 Apr 30 13:50:14.061371 kernel: pci 0000:00:01.0: [1013:00b8] type 00 class 0x030000 Apr 30 13:50:14.061592 kernel: pci 0000:00:01.0: reg 0x10: [mem 0xfa000000-0xfbffffff pref] Apr 30 13:50:14.061776 kernel: pci 0000:00:01.0: reg 0x14: [mem 0xfea50000-0xfea50fff] Apr 30 13:50:14.061946 kernel: pci 0000:00:01.0: reg 0x30: [mem 0xfea40000-0xfea4ffff pref] Apr 30 13:50:14.062113 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Apr 30 13:50:14.062293 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400 Apr 30 13:50:14.062491 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfea51000-0xfea51fff] Apr 30 13:50:14.062673 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400 Apr 30 13:50:14.062862 kernel: pci 0000:00:02.1: reg 0x10: [mem 0xfea52000-0xfea52fff] Apr 30 13:50:14.063048 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400 Apr 30 13:50:14.063223 kernel: pci 0000:00:02.2: reg 0x10: [mem 0xfea53000-0xfea53fff] Apr 30 13:50:14.063425 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400 Apr 30 
13:50:14.063604 kernel: pci 0000:00:02.3: reg 0x10: [mem 0xfea54000-0xfea54fff] Apr 30 13:50:14.063849 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400 Apr 30 13:50:14.064029 kernel: pci 0000:00:02.4: reg 0x10: [mem 0xfea55000-0xfea55fff] Apr 30 13:50:14.064215 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400 Apr 30 13:50:14.064408 kernel: pci 0000:00:02.5: reg 0x10: [mem 0xfea56000-0xfea56fff] Apr 30 13:50:14.064598 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400 Apr 30 13:50:14.064797 kernel: pci 0000:00:02.6: reg 0x10: [mem 0xfea57000-0xfea57fff] Apr 30 13:50:14.064983 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400 Apr 30 13:50:14.065157 kernel: pci 0000:00:02.7: reg 0x10: [mem 0xfea58000-0xfea58fff] Apr 30 13:50:14.065340 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 Apr 30 13:50:14.065547 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc0c0-0xc0df] Apr 30 13:50:14.065718 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfea59000-0xfea59fff] Apr 30 13:50:14.065900 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfd000000-0xfd003fff 64bit pref] Apr 30 13:50:14.066077 kernel: pci 0000:00:03.0: reg 0x30: [mem 0xfea00000-0xfea3ffff pref] Apr 30 13:50:14.066257 kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000 Apr 30 13:50:14.066448 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc07f] Apr 30 13:50:14.066623 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfea5a000-0xfea5afff] Apr 30 13:50:14.066810 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfd004000-0xfd007fff 64bit pref] Apr 30 13:50:14.066994 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 Apr 30 13:50:14.067169 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO Apr 30 13:50:14.067362 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 Apr 30 13:50:14.067561 kernel: pci 0000:00:1f.2: reg 0x20: [io 0xc0e0-0xc0ff] Apr 30 13:50:14.067747 kernel: pci 0000:00:1f.2: reg 0x24: [mem 0xfea5b000-0xfea5bfff] Apr 30 13:50:14.067935 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 Apr 30 13:50:14.068110 kernel: pci 0000:00:1f.3: reg 0x20: [io 0x0700-0x073f] Apr 30 13:50:14.068309 kernel: pci 0000:01:00.0: [1b36:000e] type 01 class 0x060400 Apr 30 13:50:14.068591 kernel: pci 0000:01:00.0: reg 0x10: [mem 0xfda00000-0xfda000ff 64bit] Apr 30 13:50:14.068781 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02] Apr 30 13:50:14.068951 kernel: pci 0000:00:02.0: bridge window [mem 0xfd800000-0xfdbfffff] Apr 30 13:50:14.069118 kernel: pci 0000:00:02.0: bridge window [mem 0xfce00000-0xfcffffff 64bit pref] Apr 30 13:50:14.069309 kernel: pci_bus 0000:02: extended config space not accessible Apr 30 13:50:14.069520 kernel: pci 0000:02:01.0: [8086:25ab] type 00 class 0x088000 Apr 30 13:50:14.069713 kernel: pci 0000:02:01.0: reg 0x10: [mem 0xfd800000-0xfd80000f] Apr 30 13:50:14.069907 kernel: pci 0000:01:00.0: PCI bridge to [bus 02] Apr 30 13:50:14.070080 kernel: pci 0000:01:00.0: bridge window [mem 0xfd800000-0xfd9fffff] Apr 30 13:50:14.070264 kernel: pci 0000:03:00.0: [1b36:000d] type 00 class 0x0c0330 Apr 30 13:50:14.070484 kernel: pci 0000:03:00.0: reg 0x10: [mem 0xfe800000-0xfe803fff 64bit] Apr 30 13:50:14.070656 kernel: pci 0000:00:02.1: PCI bridge to [bus 03] Apr 30 13:50:14.070840 kernel: pci 0000:00:02.1: bridge window [mem 0xfe800000-0xfe9fffff] Apr 30 13:50:14.071010 kernel: pci 0000:00:02.1: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref] Apr 30 13:50:14.071209 kernel: pci 0000:04:00.0: [1af4:1044] type 00 class 0x00ff00 Apr 30 
13:50:14.071386 kernel: pci 0000:04:00.0: reg 0x20: [mem 0xfca00000-0xfca03fff 64bit pref] Apr 30 13:50:14.071576 kernel: pci 0000:00:02.2: PCI bridge to [bus 04] Apr 30 13:50:14.071760 kernel: pci 0000:00:02.2: bridge window [mem 0xfe600000-0xfe7fffff] Apr 30 13:50:14.071931 kernel: pci 0000:00:02.2: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref] Apr 30 13:50:14.072103 kernel: pci 0000:00:02.3: PCI bridge to [bus 05] Apr 30 13:50:14.072272 kernel: pci 0000:00:02.3: bridge window [mem 0xfe400000-0xfe5fffff] Apr 30 13:50:14.072467 kernel: pci 0000:00:02.3: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref] Apr 30 13:50:14.072676 kernel: pci 0000:00:02.4: PCI bridge to [bus 06] Apr 30 13:50:14.072864 kernel: pci 0000:00:02.4: bridge window [mem 0xfe200000-0xfe3fffff] Apr 30 13:50:14.073035 kernel: pci 0000:00:02.4: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref] Apr 30 13:50:14.073209 kernel: pci 0000:00:02.5: PCI bridge to [bus 07] Apr 30 13:50:14.073379 kernel: pci 0000:00:02.5: bridge window [mem 0xfe000000-0xfe1fffff] Apr 30 13:50:14.073579 kernel: pci 0000:00:02.5: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref] Apr 30 13:50:14.073780 kernel: pci 0000:00:02.6: PCI bridge to [bus 08] Apr 30 13:50:14.074029 kernel: pci 0000:00:02.6: bridge window [mem 0xfde00000-0xfdffffff] Apr 30 13:50:14.075603 kernel: pci 0000:00:02.6: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref] Apr 30 13:50:14.075796 kernel: pci 0000:00:02.7: PCI bridge to [bus 09] Apr 30 13:50:14.075967 kernel: pci 0000:00:02.7: bridge window [mem 0xfdc00000-0xfddfffff] Apr 30 13:50:14.076135 kernel: pci 0000:00:02.7: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref] Apr 30 13:50:14.076156 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Apr 30 13:50:14.076170 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Apr 30 13:50:14.076184 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Apr 30 13:50:14.076198 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Apr 30 13:50:14.076221 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 Apr 30 13:50:14.076235 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 Apr 30 13:50:14.076249 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 Apr 30 13:50:14.076262 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 Apr 30 13:50:14.076276 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 Apr 30 13:50:14.076290 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 Apr 30 13:50:14.076303 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 Apr 30 13:50:14.076317 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 Apr 30 13:50:14.076336 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 Apr 30 13:50:14.076350 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 Apr 30 13:50:14.076363 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 Apr 30 13:50:14.076377 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 Apr 30 13:50:14.076403 kernel: iommu: Default domain type: Translated Apr 30 13:50:14.076427 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Apr 30 13:50:14.076443 kernel: PCI: Using ACPI for IRQ routing Apr 30 13:50:14.076457 kernel: PCI: pci_cache_line_size set to 64 bytes Apr 30 13:50:14.076471 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] Apr 30 13:50:14.076484 kernel: e820: reserve RAM buffer [mem 0x7ffdc000-0x7fffffff] Apr 30 13:50:14.076660 kernel: pci 0000:00:01.0: vgaarb: setting as boot 
VGA device Apr 30 13:50:14.076865 kernel: pci 0000:00:01.0: vgaarb: bridge control possible Apr 30 13:50:14.077845 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Apr 30 13:50:14.077874 kernel: vgaarb: loaded Apr 30 13:50:14.077890 kernel: clocksource: Switched to clocksource kvm-clock Apr 30 13:50:14.077904 kernel: VFS: Disk quotas dquot_6.6.0 Apr 30 13:50:14.077918 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Apr 30 13:50:14.077932 kernel: pnp: PnP ACPI init Apr 30 13:50:14.078133 kernel: system 00:04: [mem 0xb0000000-0xbfffffff window] has been reserved Apr 30 13:50:14.078156 kernel: pnp: PnP ACPI: found 5 devices Apr 30 13:50:14.078171 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Apr 30 13:50:14.078185 kernel: NET: Registered PF_INET protocol family Apr 30 13:50:14.078199 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear) Apr 30 13:50:14.078213 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear) Apr 30 13:50:14.078226 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Apr 30 13:50:14.078240 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear) Apr 30 13:50:14.078261 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear) Apr 30 13:50:14.078275 kernel: TCP: Hash tables configured (established 16384 bind 16384) Apr 30 13:50:14.078289 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear) Apr 30 13:50:14.078303 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear) Apr 30 13:50:14.078317 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Apr 30 13:50:14.078330 kernel: NET: Registered PF_XDP protocol family Apr 30 13:50:14.079975 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 01-02] add_size 1000 Apr 30 13:50:14.080161 kernel: pci 0000:00:02.1: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000 Apr 30 13:50:14.080351 kernel: pci 0000:00:02.2: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000 Apr 30 13:50:14.080559 kernel: pci 0000:00:02.3: bridge window [io 0x1000-0x0fff] to [bus 05] add_size 1000 Apr 30 13:50:14.080752 kernel: pci 0000:00:02.4: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000 Apr 30 13:50:14.080928 kernel: pci 0000:00:02.5: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000 Apr 30 13:50:14.081100 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000 Apr 30 13:50:14.081284 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000 Apr 30 13:50:14.082560 kernel: pci 0000:00:02.0: BAR 13: assigned [io 0x1000-0x1fff] Apr 30 13:50:14.082756 kernel: pci 0000:00:02.1: BAR 13: assigned [io 0x2000-0x2fff] Apr 30 13:50:14.082931 kernel: pci 0000:00:02.2: BAR 13: assigned [io 0x3000-0x3fff] Apr 30 13:50:14.083102 kernel: pci 0000:00:02.3: BAR 13: assigned [io 0x4000-0x4fff] Apr 30 13:50:14.083273 kernel: pci 0000:00:02.4: BAR 13: assigned [io 0x5000-0x5fff] Apr 30 13:50:14.084488 kernel: pci 0000:00:02.5: BAR 13: assigned [io 0x6000-0x6fff] Apr 30 13:50:14.084667 kernel: pci 0000:00:02.6: BAR 13: assigned [io 0x7000-0x7fff] Apr 30 13:50:14.084859 kernel: pci 0000:00:02.7: BAR 13: assigned [io 0x8000-0x8fff] Apr 30 13:50:14.085072 kernel: pci 0000:01:00.0: PCI bridge to [bus 02] Apr 30 13:50:14.085258 kernel: pci 0000:01:00.0: bridge window [mem 0xfd800000-0xfd9fffff] Apr 30 
13:50:14.086503 kernel: pci 0000:00:02.0: PCI bridge to [bus 01-02] Apr 30 13:50:14.086686 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x1fff] Apr 30 13:50:14.086872 kernel: pci 0000:00:02.0: bridge window [mem 0xfd800000-0xfdbfffff] Apr 30 13:50:14.087045 kernel: pci 0000:00:02.0: bridge window [mem 0xfce00000-0xfcffffff 64bit pref] Apr 30 13:50:14.087215 kernel: pci 0000:00:02.1: PCI bridge to [bus 03] Apr 30 13:50:14.087385 kernel: pci 0000:00:02.1: bridge window [io 0x2000-0x2fff] Apr 30 13:50:14.089614 kernel: pci 0000:00:02.1: bridge window [mem 0xfe800000-0xfe9fffff] Apr 30 13:50:14.089812 kernel: pci 0000:00:02.1: bridge window [mem 0xfcc00000-0xfcdfffff 64bit pref] Apr 30 13:50:14.089990 kernel: pci 0000:00:02.2: PCI bridge to [bus 04] Apr 30 13:50:14.090165 kernel: pci 0000:00:02.2: bridge window [io 0x3000-0x3fff] Apr 30 13:50:14.090340 kernel: pci 0000:00:02.2: bridge window [mem 0xfe600000-0xfe7fffff] Apr 30 13:50:14.090564 kernel: pci 0000:00:02.2: bridge window [mem 0xfca00000-0xfcbfffff 64bit pref] Apr 30 13:50:14.090762 kernel: pci 0000:00:02.3: PCI bridge to [bus 05] Apr 30 13:50:14.090938 kernel: pci 0000:00:02.3: bridge window [io 0x4000-0x4fff] Apr 30 13:50:14.091107 kernel: pci 0000:00:02.3: bridge window [mem 0xfe400000-0xfe5fffff] Apr 30 13:50:14.091277 kernel: pci 0000:00:02.3: bridge window [mem 0xfc800000-0xfc9fffff 64bit pref] Apr 30 13:50:14.091467 kernel: pci 0000:00:02.4: PCI bridge to [bus 06] Apr 30 13:50:14.091639 kernel: pci 0000:00:02.4: bridge window [io 0x5000-0x5fff] Apr 30 13:50:14.091823 kernel: pci 0000:00:02.4: bridge window [mem 0xfe200000-0xfe3fffff] Apr 30 13:50:14.091997 kernel: pci 0000:00:02.4: bridge window [mem 0xfc600000-0xfc7fffff 64bit pref] Apr 30 13:50:14.092168 kernel: pci 0000:00:02.5: PCI bridge to [bus 07] Apr 30 13:50:14.092348 kernel: pci 0000:00:02.5: bridge window [io 0x6000-0x6fff] Apr 30 13:50:14.094573 kernel: pci 0000:00:02.5: bridge window [mem 0xfe000000-0xfe1fffff] Apr 30 13:50:14.094771 kernel: pci 0000:00:02.5: bridge window [mem 0xfc400000-0xfc5fffff 64bit pref] Apr 30 13:50:14.094950 kernel: pci 0000:00:02.6: PCI bridge to [bus 08] Apr 30 13:50:14.095124 kernel: pci 0000:00:02.6: bridge window [io 0x7000-0x7fff] Apr 30 13:50:14.095304 kernel: pci 0000:00:02.6: bridge window [mem 0xfde00000-0xfdffffff] Apr 30 13:50:14.097530 kernel: pci 0000:00:02.6: bridge window [mem 0xfc200000-0xfc3fffff 64bit pref] Apr 30 13:50:14.097716 kernel: pci 0000:00:02.7: PCI bridge to [bus 09] Apr 30 13:50:14.097906 kernel: pci 0000:00:02.7: bridge window [io 0x8000-0x8fff] Apr 30 13:50:14.098078 kernel: pci 0000:00:02.7: bridge window [mem 0xfdc00000-0xfddfffff] Apr 30 13:50:14.098252 kernel: pci 0000:00:02.7: bridge window [mem 0xfc000000-0xfc1fffff 64bit pref] Apr 30 13:50:14.098457 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Apr 30 13:50:14.098616 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Apr 30 13:50:14.098784 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Apr 30 13:50:14.098951 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xafffffff window] Apr 30 13:50:14.099105 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window] Apr 30 13:50:14.099260 kernel: pci_bus 0000:00: resource 9 [mem 0x20c0000000-0x28bfffffff window] Apr 30 13:50:14.099453 kernel: pci_bus 0000:01: resource 0 [io 0x1000-0x1fff] Apr 30 13:50:14.099621 kernel: pci_bus 0000:01: resource 1 [mem 0xfd800000-0xfdbfffff] Apr 30 13:50:14.099799 kernel: pci_bus 0000:01: resource 2 [mem 
0xfce00000-0xfcffffff 64bit pref] Apr 30 13:50:14.099975 kernel: pci_bus 0000:02: resource 1 [mem 0xfd800000-0xfd9fffff] Apr 30 13:50:14.100159 kernel: pci_bus 0000:03: resource 0 [io 0x2000-0x2fff] Apr 30 13:50:14.100337 kernel: pci_bus 0000:03: resource 1 [mem 0xfe800000-0xfe9fffff] Apr 30 13:50:14.102549 kernel: pci_bus 0000:03: resource 2 [mem 0xfcc00000-0xfcdfffff 64bit pref] Apr 30 13:50:14.102748 kernel: pci_bus 0000:04: resource 0 [io 0x3000-0x3fff] Apr 30 13:50:14.102917 kernel: pci_bus 0000:04: resource 1 [mem 0xfe600000-0xfe7fffff] Apr 30 13:50:14.103079 kernel: pci_bus 0000:04: resource 2 [mem 0xfca00000-0xfcbfffff 64bit pref] Apr 30 13:50:14.103263 kernel: pci_bus 0000:05: resource 0 [io 0x4000-0x4fff] Apr 30 13:50:14.108462 kernel: pci_bus 0000:05: resource 1 [mem 0xfe400000-0xfe5fffff] Apr 30 13:50:14.108635 kernel: pci_bus 0000:05: resource 2 [mem 0xfc800000-0xfc9fffff 64bit pref] Apr 30 13:50:14.108824 kernel: pci_bus 0000:06: resource 0 [io 0x5000-0x5fff] Apr 30 13:50:14.108988 kernel: pci_bus 0000:06: resource 1 [mem 0xfe200000-0xfe3fffff] Apr 30 13:50:14.109148 kernel: pci_bus 0000:06: resource 2 [mem 0xfc600000-0xfc7fffff 64bit pref] Apr 30 13:50:14.109320 kernel: pci_bus 0000:07: resource 0 [io 0x6000-0x6fff] Apr 30 13:50:14.109522 kernel: pci_bus 0000:07: resource 1 [mem 0xfe000000-0xfe1fffff] Apr 30 13:50:14.109685 kernel: pci_bus 0000:07: resource 2 [mem 0xfc400000-0xfc5fffff 64bit pref] Apr 30 13:50:14.109869 kernel: pci_bus 0000:08: resource 0 [io 0x7000-0x7fff] Apr 30 13:50:14.110031 kernel: pci_bus 0000:08: resource 1 [mem 0xfde00000-0xfdffffff] Apr 30 13:50:14.110191 kernel: pci_bus 0000:08: resource 2 [mem 0xfc200000-0xfc3fffff 64bit pref] Apr 30 13:50:14.110372 kernel: pci_bus 0000:09: resource 0 [io 0x8000-0x8fff] Apr 30 13:50:14.110555 kernel: pci_bus 0000:09: resource 1 [mem 0xfdc00000-0xfddfffff] Apr 30 13:50:14.110738 kernel: pci_bus 0000:09: resource 2 [mem 0xfc000000-0xfc1fffff 64bit pref] Apr 30 13:50:14.110761 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Apr 30 13:50:14.110783 kernel: PCI: CLS 0 bytes, default 64 Apr 30 13:50:14.110798 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Apr 30 13:50:14.110812 kernel: software IO TLB: mapped [mem 0x0000000079800000-0x000000007d800000] (64MB) Apr 30 13:50:14.110827 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Apr 30 13:50:14.110841 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x240937b9988, max_idle_ns: 440795218083 ns Apr 30 13:50:14.110856 kernel: Initialise system trusted keyrings Apr 30 13:50:14.110875 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Apr 30 13:50:14.110890 kernel: Key type asymmetric registered Apr 30 13:50:14.110904 kernel: Asymmetric key parser 'x509' registered Apr 30 13:50:14.110918 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Apr 30 13:50:14.110932 kernel: io scheduler mq-deadline registered Apr 30 13:50:14.110946 kernel: io scheduler kyber registered Apr 30 13:50:14.110961 kernel: io scheduler bfq registered Apr 30 13:50:14.111138 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 24 Apr 30 13:50:14.111315 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 24 Apr 30 13:50:14.111538 kernel: pcieport 0000:00:02.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Apr 30 13:50:14.111739 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 25 Apr 30 13:50:14.111927 kernel: pcieport 
0000:00:02.1: AER: enabled with IRQ 25 Apr 30 13:50:14.112112 kernel: pcieport 0000:00:02.1: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Apr 30 13:50:14.112308 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 26 Apr 30 13:50:14.113559 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 26 Apr 30 13:50:14.113759 kernel: pcieport 0000:00:02.2: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Apr 30 13:50:14.113935 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 27 Apr 30 13:50:14.114105 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 27 Apr 30 13:50:14.114288 kernel: pcieport 0000:00:02.3: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Apr 30 13:50:14.117738 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 28 Apr 30 13:50:14.117923 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 28 Apr 30 13:50:14.118106 kernel: pcieport 0000:00:02.4: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Apr 30 13:50:14.118280 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 29 Apr 30 13:50:14.118480 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 29 Apr 30 13:50:14.118652 kernel: pcieport 0000:00:02.5: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Apr 30 13:50:14.118842 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 30 Apr 30 13:50:14.119013 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 30 Apr 30 13:50:14.119192 kernel: pcieport 0000:00:02.6: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Apr 30 13:50:14.119363 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 31 Apr 30 13:50:14.119564 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 31 Apr 30 13:50:14.119749 kernel: pcieport 0000:00:02.7: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Apr 30 13:50:14.119772 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Apr 30 13:50:14.119788 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Apr 30 13:50:14.119810 kernel: ACPI: \_SB_.GSIE: Enabled at IRQ 20 Apr 30 13:50:14.119825 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Apr 30 13:50:14.119839 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Apr 30 13:50:14.119854 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Apr 30 13:50:14.119868 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Apr 30 13:50:14.119882 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Apr 30 13:50:14.120068 kernel: rtc_cmos 00:03: RTC can wake from S4 Apr 30 13:50:14.120092 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Apr 30 13:50:14.120249 kernel: rtc_cmos 00:03: registered as rtc0 Apr 30 13:50:14.120446 kernel: rtc_cmos 00:03: setting system clock to 2025-04-30T13:50:13 UTC (1746021013) Apr 30 13:50:14.120607 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram Apr 30 13:50:14.120628 kernel: intel_pstate: CPU model not supported Apr 30 13:50:14.120643 kernel: NET: Registered PF_INET6 protocol family Apr 30 13:50:14.120657 kernel: Segment Routing with IPv6 Apr 30 13:50:14.120671 kernel: In-situ OAM (IOAM) with IPv6 Apr 30 
13:50:14.120685 kernel: NET: Registered PF_PACKET protocol family Apr 30 13:50:14.120700 kernel: Key type dns_resolver registered Apr 30 13:50:14.120731 kernel: IPI shorthand broadcast: enabled Apr 30 13:50:14.120748 kernel: sched_clock: Marking stable (1140003749, 232549492)->(1613638255, -241085014) Apr 30 13:50:14.120763 kernel: registered taskstats version 1 Apr 30 13:50:14.120782 kernel: Loading compiled-in X.509 certificates Apr 30 13:50:14.120797 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.88-flatcar: 10d2d341d26c1df942e743344427c053ef3a2a5f' Apr 30 13:50:14.120811 kernel: Key type .fscrypt registered Apr 30 13:50:14.120825 kernel: Key type fscrypt-provisioning registered Apr 30 13:50:14.120839 kernel: ima: No TPM chip found, activating TPM-bypass! Apr 30 13:50:14.120853 kernel: ima: Allocated hash algorithm: sha1 Apr 30 13:50:14.120872 kernel: ima: No architecture policies found Apr 30 13:50:14.120886 kernel: clk: Disabling unused clocks Apr 30 13:50:14.120901 kernel: Freeing unused kernel image (initmem) memory: 43484K Apr 30 13:50:14.120915 kernel: Write protecting the kernel read-only data: 38912k Apr 30 13:50:14.120929 kernel: Freeing unused kernel image (rodata/data gap) memory: 1712K Apr 30 13:50:14.120943 kernel: Run /init as init process Apr 30 13:50:14.120957 kernel: with arguments: Apr 30 13:50:14.120972 kernel: /init Apr 30 13:50:14.120986 kernel: with environment: Apr 30 13:50:14.121004 kernel: HOME=/ Apr 30 13:50:14.121018 kernel: TERM=linux Apr 30 13:50:14.121033 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Apr 30 13:50:14.121055 systemd[1]: Successfully made /usr/ read-only. Apr 30 13:50:14.121076 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Apr 30 13:50:14.121091 systemd[1]: Detected virtualization kvm. Apr 30 13:50:14.121107 systemd[1]: Detected architecture x86-64. Apr 30 13:50:14.121122 systemd[1]: Running in initrd. Apr 30 13:50:14.121143 systemd[1]: No hostname configured, using default hostname. Apr 30 13:50:14.121158 systemd[1]: Hostname set to . Apr 30 13:50:14.121174 systemd[1]: Initializing machine ID from VM UUID. Apr 30 13:50:14.121188 systemd[1]: Queued start job for default target initrd.target. Apr 30 13:50:14.121203 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Apr 30 13:50:14.121218 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Apr 30 13:50:14.121234 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Apr 30 13:50:14.121250 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Apr 30 13:50:14.121270 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Apr 30 13:50:14.121287 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Apr 30 13:50:14.121303 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Apr 30 13:50:14.121319 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... 
Apr 30 13:50:14.121334 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Apr 30 13:50:14.121350 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Apr 30 13:50:14.121370 systemd[1]: Reached target paths.target - Path Units. Apr 30 13:50:14.121385 systemd[1]: Reached target slices.target - Slice Units. Apr 30 13:50:14.126287 systemd[1]: Reached target swap.target - Swaps. Apr 30 13:50:14.126304 systemd[1]: Reached target timers.target - Timer Units. Apr 30 13:50:14.126320 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Apr 30 13:50:14.126336 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Apr 30 13:50:14.126351 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Apr 30 13:50:14.126366 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Apr 30 13:50:14.126382 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Apr 30 13:50:14.126418 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Apr 30 13:50:14.126434 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Apr 30 13:50:14.126450 systemd[1]: Reached target sockets.target - Socket Units. Apr 30 13:50:14.126465 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Apr 30 13:50:14.126480 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Apr 30 13:50:14.126496 systemd[1]: Finished network-cleanup.service - Network Cleanup. Apr 30 13:50:14.126511 systemd[1]: Starting systemd-fsck-usr.service... Apr 30 13:50:14.126527 systemd[1]: Starting systemd-journald.service - Journal Service... Apr 30 13:50:14.126542 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Apr 30 13:50:14.126562 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 30 13:50:14.126578 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Apr 30 13:50:14.126657 systemd-journald[202]: Collecting audit messages is disabled. Apr 30 13:50:14.126694 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Apr 30 13:50:14.126717 systemd[1]: Finished systemd-fsck-usr.service. Apr 30 13:50:14.126745 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Apr 30 13:50:14.126762 systemd-journald[202]: Journal started Apr 30 13:50:14.126801 systemd-journald[202]: Runtime Journal (/run/log/journal/7086ba509ae24c08a3b847b92d444e8f) is 4.7M, max 37.9M, 33.2M free. Apr 30 13:50:14.074357 systemd-modules-load[203]: Inserted module 'overlay' Apr 30 13:50:14.143799 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Apr 30 13:50:14.143833 kernel: Bridge firewalling registered Apr 30 13:50:14.131119 systemd-modules-load[203]: Inserted module 'br_netfilter' Apr 30 13:50:14.153989 systemd[1]: Started systemd-journald.service - Journal Service. Apr 30 13:50:14.154083 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Apr 30 13:50:14.156118 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Apr 30 13:50:14.157158 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Apr 30 13:50:14.171668 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... 
Apr 30 13:50:14.173576 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Apr 30 13:50:14.177587 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Apr 30 13:50:14.189546 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Apr 30 13:50:14.197226 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Apr 30 13:50:14.202632 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Apr 30 13:50:14.210143 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Apr 30 13:50:14.212329 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Apr 30 13:50:14.218658 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Apr 30 13:50:14.225644 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Apr 30 13:50:14.240879 dracut-cmdline[236]: dracut-dracut-053 Apr 30 13:50:14.244887 dracut-cmdline[236]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=openstack flatcar.autologin verity.usrhash=95dd3de5eb34971546a976dc51c66bc73cf59b888896e27767c0cbf245cb98fe Apr 30 13:50:14.275082 systemd-resolved[238]: Positive Trust Anchors: Apr 30 13:50:14.275113 systemd-resolved[238]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Apr 30 13:50:14.275158 systemd-resolved[238]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Apr 30 13:50:14.284332 systemd-resolved[238]: Defaulting to hostname 'linux'. Apr 30 13:50:14.287679 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Apr 30 13:50:14.288476 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Apr 30 13:50:14.348454 kernel: SCSI subsystem initialized Apr 30 13:50:14.359420 kernel: Loading iSCSI transport class v2.0-870. Apr 30 13:50:14.372463 kernel: iscsi: registered transport (tcp) Apr 30 13:50:14.397773 kernel: iscsi: registered transport (qla4xxx) Apr 30 13:50:14.397856 kernel: QLogic iSCSI HBA Driver Apr 30 13:50:14.456912 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Apr 30 13:50:14.464589 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Apr 30 13:50:14.495642 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
Apr 30 13:50:14.495741 kernel: device-mapper: uevent: version 1.0.3 Apr 30 13:50:14.498744 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Apr 30 13:50:14.547458 kernel: raid6: sse2x4 gen() 7877 MB/s Apr 30 13:50:14.564435 kernel: raid6: sse2x2 gen() 5416 MB/s Apr 30 13:50:14.583024 kernel: raid6: sse2x1 gen() 5392 MB/s Apr 30 13:50:14.583101 kernel: raid6: using algorithm sse2x4 gen() 7877 MB/s Apr 30 13:50:14.602092 kernel: raid6: .... xor() 5020 MB/s, rmw enabled Apr 30 13:50:14.602158 kernel: raid6: using ssse3x2 recovery algorithm Apr 30 13:50:14.627426 kernel: xor: automatically using best checksumming function avx Apr 30 13:50:14.798441 kernel: Btrfs loaded, zoned=no, fsverity=no Apr 30 13:50:14.813213 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Apr 30 13:50:14.820615 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Apr 30 13:50:14.841905 systemd-udevd[422]: Using default interface naming scheme 'v255'. Apr 30 13:50:14.850819 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Apr 30 13:50:14.860604 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Apr 30 13:50:14.886182 dracut-pre-trigger[430]: rd.md=0: removing MD RAID activation Apr 30 13:50:14.929864 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Apr 30 13:50:14.943748 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Apr 30 13:50:15.063673 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Apr 30 13:50:15.072629 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Apr 30 13:50:15.109205 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Apr 30 13:50:15.112730 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Apr 30 13:50:15.113524 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Apr 30 13:50:15.114231 systemd[1]: Reached target remote-fs.target - Remote File Systems. Apr 30 13:50:15.126681 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Apr 30 13:50:15.156872 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Apr 30 13:50:15.226600 kernel: virtio_blk virtio1: 2/0/0 default/read/poll queues Apr 30 13:50:15.334717 kernel: cryptd: max_cpu_qlen set to 1000 Apr 30 13:50:15.334767 kernel: virtio_blk virtio1: [vda] 125829120 512-byte logical blocks (64.4 GB/60.0 GiB) Apr 30 13:50:15.334993 kernel: AVX version of gcm_enc/dec engaged. Apr 30 13:50:15.335017 kernel: AES CTR mode by8 optimization enabled Apr 30 13:50:15.335037 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Apr 30 13:50:15.335056 kernel: GPT:17805311 != 125829119 Apr 30 13:50:15.335074 kernel: GPT:Alternate GPT header not at the end of the disk. Apr 30 13:50:15.335092 kernel: GPT:17805311 != 125829119 Apr 30 13:50:15.335110 kernel: GPT: Use GNU Parted to correct GPT errors. Apr 30 13:50:15.335128 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Apr 30 13:50:15.335147 kernel: libata version 3.00 loaded. 
Apr 30 13:50:15.335171 kernel: ACPI: bus type USB registered Apr 30 13:50:15.335191 kernel: usbcore: registered new interface driver usbfs Apr 30 13:50:15.335210 kernel: usbcore: registered new interface driver hub Apr 30 13:50:15.335228 kernel: usbcore: registered new device driver usb Apr 30 13:50:15.277358 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Apr 30 13:50:15.493892 kernel: xhci_hcd 0000:03:00.0: xHCI Host Controller Apr 30 13:50:15.494178 kernel: xhci_hcd 0000:03:00.0: new USB bus registered, assigned bus number 1 Apr 30 13:50:15.494433 kernel: xhci_hcd 0000:03:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010 Apr 30 13:50:15.494675 kernel: xhci_hcd 0000:03:00.0: xHCI Host Controller Apr 30 13:50:15.494914 kernel: xhci_hcd 0000:03:00.0: new USB bus registered, assigned bus number 2 Apr 30 13:50:15.495125 kernel: xhci_hcd 0000:03:00.0: Host supports USB 3.0 SuperSpeed Apr 30 13:50:15.495334 kernel: hub 1-0:1.0: USB hub found Apr 30 13:50:15.495621 kernel: hub 1-0:1.0: 4 ports detected Apr 30 13:50:15.495864 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM. Apr 30 13:50:15.496175 kernel: hub 2-0:1.0: USB hub found Apr 30 13:50:15.496447 kernel: hub 2-0:1.0: 4 ports detected Apr 30 13:50:15.496676 kernel: ahci 0000:00:1f.2: version 3.0 Apr 30 13:50:15.496902 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Apr 30 13:50:15.496926 kernel: ahci 0000:00:1f.2: AHCI 0001.0000 32 slots 6 ports 1.5 Gbps 0x3f impl SATA mode Apr 30 13:50:15.497126 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Apr 30 13:50:15.497327 kernel: scsi host0: ahci Apr 30 13:50:15.497574 kernel: scsi host1: ahci Apr 30 13:50:15.497807 kernel: scsi host2: ahci Apr 30 13:50:15.498006 kernel: scsi host3: ahci Apr 30 13:50:15.498200 kernel: scsi host4: ahci Apr 30 13:50:15.498424 kernel: scsi host5: ahci Apr 30 13:50:15.498628 kernel: ata1: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b100 irq 41 Apr 30 13:50:15.498659 kernel: ata2: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b180 irq 41 Apr 30 13:50:15.498680 kernel: ata3: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b200 irq 41 Apr 30 13:50:15.498712 kernel: ata4: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b280 irq 41 Apr 30 13:50:15.498733 kernel: ata5: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b300 irq 41 Apr 30 13:50:15.498753 kernel: ata6: SATA max UDMA/133 abar m4096@0xfea5b000 port 0xfea5b380 irq 41 Apr 30 13:50:15.498772 kernel: BTRFS: device fsid 0778af4c-f6f8-4118-a0d2-fb24d73f5df4 devid 1 transid 40 /dev/vda3 scanned by (udev-worker) (477) Apr 30 13:50:15.498791 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by (udev-worker) (485) Apr 30 13:50:15.277580 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Apr 30 13:50:15.278755 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Apr 30 13:50:15.279476 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Apr 30 13:50:15.279746 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Apr 30 13:50:15.281880 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Apr 30 13:50:15.297367 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 30 13:50:15.298864 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. 
Apr 30 13:50:15.440481 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Apr 30 13:50:15.502067 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Apr 30 13:50:15.524650 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Apr 30 13:50:15.535472 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Apr 30 13:50:15.536269 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Apr 30 13:50:15.550428 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Apr 30 13:50:15.559622 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Apr 30 13:50:15.563572 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Apr 30 13:50:15.567849 disk-uuid[570]: Primary Header is updated. Apr 30 13:50:15.567849 disk-uuid[570]: Secondary Entries is updated. Apr 30 13:50:15.567849 disk-uuid[570]: Secondary Header is updated. Apr 30 13:50:15.576427 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Apr 30 13:50:15.582439 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd Apr 30 13:50:15.598122 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Apr 30 13:50:15.665415 kernel: ata2: SATA link down (SStatus 0 SControl 300) Apr 30 13:50:15.665496 kernel: ata4: SATA link down (SStatus 0 SControl 300) Apr 30 13:50:15.666730 kernel: ata6: SATA link down (SStatus 0 SControl 300) Apr 30 13:50:15.670409 kernel: ata1: SATA link down (SStatus 0 SControl 300) Apr 30 13:50:15.670445 kernel: ata5: SATA link down (SStatus 0 SControl 300) Apr 30 13:50:15.671618 kernel: ata3: SATA link down (SStatus 0 SControl 300) Apr 30 13:50:15.772441 kernel: hid: raw HID events driver (C) Jiri Kosina Apr 30 13:50:15.779615 kernel: usbcore: registered new interface driver usbhid Apr 30 13:50:15.779655 kernel: usbhid: USB HID core driver Apr 30 13:50:15.789960 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:03:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input2 Apr 30 13:50:15.790036 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:03:00.0-1/input0 Apr 30 13:50:16.590510 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Apr 30 13:50:16.592197 disk-uuid[571]: The operation has completed successfully. Apr 30 13:50:16.670877 systemd[1]: disk-uuid.service: Deactivated successfully. Apr 30 13:50:16.671054 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Apr 30 13:50:16.711640 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Apr 30 13:50:16.718190 sh[592]: Success Apr 30 13:50:16.734442 kernel: device-mapper: verity: sha256 using implementation "sha256-avx" Apr 30 13:50:16.807755 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Apr 30 13:50:16.809759 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Apr 30 13:50:16.815527 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... 
Apr 30 13:50:16.837508 kernel: BTRFS info (device dm-0): first mount of filesystem 0778af4c-f6f8-4118-a0d2-fb24d73f5df4 Apr 30 13:50:16.837569 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Apr 30 13:50:16.837591 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Apr 30 13:50:16.839887 kernel: BTRFS info (device dm-0): disabling log replay at mount time Apr 30 13:50:16.841574 kernel: BTRFS info (device dm-0): using free space tree Apr 30 13:50:16.852778 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Apr 30 13:50:16.854188 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Apr 30 13:50:16.869630 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Apr 30 13:50:16.873579 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Apr 30 13:50:16.896028 kernel: BTRFS info (device vda6): first mount of filesystem 70902d85-577c-4d48-8616-61ed6d6784d1 Apr 30 13:50:16.896083 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Apr 30 13:50:16.897747 kernel: BTRFS info (device vda6): using free space tree Apr 30 13:50:16.904830 kernel: BTRFS info (device vda6): auto enabling async discard Apr 30 13:50:16.911435 kernel: BTRFS info (device vda6): last unmount of filesystem 70902d85-577c-4d48-8616-61ed6d6784d1 Apr 30 13:50:16.914662 systemd[1]: Finished ignition-setup.service - Ignition (setup). Apr 30 13:50:16.918591 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Apr 30 13:50:17.033776 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Apr 30 13:50:17.049758 systemd[1]: Starting systemd-networkd.service - Network Configuration... Apr 30 13:50:17.065559 ignition[678]: Ignition 2.20.0 Apr 30 13:50:17.065581 ignition[678]: Stage: fetch-offline Apr 30 13:50:17.065695 ignition[678]: no configs at "/usr/lib/ignition/base.d" Apr 30 13:50:17.065716 ignition[678]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Apr 30 13:50:17.069626 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Apr 30 13:50:17.065908 ignition[678]: parsed url from cmdline: "" Apr 30 13:50:17.065915 ignition[678]: no config URL provided Apr 30 13:50:17.065925 ignition[678]: reading system config file "/usr/lib/ignition/user.ign" Apr 30 13:50:17.065942 ignition[678]: no config at "/usr/lib/ignition/user.ign" Apr 30 13:50:17.065957 ignition[678]: failed to fetch config: resource requires networking Apr 30 13:50:17.066212 ignition[678]: Ignition finished successfully Apr 30 13:50:17.089644 systemd-networkd[776]: lo: Link UP Apr 30 13:50:17.089663 systemd-networkd[776]: lo: Gained carrier Apr 30 13:50:17.092115 systemd-networkd[776]: Enumeration completed Apr 30 13:50:17.092728 systemd[1]: Started systemd-networkd.service - Network Configuration. Apr 30 13:50:17.092740 systemd-networkd[776]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 30 13:50:17.092747 systemd-networkd[776]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Apr 30 13:50:17.093965 systemd-networkd[776]: eth0: Link UP Apr 30 13:50:17.093971 systemd-networkd[776]: eth0: Gained carrier Apr 30 13:50:17.093983 systemd-networkd[776]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 30 13:50:17.094090 systemd[1]: Reached target network.target - Network. Apr 30 13:50:17.104584 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Apr 30 13:50:17.122035 ignition[780]: Ignition 2.20.0 Apr 30 13:50:17.122060 ignition[780]: Stage: fetch Apr 30 13:50:17.122285 ignition[780]: no configs at "/usr/lib/ignition/base.d" Apr 30 13:50:17.122305 ignition[780]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Apr 30 13:50:17.122446 ignition[780]: parsed url from cmdline: "" Apr 30 13:50:17.122453 ignition[780]: no config URL provided Apr 30 13:50:17.122463 ignition[780]: reading system config file "/usr/lib/ignition/user.ign" Apr 30 13:50:17.122479 ignition[780]: no config at "/usr/lib/ignition/user.ign" Apr 30 13:50:17.122584 ignition[780]: config drive ("/dev/disk/by-label/config-2") not found. Waiting... Apr 30 13:50:17.122638 ignition[780]: config drive ("/dev/disk/by-label/CONFIG-2") not found. Waiting... Apr 30 13:50:17.122659 ignition[780]: GET http://169.254.169.254/openstack/latest/user_data: attempt #1 Apr 30 13:50:17.122972 ignition[780]: GET error: Get "http://169.254.169.254/openstack/latest/user_data": dial tcp 169.254.169.254:80: connect: network is unreachable Apr 30 13:50:17.153507 systemd-networkd[776]: eth0: DHCPv4 address 10.230.17.190/30, gateway 10.230.17.189 acquired from 10.230.17.189 Apr 30 13:50:17.323229 ignition[780]: GET http://169.254.169.254/openstack/latest/user_data: attempt #2 Apr 30 13:50:17.335241 ignition[780]: GET result: OK Apr 30 13:50:17.335550 ignition[780]: parsing config with SHA512: bc659eb51d71e257d1a534ae9433121b72765040269f69a299d69081854cb9964d3eb625ffb88fc2326b15b89d5ec37e17952017fe12d57eb88dff51c78c413b Apr 30 13:50:17.340502 unknown[780]: fetched base config from "system" Apr 30 13:50:17.340909 ignition[780]: fetch: fetch complete Apr 30 13:50:17.340520 unknown[780]: fetched base config from "system" Apr 30 13:50:17.340920 ignition[780]: fetch: fetch passed Apr 30 13:50:17.340529 unknown[780]: fetched user config from "openstack" Apr 30 13:50:17.341015 ignition[780]: Ignition finished successfully Apr 30 13:50:17.342882 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Apr 30 13:50:17.357730 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Apr 30 13:50:17.379675 ignition[787]: Ignition 2.20.0 Apr 30 13:50:17.380482 ignition[787]: Stage: kargs Apr 30 13:50:17.380769 ignition[787]: no configs at "/usr/lib/ignition/base.d" Apr 30 13:50:17.380791 ignition[787]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Apr 30 13:50:17.383626 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Apr 30 13:50:17.381692 ignition[787]: kargs: kargs passed Apr 30 13:50:17.381770 ignition[787]: Ignition finished successfully Apr 30 13:50:17.394772 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Apr 30 13:50:17.410469 ignition[793]: Ignition 2.20.0 Apr 30 13:50:17.410493 ignition[793]: Stage: disks Apr 30 13:50:17.410788 ignition[793]: no configs at "/usr/lib/ignition/base.d" Apr 30 13:50:17.412983 systemd[1]: Finished ignition-disks.service - Ignition (disks). 
Apr 30 13:50:17.410808 ignition[793]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Apr 30 13:50:17.411770 ignition[793]: disks: disks passed Apr 30 13:50:17.411848 ignition[793]: Ignition finished successfully Apr 30 13:50:17.418163 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Apr 30 13:50:17.419035 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Apr 30 13:50:17.420953 systemd[1]: Reached target local-fs.target - Local File Systems. Apr 30 13:50:17.422613 systemd[1]: Reached target sysinit.target - System Initialization. Apr 30 13:50:17.423966 systemd[1]: Reached target basic.target - Basic System. Apr 30 13:50:17.433696 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Apr 30 13:50:17.455331 systemd-fsck[801]: ROOT: clean, 14/1628000 files, 120691/1617920 blocks Apr 30 13:50:17.459356 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Apr 30 13:50:17.467694 systemd[1]: Mounting sysroot.mount - /sysroot... Apr 30 13:50:17.579417 kernel: EXT4-fs (vda9): mounted filesystem 59d16236-967d-47d1-a9bd-4b055a17ab77 r/w with ordered data mode. Quota mode: none. Apr 30 13:50:17.580063 systemd[1]: Mounted sysroot.mount - /sysroot. Apr 30 13:50:17.581417 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Apr 30 13:50:17.587514 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Apr 30 13:50:17.594769 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Apr 30 13:50:17.596701 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Apr 30 13:50:17.599613 systemd[1]: Starting flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent... Apr 30 13:50:17.603530 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Apr 30 13:50:17.615249 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 scanned by mount (809) Apr 30 13:50:17.615295 kernel: BTRFS info (device vda6): first mount of filesystem 70902d85-577c-4d48-8616-61ed6d6784d1 Apr 30 13:50:17.615324 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Apr 30 13:50:17.615344 kernel: BTRFS info (device vda6): using free space tree Apr 30 13:50:17.603587 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Apr 30 13:50:17.622415 kernel: BTRFS info (device vda6): auto enabling async discard Apr 30 13:50:17.620896 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Apr 30 13:50:17.630652 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Apr 30 13:50:17.635218 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Apr 30 13:50:17.700143 initrd-setup-root[834]: cut: /sysroot/etc/passwd: No such file or directory Apr 30 13:50:17.706886 initrd-setup-root[843]: cut: /sysroot/etc/group: No such file or directory Apr 30 13:50:17.716136 initrd-setup-root[850]: cut: /sysroot/etc/shadow: No such file or directory Apr 30 13:50:17.722855 initrd-setup-root[858]: cut: /sysroot/etc/gshadow: No such file or directory Apr 30 13:50:17.830059 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Apr 30 13:50:17.834557 systemd[1]: Starting ignition-mount.service - Ignition (mount)... 
Apr 30 13:50:17.838580 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Apr 30 13:50:17.851436 kernel: BTRFS info (device vda6): last unmount of filesystem 70902d85-577c-4d48-8616-61ed6d6784d1 Apr 30 13:50:17.851978 systemd[1]: sysroot-oem.mount: Deactivated successfully. Apr 30 13:50:17.886121 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Apr 30 13:50:17.891185 ignition[925]: INFO : Ignition 2.20.0 Apr 30 13:50:17.892848 ignition[925]: INFO : Stage: mount Apr 30 13:50:17.892848 ignition[925]: INFO : no configs at "/usr/lib/ignition/base.d" Apr 30 13:50:17.892848 ignition[925]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Apr 30 13:50:17.896457 ignition[925]: INFO : mount: mount passed Apr 30 13:50:17.896457 ignition[925]: INFO : Ignition finished successfully Apr 30 13:50:17.897677 systemd[1]: Finished ignition-mount.service - Ignition (mount). Apr 30 13:50:18.489074 systemd-networkd[776]: eth0: Gained IPv6LL Apr 30 13:50:20.000727 systemd-networkd[776]: eth0: Ignoring DHCPv6 address 2a02:1348:179:846f:24:19ff:fee6:11be/128 (valid for 59min 59s, preferred for 59min 59s) which conflicts with 2a02:1348:179:846f:24:19ff:fee6:11be/64 assigned by NDisc. Apr 30 13:50:20.000743 systemd-networkd[776]: eth0: Hint: use IPv6Token= setting to change the address generated by NDisc or set UseAutonomousPrefix=no. Apr 30 13:50:24.769899 coreos-metadata[811]: Apr 30 13:50:24.769 WARN failed to locate config-drive, using the metadata service API instead Apr 30 13:50:24.794091 coreos-metadata[811]: Apr 30 13:50:24.794 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1 Apr 30 13:50:24.808259 coreos-metadata[811]: Apr 30 13:50:24.808 INFO Fetch successful Apr 30 13:50:24.809122 coreos-metadata[811]: Apr 30 13:50:24.808 INFO wrote hostname srv-2wmf7.gb1.brightbox.com to /sysroot/etc/hostname Apr 30 13:50:24.811189 systemd[1]: flatcar-openstack-hostname.service: Deactivated successfully. Apr 30 13:50:24.811403 systemd[1]: Finished flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent. Apr 30 13:50:24.818554 systemd[1]: Starting ignition-files.service - Ignition (files)... Apr 30 13:50:24.834607 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Apr 30 13:50:24.848463 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/vda6 scanned by mount (942) Apr 30 13:50:24.852783 kernel: BTRFS info (device vda6): first mount of filesystem 70902d85-577c-4d48-8616-61ed6d6784d1 Apr 30 13:50:24.852829 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Apr 30 13:50:24.853839 kernel: BTRFS info (device vda6): using free space tree Apr 30 13:50:24.859734 kernel: BTRFS info (device vda6): auto enabling async discard Apr 30 13:50:24.861931 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Apr 30 13:50:24.887780 ignition[960]: INFO : Ignition 2.20.0 Apr 30 13:50:24.887780 ignition[960]: INFO : Stage: files Apr 30 13:50:24.889557 ignition[960]: INFO : no configs at "/usr/lib/ignition/base.d" Apr 30 13:50:24.889557 ignition[960]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Apr 30 13:50:24.889557 ignition[960]: DEBUG : files: compiled without relabeling support, skipping Apr 30 13:50:24.892326 ignition[960]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Apr 30 13:50:24.892326 ignition[960]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Apr 30 13:50:24.908337 ignition[960]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Apr 30 13:50:24.909678 ignition[960]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Apr 30 13:50:24.910892 unknown[960]: wrote ssh authorized keys file for user: core Apr 30 13:50:24.912073 ignition[960]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Apr 30 13:50:24.913086 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/home/core/install.sh" Apr 30 13:50:24.913086 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/home/core/install.sh" Apr 30 13:50:24.913086 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/etc/flatcar/update.conf" Apr 30 13:50:24.916565 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/etc/flatcar/update.conf" Apr 30 13:50:24.916565 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" Apr 30 13:50:24.916565 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" Apr 30 13:50:24.916565 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" Apr 30 13:50:24.916565 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-x86-64.raw: attempt #1 Apr 30 13:50:25.518306 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET result: OK Apr 30 13:50:26.776482 ignition[960]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" Apr 30 13:50:26.780937 ignition[960]: INFO : files: createResultFile: createFiles: op(7): [started] writing file "/sysroot/etc/.ignition-result.json" Apr 30 13:50:26.780937 ignition[960]: INFO : files: createResultFile: createFiles: op(7): [finished] writing file "/sysroot/etc/.ignition-result.json" Apr 30 13:50:26.780937 ignition[960]: INFO : files: files passed Apr 30 13:50:26.780937 ignition[960]: INFO : Ignition finished successfully Apr 30 13:50:26.782932 systemd[1]: Finished ignition-files.service - Ignition (files). Apr 30 13:50:26.800803 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... 
Apr 30 13:50:26.803603 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Apr 30 13:50:26.808771 systemd[1]: ignition-quench.service: Deactivated successfully. Apr 30 13:50:26.808965 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Apr 30 13:50:26.826546 initrd-setup-root-after-ignition[988]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Apr 30 13:50:26.826546 initrd-setup-root-after-ignition[988]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Apr 30 13:50:26.829738 initrd-setup-root-after-ignition[992]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Apr 30 13:50:26.832279 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Apr 30 13:50:26.833594 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Apr 30 13:50:26.839638 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Apr 30 13:50:26.872308 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Apr 30 13:50:26.872550 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Apr 30 13:50:26.874853 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Apr 30 13:50:26.875829 systemd[1]: Reached target initrd.target - Initrd Default Target. Apr 30 13:50:26.877352 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Apr 30 13:50:26.883620 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Apr 30 13:50:26.902770 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Apr 30 13:50:26.910619 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Apr 30 13:50:26.927492 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Apr 30 13:50:26.928418 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Apr 30 13:50:26.929329 systemd[1]: Stopped target timers.target - Timer Units. Apr 30 13:50:26.930165 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Apr 30 13:50:26.930350 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Apr 30 13:50:26.932447 systemd[1]: Stopped target initrd.target - Initrd Default Target. Apr 30 13:50:26.933354 systemd[1]: Stopped target basic.target - Basic System. Apr 30 13:50:26.934656 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Apr 30 13:50:26.935999 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Apr 30 13:50:26.937573 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Apr 30 13:50:26.938967 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Apr 30 13:50:26.940305 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Apr 30 13:50:26.942199 systemd[1]: Stopped target sysinit.target - System Initialization. Apr 30 13:50:26.943752 systemd[1]: Stopped target local-fs.target - Local File Systems. Apr 30 13:50:26.945208 systemd[1]: Stopped target swap.target - Swaps. Apr 30 13:50:26.946547 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Apr 30 13:50:26.946764 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Apr 30 13:50:26.949338 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. 
Apr 30 13:50:26.950296 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Apr 30 13:50:26.951757 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Apr 30 13:50:26.951941 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Apr 30 13:50:26.953264 systemd[1]: dracut-initqueue.service: Deactivated successfully. Apr 30 13:50:26.953585 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Apr 30 13:50:26.955218 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Apr 30 13:50:26.955427 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Apr 30 13:50:26.957139 systemd[1]: ignition-files.service: Deactivated successfully. Apr 30 13:50:26.957300 systemd[1]: Stopped ignition-files.service - Ignition (files). Apr 30 13:50:26.966757 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Apr 30 13:50:26.967588 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Apr 30 13:50:26.967843 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Apr 30 13:50:26.972735 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Apr 30 13:50:26.974478 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Apr 30 13:50:26.974754 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Apr 30 13:50:26.977697 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Apr 30 13:50:26.979640 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Apr 30 13:50:26.993731 ignition[1012]: INFO : Ignition 2.20.0 Apr 30 13:50:26.994777 systemd[1]: initrd-cleanup.service: Deactivated successfully. Apr 30 13:50:26.994958 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Apr 30 13:50:26.999951 ignition[1012]: INFO : Stage: umount Apr 30 13:50:26.999951 ignition[1012]: INFO : no configs at "/usr/lib/ignition/base.d" Apr 30 13:50:26.999951 ignition[1012]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Apr 30 13:50:26.999951 ignition[1012]: INFO : umount: umount passed Apr 30 13:50:26.999951 ignition[1012]: INFO : Ignition finished successfully Apr 30 13:50:27.004937 systemd[1]: ignition-mount.service: Deactivated successfully. Apr 30 13:50:27.005111 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Apr 30 13:50:27.008194 systemd[1]: ignition-disks.service: Deactivated successfully. Apr 30 13:50:27.008331 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Apr 30 13:50:27.010759 systemd[1]: ignition-kargs.service: Deactivated successfully. Apr 30 13:50:27.010856 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Apr 30 13:50:27.011702 systemd[1]: ignition-fetch.service: Deactivated successfully. Apr 30 13:50:27.011778 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Apr 30 13:50:27.013853 systemd[1]: Stopped target network.target - Network. Apr 30 13:50:27.014501 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Apr 30 13:50:27.014582 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Apr 30 13:50:27.016944 systemd[1]: Stopped target paths.target - Path Units. Apr 30 13:50:27.018412 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Apr 30 13:50:27.023478 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. 
Apr 30 13:50:27.030548 systemd[1]: Stopped target slices.target - Slice Units. Apr 30 13:50:27.031335 systemd[1]: Stopped target sockets.target - Socket Units. Apr 30 13:50:27.032061 systemd[1]: iscsid.socket: Deactivated successfully. Apr 30 13:50:27.032143 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Apr 30 13:50:27.033581 systemd[1]: iscsiuio.socket: Deactivated successfully. Apr 30 13:50:27.033656 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Apr 30 13:50:27.035014 systemd[1]: ignition-setup.service: Deactivated successfully. Apr 30 13:50:27.035119 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Apr 30 13:50:27.036419 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Apr 30 13:50:27.036531 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Apr 30 13:50:27.038097 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Apr 30 13:50:27.039190 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Apr 30 13:50:27.044874 systemd-networkd[776]: eth0: DHCPv6 lease lost Apr 30 13:50:27.046320 systemd[1]: sysroot-boot.mount: Deactivated successfully. Apr 30 13:50:27.049231 systemd[1]: systemd-resolved.service: Deactivated successfully. Apr 30 13:50:27.049597 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Apr 30 13:50:27.056376 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Apr 30 13:50:27.056889 systemd[1]: systemd-networkd.service: Deactivated successfully. Apr 30 13:50:27.057059 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Apr 30 13:50:27.059781 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Apr 30 13:50:27.060146 systemd[1]: sysroot-boot.service: Deactivated successfully. Apr 30 13:50:27.060294 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Apr 30 13:50:27.063882 systemd[1]: systemd-networkd.socket: Deactivated successfully. Apr 30 13:50:27.063981 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Apr 30 13:50:27.065653 systemd[1]: initrd-setup-root.service: Deactivated successfully. Apr 30 13:50:27.065760 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Apr 30 13:50:27.074601 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Apr 30 13:50:27.075808 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Apr 30 13:50:27.075893 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Apr 30 13:50:27.080008 systemd[1]: systemd-sysctl.service: Deactivated successfully. Apr 30 13:50:27.080096 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Apr 30 13:50:27.081687 systemd[1]: systemd-modules-load.service: Deactivated successfully. Apr 30 13:50:27.081776 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Apr 30 13:50:27.083178 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Apr 30 13:50:27.083254 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Apr 30 13:50:27.085468 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Apr 30 13:50:27.088782 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Apr 30 13:50:27.088884 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. 
Apr 30 13:50:27.099316 systemd[1]: systemd-udevd.service: Deactivated successfully. Apr 30 13:50:27.099745 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Apr 30 13:50:27.104075 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Apr 30 13:50:27.104154 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Apr 30 13:50:27.104954 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Apr 30 13:50:27.105016 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Apr 30 13:50:27.107647 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Apr 30 13:50:27.107768 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Apr 30 13:50:27.109229 systemd[1]: dracut-cmdline.service: Deactivated successfully. Apr 30 13:50:27.109304 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Apr 30 13:50:27.110751 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Apr 30 13:50:27.110834 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Apr 30 13:50:27.122044 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Apr 30 13:50:27.122805 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Apr 30 13:50:27.122887 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Apr 30 13:50:27.123798 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Apr 30 13:50:27.123871 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Apr 30 13:50:27.126830 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Apr 30 13:50:27.126926 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Apr 30 13:50:27.127582 systemd[1]: network-cleanup.service: Deactivated successfully. Apr 30 13:50:27.127749 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Apr 30 13:50:27.133558 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Apr 30 13:50:27.133717 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Apr 30 13:50:27.135730 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Apr 30 13:50:27.146074 systemd[1]: Starting initrd-switch-root.service - Switch Root... Apr 30 13:50:27.155237 systemd[1]: Switching root. Apr 30 13:50:27.190935 systemd-journald[202]: Journal stopped Apr 30 13:50:28.882122 systemd-journald[202]: Received SIGTERM from PID 1 (systemd). Apr 30 13:50:28.882223 kernel: SELinux: policy capability network_peer_controls=1 Apr 30 13:50:28.882256 kernel: SELinux: policy capability open_perms=1 Apr 30 13:50:28.882283 kernel: SELinux: policy capability extended_socket_class=1 Apr 30 13:50:28.882308 kernel: SELinux: policy capability always_check_network=0 Apr 30 13:50:28.882327 kernel: SELinux: policy capability cgroup_seclabel=1 Apr 30 13:50:28.882363 kernel: SELinux: policy capability nnp_nosuid_transition=1 Apr 30 13:50:28.882385 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Apr 30 13:50:28.884458 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Apr 30 13:50:28.884487 kernel: audit: type=1403 audit(1746021027.511:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Apr 30 13:50:28.884509 systemd[1]: Successfully loaded SELinux policy in 53.429ms. Apr 30 13:50:28.884541 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 23.606ms. 
Apr 30 13:50:28.884565 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Apr 30 13:50:28.884585 systemd[1]: Detected virtualization kvm. Apr 30 13:50:28.884606 systemd[1]: Detected architecture x86-64. Apr 30 13:50:28.884643 systemd[1]: Detected first boot. Apr 30 13:50:28.884665 systemd[1]: Hostname set to . Apr 30 13:50:28.884685 systemd[1]: Initializing machine ID from VM UUID. Apr 30 13:50:28.884705 zram_generator::config[1059]: No configuration found. Apr 30 13:50:28.884725 kernel: Guest personality initialized and is inactive Apr 30 13:50:28.884745 kernel: VMCI host device registered (name=vmci, major=10, minor=125) Apr 30 13:50:28.884764 kernel: Initialized host personality Apr 30 13:50:28.884782 kernel: NET: Registered PF_VSOCK protocol family Apr 30 13:50:28.884816 systemd[1]: Populated /etc with preset unit settings. Apr 30 13:50:28.884846 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Apr 30 13:50:28.884867 systemd[1]: initrd-switch-root.service: Deactivated successfully. Apr 30 13:50:28.884887 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Apr 30 13:50:28.884906 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Apr 30 13:50:28.884926 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Apr 30 13:50:28.884946 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Apr 30 13:50:28.884967 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Apr 30 13:50:28.884987 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Apr 30 13:50:28.885020 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Apr 30 13:50:28.885042 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Apr 30 13:50:28.885070 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Apr 30 13:50:28.885092 systemd[1]: Created slice user.slice - User and Session Slice. Apr 30 13:50:28.885112 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Apr 30 13:50:28.885132 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Apr 30 13:50:28.885152 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Apr 30 13:50:28.885172 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Apr 30 13:50:28.885206 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Apr 30 13:50:28.885229 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Apr 30 13:50:28.885250 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Apr 30 13:50:28.885270 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Apr 30 13:50:28.885290 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Apr 30 13:50:28.885310 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. 
Apr 30 13:50:28.885330 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Apr 30 13:50:28.885361 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Apr 30 13:50:28.885383 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Apr 30 13:50:28.885420 systemd[1]: Reached target remote-fs.target - Remote File Systems. Apr 30 13:50:28.885442 systemd[1]: Reached target slices.target - Slice Units. Apr 30 13:50:28.885504 systemd[1]: Reached target swap.target - Swaps. Apr 30 13:50:28.885530 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Apr 30 13:50:28.885551 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Apr 30 13:50:28.885570 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Apr 30 13:50:28.885591 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Apr 30 13:50:28.885625 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Apr 30 13:50:28.885648 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Apr 30 13:50:28.885668 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Apr 30 13:50:28.885688 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Apr 30 13:50:28.885708 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Apr 30 13:50:28.885750 systemd[1]: Mounting media.mount - External Media Directory... Apr 30 13:50:28.885791 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 30 13:50:28.885814 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Apr 30 13:50:28.885835 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Apr 30 13:50:28.885855 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Apr 30 13:50:28.885877 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Apr 30 13:50:28.885897 systemd[1]: Reached target machines.target - Containers. Apr 30 13:50:28.885917 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Apr 30 13:50:28.885943 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Apr 30 13:50:28.885977 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Apr 30 13:50:28.885999 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Apr 30 13:50:28.886020 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Apr 30 13:50:28.886040 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Apr 30 13:50:28.886060 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Apr 30 13:50:28.886080 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Apr 30 13:50:28.886100 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Apr 30 13:50:28.886121 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Apr 30 13:50:28.886153 systemd[1]: systemd-fsck-root.service: Deactivated successfully. 
Apr 30 13:50:28.886176 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Apr 30 13:50:28.886195 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Apr 30 13:50:28.886215 systemd[1]: Stopped systemd-fsck-usr.service. Apr 30 13:50:28.886235 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Apr 30 13:50:28.886255 systemd[1]: Starting systemd-journald.service - Journal Service... Apr 30 13:50:28.886275 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Apr 30 13:50:28.886295 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Apr 30 13:50:28.886314 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Apr 30 13:50:28.886347 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Apr 30 13:50:28.886368 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Apr 30 13:50:28.888411 systemd[1]: verity-setup.service: Deactivated successfully. Apr 30 13:50:28.888471 systemd[1]: Stopped verity-setup.service. Apr 30 13:50:28.888497 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 30 13:50:28.888526 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Apr 30 13:50:28.888547 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Apr 30 13:50:28.888582 systemd[1]: Mounted media.mount - External Media Directory. Apr 30 13:50:28.888604 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Apr 30 13:50:28.888624 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Apr 30 13:50:28.888659 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Apr 30 13:50:28.888681 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Apr 30 13:50:28.888701 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Apr 30 13:50:28.888721 systemd[1]: modprobe@configfs.service: Deactivated successfully. Apr 30 13:50:28.888742 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Apr 30 13:50:28.888762 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Apr 30 13:50:28.888781 kernel: loop: module loaded Apr 30 13:50:28.888808 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Apr 30 13:50:28.888828 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Apr 30 13:50:28.888911 systemd-journald[1153]: Collecting audit messages is disabled. Apr 30 13:50:28.888950 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Apr 30 13:50:28.888973 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Apr 30 13:50:28.888994 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Apr 30 13:50:28.889015 systemd-journald[1153]: Journal started Apr 30 13:50:28.889049 systemd-journald[1153]: Runtime Journal (/run/log/journal/7086ba509ae24c08a3b847b92d444e8f) is 4.7M, max 37.9M, 33.2M free. Apr 30 13:50:28.445875 systemd[1]: Queued start job for default target multi-user.target. 
Apr 30 13:50:28.459813 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Apr 30 13:50:28.460599 systemd[1]: systemd-journald.service: Deactivated successfully. Apr 30 13:50:28.902789 systemd[1]: Started systemd-journald.service - Journal Service. Apr 30 13:50:28.905076 kernel: fuse: init (API version 7.39) Apr 30 13:50:28.904772 systemd[1]: modprobe@loop.service: Deactivated successfully. Apr 30 13:50:28.905110 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Apr 30 13:50:28.906336 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Apr 30 13:50:28.910462 systemd[1]: modprobe@fuse.service: Deactivated successfully. Apr 30 13:50:28.910851 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Apr 30 13:50:28.930545 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Apr 30 13:50:28.933585 kernel: ACPI: bus type drm_connector registered Apr 30 13:50:28.935620 systemd[1]: modprobe@drm.service: Deactivated successfully. Apr 30 13:50:28.935969 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Apr 30 13:50:28.939490 systemd[1]: Reached target network-pre.target - Preparation for Network. Apr 30 13:50:28.950488 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Apr 30 13:50:28.960486 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Apr 30 13:50:28.961514 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Apr 30 13:50:28.961568 systemd[1]: Reached target local-fs.target - Local File Systems. Apr 30 13:50:28.964510 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Apr 30 13:50:28.971558 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Apr 30 13:50:28.979560 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Apr 30 13:50:28.980560 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Apr 30 13:50:28.988644 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Apr 30 13:50:29.000518 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Apr 30 13:50:29.001350 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Apr 30 13:50:29.008621 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Apr 30 13:50:29.009442 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Apr 30 13:50:29.015650 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Apr 30 13:50:29.023679 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Apr 30 13:50:29.026791 systemd[1]: Starting systemd-sysusers.service - Create System Users... Apr 30 13:50:29.032327 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Apr 30 13:50:29.033254 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Apr 30 13:50:29.034478 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Apr 30 13:50:29.042258 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. 
Apr 30 13:50:29.043547 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Apr 30 13:50:29.050733 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Apr 30 13:50:29.061078 systemd-journald[1153]: Time spent on flushing to /var/log/journal/7086ba509ae24c08a3b847b92d444e8f is 82.271ms for 1142 entries. Apr 30 13:50:29.061078 systemd-journald[1153]: System Journal (/var/log/journal/7086ba509ae24c08a3b847b92d444e8f) is 8M, max 584.8M, 576.8M free. Apr 30 13:50:29.166255 systemd-journald[1153]: Received client request to flush runtime journal. Apr 30 13:50:29.166325 kernel: loop0: detected capacity change from 0 to 147912 Apr 30 13:50:29.091531 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Apr 30 13:50:29.149491 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Apr 30 13:50:29.169645 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Apr 30 13:50:29.190419 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Apr 30 13:50:29.220612 kernel: loop1: detected capacity change from 0 to 8 Apr 30 13:50:29.239204 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Apr 30 13:50:29.241114 systemd[1]: Finished systemd-sysusers.service - Create System Users. Apr 30 13:50:29.256827 kernel: loop2: detected capacity change from 0 to 205544 Apr 30 13:50:29.257014 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Apr 30 13:50:29.261179 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Apr 30 13:50:29.329429 kernel: loop3: detected capacity change from 0 to 138176 Apr 30 13:50:29.327844 udevadm[1218]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Apr 30 13:50:29.346633 systemd-tmpfiles[1217]: ACLs are not supported, ignoring. Apr 30 13:50:29.346663 systemd-tmpfiles[1217]: ACLs are not supported, ignoring. Apr 30 13:50:29.370963 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Apr 30 13:50:29.386960 kernel: loop4: detected capacity change from 0 to 147912 Apr 30 13:50:29.411472 kernel: loop5: detected capacity change from 0 to 8 Apr 30 13:50:29.420507 kernel: loop6: detected capacity change from 0 to 205544 Apr 30 13:50:29.443505 kernel: loop7: detected capacity change from 0 to 138176 Apr 30 13:50:29.461561 (sd-merge)[1224]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-openstack'. Apr 30 13:50:29.462540 (sd-merge)[1224]: Merged extensions into '/usr'. Apr 30 13:50:29.464331 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Apr 30 13:50:29.471378 systemd[1]: Reload requested from client PID 1198 ('systemd-sysext') (unit systemd-sysext.service)... Apr 30 13:50:29.471568 systemd[1]: Reloading... Apr 30 13:50:29.617696 zram_generator::config[1252]: No configuration found. Apr 30 13:50:29.821756 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 30 13:50:29.862096 ldconfig[1193]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Apr 30 13:50:29.933999 systemd[1]: Reloading finished in 461 ms. 
Apr 30 13:50:29.959353 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Apr 30 13:50:29.960989 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Apr 30 13:50:29.975735 systemd[1]: Starting ensure-sysext.service... Apr 30 13:50:29.990435 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Apr 30 13:50:30.019713 systemd[1]: Reload requested from client PID 1308 ('systemctl') (unit ensure-sysext.service)... Apr 30 13:50:30.019738 systemd[1]: Reloading... Apr 30 13:50:30.063307 systemd-tmpfiles[1309]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Apr 30 13:50:30.066900 systemd-tmpfiles[1309]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Apr 30 13:50:30.070859 systemd-tmpfiles[1309]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Apr 30 13:50:30.071278 systemd-tmpfiles[1309]: ACLs are not supported, ignoring. Apr 30 13:50:30.073265 systemd-tmpfiles[1309]: ACLs are not supported, ignoring. Apr 30 13:50:30.082740 systemd-tmpfiles[1309]: Detected autofs mount point /boot during canonicalization of boot. Apr 30 13:50:30.082760 systemd-tmpfiles[1309]: Skipping /boot Apr 30 13:50:30.157441 zram_generator::config[1335]: No configuration found. Apr 30 13:50:30.157149 systemd-tmpfiles[1309]: Detected autofs mount point /boot during canonicalization of boot. Apr 30 13:50:30.157171 systemd-tmpfiles[1309]: Skipping /boot Apr 30 13:50:30.341151 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 30 13:50:30.439375 systemd[1]: Reloading finished in 419 ms. Apr 30 13:50:30.455485 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Apr 30 13:50:30.473552 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Apr 30 13:50:30.490855 systemd[1]: Starting audit-rules.service - Load Audit Rules... Apr 30 13:50:30.494738 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Apr 30 13:50:30.499762 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Apr 30 13:50:30.504938 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Apr 30 13:50:30.510755 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Apr 30 13:50:30.522021 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Apr 30 13:50:30.528286 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 30 13:50:30.528617 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Apr 30 13:50:30.538772 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Apr 30 13:50:30.542330 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Apr 30 13:50:30.546760 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Apr 30 13:50:30.548640 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. 
Apr 30 13:50:30.548824 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Apr 30 13:50:30.548990 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 30 13:50:30.556078 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 30 13:50:30.556363 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Apr 30 13:50:30.557683 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Apr 30 13:50:30.557838 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Apr 30 13:50:30.557974 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 30 13:50:30.567882 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Apr 30 13:50:30.568210 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Apr 30 13:50:30.576867 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 30 13:50:30.577229 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Apr 30 13:50:30.587641 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Apr 30 13:50:30.588970 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Apr 30 13:50:30.589047 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Apr 30 13:50:30.589166 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Apr 30 13:50:30.589262 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Apr 30 13:50:30.591486 systemd[1]: Finished ensure-sysext.service. Apr 30 13:50:30.615854 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Apr 30 13:50:30.628952 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Apr 30 13:50:30.632834 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Apr 30 13:50:30.635689 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Apr 30 13:50:30.637160 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Apr 30 13:50:30.638474 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Apr 30 13:50:30.640274 systemd[1]: modprobe@loop.service: Deactivated successfully. Apr 30 13:50:30.641602 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. 
Apr 30 13:50:30.644104 systemd[1]: modprobe@drm.service: Deactivated successfully. Apr 30 13:50:30.644381 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Apr 30 13:50:30.660839 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Apr 30 13:50:30.667806 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Apr 30 13:50:30.668370 systemd-udevd[1406]: Using default interface naming scheme 'v255'. Apr 30 13:50:30.669661 augenrules[1434]: No rules Apr 30 13:50:30.676609 systemd[1]: Starting systemd-update-done.service - Update is Completed... Apr 30 13:50:30.677774 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Apr 30 13:50:30.678321 systemd[1]: audit-rules.service: Deactivated successfully. Apr 30 13:50:30.679716 systemd[1]: Finished audit-rules.service - Load Audit Rules. Apr 30 13:50:30.713445 systemd[1]: Finished systemd-update-done.service - Update is Completed. Apr 30 13:50:30.715578 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Apr 30 13:50:30.733645 systemd[1]: Starting systemd-networkd.service - Network Configuration... Apr 30 13:50:30.747501 systemd[1]: Started systemd-userdbd.service - User Database Manager. Apr 30 13:50:30.914200 systemd-resolved[1400]: Positive Trust Anchors: Apr 30 13:50:30.914888 systemd-resolved[1400]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Apr 30 13:50:30.915027 systemd-resolved[1400]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Apr 30 13:50:30.924248 systemd-resolved[1400]: Using system hostname 'srv-2wmf7.gb1.brightbox.com'. Apr 30 13:50:30.931095 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Apr 30 13:50:30.935166 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Apr 30 13:50:30.943559 systemd-networkd[1453]: lo: Link UP Apr 30 13:50:30.943573 systemd-networkd[1453]: lo: Gained carrier Apr 30 13:50:30.947029 systemd-networkd[1453]: Enumeration completed Apr 30 13:50:30.947925 systemd[1]: Started systemd-networkd.service - Network Configuration. Apr 30 13:50:30.950591 systemd[1]: Reached target network.target - Network. Apr 30 13:50:30.956663 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Apr 30 13:50:30.967551 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Apr 30 13:50:30.969671 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Apr 30 13:50:30.970787 systemd[1]: Reached target time-set.target - System Time Set. Apr 30 13:50:31.004306 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. 
Apr 30 13:50:31.005652 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Apr 30 13:50:31.051445 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (1454) Apr 30 13:50:31.081966 systemd-networkd[1453]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 30 13:50:31.081982 systemd-networkd[1453]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Apr 30 13:50:31.085977 systemd-networkd[1453]: eth0: Link UP Apr 30 13:50:31.085991 systemd-networkd[1453]: eth0: Gained carrier Apr 30 13:50:31.086019 systemd-networkd[1453]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Apr 30 13:50:31.106518 systemd-networkd[1453]: eth0: DHCPv4 address 10.230.17.190/30, gateway 10.230.17.189 acquired from 10.230.17.189 Apr 30 13:50:31.107659 systemd-timesyncd[1423]: Network configuration changed, trying to establish connection. Apr 30 13:50:31.136438 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Apr 30 13:50:31.154459 kernel: ACPI: button: Power Button [PWRF] Apr 30 13:50:31.196692 kernel: mousedev: PS/2 mouse device common for all mice Apr 30 13:50:31.215643 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Apr 30 13:50:31.216321 kernel: i2c i2c-0: 1/1 memory slots populated (from DMI) Apr 30 13:50:31.216922 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Apr 30 13:50:31.214713 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Apr 30 13:50:31.225453 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4 Apr 30 13:50:31.235152 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Apr 30 13:50:31.261433 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Apr 30 13:50:31.360613 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Apr 30 13:50:32.668657 systemd-timesyncd[1423]: Contacted time server 131.111.8.60:123 (0.flatcar.pool.ntp.org). Apr 30 13:50:32.668776 systemd-timesyncd[1423]: Initial clock synchronization to Wed 2025-04-30 13:50:32.668336 UTC. Apr 30 13:50:32.668940 systemd-resolved[1400]: Clock change detected. Flushing caches. Apr 30 13:50:32.730095 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Apr 30 13:50:32.740532 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Apr 30 13:50:32.807958 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Apr 30 13:50:32.824684 lvm[1489]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Apr 30 13:50:32.859984 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Apr 30 13:50:32.861618 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Apr 30 13:50:32.862421 systemd[1]: Reached target sysinit.target - System Initialization. Apr 30 13:50:32.863554 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Apr 30 13:50:32.864410 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. 
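Editor's note: the "potentially unpredictable interface name" messages show eth0 being picked up by the stock catch-all unit /usr/lib/systemd/network/zz-default.network and configured via DHCP (10.230.17.190/30 from 10.230.17.189, as logged above). A minimal sketch of what such a catch-all .network unit looks like, for reference only (not the literal file shipped on this image):

    [Match]
    # Match any interface not claimed by a more specific .network file.
    Name=*

    [Network]
    DHCP=yes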
Apr 30 13:50:32.865513 systemd[1]: Started logrotate.timer - Daily rotation of log files. Apr 30 13:50:32.866471 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Apr 30 13:50:32.867285 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Apr 30 13:50:32.868117 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Apr 30 13:50:32.868195 systemd[1]: Reached target paths.target - Path Units. Apr 30 13:50:32.868963 systemd[1]: Reached target timers.target - Timer Units. Apr 30 13:50:32.871634 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Apr 30 13:50:32.875478 systemd[1]: Starting docker.socket - Docker Socket for the API... Apr 30 13:50:32.880057 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Apr 30 13:50:32.881123 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Apr 30 13:50:32.881912 systemd[1]: Reached target ssh-access.target - SSH Access Available. Apr 30 13:50:32.891167 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Apr 30 13:50:32.892621 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Apr 30 13:50:32.902689 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Apr 30 13:50:32.904573 systemd[1]: Listening on docker.socket - Docker Socket for the API. Apr 30 13:50:32.905474 systemd[1]: Reached target sockets.target - Socket Units. Apr 30 13:50:32.906129 systemd[1]: Reached target basic.target - Basic System. Apr 30 13:50:32.906966 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Apr 30 13:50:32.907021 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Apr 30 13:50:32.910190 lvm[1494]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Apr 30 13:50:32.910398 systemd[1]: Starting containerd.service - containerd container runtime... Apr 30 13:50:32.922149 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Apr 30 13:50:32.927607 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Apr 30 13:50:32.936374 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Apr 30 13:50:32.941819 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Apr 30 13:50:32.943141 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Apr 30 13:50:32.946730 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Apr 30 13:50:32.952437 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Apr 30 13:50:32.955516 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Apr 30 13:50:32.963623 systemd[1]: Starting systemd-logind.service - User Login Management... Apr 30 13:50:32.966358 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Apr 30 13:50:32.969302 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. 
Apr 30 13:50:32.976230 systemd[1]: Starting update-engine.service - Update Engine... Apr 30 13:50:32.983919 jq[1498]: false Apr 30 13:50:32.981401 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Apr 30 13:50:32.984411 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Apr 30 13:50:32.998020 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Apr 30 13:50:32.998399 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Apr 30 13:50:33.017564 jq[1507]: true Apr 30 13:50:33.032906 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Apr 30 13:50:33.033733 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Apr 30 13:50:33.052843 (ntainerd)[1523]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Apr 30 13:50:33.062643 update_engine[1506]: I20250430 13:50:33.061506 1506 main.cc:92] Flatcar Update Engine starting Apr 30 13:50:33.074327 jq[1515]: true Apr 30 13:50:33.084066 dbus-daemon[1497]: [system] SELinux support is enabled Apr 30 13:50:33.080968 systemd[1]: motdgen.service: Deactivated successfully. Apr 30 13:50:33.107169 extend-filesystems[1499]: Found loop4 Apr 30 13:50:33.107169 extend-filesystems[1499]: Found loop5 Apr 30 13:50:33.107169 extend-filesystems[1499]: Found loop6 Apr 30 13:50:33.107169 extend-filesystems[1499]: Found loop7 Apr 30 13:50:33.107169 extend-filesystems[1499]: Found vda Apr 30 13:50:33.107169 extend-filesystems[1499]: Found vda1 Apr 30 13:50:33.107169 extend-filesystems[1499]: Found vda2 Apr 30 13:50:33.107169 extend-filesystems[1499]: Found vda3 Apr 30 13:50:33.107169 extend-filesystems[1499]: Found usr Apr 30 13:50:33.107169 extend-filesystems[1499]: Found vda4 Apr 30 13:50:33.107169 extend-filesystems[1499]: Found vda6 Apr 30 13:50:33.107169 extend-filesystems[1499]: Found vda7 Apr 30 13:50:33.107169 extend-filesystems[1499]: Found vda9 Apr 30 13:50:33.107169 extend-filesystems[1499]: Checking size of /dev/vda9 Apr 30 13:50:33.170336 update_engine[1506]: I20250430 13:50:33.095400 1506 update_check_scheduler.cc:74] Next update check in 8m28s Apr 30 13:50:33.094783 dbus-daemon[1497]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.2' (uid=244 pid=1453 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Apr 30 13:50:33.081394 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Apr 30 13:50:33.111555 dbus-daemon[1497]: [system] Successfully activated service 'org.freedesktop.systemd1' Apr 30 13:50:33.085195 systemd[1]: Started dbus.service - D-Bus System Message Bus. Apr 30 13:50:33.108792 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Apr 30 13:50:33.108855 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Apr 30 13:50:33.111842 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Apr 30 13:50:33.111878 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. 
Apr 30 13:50:33.113458 systemd[1]: Started update-engine.service - Update Engine. Apr 30 13:50:33.133533 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Apr 30 13:50:33.148565 systemd[1]: Started locksmithd.service - Cluster reboot manager. Apr 30 13:50:33.198046 extend-filesystems[1499]: Resized partition /dev/vda9 Apr 30 13:50:33.208287 extend-filesystems[1549]: resize2fs 1.47.1 (20-May-2024) Apr 30 13:50:33.233772 kernel: EXT4-fs (vda9): resizing filesystem from 1617920 to 15121403 blocks Apr 30 13:50:33.295331 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (1456) Apr 30 13:50:33.343822 systemd-logind[1505]: Watching system buttons on /dev/input/event2 (Power Button) Apr 30 13:50:33.343898 systemd-logind[1505]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Apr 30 13:50:33.358233 bash[1550]: Updated "/home/core/.ssh/authorized_keys" Apr 30 13:50:33.364593 systemd-logind[1505]: New seat seat0. Apr 30 13:50:33.368348 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Apr 30 13:50:33.378469 systemd[1]: Starting sshkeys.service... Apr 30 13:50:33.379217 systemd[1]: Started systemd-logind.service - User Login Management. Apr 30 13:50:33.413219 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Apr 30 13:50:33.422717 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Apr 30 13:50:33.580278 kernel: EXT4-fs (vda9): resized filesystem to 15121403 Apr 30 13:50:33.587762 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Apr 30 13:50:33.590825 dbus-daemon[1497]: [system] Successfully activated service 'org.freedesktop.hostname1' Apr 30 13:50:33.593664 dbus-daemon[1497]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.6' (uid=0 pid=1532 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Apr 30 13:50:33.610852 systemd[1]: Starting polkit.service - Authorization Manager... Apr 30 13:50:33.617118 extend-filesystems[1549]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Apr 30 13:50:33.617118 extend-filesystems[1549]: old_desc_blocks = 1, new_desc_blocks = 8 Apr 30 13:50:33.617118 extend-filesystems[1549]: The filesystem on /dev/vda9 is now 15121403 (4k) blocks long. Apr 30 13:50:33.629609 extend-filesystems[1499]: Resized filesystem in /dev/vda9 Apr 30 13:50:33.619502 systemd[1]: extend-filesystems.service: Deactivated successfully. Apr 30 13:50:33.619879 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Apr 30 13:50:33.632719 polkitd[1565]: Started polkitd version 121 Apr 30 13:50:33.635872 containerd[1523]: time="2025-04-30T13:50:33.635728213Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 Apr 30 13:50:33.645317 polkitd[1565]: Loading rules from directory /etc/polkit-1/rules.d Apr 30 13:50:33.645443 polkitd[1565]: Loading rules from directory /usr/share/polkit-1/rules.d Apr 30 13:50:33.647786 polkitd[1565]: Finished loading, compiling and executing 2 rules Apr 30 13:50:33.653131 dbus-daemon[1497]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Apr 30 13:50:33.653626 systemd[1]: Started polkit.service - Authorization Manager. 
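Editor's note: extend-filesystems.service grows the root ext4 filesystem on /dev/vda9 online from 1617920 to 15121403 4k blocks, as shown in the resize2fs 1.47.1 output above. Done by hand, the equivalent step would be roughly the following (device name taken from the log; resize2fs grows a mounted ext4 filesystem to fill its partition when no size argument is given):

    resize2fs /dev/vda9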
Apr 30 13:50:33.653781 polkitd[1565]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Apr 30 13:50:33.711712 systemd-hostnamed[1532]: Hostname set to (static) Apr 30 13:50:33.717887 locksmithd[1537]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Apr 30 13:50:33.719912 containerd[1523]: time="2025-04-30T13:50:33.713904037Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Apr 30 13:50:33.719912 containerd[1523]: time="2025-04-30T13:50:33.718457414Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.88-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Apr 30 13:50:33.719912 containerd[1523]: time="2025-04-30T13:50:33.718510576Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Apr 30 13:50:33.719912 containerd[1523]: time="2025-04-30T13:50:33.718539722Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Apr 30 13:50:33.719912 containerd[1523]: time="2025-04-30T13:50:33.719435836Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Apr 30 13:50:33.719912 containerd[1523]: time="2025-04-30T13:50:33.719474052Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Apr 30 13:50:33.719912 containerd[1523]: time="2025-04-30T13:50:33.719605652Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Apr 30 13:50:33.719912 containerd[1523]: time="2025-04-30T13:50:33.719628660Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Apr 30 13:50:33.720295 containerd[1523]: time="2025-04-30T13:50:33.720039607Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Apr 30 13:50:33.720295 containerd[1523]: time="2025-04-30T13:50:33.720064677Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Apr 30 13:50:33.720295 containerd[1523]: time="2025-04-30T13:50:33.720087135Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Apr 30 13:50:33.720295 containerd[1523]: time="2025-04-30T13:50:33.720102997Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Apr 30 13:50:33.724315 containerd[1523]: time="2025-04-30T13:50:33.724274418Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Apr 30 13:50:33.724803 containerd[1523]: time="2025-04-30T13:50:33.724756398Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Apr 30 13:50:33.724984 containerd[1523]: time="2025-04-30T13:50:33.724943981Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Apr 30 13:50:33.724984 containerd[1523]: time="2025-04-30T13:50:33.724979004Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Apr 30 13:50:33.725193 containerd[1523]: time="2025-04-30T13:50:33.725156737Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Apr 30 13:50:33.725329 containerd[1523]: time="2025-04-30T13:50:33.725295777Z" level=info msg="metadata content store policy set" policy=shared Apr 30 13:50:33.734376 containerd[1523]: time="2025-04-30T13:50:33.734335255Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Apr 30 13:50:33.734567 containerd[1523]: time="2025-04-30T13:50:33.734425876Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Apr 30 13:50:33.734567 containerd[1523]: time="2025-04-30T13:50:33.734459799Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Apr 30 13:50:33.734567 containerd[1523]: time="2025-04-30T13:50:33.734491086Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Apr 30 13:50:33.734567 containerd[1523]: time="2025-04-30T13:50:33.734525615Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Apr 30 13:50:33.734828 containerd[1523]: time="2025-04-30T13:50:33.734795128Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Apr 30 13:50:33.735208 containerd[1523]: time="2025-04-30T13:50:33.735181375Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Apr 30 13:50:33.735441 containerd[1523]: time="2025-04-30T13:50:33.735414616Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Apr 30 13:50:33.735509 containerd[1523]: time="2025-04-30T13:50:33.735448124Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Apr 30 13:50:33.735509 containerd[1523]: time="2025-04-30T13:50:33.735472460Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Apr 30 13:50:33.735509 containerd[1523]: time="2025-04-30T13:50:33.735495058Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Apr 30 13:50:33.735609 containerd[1523]: time="2025-04-30T13:50:33.735517504Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Apr 30 13:50:33.735609 containerd[1523]: time="2025-04-30T13:50:33.735538026Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Apr 30 13:50:33.735609 containerd[1523]: time="2025-04-30T13:50:33.735560548Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Apr 30 13:50:33.735609 containerd[1523]: time="2025-04-30T13:50:33.735596611Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." 
type=io.containerd.service.v1 Apr 30 13:50:33.735773 containerd[1523]: time="2025-04-30T13:50:33.735627364Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Apr 30 13:50:33.735773 containerd[1523]: time="2025-04-30T13:50:33.735650506Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Apr 30 13:50:33.735773 containerd[1523]: time="2025-04-30T13:50:33.735668611Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Apr 30 13:50:33.735773 containerd[1523]: time="2025-04-30T13:50:33.735699628Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Apr 30 13:50:33.735773 containerd[1523]: time="2025-04-30T13:50:33.735724144Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Apr 30 13:50:33.735773 containerd[1523]: time="2025-04-30T13:50:33.735744486Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Apr 30 13:50:33.735773 containerd[1523]: time="2025-04-30T13:50:33.735770750Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Apr 30 13:50:33.736051 containerd[1523]: time="2025-04-30T13:50:33.735792663Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Apr 30 13:50:33.736051 containerd[1523]: time="2025-04-30T13:50:33.735813494Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Apr 30 13:50:33.736051 containerd[1523]: time="2025-04-30T13:50:33.735831682Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Apr 30 13:50:33.736051 containerd[1523]: time="2025-04-30T13:50:33.735851545Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Apr 30 13:50:33.736051 containerd[1523]: time="2025-04-30T13:50:33.735871226Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Apr 30 13:50:33.736051 containerd[1523]: time="2025-04-30T13:50:33.735894527Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Apr 30 13:50:33.736051 containerd[1523]: time="2025-04-30T13:50:33.735944176Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Apr 30 13:50:33.736051 containerd[1523]: time="2025-04-30T13:50:33.735968658Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Apr 30 13:50:33.736051 containerd[1523]: time="2025-04-30T13:50:33.736017270Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Apr 30 13:50:33.736051 containerd[1523]: time="2025-04-30T13:50:33.736042966Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Apr 30 13:50:33.736706 containerd[1523]: time="2025-04-30T13:50:33.736083677Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Apr 30 13:50:33.736706 containerd[1523]: time="2025-04-30T13:50:33.736109421Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." 
type=io.containerd.grpc.v1 Apr 30 13:50:33.736706 containerd[1523]: time="2025-04-30T13:50:33.736127637Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Apr 30 13:50:33.736706 containerd[1523]: time="2025-04-30T13:50:33.736219452Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Apr 30 13:50:33.737453 containerd[1523]: time="2025-04-30T13:50:33.736977968Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Apr 30 13:50:33.737453 containerd[1523]: time="2025-04-30T13:50:33.737023607Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Apr 30 13:50:33.737453 containerd[1523]: time="2025-04-30T13:50:33.737049774Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Apr 30 13:50:33.737453 containerd[1523]: time="2025-04-30T13:50:33.737067063Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Apr 30 13:50:33.737453 containerd[1523]: time="2025-04-30T13:50:33.737089237Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Apr 30 13:50:33.737453 containerd[1523]: time="2025-04-30T13:50:33.737117748Z" level=info msg="NRI interface is disabled by configuration." Apr 30 13:50:33.737453 containerd[1523]: time="2025-04-30T13:50:33.737145030Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Apr 30 13:50:33.738494 containerd[1523]: time="2025-04-30T13:50:33.738047931Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false 
X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Apr 30 13:50:33.738494 containerd[1523]: time="2025-04-30T13:50:33.738153265Z" level=info msg="Connect containerd service" Apr 30 13:50:33.738494 containerd[1523]: time="2025-04-30T13:50:33.738213054Z" level=info msg="using legacy CRI server" Apr 30 13:50:33.738494 containerd[1523]: time="2025-04-30T13:50:33.738239789Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Apr 30 13:50:33.739643 containerd[1523]: time="2025-04-30T13:50:33.739013114Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Apr 30 13:50:33.740278 containerd[1523]: time="2025-04-30T13:50:33.740219518Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Apr 30 13:50:33.740624 containerd[1523]: time="2025-04-30T13:50:33.740558549Z" level=info msg="Start subscribing containerd event" Apr 30 13:50:33.741365 containerd[1523]: time="2025-04-30T13:50:33.740727462Z" level=info msg="Start recovering state" Apr 30 13:50:33.741365 containerd[1523]: time="2025-04-30T13:50:33.740871051Z" level=info msg="Start event monitor" Apr 30 13:50:33.741365 containerd[1523]: time="2025-04-30T13:50:33.740907303Z" level=info msg="Start snapshots syncer" Apr 30 13:50:33.741365 containerd[1523]: time="2025-04-30T13:50:33.740937257Z" level=info msg="Start cni network conf syncer for default" Apr 30 13:50:33.741365 containerd[1523]: time="2025-04-30T13:50:33.740952330Z" level=info msg="Start streaming server" Apr 30 13:50:33.742206 containerd[1523]: time="2025-04-30T13:50:33.742180121Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Apr 30 13:50:33.742400 containerd[1523]: time="2025-04-30T13:50:33.742376331Z" level=info msg=serving... address=/run/containerd/containerd.sock Apr 30 13:50:33.743631 containerd[1523]: time="2025-04-30T13:50:33.743576190Z" level=info msg="containerd successfully booted in 0.111410s" Apr 30 13:50:33.743815 systemd[1]: Started containerd.service - containerd container runtime. Apr 30 13:50:33.924999 sshd_keygen[1526]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Apr 30 13:50:33.956221 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Apr 30 13:50:33.966764 systemd[1]: Starting issuegen.service - Generate /run/issue... Apr 30 13:50:33.989680 systemd[1]: issuegen.service: Deactivated successfully. Apr 30 13:50:33.990145 systemd[1]: Finished issuegen.service - Generate /run/issue. Apr 30 13:50:33.997658 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... 
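Editor's note: the CRI plugin configuration dumped above shows the runc runtime handled by io.containerd.runc.v2 with SystemdCgroup:true and the sandbox image registry.k8s.io/pause:3.8. Expressed as a containerd config.toml fragment, that corresponds roughly to the following sketch (not the actual file on this host):

    version = 2

    [plugins."io.containerd.grpc.v1.cri"]
      sandbox_image = "registry.k8s.io/pause:3.8"

    [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
      runtime_type = "io.containerd.runc.v2"

    [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
      SystemdCgroup = true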
Apr 30 13:50:34.017258 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Apr 30 13:50:34.024838 systemd[1]: Started getty@tty1.service - Getty on tty1. Apr 30 13:50:34.035812 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Apr 30 13:50:34.040124 systemd[1]: Reached target getty.target - Login Prompts. Apr 30 13:50:34.059001 systemd-networkd[1453]: eth0: Gained IPv6LL Apr 30 13:50:34.063959 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Apr 30 13:50:34.066636 systemd[1]: Reached target network-online.target - Network is Online. Apr 30 13:50:34.075692 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 13:50:34.081514 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Apr 30 13:50:34.121372 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Apr 30 13:50:34.951734 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 30 13:50:34.964910 (kubelet)[1614]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 30 13:50:34.986722 systemd-networkd[1453]: eth0: Ignoring DHCPv6 address 2a02:1348:179:846f:24:19ff:fee6:11be/128 (valid for 59min 59s, preferred for 59min 59s) which conflicts with 2a02:1348:179:846f:24:19ff:fee6:11be/64 assigned by NDisc. Apr 30 13:50:34.986736 systemd-networkd[1453]: eth0: Hint: use IPv6Token= setting to change the address generated by NDisc or set UseAutonomousPrefix=no. Apr 30 13:50:35.589020 kubelet[1614]: E0430 13:50:35.588927 1614 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 30 13:50:35.591075 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 30 13:50:35.591383 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 30 13:50:35.592399 systemd[1]: kubelet.service: Consumed 1.024s CPU time, 238.9M memory peak. Apr 30 13:50:36.726278 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Apr 30 13:50:36.739764 systemd[1]: Started sshd@0-10.230.17.190:22-139.178.89.65:37016.service - OpenSSH per-connection server daemon (139.178.89.65:37016). Apr 30 13:50:37.646268 sshd[1626]: Accepted publickey for core from 139.178.89.65 port 37016 ssh2: RSA SHA256:663EzGq9FXlnfWI8EpcEWCsUd/8VqK2+j0seg204/ow Apr 30 13:50:37.649448 sshd-session[1626]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 13:50:37.669055 systemd-logind[1505]: New session 1 of user core. Apr 30 13:50:37.671363 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Apr 30 13:50:37.685722 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Apr 30 13:50:37.706680 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Apr 30 13:50:37.722014 systemd[1]: Starting user@500.service - User Manager for UID 500... Apr 30 13:50:37.728490 (systemd)[1630]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Apr 30 13:50:37.733080 systemd-logind[1505]: New session c1 of user core. Apr 30 13:50:37.924959 systemd[1630]: Queued start job for default target default.target. 
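Editor's note: the kubelet exits with status 1 here because /var/lib/kubelet/config.yaml does not exist yet; that file is normally written once the node is configured (for example by kubeadm), which matches the kubelet coming up cleanly later in this log. For reference, a minimal KubeletConfiguration of the kind expected at that path looks roughly like this (values illustrative, not taken from this host):

    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    # systemd cgroup driver, matching SystemdCgroup = true in the containerd runc options above.
    cgroupDriver: systemd
    containerRuntimeEndpoint: unix:///run/containerd/containerd.sock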
Apr 30 13:50:37.934115 systemd[1630]: Created slice app.slice - User Application Slice. Apr 30 13:50:37.934171 systemd[1630]: Reached target paths.target - Paths. Apr 30 13:50:37.934280 systemd[1630]: Reached target timers.target - Timers. Apr 30 13:50:37.936408 systemd[1630]: Starting dbus.socket - D-Bus User Message Bus Socket... Apr 30 13:50:37.957352 systemd[1630]: Listening on dbus.socket - D-Bus User Message Bus Socket. Apr 30 13:50:37.957540 systemd[1630]: Reached target sockets.target - Sockets. Apr 30 13:50:37.957627 systemd[1630]: Reached target basic.target - Basic System. Apr 30 13:50:37.957708 systemd[1630]: Reached target default.target - Main User Target. Apr 30 13:50:37.957775 systemd[1630]: Startup finished in 214ms. Apr 30 13:50:37.957804 systemd[1]: Started user@500.service - User Manager for UID 500. Apr 30 13:50:37.979728 systemd[1]: Started session-1.scope - Session 1 of User core. Apr 30 13:50:38.621091 systemd[1]: Started sshd@1-10.230.17.190:22-139.178.89.65:60640.service - OpenSSH per-connection server daemon (139.178.89.65:60640). Apr 30 13:50:39.124944 login[1595]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Apr 30 13:50:39.129814 login[1594]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Apr 30 13:50:39.135537 systemd-logind[1505]: New session 2 of user core. Apr 30 13:50:39.146598 systemd[1]: Started session-2.scope - Session 2 of User core. Apr 30 13:50:39.151007 systemd-logind[1505]: New session 3 of user core. Apr 30 13:50:39.157541 systemd[1]: Started session-3.scope - Session 3 of User core. Apr 30 13:50:39.532728 sshd[1641]: Accepted publickey for core from 139.178.89.65 port 60640 ssh2: RSA SHA256:663EzGq9FXlnfWI8EpcEWCsUd/8VqK2+j0seg204/ow Apr 30 13:50:39.535945 sshd-session[1641]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 13:50:39.544495 systemd-logind[1505]: New session 4 of user core. Apr 30 13:50:39.556619 systemd[1]: Started session-4.scope - Session 4 of User core. Apr 30 13:50:40.145003 coreos-metadata[1496]: Apr 30 13:50:40.144 WARN failed to locate config-drive, using the metadata service API instead Apr 30 13:50:40.154280 sshd[1669]: Connection closed by 139.178.89.65 port 60640 Apr 30 13:50:40.153927 sshd-session[1641]: pam_unix(sshd:session): session closed for user core Apr 30 13:50:40.159654 systemd[1]: sshd@1-10.230.17.190:22-139.178.89.65:60640.service: Deactivated successfully. Apr 30 13:50:40.163223 systemd[1]: session-4.scope: Deactivated successfully. Apr 30 13:50:40.164869 systemd-logind[1505]: Session 4 logged out. Waiting for processes to exit. Apr 30 13:50:40.166685 systemd-logind[1505]: Removed session 4. 
Apr 30 13:50:40.183187 coreos-metadata[1496]: Apr 30 13:50:40.183 INFO Fetching http://169.254.169.254/openstack/2012-08-10/meta_data.json: Attempt #1 Apr 30 13:50:40.188871 coreos-metadata[1496]: Apr 30 13:50:40.188 INFO Fetch failed with 404: resource not found Apr 30 13:50:40.188871 coreos-metadata[1496]: Apr 30 13:50:40.188 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1 Apr 30 13:50:40.189474 coreos-metadata[1496]: Apr 30 13:50:40.189 INFO Fetch successful Apr 30 13:50:40.189642 coreos-metadata[1496]: Apr 30 13:50:40.189 INFO Fetching http://169.254.169.254/latest/meta-data/instance-id: Attempt #1 Apr 30 13:50:40.244549 coreos-metadata[1496]: Apr 30 13:50:40.244 INFO Fetch successful Apr 30 13:50:40.244675 coreos-metadata[1496]: Apr 30 13:50:40.244 INFO Fetching http://169.254.169.254/latest/meta-data/instance-type: Attempt #1 Apr 30 13:50:40.256099 coreos-metadata[1496]: Apr 30 13:50:40.256 INFO Fetch successful Apr 30 13:50:40.256205 coreos-metadata[1496]: Apr 30 13:50:40.256 INFO Fetching http://169.254.169.254/latest/meta-data/local-ipv4: Attempt #1 Apr 30 13:50:40.275728 coreos-metadata[1496]: Apr 30 13:50:40.275 INFO Fetch successful Apr 30 13:50:40.275881 coreos-metadata[1496]: Apr 30 13:50:40.275 INFO Fetching http://169.254.169.254/latest/meta-data/public-ipv4: Attempt #1 Apr 30 13:50:40.296293 coreos-metadata[1496]: Apr 30 13:50:40.295 INFO Fetch successful Apr 30 13:50:40.324754 systemd[1]: Started sshd@2-10.230.17.190:22-139.178.89.65:60650.service - OpenSSH per-connection server daemon (139.178.89.65:60650). Apr 30 13:50:40.354285 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Apr 30 13:50:40.355261 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Apr 30 13:50:40.638272 coreos-metadata[1558]: Apr 30 13:50:40.637 WARN failed to locate config-drive, using the metadata service API instead Apr 30 13:50:40.659368 coreos-metadata[1558]: Apr 30 13:50:40.659 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys: Attempt #1 Apr 30 13:50:40.688096 coreos-metadata[1558]: Apr 30 13:50:40.687 INFO Fetch successful Apr 30 13:50:40.688346 coreos-metadata[1558]: Apr 30 13:50:40.688 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key: Attempt #1 Apr 30 13:50:40.715548 coreos-metadata[1558]: Apr 30 13:50:40.715 INFO Fetch successful Apr 30 13:50:40.717803 unknown[1558]: wrote ssh authorized keys file for user: core Apr 30 13:50:40.746474 update-ssh-keys[1686]: Updated "/home/core/.ssh/authorized_keys" Apr 30 13:50:40.747448 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Apr 30 13:50:40.749983 systemd[1]: Finished sshkeys.service. Apr 30 13:50:40.752989 systemd[1]: Reached target multi-user.target - Multi-User System. Apr 30 13:50:40.753212 systemd[1]: Startup finished in 1.323s (kernel) + 13.750s (initrd) + 12.060s (userspace) = 27.133s. Apr 30 13:50:41.227682 sshd[1679]: Accepted publickey for core from 139.178.89.65 port 60650 ssh2: RSA SHA256:663EzGq9FXlnfWI8EpcEWCsUd/8VqK2+j0seg204/ow Apr 30 13:50:41.229791 sshd-session[1679]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 13:50:41.241458 systemd-logind[1505]: New session 5 of user core. Apr 30 13:50:41.248484 systemd[1]: Started session-5.scope - Session 5 of User core. 
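Editor's note: coreos-metadata fails to find an OpenStack config drive and falls back to the EC2-style metadata service; the endpoints it walks are visible above and can be queried by hand from the instance, e.g. (URLs taken from the log):

    curl -s http://169.254.169.254/latest/meta-data/hostname
    curl -s http://169.254.169.254/latest/meta-data/public-ipv4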
Apr 30 13:50:41.856339 sshd[1690]: Connection closed by 139.178.89.65 port 60650 Apr 30 13:50:41.857340 sshd-session[1679]: pam_unix(sshd:session): session closed for user core Apr 30 13:50:41.862336 systemd[1]: sshd@2-10.230.17.190:22-139.178.89.65:60650.service: Deactivated successfully. Apr 30 13:50:41.864985 systemd[1]: session-5.scope: Deactivated successfully. Apr 30 13:50:41.868142 systemd-logind[1505]: Session 5 logged out. Waiting for processes to exit. Apr 30 13:50:41.869796 systemd-logind[1505]: Removed session 5. Apr 30 13:50:45.728560 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Apr 30 13:50:45.742554 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 13:50:45.897142 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 30 13:50:45.913322 (kubelet)[1703]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 30 13:50:45.970040 kubelet[1703]: E0430 13:50:45.969937 1703 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 30 13:50:45.974995 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 30 13:50:45.975306 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 30 13:50:45.976189 systemd[1]: kubelet.service: Consumed 200ms CPU time, 97.5M memory peak. Apr 30 13:50:52.014691 systemd[1]: Started sshd@3-10.230.17.190:22-139.178.89.65:39540.service - OpenSSH per-connection server daemon (139.178.89.65:39540). Apr 30 13:50:52.910091 sshd[1711]: Accepted publickey for core from 139.178.89.65 port 39540 ssh2: RSA SHA256:663EzGq9FXlnfWI8EpcEWCsUd/8VqK2+j0seg204/ow Apr 30 13:50:52.912178 sshd-session[1711]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 13:50:52.921956 systemd-logind[1505]: New session 6 of user core. Apr 30 13:50:52.923534 systemd[1]: Started session-6.scope - Session 6 of User core. Apr 30 13:50:53.530823 sshd[1713]: Connection closed by 139.178.89.65 port 39540 Apr 30 13:50:53.531856 sshd-session[1711]: pam_unix(sshd:session): session closed for user core Apr 30 13:50:53.536289 systemd-logind[1505]: Session 6 logged out. Waiting for processes to exit. Apr 30 13:50:53.536650 systemd[1]: sshd@3-10.230.17.190:22-139.178.89.65:39540.service: Deactivated successfully. Apr 30 13:50:53.539145 systemd[1]: session-6.scope: Deactivated successfully. Apr 30 13:50:53.541329 systemd-logind[1505]: Removed session 6. Apr 30 13:50:53.690602 systemd[1]: Started sshd@4-10.230.17.190:22-139.178.89.65:39548.service - OpenSSH per-connection server daemon (139.178.89.65:39548). Apr 30 13:50:54.584931 sshd[1719]: Accepted publickey for core from 139.178.89.65 port 39548 ssh2: RSA SHA256:663EzGq9FXlnfWI8EpcEWCsUd/8VqK2+j0seg204/ow Apr 30 13:50:54.586918 sshd-session[1719]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 13:50:54.594504 systemd-logind[1505]: New session 7 of user core. Apr 30 13:50:54.601576 systemd[1]: Started session-7.scope - Session 7 of User core. 
Apr 30 13:50:55.201389 sshd[1721]: Connection closed by 139.178.89.65 port 39548 Apr 30 13:50:55.202433 sshd-session[1719]: pam_unix(sshd:session): session closed for user core Apr 30 13:50:55.207817 systemd[1]: sshd@4-10.230.17.190:22-139.178.89.65:39548.service: Deactivated successfully. Apr 30 13:50:55.210517 systemd[1]: session-7.scope: Deactivated successfully. Apr 30 13:50:55.211966 systemd-logind[1505]: Session 7 logged out. Waiting for processes to exit. Apr 30 13:50:55.213700 systemd-logind[1505]: Removed session 7. Apr 30 13:50:55.363652 systemd[1]: Started sshd@5-10.230.17.190:22-139.178.89.65:39560.service - OpenSSH per-connection server daemon (139.178.89.65:39560). Apr 30 13:50:55.978283 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Apr 30 13:50:55.985538 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 13:50:56.141536 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 30 13:50:56.143744 (kubelet)[1737]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Apr 30 13:50:56.217565 kubelet[1737]: E0430 13:50:56.217442 1737 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Apr 30 13:50:56.220083 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Apr 30 13:50:56.220368 systemd[1]: kubelet.service: Failed with result 'exit-code'. Apr 30 13:50:56.221399 systemd[1]: kubelet.service: Consumed 201ms CPU time, 94.8M memory peak. Apr 30 13:50:56.267485 sshd[1727]: Accepted publickey for core from 139.178.89.65 port 39560 ssh2: RSA SHA256:663EzGq9FXlnfWI8EpcEWCsUd/8VqK2+j0seg204/ow Apr 30 13:50:56.269528 sshd-session[1727]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 13:50:56.277531 systemd-logind[1505]: New session 8 of user core. Apr 30 13:50:56.288656 systemd[1]: Started session-8.scope - Session 8 of User core. Apr 30 13:50:56.893739 sshd[1744]: Connection closed by 139.178.89.65 port 39560 Apr 30 13:50:56.894690 sshd-session[1727]: pam_unix(sshd:session): session closed for user core Apr 30 13:50:56.899616 systemd-logind[1505]: Session 8 logged out. Waiting for processes to exit. Apr 30 13:50:56.900941 systemd[1]: sshd@5-10.230.17.190:22-139.178.89.65:39560.service: Deactivated successfully. Apr 30 13:50:56.903411 systemd[1]: session-8.scope: Deactivated successfully. Apr 30 13:50:56.904886 systemd-logind[1505]: Removed session 8. Apr 30 13:50:57.050644 systemd[1]: Started sshd@6-10.230.17.190:22-139.178.89.65:59194.service - OpenSSH per-connection server daemon (139.178.89.65:59194). Apr 30 13:50:57.937550 sshd[1750]: Accepted publickey for core from 139.178.89.65 port 59194 ssh2: RSA SHA256:663EzGq9FXlnfWI8EpcEWCsUd/8VqK2+j0seg204/ow Apr 30 13:50:57.939564 sshd-session[1750]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 13:50:57.947542 systemd-logind[1505]: New session 9 of user core. Apr 30 13:50:57.961465 systemd[1]: Started session-9.scope - Session 9 of User core. 
Apr 30 13:50:58.424001 sudo[1753]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Apr 30 13:50:58.424488 sudo[1753]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 30 13:50:58.439418 sudo[1753]: pam_unix(sudo:session): session closed for user root Apr 30 13:50:58.582460 sshd[1752]: Connection closed by 139.178.89.65 port 59194 Apr 30 13:50:58.585590 sshd-session[1750]: pam_unix(sshd:session): session closed for user core Apr 30 13:50:58.590621 systemd[1]: sshd@6-10.230.17.190:22-139.178.89.65:59194.service: Deactivated successfully. Apr 30 13:50:58.592947 systemd[1]: session-9.scope: Deactivated successfully. Apr 30 13:50:58.595149 systemd-logind[1505]: Session 9 logged out. Waiting for processes to exit. Apr 30 13:50:58.596663 systemd-logind[1505]: Removed session 9. Apr 30 13:50:58.747592 systemd[1]: Started sshd@7-10.230.17.190:22-139.178.89.65:59198.service - OpenSSH per-connection server daemon (139.178.89.65:59198). Apr 30 13:50:59.650269 sshd[1759]: Accepted publickey for core from 139.178.89.65 port 59198 ssh2: RSA SHA256:663EzGq9FXlnfWI8EpcEWCsUd/8VqK2+j0seg204/ow Apr 30 13:50:59.652658 sshd-session[1759]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 13:50:59.662919 systemd-logind[1505]: New session 10 of user core. Apr 30 13:50:59.670491 systemd[1]: Started session-10.scope - Session 10 of User core. Apr 30 13:51:00.133057 sudo[1763]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Apr 30 13:51:00.133601 sudo[1763]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 30 13:51:00.140149 sudo[1763]: pam_unix(sudo:session): session closed for user root Apr 30 13:51:00.149116 sudo[1762]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Apr 30 13:51:00.149624 sudo[1762]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 30 13:51:00.169764 systemd[1]: Starting audit-rules.service - Load Audit Rules... Apr 30 13:51:00.221142 augenrules[1785]: No rules Apr 30 13:51:00.222067 systemd[1]: audit-rules.service: Deactivated successfully. Apr 30 13:51:00.222466 systemd[1]: Finished audit-rules.service - Load Audit Rules. Apr 30 13:51:00.224022 sudo[1762]: pam_unix(sudo:session): session closed for user root Apr 30 13:51:00.367793 sshd[1761]: Connection closed by 139.178.89.65 port 59198 Apr 30 13:51:00.368792 sshd-session[1759]: pam_unix(sshd:session): session closed for user core Apr 30 13:51:00.373436 systemd[1]: sshd@7-10.230.17.190:22-139.178.89.65:59198.service: Deactivated successfully. Apr 30 13:51:00.375962 systemd[1]: session-10.scope: Deactivated successfully. Apr 30 13:51:00.377003 systemd-logind[1505]: Session 10 logged out. Waiting for processes to exit. Apr 30 13:51:00.378548 systemd-logind[1505]: Removed session 10. Apr 30 13:51:00.527657 systemd[1]: Started sshd@8-10.230.17.190:22-139.178.89.65:59214.service - OpenSSH per-connection server daemon (139.178.89.65:59214). Apr 30 13:51:01.415582 sshd[1794]: Accepted publickey for core from 139.178.89.65 port 59214 ssh2: RSA SHA256:663EzGq9FXlnfWI8EpcEWCsUd/8VqK2+j0seg204/ow Apr 30 13:51:01.417630 sshd-session[1794]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Apr 30 13:51:01.425641 systemd-logind[1505]: New session 11 of user core. Apr 30 13:51:01.432580 systemd[1]: Started session-11.scope - Session 11 of User core. 
Apr 30 13:51:01.901973 sudo[1797]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Apr 30 13:51:01.902470 sudo[1797]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Apr 30 13:51:02.617291 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 30 13:51:02.617833 systemd[1]: kubelet.service: Consumed 201ms CPU time, 94.8M memory peak. Apr 30 13:51:02.629596 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 13:51:02.678872 systemd[1]: Reload requested from client PID 1829 ('systemctl') (unit session-11.scope)... Apr 30 13:51:02.678918 systemd[1]: Reloading... Apr 30 13:51:02.846828 zram_generator::config[1878]: No configuration found. Apr 30 13:51:03.035454 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Apr 30 13:51:03.195042 systemd[1]: Reloading finished in 515 ms. Apr 30 13:51:03.264487 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 30 13:51:03.268371 (kubelet)[1932]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Apr 30 13:51:03.276478 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 13:51:03.278539 systemd[1]: kubelet.service: Deactivated successfully. Apr 30 13:51:03.278942 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Apr 30 13:51:03.279041 systemd[1]: kubelet.service: Consumed 134ms CPU time, 86.9M memory peak. Apr 30 13:51:03.287649 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Apr 30 13:51:03.432499 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Apr 30 13:51:03.442867 (kubelet)[1949]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Apr 30 13:51:03.547748 kubelet[1949]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Apr 30 13:51:03.547748 kubelet[1949]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Apr 30 13:51:03.547748 kubelet[1949]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Apr 30 13:51:03.548709 kubelet[1949]: I0430 13:51:03.548628 1949 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Apr 30 13:51:03.948663 kubelet[1949]: I0430 13:51:03.948513 1949 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" Apr 30 13:51:03.948663 kubelet[1949]: I0430 13:51:03.948558 1949 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Apr 30 13:51:03.948933 kubelet[1949]: I0430 13:51:03.948885 1949 server.go:929] "Client rotation is on, will bootstrap in background" Apr 30 13:51:03.977289 kubelet[1949]: I0430 13:51:03.976902 1949 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Apr 30 13:51:03.992363 kubelet[1949]: E0430 13:51:03.992309 1949 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Apr 30 13:51:03.992363 kubelet[1949]: I0430 13:51:03.992363 1949 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Apr 30 13:51:03.999544 kubelet[1949]: I0430 13:51:03.999289 1949 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Apr 30 13:51:04.000830 kubelet[1949]: I0430 13:51:04.000665 1949 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Apr 30 13:51:04.001052 kubelet[1949]: I0430 13:51:04.000983 1949 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Apr 30 13:51:04.001770 kubelet[1949]: I0430 13:51:04.001028 1949 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"10.230.17.190","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Apr 30 13:51:04.001770 kubelet[1949]: I0430 13:51:04.001360 1949 topology_manager.go:138] "Creating 
topology manager with none policy" Apr 30 13:51:04.001770 kubelet[1949]: I0430 13:51:04.001379 1949 container_manager_linux.go:300] "Creating device plugin manager" Apr 30 13:51:04.001770 kubelet[1949]: I0430 13:51:04.001562 1949 state_mem.go:36] "Initialized new in-memory state store" Apr 30 13:51:04.004596 kubelet[1949]: I0430 13:51:04.003028 1949 kubelet.go:408] "Attempting to sync node with API server" Apr 30 13:51:04.004596 kubelet[1949]: I0430 13:51:04.003063 1949 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Apr 30 13:51:04.004596 kubelet[1949]: I0430 13:51:04.003124 1949 kubelet.go:314] "Adding apiserver pod source" Apr 30 13:51:04.004596 kubelet[1949]: I0430 13:51:04.003163 1949 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Apr 30 13:51:04.004596 kubelet[1949]: E0430 13:51:04.003755 1949 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Apr 30 13:51:04.004596 kubelet[1949]: E0430 13:51:04.003827 1949 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Apr 30 13:51:04.009302 kubelet[1949]: I0430 13:51:04.009272 1949 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Apr 30 13:51:04.011641 kubelet[1949]: I0430 13:51:04.011598 1949 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Apr 30 13:51:04.012520 kubelet[1949]: W0430 13:51:04.012198 1949 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes "10.230.17.190" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Apr 30 13:51:04.012520 kubelet[1949]: E0430 13:51:04.012318 1949 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes \"10.230.17.190\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" Apr 30 13:51:04.012520 kubelet[1949]: W0430 13:51:04.012441 1949 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Apr 30 13:51:04.012520 kubelet[1949]: E0430 13:51:04.012468 1949 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" Apr 30 13:51:04.012791 kubelet[1949]: W0430 13:51:04.012768 1949 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
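The container_manager_linux.go:269 entry above dumps the full NodeConfig as one JSON blob, including the hard eviction thresholds. A small sketch that reads those thresholds back out of that JSON (excerpt copied verbatim from the entry above) makes the dense line easier to inspect:

```python
import json

# Trimmed excerpt of the "nodeConfig=" JSON from the container_manager_linux.go:269 entry.
node_config_json = """
{"HardEvictionThresholds":[
  {"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},
  {"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},
  {"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},
  {"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},
  {"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}
]}
"""

for t in json.loads(node_config_json)["HardEvictionThresholds"]:
    # A threshold is expressed either as an absolute quantity or as a percentage.
    value = t["Value"]["Quantity"] or f'{t["Value"]["Percentage"] * 100:g}%'
    print(f'{t["Signal"]} {t["Operator"]} {value}')
```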
Apr 30 13:51:04.013890 kubelet[1949]: I0430 13:51:04.013868 1949 server.go:1269] "Started kubelet" Apr 30 13:51:04.014942 kubelet[1949]: I0430 13:51:04.014573 1949 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Apr 30 13:51:04.016146 kubelet[1949]: I0430 13:51:04.016121 1949 server.go:460] "Adding debug handlers to kubelet server" Apr 30 13:51:04.019294 kubelet[1949]: I0430 13:51:04.018156 1949 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Apr 30 13:51:04.019294 kubelet[1949]: I0430 13:51:04.018589 1949 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Apr 30 13:51:04.020062 kubelet[1949]: I0430 13:51:04.019633 1949 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Apr 30 13:51:04.020875 kubelet[1949]: I0430 13:51:04.020814 1949 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Apr 30 13:51:04.027133 kubelet[1949]: E0430 13:51:04.027101 1949 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Apr 30 13:51:04.027441 kubelet[1949]: I0430 13:51:04.027419 1949 volume_manager.go:289] "Starting Kubelet Volume Manager" Apr 30 13:51:04.027585 kubelet[1949]: I0430 13:51:04.027553 1949 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Apr 30 13:51:04.027706 kubelet[1949]: I0430 13:51:04.027664 1949 reconciler.go:26] "Reconciler: start to sync state" Apr 30 13:51:04.035275 kubelet[1949]: E0430 13:51:04.034397 1949 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.230.17.190\" not found" Apr 30 13:51:04.035933 kubelet[1949]: I0430 13:51:04.035908 1949 factory.go:221] Registration of the systemd container factory successfully Apr 30 13:51:04.036520 kubelet[1949]: I0430 13:51:04.036469 1949 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Apr 30 13:51:04.037924 kubelet[1949]: E0430 13:51:04.028444 1949 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{10.230.17.190.183b1ce82ae4ebfd default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:10.230.17.190,UID:10.230.17.190,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:10.230.17.190,},FirstTimestamp:2025-04-30 13:51:04.013835261 +0000 UTC m=+0.565240809,LastTimestamp:2025-04-30 13:51:04.013835261 +0000 UTC m=+0.565240809,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:10.230.17.190,}" Apr 30 13:51:04.040531 kubelet[1949]: I0430 13:51:04.040457 1949 factory.go:221] Registration of the containerd container factory successfully Apr 30 13:51:04.071213 kubelet[1949]: E0430 13:51:04.068901 1949 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"10.230.17.190\" not found" node="10.230.17.190" Apr 30 13:51:04.077272 kubelet[1949]: I0430 13:51:04.074162 1949 cpu_manager.go:214] 
"Starting CPU manager" policy="none" Apr 30 13:51:04.077272 kubelet[1949]: I0430 13:51:04.074182 1949 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Apr 30 13:51:04.077272 kubelet[1949]: I0430 13:51:04.074214 1949 state_mem.go:36] "Initialized new in-memory state store" Apr 30 13:51:04.078861 kubelet[1949]: I0430 13:51:04.078835 1949 policy_none.go:49] "None policy: Start" Apr 30 13:51:04.081166 kubelet[1949]: I0430 13:51:04.081138 1949 memory_manager.go:170] "Starting memorymanager" policy="None" Apr 30 13:51:04.081234 kubelet[1949]: I0430 13:51:04.081180 1949 state_mem.go:35] "Initializing new in-memory state store" Apr 30 13:51:04.092388 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Apr 30 13:51:04.116560 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Apr 30 13:51:04.123297 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Apr 30 13:51:04.131686 kubelet[1949]: I0430 13:51:04.131607 1949 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Apr 30 13:51:04.133787 kubelet[1949]: I0430 13:51:04.133761 1949 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Apr 30 13:51:04.134917 kubelet[1949]: E0430 13:51:04.134696 1949 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.230.17.190\" not found" Apr 30 13:51:04.134917 kubelet[1949]: I0430 13:51:04.134871 1949 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Apr 30 13:51:04.136074 kubelet[1949]: I0430 13:51:04.136008 1949 status_manager.go:217] "Starting to sync pod status with apiserver" Apr 30 13:51:04.136074 kubelet[1949]: I0430 13:51:04.136050 1949 kubelet.go:2321] "Starting kubelet main sync loop" Apr 30 13:51:04.136215 kubelet[1949]: E0430 13:51:04.136188 1949 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Apr 30 13:51:04.137439 kubelet[1949]: I0430 13:51:04.137348 1949 eviction_manager.go:189] "Eviction manager: starting control loop" Apr 30 13:51:04.137439 kubelet[1949]: I0430 13:51:04.137375 1949 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Apr 30 13:51:04.138584 kubelet[1949]: I0430 13:51:04.138560 1949 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Apr 30 13:51:04.142107 kubelet[1949]: E0430 13:51:04.141952 1949 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"10.230.17.190\" not found" Apr 30 13:51:04.240933 kubelet[1949]: I0430 13:51:04.240179 1949 kubelet_node_status.go:72] "Attempting to register node" node="10.230.17.190" Apr 30 13:51:04.249932 kubelet[1949]: I0430 13:51:04.249888 1949 kubelet_node_status.go:75] "Successfully registered node" node="10.230.17.190" Apr 30 13:51:04.250067 kubelet[1949]: E0430 13:51:04.249951 1949 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"10.230.17.190\": node \"10.230.17.190\" not found" Apr 30 13:51:04.263308 kubelet[1949]: E0430 13:51:04.263269 1949 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.230.17.190\" not found" Apr 30 13:51:04.364053 kubelet[1949]: E0430 13:51:04.363984 1949 kubelet_node_status.go:453] "Error getting the current node from 
lister" err="node \"10.230.17.190\" not found" Apr 30 13:51:04.464681 kubelet[1949]: E0430 13:51:04.464579 1949 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.230.17.190\" not found" Apr 30 13:51:04.498340 sudo[1797]: pam_unix(sudo:session): session closed for user root Apr 30 13:51:04.565392 kubelet[1949]: E0430 13:51:04.565307 1949 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.230.17.190\" not found" Apr 30 13:51:04.641505 sshd[1796]: Connection closed by 139.178.89.65 port 59214 Apr 30 13:51:04.642632 sshd-session[1794]: pam_unix(sshd:session): session closed for user core Apr 30 13:51:04.647932 systemd[1]: sshd@8-10.230.17.190:22-139.178.89.65:59214.service: Deactivated successfully. Apr 30 13:51:04.652486 systemd[1]: session-11.scope: Deactivated successfully. Apr 30 13:51:04.652946 systemd[1]: session-11.scope: Consumed 583ms CPU time, 73.8M memory peak. Apr 30 13:51:04.657324 systemd-logind[1505]: Session 11 logged out. Waiting for processes to exit. Apr 30 13:51:04.659044 systemd-logind[1505]: Removed session 11. Apr 30 13:51:04.666173 kubelet[1949]: E0430 13:51:04.666107 1949 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.230.17.190\" not found" Apr 30 13:51:04.767418 kubelet[1949]: E0430 13:51:04.766856 1949 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"10.230.17.190\" not found" Apr 30 13:51:04.869389 kubelet[1949]: I0430 13:51:04.869095 1949 kuberuntime_manager.go:1633] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24" Apr 30 13:51:04.870305 containerd[1523]: time="2025-04-30T13:51:04.870089342Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Apr 30 13:51:04.871371 kubelet[1949]: I0430 13:51:04.871093 1949 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24" Apr 30 13:51:04.952463 kubelet[1949]: I0430 13:51:04.952372 1949 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Apr 30 13:51:04.952838 kubelet[1949]: W0430 13:51:04.952802 1949 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.CSIDriver ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Apr 30 13:51:04.953012 kubelet[1949]: W0430 13:51:04.952804 1949 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.RuntimeClass ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Apr 30 13:51:04.953206 kubelet[1949]: W0430 13:51:04.953144 1949 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.Node ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Apr 30 13:51:05.005254 kubelet[1949]: E0430 13:51:05.005192 1949 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Apr 30 13:51:05.005792 kubelet[1949]: I0430 13:51:05.005507 1949 apiserver.go:52] "Watching apiserver" Apr 30 13:51:05.010016 systemd[1]: systemd-hostnamed.service: Deactivated successfully. 
Apr 30 13:51:05.021340 kubelet[1949]: E0430 13:51:05.020798 1949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-zjldz" podUID="2ac90e2f-0177-416e-9891-f89efa94c902" Apr 30 13:51:05.029259 kubelet[1949]: I0430 13:51:05.028993 1949 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Apr 30 13:51:05.031374 systemd[1]: Created slice kubepods-besteffort-pod8b648658_fe58_489a_977a_8b3ed5588e3a.slice - libcontainer container kubepods-besteffort-pod8b648658_fe58_489a_977a_8b3ed5588e3a.slice. Apr 30 13:51:05.036282 kubelet[1949]: I0430 13:51:05.036144 1949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6vqdn\" (UniqueName: \"kubernetes.io/projected/8b648658-fe58-489a-977a-8b3ed5588e3a-kube-api-access-6vqdn\") pod \"kube-proxy-58j24\" (UID: \"8b648658-fe58-489a-977a-8b3ed5588e3a\") " pod="kube-system/kube-proxy-58j24" Apr 30 13:51:05.036382 kubelet[1949]: I0430 13:51:05.036305 1949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/d0fba226-593e-499e-bbee-f260db55854f-cni-net-dir\") pod \"calico-node-mtbz5\" (UID: \"d0fba226-593e-499e-bbee-f260db55854f\") " pod="calico-system/calico-node-mtbz5" Apr 30 13:51:05.036382 kubelet[1949]: I0430 13:51:05.036338 1949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/2ac90e2f-0177-416e-9891-f89efa94c902-registration-dir\") pod \"csi-node-driver-zjldz\" (UID: \"2ac90e2f-0177-416e-9891-f89efa94c902\") " pod="calico-system/csi-node-driver-zjldz" Apr 30 13:51:05.036455 kubelet[1949]: I0430 13:51:05.036408 1949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c46q4\" (UniqueName: \"kubernetes.io/projected/2ac90e2f-0177-416e-9891-f89efa94c902-kube-api-access-c46q4\") pod \"csi-node-driver-zjldz\" (UID: \"2ac90e2f-0177-416e-9891-f89efa94c902\") " pod="calico-system/csi-node-driver-zjldz" Apr 30 13:51:05.036538 kubelet[1949]: I0430 13:51:05.036508 1949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8b648658-fe58-489a-977a-8b3ed5588e3a-xtables-lock\") pod \"kube-proxy-58j24\" (UID: \"8b648658-fe58-489a-977a-8b3ed5588e3a\") " pod="kube-system/kube-proxy-58j24" Apr 30 13:51:05.036583 kubelet[1949]: I0430 13:51:05.036542 1949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d0fba226-593e-499e-bbee-f260db55854f-tigera-ca-bundle\") pod \"calico-node-mtbz5\" (UID: \"d0fba226-593e-499e-bbee-f260db55854f\") " pod="calico-system/calico-node-mtbz5" Apr 30 13:51:05.036583 kubelet[1949]: I0430 13:51:05.036568 1949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/d0fba226-593e-499e-bbee-f260db55854f-cni-bin-dir\") pod \"calico-node-mtbz5\" (UID: \"d0fba226-593e-499e-bbee-f260db55854f\") " pod="calico-system/calico-node-mtbz5" Apr 30 13:51:05.036651 kubelet[1949]: I0430 13:51:05.036599 1949 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/d0fba226-593e-499e-bbee-f260db55854f-flexvol-driver-host\") pod \"calico-node-mtbz5\" (UID: \"d0fba226-593e-499e-bbee-f260db55854f\") " pod="calico-system/calico-node-mtbz5" Apr 30 13:51:05.036651 kubelet[1949]: I0430 13:51:05.036627 1949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/2ac90e2f-0177-416e-9891-f89efa94c902-kubelet-dir\") pod \"csi-node-driver-zjldz\" (UID: \"2ac90e2f-0177-416e-9891-f89efa94c902\") " pod="calico-system/csi-node-driver-zjldz" Apr 30 13:51:05.037002 kubelet[1949]: I0430 13:51:05.036724 1949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d0fba226-593e-499e-bbee-f260db55854f-lib-modules\") pod \"calico-node-mtbz5\" (UID: \"d0fba226-593e-499e-bbee-f260db55854f\") " pod="calico-system/calico-node-mtbz5" Apr 30 13:51:05.037002 kubelet[1949]: I0430 13:51:05.036782 1949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/d0fba226-593e-499e-bbee-f260db55854f-node-certs\") pod \"calico-node-mtbz5\" (UID: \"d0fba226-593e-499e-bbee-f260db55854f\") " pod="calico-system/calico-node-mtbz5" Apr 30 13:51:05.037002 kubelet[1949]: I0430 13:51:05.036812 1949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/d0fba226-593e-499e-bbee-f260db55854f-cni-log-dir\") pod \"calico-node-mtbz5\" (UID: \"d0fba226-593e-499e-bbee-f260db55854f\") " pod="calico-system/calico-node-mtbz5" Apr 30 13:51:05.037002 kubelet[1949]: I0430 13:51:05.036856 1949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/8b648658-fe58-489a-977a-8b3ed5588e3a-kube-proxy\") pod \"kube-proxy-58j24\" (UID: \"8b648658-fe58-489a-977a-8b3ed5588e3a\") " pod="kube-system/kube-proxy-58j24" Apr 30 13:51:05.037002 kubelet[1949]: I0430 13:51:05.036884 1949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8b648658-fe58-489a-977a-8b3ed5588e3a-lib-modules\") pod \"kube-proxy-58j24\" (UID: \"8b648658-fe58-489a-977a-8b3ed5588e3a\") " pod="kube-system/kube-proxy-58j24" Apr 30 13:51:05.037382 kubelet[1949]: I0430 13:51:05.036928 1949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d0fba226-593e-499e-bbee-f260db55854f-xtables-lock\") pod \"calico-node-mtbz5\" (UID: \"d0fba226-593e-499e-bbee-f260db55854f\") " pod="calico-system/calico-node-mtbz5" Apr 30 13:51:05.037382 kubelet[1949]: I0430 13:51:05.036956 1949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/d0fba226-593e-499e-bbee-f260db55854f-policysync\") pod \"calico-node-mtbz5\" (UID: \"d0fba226-593e-499e-bbee-f260db55854f\") " pod="calico-system/calico-node-mtbz5" Apr 30 13:51:05.037382 kubelet[1949]: I0430 13:51:05.036982 1949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" 
(UniqueName: \"kubernetes.io/host-path/d0fba226-593e-499e-bbee-f260db55854f-var-run-calico\") pod \"calico-node-mtbz5\" (UID: \"d0fba226-593e-499e-bbee-f260db55854f\") " pod="calico-system/calico-node-mtbz5" Apr 30 13:51:05.037382 kubelet[1949]: I0430 13:51:05.037007 1949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/d0fba226-593e-499e-bbee-f260db55854f-var-lib-calico\") pod \"calico-node-mtbz5\" (UID: \"d0fba226-593e-499e-bbee-f260db55854f\") " pod="calico-system/calico-node-mtbz5" Apr 30 13:51:05.037382 kubelet[1949]: I0430 13:51:05.037069 1949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-trfbf\" (UniqueName: \"kubernetes.io/projected/d0fba226-593e-499e-bbee-f260db55854f-kube-api-access-trfbf\") pod \"calico-node-mtbz5\" (UID: \"d0fba226-593e-499e-bbee-f260db55854f\") " pod="calico-system/calico-node-mtbz5" Apr 30 13:51:05.038552 kubelet[1949]: I0430 13:51:05.037096 1949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/2ac90e2f-0177-416e-9891-f89efa94c902-varrun\") pod \"csi-node-driver-zjldz\" (UID: \"2ac90e2f-0177-416e-9891-f89efa94c902\") " pod="calico-system/csi-node-driver-zjldz" Apr 30 13:51:05.038552 kubelet[1949]: I0430 13:51:05.037121 1949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/2ac90e2f-0177-416e-9891-f89efa94c902-socket-dir\") pod \"csi-node-driver-zjldz\" (UID: \"2ac90e2f-0177-416e-9891-f89efa94c902\") " pod="calico-system/csi-node-driver-zjldz" Apr 30 13:51:05.048150 systemd[1]: Created slice kubepods-besteffort-podd0fba226_593e_499e_bbee_f260db55854f.slice - libcontainer container kubepods-besteffort-podd0fba226_593e_499e_bbee_f260db55854f.slice. Apr 30 13:51:05.142506 kubelet[1949]: E0430 13:51:05.142451 1949 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 13:51:05.142801 kubelet[1949]: W0430 13:51:05.142776 1949 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 13:51:05.142968 kubelet[1949]: E0430 13:51:05.142940 1949 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 13:51:05.143353 kubelet[1949]: E0430 13:51:05.143328 1949 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 13:51:05.143439 kubelet[1949]: W0430 13:51:05.143353 1949 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 13:51:05.143439 kubelet[1949]: E0430 13:51:05.143404 1949 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 13:51:05.143941 kubelet[1949]: E0430 13:51:05.143784 1949 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 13:51:05.143941 kubelet[1949]: W0430 13:51:05.143798 1949 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 13:51:05.143941 kubelet[1949]: E0430 13:51:05.143837 1949 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 13:51:05.144497 kubelet[1949]: E0430 13:51:05.144216 1949 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 13:51:05.144497 kubelet[1949]: W0430 13:51:05.144237 1949 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 13:51:05.144497 kubelet[1949]: E0430 13:51:05.144300 1949 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 13:51:05.144690 kubelet[1949]: E0430 13:51:05.144652 1949 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 13:51:05.144690 kubelet[1949]: W0430 13:51:05.144666 1949 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 13:51:05.144811 kubelet[1949]: E0430 13:51:05.144724 1949 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 13:51:05.146093 kubelet[1949]: E0430 13:51:05.145143 1949 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 13:51:05.146093 kubelet[1949]: W0430 13:51:05.145165 1949 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 13:51:05.146093 kubelet[1949]: E0430 13:51:05.145442 1949 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 13:51:05.146093 kubelet[1949]: E0430 13:51:05.145530 1949 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 13:51:05.146093 kubelet[1949]: W0430 13:51:05.145543 1949 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 13:51:05.146093 kubelet[1949]: E0430 13:51:05.145889 1949 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 13:51:05.146093 kubelet[1949]: W0430 13:51:05.145933 1949 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 13:51:05.146093 kubelet[1949]: E0430 13:51:05.145952 1949 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 13:51:05.146543 kubelet[1949]: E0430 13:51:05.146320 1949 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 13:51:05.146543 kubelet[1949]: W0430 13:51:05.146360 1949 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 13:51:05.146543 kubelet[1949]: E0430 13:51:05.146382 1949 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 13:51:05.147438 kubelet[1949]: E0430 13:51:05.147011 1949 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 13:51:05.147438 kubelet[1949]: W0430 13:51:05.147051 1949 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 13:51:05.147438 kubelet[1949]: E0430 13:51:05.147069 1949 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 13:51:05.147438 kubelet[1949]: E0430 13:51:05.147100 1949 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 13:51:05.147687 kubelet[1949]: E0430 13:51:05.147526 1949 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 13:51:05.147687 kubelet[1949]: W0430 13:51:05.147540 1949 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 13:51:05.147687 kubelet[1949]: E0430 13:51:05.147555 1949 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 13:51:05.148284 kubelet[1949]: E0430 13:51:05.147854 1949 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 13:51:05.148284 kubelet[1949]: W0430 13:51:05.147873 1949 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 13:51:05.148284 kubelet[1949]: E0430 13:51:05.147889 1949 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 13:51:05.149391 kubelet[1949]: E0430 13:51:05.148302 1949 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 13:51:05.149391 kubelet[1949]: W0430 13:51:05.148316 1949 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 13:51:05.149391 kubelet[1949]: E0430 13:51:05.148331 1949 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 13:51:05.164280 kubelet[1949]: E0430 13:51:05.159625 1949 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 13:51:05.164280 kubelet[1949]: W0430 13:51:05.160508 1949 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 13:51:05.164280 kubelet[1949]: E0430 13:51:05.160539 1949 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 13:51:05.170811 kubelet[1949]: E0430 13:51:05.170775 1949 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 13:51:05.171024 kubelet[1949]: W0430 13:51:05.170980 1949 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 13:51:05.171179 kubelet[1949]: E0430 13:51:05.171149 1949 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 13:51:05.184565 kubelet[1949]: E0430 13:51:05.184534 1949 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 13:51:05.184768 kubelet[1949]: W0430 13:51:05.184732 1949 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 13:51:05.184918 kubelet[1949]: E0430 13:51:05.184893 1949 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 13:51:05.185518 kubelet[1949]: E0430 13:51:05.185497 1949 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 13:51:05.185642 kubelet[1949]: W0430 13:51:05.185621 1949 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 13:51:05.185851 kubelet[1949]: E0430 13:51:05.185828 1949 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 13:51:05.342237 containerd[1523]: time="2025-04-30T13:51:05.341286536Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-58j24,Uid:8b648658-fe58-489a-977a-8b3ed5588e3a,Namespace:kube-system,Attempt:0,}" Apr 30 13:51:05.355916 containerd[1523]: time="2025-04-30T13:51:05.355848879Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-mtbz5,Uid:d0fba226-593e-499e-bbee-f260db55854f,Namespace:calico-system,Attempt:0,}" Apr 30 13:51:06.006583 kubelet[1949]: E0430 13:51:06.006504 1949 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Apr 30 13:51:06.125640 containerd[1523]: time="2025-04-30T13:51:06.125538778Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 30 13:51:06.127274 containerd[1523]: time="2025-04-30T13:51:06.126953817Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 30 13:51:06.128188 containerd[1523]: time="2025-04-30T13:51:06.128138044Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Apr 30 13:51:06.130149 containerd[1523]: time="2025-04-30T13:51:06.129714533Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312064" Apr 30 13:51:06.132310 containerd[1523]: time="2025-04-30T13:51:06.130307512Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 30 13:51:06.149402 containerd[1523]: time="2025-04-30T13:51:06.149342561Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Apr 30 13:51:06.151121 containerd[1523]: time="2025-04-30T13:51:06.151079152Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 794.715107ms" Apr 30 13:51:06.154407 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3295133422.mount: Deactivated successfully. 
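The repeated driver-call.go errors above come from the kubelet probing /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds with the init operation and getting no output back because the executable is missing, hence "unexpected end of JSON input". As a minimal sketch of the FlexVolume call convention only, and not the Calico nodeagent~uds driver itself, a driver is an executable that receives the operation as its first argument and prints a JSON status object to stdout:

```python
#!/usr/bin/env python3
# Minimal sketch of the FlexVolume driver call convention: the first argument
# is the operation, and the driver must print a JSON object with a "status"
# field to stdout. This is NOT the Calico nodeagent~uds driver, only an
# illustration of why an empty stdout yields "unexpected end of JSON input".
import json
import sys

def main() -> int:
    op = sys.argv[1] if len(sys.argv) > 1 else ""
    if op == "init":
        # Report success and declare that this driver does not implement attach/detach.
        print(json.dumps({"status": "Success", "capabilities": {"attach": False}}))
        return 0
    # Decline every other operation explicitly instead of staying silent.
    print(json.dumps({"status": "Not supported",
                      "message": f"operation {op!r} not implemented"}))
    return 1

if __name__ == "__main__":
    sys.exit(main())
```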
Apr 30 13:51:06.160124 containerd[1523]: time="2025-04-30T13:51:06.159752263Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 818.142209ms" Apr 30 13:51:06.321683 containerd[1523]: time="2025-04-30T13:51:06.320390391Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 13:51:06.321683 containerd[1523]: time="2025-04-30T13:51:06.321203360Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 13:51:06.321683 containerd[1523]: time="2025-04-30T13:51:06.321301457Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 13:51:06.321683 containerd[1523]: time="2025-04-30T13:51:06.321235989Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 13:51:06.321683 containerd[1523]: time="2025-04-30T13:51:06.321365297Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 13:51:06.321683 containerd[1523]: time="2025-04-30T13:51:06.321404776Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 13:51:06.325231 containerd[1523]: time="2025-04-30T13:51:06.321687328Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 13:51:06.329264 containerd[1523]: time="2025-04-30T13:51:06.326727807Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 13:51:06.448456 systemd[1]: Started cri-containerd-e0c1a3070c5958181100fb66f46a3be43eecbf6c66dd282ad39e40c56971b932.scope - libcontainer container e0c1a3070c5958181100fb66f46a3be43eecbf6c66dd282ad39e40c56971b932. Apr 30 13:51:06.450642 systemd[1]: Started cri-containerd-ecbd7aaf1945f40f5295deba34cc6655e6aba7646acde11eb1a8dc4f6b36444e.scope - libcontainer container ecbd7aaf1945f40f5295deba34cc6655e6aba7646acde11eb1a8dc4f6b36444e. 
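As a small worked calculation from the two "Pulled image registry.k8s.io/pause:3.8" entries above (311286 bytes in roughly 795 ms and 818 ms), the log figures translate into pull throughput like this:

```python
# Figures copied from the two pause:3.8 pull entries above.
size_bytes = 311_286
durations_s = [0.794715107, 0.818142209]

for seconds in durations_s:
    rate_kib_s = size_bytes / seconds / 1024
    print(f"{size_bytes} bytes in {seconds:.3f} s ≈ {rate_kib_s:.0f} KiB/s")
```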
Apr 30 13:51:06.500022 containerd[1523]: time="2025-04-30T13:51:06.499817991Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-58j24,Uid:8b648658-fe58-489a-977a-8b3ed5588e3a,Namespace:kube-system,Attempt:0,} returns sandbox id \"e0c1a3070c5958181100fb66f46a3be43eecbf6c66dd282ad39e40c56971b932\"" Apr 30 13:51:06.506441 containerd[1523]: time="2025-04-30T13:51:06.506404035Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.8\"" Apr 30 13:51:06.515747 containerd[1523]: time="2025-04-30T13:51:06.515705547Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-mtbz5,Uid:d0fba226-593e-499e-bbee-f260db55854f,Namespace:calico-system,Attempt:0,} returns sandbox id \"ecbd7aaf1945f40f5295deba34cc6655e6aba7646acde11eb1a8dc4f6b36444e\"" Apr 30 13:51:07.006738 kubelet[1949]: E0430 13:51:07.006683 1949 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Apr 30 13:51:07.138374 kubelet[1949]: E0430 13:51:07.137737 1949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-zjldz" podUID="2ac90e2f-0177-416e-9891-f89efa94c902" Apr 30 13:51:07.993210 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2541297830.mount: Deactivated successfully. Apr 30 13:51:08.007729 kubelet[1949]: E0430 13:51:08.007629 1949 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Apr 30 13:51:08.739353 containerd[1523]: time="2025-04-30T13:51:08.739149849Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 13:51:08.743115 containerd[1523]: time="2025-04-30T13:51:08.743044186Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.8: active requests=0, bytes read=30354633" Apr 30 13:51:08.744017 containerd[1523]: time="2025-04-30T13:51:08.743981059Z" level=info msg="ImageCreate event name:\"sha256:7d73f013cedcf301aef42272c93e4c1174dab1a8eccd96840091ef04b63480f2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 13:51:08.750217 containerd[1523]: time="2025-04-30T13:51:08.748322716Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:dd0c9a37670f209947b1ed880f06a2e93e1d41da78c037f52f94b13858769838\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 13:51:08.750217 containerd[1523]: time="2025-04-30T13:51:08.749511357Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.8\" with image id \"sha256:7d73f013cedcf301aef42272c93e4c1174dab1a8eccd96840091ef04b63480f2\", repo tag \"registry.k8s.io/kube-proxy:v1.31.8\", repo digest \"registry.k8s.io/kube-proxy@sha256:dd0c9a37670f209947b1ed880f06a2e93e1d41da78c037f52f94b13858769838\", size \"30353644\" in 2.243055697s" Apr 30 13:51:08.750217 containerd[1523]: time="2025-04-30T13:51:08.749562079Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.8\" returns image reference \"sha256:7d73f013cedcf301aef42272c93e4c1174dab1a8eccd96840091ef04b63480f2\"" Apr 30 13:51:08.753013 containerd[1523]: time="2025-04-30T13:51:08.752965076Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\"" Apr 30 13:51:08.759195 containerd[1523]: time="2025-04-30T13:51:08.759150831Z" level=info msg="CreateContainer 
within sandbox \"e0c1a3070c5958181100fb66f46a3be43eecbf6c66dd282ad39e40c56971b932\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Apr 30 13:51:08.786885 containerd[1523]: time="2025-04-30T13:51:08.786823005Z" level=info msg="CreateContainer within sandbox \"e0c1a3070c5958181100fb66f46a3be43eecbf6c66dd282ad39e40c56971b932\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"563178282ae58cb1f2c8da605ac500ff0d2a7165c6250060fb59f2cbd492893c\"" Apr 30 13:51:08.788306 containerd[1523]: time="2025-04-30T13:51:08.788151670Z" level=info msg="StartContainer for \"563178282ae58cb1f2c8da605ac500ff0d2a7165c6250060fb59f2cbd492893c\"" Apr 30 13:51:08.851561 systemd[1]: Started cri-containerd-563178282ae58cb1f2c8da605ac500ff0d2a7165c6250060fb59f2cbd492893c.scope - libcontainer container 563178282ae58cb1f2c8da605ac500ff0d2a7165c6250060fb59f2cbd492893c. Apr 30 13:51:08.898615 containerd[1523]: time="2025-04-30T13:51:08.898559619Z" level=info msg="StartContainer for \"563178282ae58cb1f2c8da605ac500ff0d2a7165c6250060fb59f2cbd492893c\" returns successfully" Apr 30 13:51:09.008704 kubelet[1949]: E0430 13:51:09.008141 1949 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Apr 30 13:51:09.138110 kubelet[1949]: E0430 13:51:09.137136 1949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-zjldz" podUID="2ac90e2f-0177-416e-9891-f89efa94c902" Apr 30 13:51:09.189425 kubelet[1949]: I0430 13:51:09.189233 1949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-58j24" podStartSLOduration=2.940683015 podStartE2EDuration="5.189173091s" podCreationTimestamp="2025-04-30 13:51:04 +0000 UTC" firstStartedPulling="2025-04-30 13:51:06.504073199 +0000 UTC m=+3.055478742" lastFinishedPulling="2025-04-30 13:51:08.752563275 +0000 UTC m=+5.303968818" observedRunningTime="2025-04-30 13:51:09.186850422 +0000 UTC m=+5.738255991" watchObservedRunningTime="2025-04-30 13:51:09.189173091 +0000 UTC m=+5.740578649" Apr 30 13:51:09.263102 kubelet[1949]: E0430 13:51:09.262749 1949 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 13:51:09.263102 kubelet[1949]: W0430 13:51:09.262958 1949 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 13:51:09.264479 kubelet[1949]: E0430 13:51:09.263795 1949 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 13:51:09.264834 kubelet[1949]: E0430 13:51:09.264442 1949 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 13:51:09.264834 kubelet[1949]: W0430 13:51:09.264732 1949 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 13:51:09.264834 kubelet[1949]: E0430 13:51:09.264753 1949 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 13:51:09.265576 kubelet[1949]: E0430 13:51:09.265374 1949 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 13:51:09.265576 kubelet[1949]: W0430 13:51:09.265393 1949 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 13:51:09.265576 kubelet[1949]: E0430 13:51:09.265413 1949 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 13:51:09.266476 kubelet[1949]: E0430 13:51:09.266294 1949 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 13:51:09.266476 kubelet[1949]: W0430 13:51:09.266313 1949 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 13:51:09.266476 kubelet[1949]: E0430 13:51:09.266404 1949 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 13:51:09.267698 kubelet[1949]: E0430 13:51:09.267477 1949 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 13:51:09.267698 kubelet[1949]: W0430 13:51:09.267513 1949 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 13:51:09.267698 kubelet[1949]: E0430 13:51:09.267531 1949 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 13:51:09.268566 kubelet[1949]: E0430 13:51:09.268345 1949 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 13:51:09.268566 kubelet[1949]: W0430 13:51:09.268364 1949 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 13:51:09.268566 kubelet[1949]: E0430 13:51:09.268399 1949 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 13:51:09.269291 kubelet[1949]: E0430 13:51:09.269022 1949 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 13:51:09.269291 kubelet[1949]: W0430 13:51:09.269040 1949 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 13:51:09.269291 kubelet[1949]: E0430 13:51:09.269056 1949 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 13:51:09.270264 kubelet[1949]: E0430 13:51:09.269841 1949 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 13:51:09.270264 kubelet[1949]: W0430 13:51:09.269860 1949 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 13:51:09.270264 kubelet[1949]: E0430 13:51:09.269876 1949 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 13:51:09.270979 kubelet[1949]: E0430 13:51:09.270378 1949 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 13:51:09.270979 kubelet[1949]: W0430 13:51:09.270392 1949 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 13:51:09.270979 kubelet[1949]: E0430 13:51:09.270408 1949 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 13:51:09.271604 kubelet[1949]: E0430 13:51:09.271479 1949 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 13:51:09.271604 kubelet[1949]: W0430 13:51:09.271498 1949 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 13:51:09.271604 kubelet[1949]: E0430 13:51:09.271515 1949 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 13:51:09.272315 kubelet[1949]: E0430 13:51:09.272118 1949 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 13:51:09.272315 kubelet[1949]: W0430 13:51:09.272137 1949 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 13:51:09.272315 kubelet[1949]: E0430 13:51:09.272154 1949 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 13:51:09.272926 kubelet[1949]: E0430 13:51:09.272714 1949 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 13:51:09.272926 kubelet[1949]: W0430 13:51:09.272757 1949 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 13:51:09.272926 kubelet[1949]: E0430 13:51:09.272776 1949 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 13:51:09.273544 kubelet[1949]: E0430 13:51:09.273385 1949 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 13:51:09.273544 kubelet[1949]: W0430 13:51:09.273403 1949 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 13:51:09.273544 kubelet[1949]: E0430 13:51:09.273419 1949 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 13:51:09.274204 kubelet[1949]: E0430 13:51:09.273987 1949 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 13:51:09.274204 kubelet[1949]: W0430 13:51:09.274005 1949 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 13:51:09.274204 kubelet[1949]: E0430 13:51:09.274020 1949 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 13:51:09.274970 kubelet[1949]: E0430 13:51:09.274868 1949 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 13:51:09.274970 kubelet[1949]: W0430 13:51:09.274888 1949 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 13:51:09.274970 kubelet[1949]: E0430 13:51:09.274904 1949 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 13:51:09.277032 kubelet[1949]: E0430 13:51:09.276811 1949 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 13:51:09.277032 kubelet[1949]: W0430 13:51:09.276829 1949 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 13:51:09.277032 kubelet[1949]: E0430 13:51:09.276845 1949 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 13:51:09.277539 kubelet[1949]: E0430 13:51:09.277394 1949 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 13:51:09.277539 kubelet[1949]: W0430 13:51:09.277413 1949 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 13:51:09.277539 kubelet[1949]: E0430 13:51:09.277429 1949 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 13:51:09.278198 kubelet[1949]: E0430 13:51:09.277910 1949 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 13:51:09.278198 kubelet[1949]: W0430 13:51:09.277957 1949 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 13:51:09.278198 kubelet[1949]: E0430 13:51:09.277978 1949 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 13:51:09.280352 kubelet[1949]: E0430 13:51:09.280209 1949 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 13:51:09.280352 kubelet[1949]: W0430 13:51:09.280229 1949 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 13:51:09.280352 kubelet[1949]: E0430 13:51:09.280260 1949 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 13:51:09.281089 kubelet[1949]: E0430 13:51:09.280809 1949 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 13:51:09.281089 kubelet[1949]: W0430 13:51:09.280828 1949 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 13:51:09.281089 kubelet[1949]: E0430 13:51:09.280844 1949 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 13:51:09.363124 kubelet[1949]: E0430 13:51:09.363046 1949 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 13:51:09.363124 kubelet[1949]: W0430 13:51:09.363078 1949 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 13:51:09.363825 kubelet[1949]: E0430 13:51:09.363219 1949 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 13:51:09.364473 kubelet[1949]: E0430 13:51:09.364144 1949 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 13:51:09.364473 kubelet[1949]: W0430 13:51:09.364164 1949 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 13:51:09.364473 kubelet[1949]: E0430 13:51:09.364189 1949 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 13:51:09.364981 kubelet[1949]: E0430 13:51:09.364871 1949 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 13:51:09.364981 kubelet[1949]: W0430 13:51:09.364893 1949 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 13:51:09.364981 kubelet[1949]: E0430 13:51:09.364917 1949 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 13:51:09.365665 kubelet[1949]: E0430 13:51:09.365537 1949 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 13:51:09.365665 kubelet[1949]: W0430 13:51:09.365591 1949 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 13:51:09.365665 kubelet[1949]: E0430 13:51:09.365641 1949 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 13:51:09.366405 kubelet[1949]: E0430 13:51:09.366190 1949 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 13:51:09.366405 kubelet[1949]: W0430 13:51:09.366208 1949 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 13:51:09.366405 kubelet[1949]: E0430 13:51:09.366232 1949 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 13:51:09.366997 kubelet[1949]: E0430 13:51:09.366803 1949 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 13:51:09.366997 kubelet[1949]: W0430 13:51:09.366821 1949 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 13:51:09.366997 kubelet[1949]: E0430 13:51:09.366878 1949 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 13:51:09.367719 kubelet[1949]: E0430 13:51:09.367490 1949 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 13:51:09.367719 kubelet[1949]: W0430 13:51:09.367509 1949 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 13:51:09.367719 kubelet[1949]: E0430 13:51:09.367622 1949 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 13:51:09.368321 kubelet[1949]: E0430 13:51:09.368054 1949 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 13:51:09.368321 kubelet[1949]: W0430 13:51:09.368068 1949 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 13:51:09.368321 kubelet[1949]: E0430 13:51:09.368091 1949 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 13:51:09.368890 kubelet[1949]: E0430 13:51:09.368785 1949 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 13:51:09.368890 kubelet[1949]: W0430 13:51:09.368813 1949 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 13:51:09.368890 kubelet[1949]: E0430 13:51:09.368837 1949 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 13:51:09.369733 kubelet[1949]: E0430 13:51:09.369702 1949 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 13:51:09.369886 kubelet[1949]: W0430 13:51:09.369854 1949 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 13:51:09.370160 kubelet[1949]: E0430 13:51:09.370136 1949 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 13:51:09.370581 kubelet[1949]: E0430 13:51:09.370558 1949 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 13:51:09.370581 kubelet[1949]: W0430 13:51:09.370579 1949 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 13:51:09.370905 kubelet[1949]: E0430 13:51:09.370604 1949 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 13:51:09.371212 kubelet[1949]: E0430 13:51:09.371181 1949 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 13:51:09.371212 kubelet[1949]: W0430 13:51:09.371201 1949 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 13:51:09.371400 kubelet[1949]: E0430 13:51:09.371217 1949 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 13:51:10.008964 kubelet[1949]: E0430 13:51:10.008899 1949 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Apr 30 13:51:10.149767 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1778711777.mount: Deactivated successfully. Apr 30 13:51:10.187469 kubelet[1949]: E0430 13:51:10.187401 1949 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 13:51:10.189205 kubelet[1949]: W0430 13:51:10.188289 1949 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 13:51:10.189205 kubelet[1949]: E0430 13:51:10.188332 1949 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 13:51:10.189205 kubelet[1949]: E0430 13:51:10.188864 1949 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 13:51:10.189205 kubelet[1949]: W0430 13:51:10.188879 1949 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 13:51:10.189205 kubelet[1949]: E0430 13:51:10.188894 1949 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 13:51:10.190680 kubelet[1949]: E0430 13:51:10.189642 1949 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 13:51:10.190680 kubelet[1949]: W0430 13:51:10.189666 1949 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 13:51:10.190680 kubelet[1949]: E0430 13:51:10.189685 1949 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 13:51:10.191093 kubelet[1949]: E0430 13:51:10.190923 1949 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 13:51:10.191093 kubelet[1949]: W0430 13:51:10.190941 1949 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 13:51:10.191093 kubelet[1949]: E0430 13:51:10.190958 1949 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 13:51:10.191409 kubelet[1949]: E0430 13:51:10.191389 1949 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 13:51:10.191603 kubelet[1949]: W0430 13:51:10.191514 1949 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 13:51:10.191603 kubelet[1949]: E0430 13:51:10.191541 1949 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 13:51:10.192231 kubelet[1949]: E0430 13:51:10.192212 1949 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 13:51:10.192364 kubelet[1949]: W0430 13:51:10.192343 1949 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 13:51:10.192501 kubelet[1949]: E0430 13:51:10.192480 1949 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 13:51:10.192943 kubelet[1949]: E0430 13:51:10.192924 1949 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 13:51:10.193139 kubelet[1949]: W0430 13:51:10.193049 1949 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 13:51:10.193139 kubelet[1949]: E0430 13:51:10.193076 1949 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 13:51:10.193647 kubelet[1949]: E0430 13:51:10.193547 1949 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 13:51:10.193647 kubelet[1949]: W0430 13:51:10.193565 1949 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 13:51:10.193647 kubelet[1949]: E0430 13:51:10.193581 1949 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 13:51:10.194466 kubelet[1949]: E0430 13:51:10.194294 1949 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 13:51:10.194466 kubelet[1949]: W0430 13:51:10.194313 1949 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 13:51:10.194466 kubelet[1949]: E0430 13:51:10.194331 1949 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 13:51:10.195461 kubelet[1949]: E0430 13:51:10.195299 1949 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 13:51:10.195461 kubelet[1949]: W0430 13:51:10.195319 1949 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 13:51:10.195461 kubelet[1949]: E0430 13:51:10.195335 1949 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 13:51:10.195813 kubelet[1949]: E0430 13:51:10.195630 1949 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 13:51:10.195813 kubelet[1949]: W0430 13:51:10.195643 1949 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 13:51:10.195813 kubelet[1949]: E0430 13:51:10.195659 1949 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 13:51:10.196554 kubelet[1949]: E0430 13:51:10.196267 1949 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 13:51:10.196554 kubelet[1949]: W0430 13:51:10.196286 1949 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 13:51:10.196554 kubelet[1949]: E0430 13:51:10.196302 1949 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 13:51:10.197188 kubelet[1949]: E0430 13:51:10.197168 1949 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 13:51:10.197792 kubelet[1949]: W0430 13:51:10.197276 1949 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 13:51:10.197792 kubelet[1949]: E0430 13:51:10.197727 1949 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 13:51:10.198280 kubelet[1949]: E0430 13:51:10.198227 1949 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 13:51:10.198413 kubelet[1949]: W0430 13:51:10.198392 1949 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 13:51:10.198609 kubelet[1949]: E0430 13:51:10.198536 1949 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 13:51:10.199360 kubelet[1949]: E0430 13:51:10.199235 1949 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 13:51:10.199360 kubelet[1949]: W0430 13:51:10.199279 1949 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 13:51:10.199360 kubelet[1949]: E0430 13:51:10.199296 1949 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 13:51:10.200478 kubelet[1949]: E0430 13:51:10.200170 1949 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 13:51:10.200478 kubelet[1949]: W0430 13:51:10.200187 1949 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 13:51:10.200478 kubelet[1949]: E0430 13:51:10.200203 1949 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 13:51:10.201094 kubelet[1949]: E0430 13:51:10.201004 1949 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 13:51:10.201298 kubelet[1949]: W0430 13:51:10.201202 1949 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 13:51:10.201298 kubelet[1949]: E0430 13:51:10.201228 1949 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 13:51:10.201991 kubelet[1949]: E0430 13:51:10.201807 1949 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 13:51:10.201991 kubelet[1949]: W0430 13:51:10.201826 1949 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 13:51:10.201991 kubelet[1949]: E0430 13:51:10.201842 1949 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 13:51:10.202748 kubelet[1949]: E0430 13:51:10.202557 1949 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 13:51:10.202748 kubelet[1949]: W0430 13:51:10.202574 1949 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 13:51:10.202748 kubelet[1949]: E0430 13:51:10.202591 1949 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 13:51:10.203221 kubelet[1949]: E0430 13:51:10.203121 1949 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 13:51:10.203221 kubelet[1949]: W0430 13:51:10.203143 1949 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 13:51:10.203221 kubelet[1949]: E0430 13:51:10.203159 1949 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 13:51:10.271874 kubelet[1949]: E0430 13:51:10.271406 1949 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 13:51:10.271874 kubelet[1949]: W0430 13:51:10.271444 1949 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 13:51:10.271874 kubelet[1949]: E0430 13:51:10.271485 1949 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 13:51:10.273937 kubelet[1949]: E0430 13:51:10.272631 1949 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 13:51:10.273937 kubelet[1949]: W0430 13:51:10.272668 1949 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 13:51:10.273937 kubelet[1949]: E0430 13:51:10.272693 1949 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 13:51:10.273937 kubelet[1949]: E0430 13:51:10.273376 1949 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 13:51:10.273937 kubelet[1949]: W0430 13:51:10.273405 1949 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 13:51:10.273937 kubelet[1949]: E0430 13:51:10.273431 1949 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 13:51:10.274233 kubelet[1949]: E0430 13:51:10.274213 1949 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 13:51:10.274233 kubelet[1949]: W0430 13:51:10.274231 1949 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 13:51:10.275592 kubelet[1949]: E0430 13:51:10.274352 1949 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 13:51:10.275592 kubelet[1949]: E0430 13:51:10.274686 1949 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 13:51:10.275592 kubelet[1949]: W0430 13:51:10.274700 1949 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 13:51:10.275592 kubelet[1949]: E0430 13:51:10.274835 1949 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 13:51:10.275592 kubelet[1949]: E0430 13:51:10.275040 1949 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 13:51:10.275592 kubelet[1949]: W0430 13:51:10.275053 1949 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 13:51:10.275592 kubelet[1949]: E0430 13:51:10.275100 1949 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 13:51:10.275926 kubelet[1949]: E0430 13:51:10.275737 1949 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 13:51:10.275926 kubelet[1949]: W0430 13:51:10.275751 1949 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 13:51:10.275926 kubelet[1949]: E0430 13:51:10.275772 1949 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 13:51:10.276737 kubelet[1949]: E0430 13:51:10.276691 1949 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 13:51:10.276737 kubelet[1949]: W0430 13:51:10.276726 1949 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 13:51:10.277105 kubelet[1949]: E0430 13:51:10.277053 1949 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 13:51:10.277375 kubelet[1949]: E0430 13:51:10.277352 1949 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 13:51:10.277502 kubelet[1949]: W0430 13:51:10.277393 1949 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 13:51:10.277502 kubelet[1949]: E0430 13:51:10.277411 1949 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 13:51:10.278145 kubelet[1949]: E0430 13:51:10.278123 1949 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 13:51:10.278145 kubelet[1949]: W0430 13:51:10.278144 1949 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 13:51:10.278270 kubelet[1949]: E0430 13:51:10.278160 1949 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 13:51:10.278795 kubelet[1949]: E0430 13:51:10.278771 1949 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 13:51:10.278795 kubelet[1949]: W0430 13:51:10.278793 1949 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 13:51:10.278926 kubelet[1949]: E0430 13:51:10.278810 1949 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Apr 30 13:51:10.279998 kubelet[1949]: E0430 13:51:10.279946 1949 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Apr 30 13:51:10.279998 kubelet[1949]: W0430 13:51:10.279986 1949 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Apr 30 13:51:10.279998 kubelet[1949]: E0430 13:51:10.280003 1949 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Apr 30 13:51:10.297424 containerd[1523]: time="2025-04-30T13:51:10.297349747Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 13:51:10.298657 containerd[1523]: time="2025-04-30T13:51:10.298592458Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3: active requests=0, bytes read=6859697" Apr 30 13:51:10.301888 containerd[1523]: time="2025-04-30T13:51:10.301829956Z" level=info msg="ImageCreate event name:\"sha256:0ceddb3add2e9955cbb604f666245e259f30b1d6683c428f8748359e83d238a5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 13:51:10.305594 containerd[1523]: time="2025-04-30T13:51:10.305530976Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:eeaa2bb4f9b1aa61adde43ce6dea95eee89291f96963548e108d9a2dfbc5edd1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 13:51:10.306888 containerd[1523]: time="2025-04-30T13:51:10.306548052Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\" with image id \"sha256:0ceddb3add2e9955cbb604f666245e259f30b1d6683c428f8748359e83d238a5\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:eeaa2bb4f9b1aa61adde43ce6dea95eee89291f96963548e108d9a2dfbc5edd1\", size \"6859519\" in 1.553395484s" Apr 30 13:51:10.306888 containerd[1523]: time="2025-04-30T13:51:10.306595906Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.3\" returns image reference \"sha256:0ceddb3add2e9955cbb604f666245e259f30b1d6683c428f8748359e83d238a5\"" Apr 30 13:51:10.310353 containerd[1523]: time="2025-04-30T13:51:10.310312447Z" level=info msg="CreateContainer within sandbox \"ecbd7aaf1945f40f5295deba34cc6655e6aba7646acde11eb1a8dc4f6b36444e\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Apr 30 13:51:10.326039 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1548054890.mount: Deactivated successfully. Apr 30 13:51:10.329864 containerd[1523]: time="2025-04-30T13:51:10.329804072Z" level=info msg="CreateContainer within sandbox \"ecbd7aaf1945f40f5295deba34cc6655e6aba7646acde11eb1a8dc4f6b36444e\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"c8ca09a37274b5bb4a1bb363553ee7140f38d8639f1d86076855b2916a8b9e02\"" Apr 30 13:51:10.332285 containerd[1523]: time="2025-04-30T13:51:10.330727198Z" level=info msg="StartContainer for \"c8ca09a37274b5bb4a1bb363553ee7140f38d8639f1d86076855b2916a8b9e02\"" Apr 30 13:51:10.372496 systemd[1]: Started cri-containerd-c8ca09a37274b5bb4a1bb363553ee7140f38d8639f1d86076855b2916a8b9e02.scope - libcontainer container c8ca09a37274b5bb4a1bb363553ee7140f38d8639f1d86076855b2916a8b9e02. Apr 30 13:51:10.413223 containerd[1523]: time="2025-04-30T13:51:10.413173936Z" level=info msg="StartContainer for \"c8ca09a37274b5bb4a1bb363553ee7140f38d8639f1d86076855b2916a8b9e02\" returns successfully" Apr 30 13:51:10.432102 systemd[1]: cri-containerd-c8ca09a37274b5bb4a1bb363553ee7140f38d8639f1d86076855b2916a8b9e02.scope: Deactivated successfully. 
Apr 30 13:51:10.730961 containerd[1523]: time="2025-04-30T13:51:10.730800127Z" level=info msg="shim disconnected" id=c8ca09a37274b5bb4a1bb363553ee7140f38d8639f1d86076855b2916a8b9e02 namespace=k8s.io Apr 30 13:51:10.730961 containerd[1523]: time="2025-04-30T13:51:10.730899715Z" level=warning msg="cleaning up after shim disconnected" id=c8ca09a37274b5bb4a1bb363553ee7140f38d8639f1d86076855b2916a8b9e02 namespace=k8s.io Apr 30 13:51:10.730961 containerd[1523]: time="2025-04-30T13:51:10.730922237Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 30 13:51:11.010826 kubelet[1949]: E0430 13:51:11.010568 1949 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Apr 30 13:51:11.091993 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c8ca09a37274b5bb4a1bb363553ee7140f38d8639f1d86076855b2916a8b9e02-rootfs.mount: Deactivated successfully. Apr 30 13:51:11.137969 kubelet[1949]: E0430 13:51:11.137333 1949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-zjldz" podUID="2ac90e2f-0177-416e-9891-f89efa94c902" Apr 30 13:51:11.176517 containerd[1523]: time="2025-04-30T13:51:11.176217090Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.3\"" Apr 30 13:51:12.011837 kubelet[1949]: E0430 13:51:12.011734 1949 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Apr 30 13:51:13.013136 kubelet[1949]: E0430 13:51:13.012983 1949 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Apr 30 13:51:13.137523 kubelet[1949]: E0430 13:51:13.136913 1949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-zjldz" podUID="2ac90e2f-0177-416e-9891-f89efa94c902" Apr 30 13:51:14.013562 kubelet[1949]: E0430 13:51:14.013439 1949 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Apr 30 13:51:15.014783 kubelet[1949]: E0430 13:51:15.014354 1949 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Apr 30 13:51:15.139006 kubelet[1949]: E0430 13:51:15.137442 1949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-zjldz" podUID="2ac90e2f-0177-416e-9891-f89efa94c902" Apr 30 13:51:16.015115 kubelet[1949]: E0430 13:51:16.014858 1949 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Apr 30 13:51:16.916192 containerd[1523]: time="2025-04-30T13:51:16.914668273Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 13:51:16.916192 containerd[1523]: time="2025-04-30T13:51:16.915938954Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.3: active requests=0, bytes read=97793683" Apr 30 13:51:16.916192 
containerd[1523]: time="2025-04-30T13:51:16.916086148Z" level=info msg="ImageCreate event name:\"sha256:a140d04be1bc987bae0a1b9159e1dcb85751c448830efbdb3494207cf602b2d9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 13:51:16.920155 containerd[1523]: time="2025-04-30T13:51:16.920114137Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:4505ec8f976470994b6a94295a4dabac0cb98375db050e959a22603e00ada90b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 13:51:16.921396 containerd[1523]: time="2025-04-30T13:51:16.921339375Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.3\" with image id \"sha256:a140d04be1bc987bae0a1b9159e1dcb85751c448830efbdb3494207cf602b2d9\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:4505ec8f976470994b6a94295a4dabac0cb98375db050e959a22603e00ada90b\", size \"99286305\" in 5.745040244s" Apr 30 13:51:16.921493 containerd[1523]: time="2025-04-30T13:51:16.921410288Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.3\" returns image reference \"sha256:a140d04be1bc987bae0a1b9159e1dcb85751c448830efbdb3494207cf602b2d9\"" Apr 30 13:51:16.925868 containerd[1523]: time="2025-04-30T13:51:16.925828728Z" level=info msg="CreateContainer within sandbox \"ecbd7aaf1945f40f5295deba34cc6655e6aba7646acde11eb1a8dc4f6b36444e\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Apr 30 13:51:16.944678 containerd[1523]: time="2025-04-30T13:51:16.944202697Z" level=info msg="CreateContainer within sandbox \"ecbd7aaf1945f40f5295deba34cc6655e6aba7646acde11eb1a8dc4f6b36444e\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"1d6f30a037501c134616a65b4c0913980fedff696974e3d5a9c796fd030ffd2d\"" Apr 30 13:51:16.948865 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4061180222.mount: Deactivated successfully. Apr 30 13:51:16.951023 containerd[1523]: time="2025-04-30T13:51:16.950985669Z" level=info msg="StartContainer for \"1d6f30a037501c134616a65b4c0913980fedff696974e3d5a9c796fd030ffd2d\"" Apr 30 13:51:17.003551 systemd[1]: Started cri-containerd-1d6f30a037501c134616a65b4c0913980fedff696974e3d5a9c796fd030ffd2d.scope - libcontainer container 1d6f30a037501c134616a65b4c0913980fedff696974e3d5a9c796fd030ffd2d. Apr 30 13:51:17.016057 kubelet[1949]: E0430 13:51:17.015795 1949 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Apr 30 13:51:17.048952 containerd[1523]: time="2025-04-30T13:51:17.048881903Z" level=info msg="StartContainer for \"1d6f30a037501c134616a65b4c0913980fedff696974e3d5a9c796fd030ffd2d\" returns successfully" Apr 30 13:51:17.137397 kubelet[1949]: E0430 13:51:17.137275 1949 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-zjldz" podUID="2ac90e2f-0177-416e-9891-f89efa94c902" Apr 30 13:51:17.892446 systemd[1]: cri-containerd-1d6f30a037501c134616a65b4c0913980fedff696974e3d5a9c796fd030ffd2d.scope: Deactivated successfully. Apr 30 13:51:17.893212 systemd[1]: cri-containerd-1d6f30a037501c134616a65b4c0913980fedff696974e3d5a9c796fd030ffd2d.scope: Consumed 597ms CPU time, 174.7M memory peak, 154M written to disk. 
Apr 30 13:51:17.926366 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1d6f30a037501c134616a65b4c0913980fedff696974e3d5a9c796fd030ffd2d-rootfs.mount: Deactivated successfully. Apr 30 13:51:17.993837 kubelet[1949]: I0430 13:51:17.993791 1949 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Apr 30 13:51:18.016648 kubelet[1949]: E0430 13:51:18.016555 1949 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Apr 30 13:51:18.139093 containerd[1523]: time="2025-04-30T13:51:18.138257250Z" level=info msg="shim disconnected" id=1d6f30a037501c134616a65b4c0913980fedff696974e3d5a9c796fd030ffd2d namespace=k8s.io Apr 30 13:51:18.139778 containerd[1523]: time="2025-04-30T13:51:18.139083876Z" level=warning msg="cleaning up after shim disconnected" id=1d6f30a037501c134616a65b4c0913980fedff696974e3d5a9c796fd030ffd2d namespace=k8s.io Apr 30 13:51:18.139778 containerd[1523]: time="2025-04-30T13:51:18.139138433Z" level=info msg="cleaning up dead shim" namespace=k8s.io Apr 30 13:51:18.201823 containerd[1523]: time="2025-04-30T13:51:18.201031581Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.3\"" Apr 30 13:51:18.620601 update_engine[1506]: I20250430 13:51:18.620368 1506 update_attempter.cc:509] Updating boot flags... Apr 30 13:51:18.669502 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (2489) Apr 30 13:51:18.797484 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (2492) Apr 30 13:51:19.017102 kubelet[1949]: E0430 13:51:19.017025 1949 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Apr 30 13:51:19.146428 systemd[1]: Created slice kubepods-besteffort-pod2ac90e2f_0177_416e_9891_f89efa94c902.slice - libcontainer container kubepods-besteffort-pod2ac90e2f_0177_416e_9891_f89efa94c902.slice. 
Apr 30 13:51:19.151071 containerd[1523]: time="2025-04-30T13:51:19.150945710Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-zjldz,Uid:2ac90e2f-0177-416e-9891-f89efa94c902,Namespace:calico-system,Attempt:0,}" Apr 30 13:51:19.239783 containerd[1523]: time="2025-04-30T13:51:19.239532426Z" level=error msg="Failed to destroy network for sandbox \"9c1f03a9d938ad3ecc36d7bf2f515183722dd6878ba8b44c01e7be1fec30f830\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 13:51:19.240736 containerd[1523]: time="2025-04-30T13:51:19.240467228Z" level=error msg="encountered an error cleaning up failed sandbox \"9c1f03a9d938ad3ecc36d7bf2f515183722dd6878ba8b44c01e7be1fec30f830\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 13:51:19.240736 containerd[1523]: time="2025-04-30T13:51:19.240598104Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-zjldz,Uid:2ac90e2f-0177-416e-9891-f89efa94c902,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"9c1f03a9d938ad3ecc36d7bf2f515183722dd6878ba8b44c01e7be1fec30f830\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 13:51:19.242344 kubelet[1949]: E0430 13:51:19.241055 1949 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9c1f03a9d938ad3ecc36d7bf2f515183722dd6878ba8b44c01e7be1fec30f830\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 13:51:19.242344 kubelet[1949]: E0430 13:51:19.241188 1949 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9c1f03a9d938ad3ecc36d7bf2f515183722dd6878ba8b44c01e7be1fec30f830\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-zjldz" Apr 30 13:51:19.242344 kubelet[1949]: E0430 13:51:19.241232 1949 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9c1f03a9d938ad3ecc36d7bf2f515183722dd6878ba8b44c01e7be1fec30f830\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-zjldz" Apr 30 13:51:19.242574 kubelet[1949]: E0430 13:51:19.241334 1949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-zjldz_calico-system(2ac90e2f-0177-416e-9891-f89efa94c902)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-zjldz_calico-system(2ac90e2f-0177-416e-9891-f89efa94c902)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"9c1f03a9d938ad3ecc36d7bf2f515183722dd6878ba8b44c01e7be1fec30f830\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-zjldz" podUID="2ac90e2f-0177-416e-9891-f89efa94c902" Apr 30 13:51:19.242788 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-9c1f03a9d938ad3ecc36d7bf2f515183722dd6878ba8b44c01e7be1fec30f830-shm.mount: Deactivated successfully. Apr 30 13:51:20.018859 kubelet[1949]: E0430 13:51:20.017480 1949 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Apr 30 13:51:20.212323 kubelet[1949]: I0430 13:51:20.212057 1949 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9c1f03a9d938ad3ecc36d7bf2f515183722dd6878ba8b44c01e7be1fec30f830" Apr 30 13:51:20.213453 containerd[1523]: time="2025-04-30T13:51:20.213389117Z" level=info msg="StopPodSandbox for \"9c1f03a9d938ad3ecc36d7bf2f515183722dd6878ba8b44c01e7be1fec30f830\"" Apr 30 13:51:20.214769 containerd[1523]: time="2025-04-30T13:51:20.214727335Z" level=info msg="Ensure that sandbox 9c1f03a9d938ad3ecc36d7bf2f515183722dd6878ba8b44c01e7be1fec30f830 in task-service has been cleanup successfully" Apr 30 13:51:20.217769 containerd[1523]: time="2025-04-30T13:51:20.216110129Z" level=info msg="TearDown network for sandbox \"9c1f03a9d938ad3ecc36d7bf2f515183722dd6878ba8b44c01e7be1fec30f830\" successfully" Apr 30 13:51:20.217769 containerd[1523]: time="2025-04-30T13:51:20.216139487Z" level=info msg="StopPodSandbox for \"9c1f03a9d938ad3ecc36d7bf2f515183722dd6878ba8b44c01e7be1fec30f830\" returns successfully" Apr 30 13:51:20.218572 containerd[1523]: time="2025-04-30T13:51:20.218533085Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-zjldz,Uid:2ac90e2f-0177-416e-9891-f89efa94c902,Namespace:calico-system,Attempt:1,}" Apr 30 13:51:20.220096 systemd[1]: run-netns-cni\x2d9a42d617\x2d5415\x2ded51\x2dc4aa\x2de907bc019778.mount: Deactivated successfully. Apr 30 13:51:20.358617 containerd[1523]: time="2025-04-30T13:51:20.358219360Z" level=error msg="Failed to destroy network for sandbox \"3e66a5a10cbd1f710081f764148374940e7f2f09161d1911af647241f9616981\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 13:51:20.362925 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-3e66a5a10cbd1f710081f764148374940e7f2f09161d1911af647241f9616981-shm.mount: Deactivated successfully. 
Apr 30 13:51:20.363824 containerd[1523]: time="2025-04-30T13:51:20.363777547Z" level=error msg="encountered an error cleaning up failed sandbox \"3e66a5a10cbd1f710081f764148374940e7f2f09161d1911af647241f9616981\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 13:51:20.363916 containerd[1523]: time="2025-04-30T13:51:20.363880237Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-zjldz,Uid:2ac90e2f-0177-416e-9891-f89efa94c902,Namespace:calico-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"3e66a5a10cbd1f710081f764148374940e7f2f09161d1911af647241f9616981\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 13:51:20.365271 kubelet[1949]: E0430 13:51:20.364869 1949 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3e66a5a10cbd1f710081f764148374940e7f2f09161d1911af647241f9616981\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 13:51:20.365271 kubelet[1949]: E0430 13:51:20.365146 1949 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3e66a5a10cbd1f710081f764148374940e7f2f09161d1911af647241f9616981\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-zjldz" Apr 30 13:51:20.365271 kubelet[1949]: E0430 13:51:20.365182 1949 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3e66a5a10cbd1f710081f764148374940e7f2f09161d1911af647241f9616981\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-zjldz" Apr 30 13:51:20.365901 kubelet[1949]: E0430 13:51:20.365548 1949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-zjldz_calico-system(2ac90e2f-0177-416e-9891-f89efa94c902)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-zjldz_calico-system(2ac90e2f-0177-416e-9891-f89efa94c902)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3e66a5a10cbd1f710081f764148374940e7f2f09161d1911af647241f9616981\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-zjldz" podUID="2ac90e2f-0177-416e-9891-f89efa94c902" Apr 30 13:51:21.020280 kubelet[1949]: E0430 13:51:21.018071 1949 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Apr 30 13:51:21.225860 kubelet[1949]: I0430 13:51:21.225793 1949 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="3e66a5a10cbd1f710081f764148374940e7f2f09161d1911af647241f9616981" Apr 30 13:51:21.228175 containerd[1523]: time="2025-04-30T13:51:21.228086364Z" level=info msg="StopPodSandbox for \"3e66a5a10cbd1f710081f764148374940e7f2f09161d1911af647241f9616981\"" Apr 30 13:51:21.229819 containerd[1523]: time="2025-04-30T13:51:21.229573037Z" level=info msg="Ensure that sandbox 3e66a5a10cbd1f710081f764148374940e7f2f09161d1911af647241f9616981 in task-service has been cleanup successfully" Apr 30 13:51:21.230076 containerd[1523]: time="2025-04-30T13:51:21.230047328Z" level=info msg="TearDown network for sandbox \"3e66a5a10cbd1f710081f764148374940e7f2f09161d1911af647241f9616981\" successfully" Apr 30 13:51:21.230397 containerd[1523]: time="2025-04-30T13:51:21.230209402Z" level=info msg="StopPodSandbox for \"3e66a5a10cbd1f710081f764148374940e7f2f09161d1911af647241f9616981\" returns successfully" Apr 30 13:51:21.235419 containerd[1523]: time="2025-04-30T13:51:21.233929516Z" level=info msg="StopPodSandbox for \"9c1f03a9d938ad3ecc36d7bf2f515183722dd6878ba8b44c01e7be1fec30f830\"" Apr 30 13:51:21.235419 containerd[1523]: time="2025-04-30T13:51:21.234239001Z" level=info msg="TearDown network for sandbox \"9c1f03a9d938ad3ecc36d7bf2f515183722dd6878ba8b44c01e7be1fec30f830\" successfully" Apr 30 13:51:21.235419 containerd[1523]: time="2025-04-30T13:51:21.234361774Z" level=info msg="StopPodSandbox for \"9c1f03a9d938ad3ecc36d7bf2f515183722dd6878ba8b44c01e7be1fec30f830\" returns successfully" Apr 30 13:51:21.235367 systemd[1]: run-netns-cni\x2d8e9e8d58\x2d43be\x2dc61d\x2defda\x2d4dde2d9ffc8a.mount: Deactivated successfully. Apr 30 13:51:21.239294 containerd[1523]: time="2025-04-30T13:51:21.238761748Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-zjldz,Uid:2ac90e2f-0177-416e-9891-f89efa94c902,Namespace:calico-system,Attempt:2,}" Apr 30 13:51:21.398804 containerd[1523]: time="2025-04-30T13:51:21.398476895Z" level=error msg="Failed to destroy network for sandbox \"41c79e6807c93a65a01b5ef1f80a8a291b44ab3ee7a60a4be3efa65a15a4abbf\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 13:51:21.402448 containerd[1523]: time="2025-04-30T13:51:21.402390967Z" level=error msg="encountered an error cleaning up failed sandbox \"41c79e6807c93a65a01b5ef1f80a8a291b44ab3ee7a60a4be3efa65a15a4abbf\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 13:51:21.402693 containerd[1523]: time="2025-04-30T13:51:21.402644729Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-zjldz,Uid:2ac90e2f-0177-416e-9891-f89efa94c902,Namespace:calico-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"41c79e6807c93a65a01b5ef1f80a8a291b44ab3ee7a60a4be3efa65a15a4abbf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 13:51:21.403755 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-41c79e6807c93a65a01b5ef1f80a8a291b44ab3ee7a60a4be3efa65a15a4abbf-shm.mount: Deactivated successfully. 
Apr 30 13:51:21.405365 kubelet[1949]: E0430 13:51:21.404681 1949 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"41c79e6807c93a65a01b5ef1f80a8a291b44ab3ee7a60a4be3efa65a15a4abbf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 13:51:21.405365 kubelet[1949]: E0430 13:51:21.404903 1949 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"41c79e6807c93a65a01b5ef1f80a8a291b44ab3ee7a60a4be3efa65a15a4abbf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-zjldz" Apr 30 13:51:21.405365 kubelet[1949]: E0430 13:51:21.404980 1949 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"41c79e6807c93a65a01b5ef1f80a8a291b44ab3ee7a60a4be3efa65a15a4abbf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-zjldz" Apr 30 13:51:21.405811 kubelet[1949]: E0430 13:51:21.405689 1949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-zjldz_calico-system(2ac90e2f-0177-416e-9891-f89efa94c902)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-zjldz_calico-system(2ac90e2f-0177-416e-9891-f89efa94c902)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"41c79e6807c93a65a01b5ef1f80a8a291b44ab3ee7a60a4be3efa65a15a4abbf\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-zjldz" podUID="2ac90e2f-0177-416e-9891-f89efa94c902" Apr 30 13:51:22.018434 kubelet[1949]: E0430 13:51:22.018348 1949 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Apr 30 13:51:22.234767 kubelet[1949]: I0430 13:51:22.234648 1949 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="41c79e6807c93a65a01b5ef1f80a8a291b44ab3ee7a60a4be3efa65a15a4abbf" Apr 30 13:51:22.236076 containerd[1523]: time="2025-04-30T13:51:22.236028886Z" level=info msg="StopPodSandbox for \"41c79e6807c93a65a01b5ef1f80a8a291b44ab3ee7a60a4be3efa65a15a4abbf\"" Apr 30 13:51:22.237533 containerd[1523]: time="2025-04-30T13:51:22.237500704Z" level=info msg="Ensure that sandbox 41c79e6807c93a65a01b5ef1f80a8a291b44ab3ee7a60a4be3efa65a15a4abbf in task-service has been cleanup successfully" Apr 30 13:51:22.240354 containerd[1523]: time="2025-04-30T13:51:22.240323332Z" level=info msg="TearDown network for sandbox \"41c79e6807c93a65a01b5ef1f80a8a291b44ab3ee7a60a4be3efa65a15a4abbf\" successfully" Apr 30 13:51:22.240532 containerd[1523]: time="2025-04-30T13:51:22.240473329Z" level=info msg="StopPodSandbox for \"41c79e6807c93a65a01b5ef1f80a8a291b44ab3ee7a60a4be3efa65a15a4abbf\" returns successfully" Apr 30 13:51:22.240566 systemd[1]: run-netns-cni\x2d41850b57\x2d38f8\x2de175\x2d5658\x2d5cc95be5dba0.mount: Deactivated 
successfully. Apr 30 13:51:22.241418 containerd[1523]: time="2025-04-30T13:51:22.241387632Z" level=info msg="StopPodSandbox for \"3e66a5a10cbd1f710081f764148374940e7f2f09161d1911af647241f9616981\"" Apr 30 13:51:22.241675 containerd[1523]: time="2025-04-30T13:51:22.241647785Z" level=info msg="TearDown network for sandbox \"3e66a5a10cbd1f710081f764148374940e7f2f09161d1911af647241f9616981\" successfully" Apr 30 13:51:22.242378 containerd[1523]: time="2025-04-30T13:51:22.242284355Z" level=info msg="StopPodSandbox for \"3e66a5a10cbd1f710081f764148374940e7f2f09161d1911af647241f9616981\" returns successfully" Apr 30 13:51:22.243211 containerd[1523]: time="2025-04-30T13:51:22.242989052Z" level=info msg="StopPodSandbox for \"9c1f03a9d938ad3ecc36d7bf2f515183722dd6878ba8b44c01e7be1fec30f830\"" Apr 30 13:51:22.243211 containerd[1523]: time="2025-04-30T13:51:22.243096651Z" level=info msg="TearDown network for sandbox \"9c1f03a9d938ad3ecc36d7bf2f515183722dd6878ba8b44c01e7be1fec30f830\" successfully" Apr 30 13:51:22.243211 containerd[1523]: time="2025-04-30T13:51:22.243117036Z" level=info msg="StopPodSandbox for \"9c1f03a9d938ad3ecc36d7bf2f515183722dd6878ba8b44c01e7be1fec30f830\" returns successfully" Apr 30 13:51:22.245687 containerd[1523]: time="2025-04-30T13:51:22.244101074Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-zjldz,Uid:2ac90e2f-0177-416e-9891-f89efa94c902,Namespace:calico-system,Attempt:3,}" Apr 30 13:51:22.403811 containerd[1523]: time="2025-04-30T13:51:22.403629514Z" level=error msg="Failed to destroy network for sandbox \"d771dfdfbb8401cef38150c82c665425c4e5471fe6f6b3a035260b7d565ec4c9\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 13:51:22.407321 containerd[1523]: time="2025-04-30T13:51:22.406784228Z" level=error msg="encountered an error cleaning up failed sandbox \"d771dfdfbb8401cef38150c82c665425c4e5471fe6f6b3a035260b7d565ec4c9\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 13:51:22.407321 containerd[1523]: time="2025-04-30T13:51:22.406883782Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-zjldz,Uid:2ac90e2f-0177-416e-9891-f89efa94c902,Namespace:calico-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"d771dfdfbb8401cef38150c82c665425c4e5471fe6f6b3a035260b7d565ec4c9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 13:51:22.407728 kubelet[1949]: E0430 13:51:22.407204 1949 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d771dfdfbb8401cef38150c82c665425c4e5471fe6f6b3a035260b7d565ec4c9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 13:51:22.408830 kubelet[1949]: E0430 13:51:22.407734 1949 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d771dfdfbb8401cef38150c82c665425c4e5471fe6f6b3a035260b7d565ec4c9\": plugin 
type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-zjldz" Apr 30 13:51:22.408830 kubelet[1949]: E0430 13:51:22.407772 1949 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d771dfdfbb8401cef38150c82c665425c4e5471fe6f6b3a035260b7d565ec4c9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-zjldz" Apr 30 13:51:22.408830 kubelet[1949]: E0430 13:51:22.407860 1949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-zjldz_calico-system(2ac90e2f-0177-416e-9891-f89efa94c902)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-zjldz_calico-system(2ac90e2f-0177-416e-9891-f89efa94c902)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d771dfdfbb8401cef38150c82c665425c4e5471fe6f6b3a035260b7d565ec4c9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-zjldz" podUID="2ac90e2f-0177-416e-9891-f89efa94c902" Apr 30 13:51:22.408568 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-d771dfdfbb8401cef38150c82c665425c4e5471fe6f6b3a035260b7d565ec4c9-shm.mount: Deactivated successfully. Apr 30 13:51:23.019707 kubelet[1949]: E0430 13:51:23.019492 1949 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Apr 30 13:51:23.060912 systemd[1]: Created slice kubepods-besteffort-pod322528ec_993b_406b_907f_286ef9aa3c04.slice - libcontainer container kubepods-besteffort-pod322528ec_993b_406b_907f_286ef9aa3c04.slice. 
Apr 30 13:51:23.203215 kubelet[1949]: I0430 13:51:23.203144 1949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-whvbj\" (UniqueName: \"kubernetes.io/projected/322528ec-993b-406b-907f-286ef9aa3c04-kube-api-access-whvbj\") pod \"nginx-deployment-8587fbcb89-mmvl2\" (UID: \"322528ec-993b-406b-907f-286ef9aa3c04\") " pod="default/nginx-deployment-8587fbcb89-mmvl2" Apr 30 13:51:23.243912 kubelet[1949]: I0430 13:51:23.243621 1949 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d771dfdfbb8401cef38150c82c665425c4e5471fe6f6b3a035260b7d565ec4c9" Apr 30 13:51:23.246280 containerd[1523]: time="2025-04-30T13:51:23.245532597Z" level=info msg="StopPodSandbox for \"d771dfdfbb8401cef38150c82c665425c4e5471fe6f6b3a035260b7d565ec4c9\"" Apr 30 13:51:23.246280 containerd[1523]: time="2025-04-30T13:51:23.245883895Z" level=info msg="Ensure that sandbox d771dfdfbb8401cef38150c82c665425c4e5471fe6f6b3a035260b7d565ec4c9 in task-service has been cleanup successfully" Apr 30 13:51:23.248909 containerd[1523]: time="2025-04-30T13:51:23.248875713Z" level=info msg="TearDown network for sandbox \"d771dfdfbb8401cef38150c82c665425c4e5471fe6f6b3a035260b7d565ec4c9\" successfully" Apr 30 13:51:23.248909 containerd[1523]: time="2025-04-30T13:51:23.248905543Z" level=info msg="StopPodSandbox for \"d771dfdfbb8401cef38150c82c665425c4e5471fe6f6b3a035260b7d565ec4c9\" returns successfully" Apr 30 13:51:23.250345 containerd[1523]: time="2025-04-30T13:51:23.249406677Z" level=info msg="StopPodSandbox for \"41c79e6807c93a65a01b5ef1f80a8a291b44ab3ee7a60a4be3efa65a15a4abbf\"" Apr 30 13:51:23.250345 containerd[1523]: time="2025-04-30T13:51:23.249527645Z" level=info msg="TearDown network for sandbox \"41c79e6807c93a65a01b5ef1f80a8a291b44ab3ee7a60a4be3efa65a15a4abbf\" successfully" Apr 30 13:51:23.250345 containerd[1523]: time="2025-04-30T13:51:23.249547192Z" level=info msg="StopPodSandbox for \"41c79e6807c93a65a01b5ef1f80a8a291b44ab3ee7a60a4be3efa65a15a4abbf\" returns successfully" Apr 30 13:51:23.250673 containerd[1523]: time="2025-04-30T13:51:23.250447242Z" level=info msg="StopPodSandbox for \"3e66a5a10cbd1f710081f764148374940e7f2f09161d1911af647241f9616981\"" Apr 30 13:51:23.250673 containerd[1523]: time="2025-04-30T13:51:23.250577487Z" level=info msg="TearDown network for sandbox \"3e66a5a10cbd1f710081f764148374940e7f2f09161d1911af647241f9616981\" successfully" Apr 30 13:51:23.250673 containerd[1523]: time="2025-04-30T13:51:23.250598818Z" level=info msg="StopPodSandbox for \"3e66a5a10cbd1f710081f764148374940e7f2f09161d1911af647241f9616981\" returns successfully" Apr 30 13:51:23.250955 systemd[1]: run-netns-cni\x2d14fdd3bb\x2dd12c\x2d306a\x2d3e9b\x2d45e5084a502e.mount: Deactivated successfully. 
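Each "RunPodSandbox for &PodSandboxMetadata{...}" / "RunPodSandbox from runtime service failed" pair above is the kubelet driving containerd over the CRI gRPC API, and the Attempt field in the metadata is the counter that keeps climbing in this log. A bare-bones sketch of such a call against the containerd CRI socket follows; the socket path, the timeout, and the stripped-down PodSandboxConfig are assumptions for illustration, since a real request carries far more configuration than the metadata alone:

```go
// Bare-bones CRI RunPodSandbox call, approximating what the kubelet/containerd
// exchange in the log looks like at the API level. Socket path, timeout and
// the minimal PodSandboxConfig are assumptions for illustration only.
package main

import (
	"context"
	"fmt"
	"log"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatalf("dial containerd: %v", err)
	}
	defer conn.Close()

	client := runtimeapi.NewRuntimeServiceClient(conn)
	ctx, cancel := context.WithTimeout(context.Background(), 2*time.Minute)
	defer cancel()

	resp, err := client.RunPodSandbox(ctx, &runtimeapi.RunPodSandboxRequest{
		Config: &runtimeapi.PodSandboxConfig{
			Metadata: &runtimeapi.PodSandboxMetadata{
				Name:      "csi-node-driver-zjldz",
				Uid:       "2ac90e2f-0177-416e-9891-f89efa94c902",
				Namespace: "calico-system",
				Attempt:   2, // the counter that keeps incrementing in the log
			},
		},
	})
	if err != nil {
		// With the CNI plugin broken as above, this is where the
		// "failed to setup network for sandbox" error comes back.
		log.Fatalf("RunPodSandbox: %v", err)
	}
	fmt.Println("sandbox id:", resp.PodSandboxId)
}
```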
Apr 30 13:51:23.255064 containerd[1523]: time="2025-04-30T13:51:23.254725323Z" level=info msg="StopPodSandbox for \"9c1f03a9d938ad3ecc36d7bf2f515183722dd6878ba8b44c01e7be1fec30f830\"" Apr 30 13:51:23.255064 containerd[1523]: time="2025-04-30T13:51:23.254835311Z" level=info msg="TearDown network for sandbox \"9c1f03a9d938ad3ecc36d7bf2f515183722dd6878ba8b44c01e7be1fec30f830\" successfully" Apr 30 13:51:23.255064 containerd[1523]: time="2025-04-30T13:51:23.254855390Z" level=info msg="StopPodSandbox for \"9c1f03a9d938ad3ecc36d7bf2f515183722dd6878ba8b44c01e7be1fec30f830\" returns successfully" Apr 30 13:51:23.256273 containerd[1523]: time="2025-04-30T13:51:23.255946727Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-zjldz,Uid:2ac90e2f-0177-416e-9891-f89efa94c902,Namespace:calico-system,Attempt:4,}" Apr 30 13:51:23.367897 containerd[1523]: time="2025-04-30T13:51:23.367706831Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-mmvl2,Uid:322528ec-993b-406b-907f-286ef9aa3c04,Namespace:default,Attempt:0,}" Apr 30 13:51:23.429222 containerd[1523]: time="2025-04-30T13:51:23.428972304Z" level=error msg="Failed to destroy network for sandbox \"614bc0708b949fb00c15df5cb9340340c816f249e04be2448ca3a58d074d6160\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 13:51:23.430346 containerd[1523]: time="2025-04-30T13:51:23.430128123Z" level=error msg="encountered an error cleaning up failed sandbox \"614bc0708b949fb00c15df5cb9340340c816f249e04be2448ca3a58d074d6160\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 13:51:23.430346 containerd[1523]: time="2025-04-30T13:51:23.430228660Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-zjldz,Uid:2ac90e2f-0177-416e-9891-f89efa94c902,Namespace:calico-system,Attempt:4,} failed, error" error="failed to setup network for sandbox \"614bc0708b949fb00c15df5cb9340340c816f249e04be2448ca3a58d074d6160\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 13:51:23.430816 kubelet[1949]: E0430 13:51:23.430647 1949 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"614bc0708b949fb00c15df5cb9340340c816f249e04be2448ca3a58d074d6160\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 13:51:23.430816 kubelet[1949]: E0430 13:51:23.430771 1949 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"614bc0708b949fb00c15df5cb9340340c816f249e04be2448ca3a58d074d6160\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-zjldz" Apr 30 13:51:23.430816 kubelet[1949]: E0430 13:51:23.430808 1949 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = 
failed to setup network for sandbox \"614bc0708b949fb00c15df5cb9340340c816f249e04be2448ca3a58d074d6160\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-zjldz" Apr 30 13:51:23.431516 kubelet[1949]: E0430 13:51:23.430927 1949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-zjldz_calico-system(2ac90e2f-0177-416e-9891-f89efa94c902)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-zjldz_calico-system(2ac90e2f-0177-416e-9891-f89efa94c902)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"614bc0708b949fb00c15df5cb9340340c816f249e04be2448ca3a58d074d6160\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-zjldz" podUID="2ac90e2f-0177-416e-9891-f89efa94c902" Apr 30 13:51:23.538291 containerd[1523]: time="2025-04-30T13:51:23.537968896Z" level=error msg="Failed to destroy network for sandbox \"860edaba9b516d8008c4a48eb9d70aa925fca42fb644d29f1b0b448042ff0b74\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 13:51:23.539053 containerd[1523]: time="2025-04-30T13:51:23.538837647Z" level=error msg="encountered an error cleaning up failed sandbox \"860edaba9b516d8008c4a48eb9d70aa925fca42fb644d29f1b0b448042ff0b74\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 13:51:23.539053 containerd[1523]: time="2025-04-30T13:51:23.538924499Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-mmvl2,Uid:322528ec-993b-406b-907f-286ef9aa3c04,Namespace:default,Attempt:0,} failed, error" error="failed to setup network for sandbox \"860edaba9b516d8008c4a48eb9d70aa925fca42fb644d29f1b0b448042ff0b74\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 13:51:23.539763 kubelet[1949]: E0430 13:51:23.539672 1949 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"860edaba9b516d8008c4a48eb9d70aa925fca42fb644d29f1b0b448042ff0b74\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 13:51:23.539866 kubelet[1949]: E0430 13:51:23.539794 1949 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"860edaba9b516d8008c4a48eb9d70aa925fca42fb644d29f1b0b448042ff0b74\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-8587fbcb89-mmvl2" Apr 30 13:51:23.539866 kubelet[1949]: E0430 13:51:23.539828 1949 kuberuntime_manager.go:1168] "CreatePodSandbox for 
pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"860edaba9b516d8008c4a48eb9d70aa925fca42fb644d29f1b0b448042ff0b74\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-8587fbcb89-mmvl2" Apr 30 13:51:23.539994 kubelet[1949]: E0430 13:51:23.539897 1949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-8587fbcb89-mmvl2_default(322528ec-993b-406b-907f-286ef9aa3c04)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-8587fbcb89-mmvl2_default(322528ec-993b-406b-907f-286ef9aa3c04)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"860edaba9b516d8008c4a48eb9d70aa925fca42fb644d29f1b0b448042ff0b74\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-8587fbcb89-mmvl2" podUID="322528ec-993b-406b-907f-286ef9aa3c04" Apr 30 13:51:24.005654 kubelet[1949]: E0430 13:51:24.004314 1949 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Apr 30 13:51:24.020742 kubelet[1949]: E0430 13:51:24.020133 1949 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Apr 30 13:51:24.258369 kubelet[1949]: I0430 13:51:24.257564 1949 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="614bc0708b949fb00c15df5cb9340340c816f249e04be2448ca3a58d074d6160" Apr 30 13:51:24.260053 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-860edaba9b516d8008c4a48eb9d70aa925fca42fb644d29f1b0b448042ff0b74-shm.mount: Deactivated successfully. Apr 30 13:51:24.260279 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-614bc0708b949fb00c15df5cb9340340c816f249e04be2448ca3a58d074d6160-shm.mount: Deactivated successfully. Apr 30 13:51:24.268480 containerd[1523]: time="2025-04-30T13:51:24.266493202Z" level=info msg="StopPodSandbox for \"614bc0708b949fb00c15df5cb9340340c816f249e04be2448ca3a58d074d6160\"" Apr 30 13:51:24.268480 containerd[1523]: time="2025-04-30T13:51:24.267140208Z" level=info msg="Ensure that sandbox 614bc0708b949fb00c15df5cb9340340c816f249e04be2448ca3a58d074d6160 in task-service has been cleanup successfully" Apr 30 13:51:24.272306 systemd[1]: run-netns-cni\x2dfccb6c69\x2debba\x2da4ad\x2db3a6\x2defbfb20de210.mount: Deactivated successfully. 
Apr 30 13:51:24.274169 containerd[1523]: time="2025-04-30T13:51:24.272821378Z" level=info msg="TearDown network for sandbox \"614bc0708b949fb00c15df5cb9340340c816f249e04be2448ca3a58d074d6160\" successfully" Apr 30 13:51:24.274169 containerd[1523]: time="2025-04-30T13:51:24.272848417Z" level=info msg="StopPodSandbox for \"614bc0708b949fb00c15df5cb9340340c816f249e04be2448ca3a58d074d6160\" returns successfully" Apr 30 13:51:24.276017 containerd[1523]: time="2025-04-30T13:51:24.274915876Z" level=info msg="StopPodSandbox for \"d771dfdfbb8401cef38150c82c665425c4e5471fe6f6b3a035260b7d565ec4c9\"" Apr 30 13:51:24.276017 containerd[1523]: time="2025-04-30T13:51:24.275071837Z" level=info msg="TearDown network for sandbox \"d771dfdfbb8401cef38150c82c665425c4e5471fe6f6b3a035260b7d565ec4c9\" successfully" Apr 30 13:51:24.276017 containerd[1523]: time="2025-04-30T13:51:24.275091816Z" level=info msg="StopPodSandbox for \"d771dfdfbb8401cef38150c82c665425c4e5471fe6f6b3a035260b7d565ec4c9\" returns successfully" Apr 30 13:51:24.277110 kubelet[1949]: I0430 13:51:24.275278 1949 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="860edaba9b516d8008c4a48eb9d70aa925fca42fb644d29f1b0b448042ff0b74" Apr 30 13:51:24.278474 containerd[1523]: time="2025-04-30T13:51:24.278178537Z" level=info msg="StopPodSandbox for \"860edaba9b516d8008c4a48eb9d70aa925fca42fb644d29f1b0b448042ff0b74\"" Apr 30 13:51:24.278880 containerd[1523]: time="2025-04-30T13:51:24.278842097Z" level=info msg="Ensure that sandbox 860edaba9b516d8008c4a48eb9d70aa925fca42fb644d29f1b0b448042ff0b74 in task-service has been cleanup successfully" Apr 30 13:51:24.279356 containerd[1523]: time="2025-04-30T13:51:24.279238350Z" level=info msg="TearDown network for sandbox \"860edaba9b516d8008c4a48eb9d70aa925fca42fb644d29f1b0b448042ff0b74\" successfully" Apr 30 13:51:24.279356 containerd[1523]: time="2025-04-30T13:51:24.279304203Z" level=info msg="StopPodSandbox for \"860edaba9b516d8008c4a48eb9d70aa925fca42fb644d29f1b0b448042ff0b74\" returns successfully" Apr 30 13:51:24.279694 containerd[1523]: time="2025-04-30T13:51:24.279578333Z" level=info msg="StopPodSandbox for \"41c79e6807c93a65a01b5ef1f80a8a291b44ab3ee7a60a4be3efa65a15a4abbf\"" Apr 30 13:51:24.280038 containerd[1523]: time="2025-04-30T13:51:24.279857839Z" level=info msg="TearDown network for sandbox \"41c79e6807c93a65a01b5ef1f80a8a291b44ab3ee7a60a4be3efa65a15a4abbf\" successfully" Apr 30 13:51:24.280038 containerd[1523]: time="2025-04-30T13:51:24.279949993Z" level=info msg="StopPodSandbox for \"41c79e6807c93a65a01b5ef1f80a8a291b44ab3ee7a60a4be3efa65a15a4abbf\" returns successfully" Apr 30 13:51:24.284016 containerd[1523]: time="2025-04-30T13:51:24.283787911Z" level=info msg="StopPodSandbox for \"3e66a5a10cbd1f710081f764148374940e7f2f09161d1911af647241f9616981\"" Apr 30 13:51:24.284016 containerd[1523]: time="2025-04-30T13:51:24.283929378Z" level=info msg="TearDown network for sandbox \"3e66a5a10cbd1f710081f764148374940e7f2f09161d1911af647241f9616981\" successfully" Apr 30 13:51:24.284016 containerd[1523]: time="2025-04-30T13:51:24.283950191Z" level=info msg="StopPodSandbox for \"3e66a5a10cbd1f710081f764148374940e7f2f09161d1911af647241f9616981\" returns successfully" Apr 30 13:51:24.284668 systemd[1]: run-netns-cni\x2d9518d535\x2db5a6\x2d667d\x2d3b99\x2d7006fd0610ef.mount: Deactivated successfully. 
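One pattern worth calling out in the cleanup above: before each new attempt, every sandbox ID that has failed so far (plus the original 9c1f03a9... sandbox) is stopped and torn down again, newest first, so the StopPodSandbox chain grows by one entry per retry. A rough sketch of that bookkeeping, where stopSandbox and runSandbox are hypothetical stand-ins for the real CRI calls:

```go
// Rough sketch of the retry bookkeeping visible above: every previously failed
// sandbox ID is torn down again before the next attempt, and the attempt
// counter grows. stopSandbox and runSandbox are hypothetical stand-ins.
package main

import (
	"errors"
	"fmt"
)

var errCNI = errors.New(`plugin type="calico" failed (add): stat /var/lib/calico/nodename: no such file or directory`)

func stopSandbox(id string) { fmt.Printf("StopPodSandbox for %q\n", id) }

func runSandbox(attempt uint32) (string, error) {
	fmt.Printf("RunPodSandbox attempt=%d\n", attempt)
	return fmt.Sprintf("sandbox-%d", attempt), errCNI // networking never comes up
}

func main() {
	failed := []string{"9c1f03a9d938ad3ecc36d7bf2f515183722dd6878ba8b44c01e7be1fec30f830"}
	for attempt := uint32(1); attempt <= 7; attempt++ {
		// Tear down everything that failed before, newest first,
		// mirroring the StopPodSandbox chains in the log.
		for i := len(failed) - 1; i >= 0; i-- {
			stopSandbox(failed[i])
		}
		id, err := runSandbox(attempt)
		if err != nil {
			failed = append(failed, id)
			continue
		}
		fmt.Println("sandbox ready:", id)
		return
	}
	fmt.Println("still failing after 7 attempts; the pod stays in ContainerCreating")
}
```

The real kubelet keeps retrying indefinitely with backoff; the fixed loop bound here only mirrors the seven attempts visible in this excerpt.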
Apr 30 13:51:24.285444 containerd[1523]: time="2025-04-30T13:51:24.285375494Z" level=info msg="StopPodSandbox for \"9c1f03a9d938ad3ecc36d7bf2f515183722dd6878ba8b44c01e7be1fec30f830\"" Apr 30 13:51:24.286969 containerd[1523]: time="2025-04-30T13:51:24.286860178Z" level=info msg="TearDown network for sandbox \"9c1f03a9d938ad3ecc36d7bf2f515183722dd6878ba8b44c01e7be1fec30f830\" successfully" Apr 30 13:51:24.286969 containerd[1523]: time="2025-04-30T13:51:24.286904990Z" level=info msg="StopPodSandbox for \"9c1f03a9d938ad3ecc36d7bf2f515183722dd6878ba8b44c01e7be1fec30f830\" returns successfully" Apr 30 13:51:24.287432 containerd[1523]: time="2025-04-30T13:51:24.286751427Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-mmvl2,Uid:322528ec-993b-406b-907f-286ef9aa3c04,Namespace:default,Attempt:1,}" Apr 30 13:51:24.289022 containerd[1523]: time="2025-04-30T13:51:24.288953408Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-zjldz,Uid:2ac90e2f-0177-416e-9891-f89efa94c902,Namespace:calico-system,Attempt:5,}" Apr 30 13:51:24.461513 containerd[1523]: time="2025-04-30T13:51:24.461335560Z" level=error msg="Failed to destroy network for sandbox \"9cfa30319da15dc4e493619ffb0863ded08f7e93103bf0dd7e75433d1e033334\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 13:51:24.463134 containerd[1523]: time="2025-04-30T13:51:24.462029736Z" level=error msg="encountered an error cleaning up failed sandbox \"9cfa30319da15dc4e493619ffb0863ded08f7e93103bf0dd7e75433d1e033334\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 13:51:24.463134 containerd[1523]: time="2025-04-30T13:51:24.462130602Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-mmvl2,Uid:322528ec-993b-406b-907f-286ef9aa3c04,Namespace:default,Attempt:1,} failed, error" error="failed to setup network for sandbox \"9cfa30319da15dc4e493619ffb0863ded08f7e93103bf0dd7e75433d1e033334\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 13:51:24.463439 kubelet[1949]: E0430 13:51:24.462481 1949 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9cfa30319da15dc4e493619ffb0863ded08f7e93103bf0dd7e75433d1e033334\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 13:51:24.463439 kubelet[1949]: E0430 13:51:24.462607 1949 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9cfa30319da15dc4e493619ffb0863ded08f7e93103bf0dd7e75433d1e033334\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-8587fbcb89-mmvl2" Apr 30 13:51:24.463439 kubelet[1949]: E0430 13:51:24.462662 1949 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = 
Unknown desc = failed to setup network for sandbox \"9cfa30319da15dc4e493619ffb0863ded08f7e93103bf0dd7e75433d1e033334\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-8587fbcb89-mmvl2" Apr 30 13:51:24.463634 kubelet[1949]: E0430 13:51:24.462750 1949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-8587fbcb89-mmvl2_default(322528ec-993b-406b-907f-286ef9aa3c04)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-8587fbcb89-mmvl2_default(322528ec-993b-406b-907f-286ef9aa3c04)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9cfa30319da15dc4e493619ffb0863ded08f7e93103bf0dd7e75433d1e033334\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-8587fbcb89-mmvl2" podUID="322528ec-993b-406b-907f-286ef9aa3c04" Apr 30 13:51:24.492365 containerd[1523]: time="2025-04-30T13:51:24.492284152Z" level=error msg="Failed to destroy network for sandbox \"71e56577daadeb2dcb129039e33a89740d1cf3f492859e2751e62aa86dc2e05d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 13:51:24.492828 containerd[1523]: time="2025-04-30T13:51:24.492783139Z" level=error msg="encountered an error cleaning up failed sandbox \"71e56577daadeb2dcb129039e33a89740d1cf3f492859e2751e62aa86dc2e05d\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 13:51:24.492904 containerd[1523]: time="2025-04-30T13:51:24.492866750Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-zjldz,Uid:2ac90e2f-0177-416e-9891-f89efa94c902,Namespace:calico-system,Attempt:5,} failed, error" error="failed to setup network for sandbox \"71e56577daadeb2dcb129039e33a89740d1cf3f492859e2751e62aa86dc2e05d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 13:51:24.493565 kubelet[1949]: E0430 13:51:24.493479 1949 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"71e56577daadeb2dcb129039e33a89740d1cf3f492859e2751e62aa86dc2e05d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 13:51:24.494462 kubelet[1949]: E0430 13:51:24.493849 1949 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"71e56577daadeb2dcb129039e33a89740d1cf3f492859e2751e62aa86dc2e05d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-zjldz" Apr 30 13:51:24.494462 kubelet[1949]: E0430 13:51:24.493975 1949 
kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"71e56577daadeb2dcb129039e33a89740d1cf3f492859e2751e62aa86dc2e05d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-zjldz" Apr 30 13:51:24.494462 kubelet[1949]: E0430 13:51:24.494081 1949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-zjldz_calico-system(2ac90e2f-0177-416e-9891-f89efa94c902)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-zjldz_calico-system(2ac90e2f-0177-416e-9891-f89efa94c902)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"71e56577daadeb2dcb129039e33a89740d1cf3f492859e2751e62aa86dc2e05d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-zjldz" podUID="2ac90e2f-0177-416e-9891-f89efa94c902" Apr 30 13:51:25.021560 kubelet[1949]: E0430 13:51:25.021469 1949 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Apr 30 13:51:25.250713 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-9cfa30319da15dc4e493619ffb0863ded08f7e93103bf0dd7e75433d1e033334-shm.mount: Deactivated successfully. Apr 30 13:51:25.283846 kubelet[1949]: I0430 13:51:25.282445 1949 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="71e56577daadeb2dcb129039e33a89740d1cf3f492859e2751e62aa86dc2e05d" Apr 30 13:51:25.286969 containerd[1523]: time="2025-04-30T13:51:25.283650534Z" level=info msg="StopPodSandbox for \"71e56577daadeb2dcb129039e33a89740d1cf3f492859e2751e62aa86dc2e05d\"" Apr 30 13:51:25.286969 containerd[1523]: time="2025-04-30T13:51:25.284160246Z" level=info msg="Ensure that sandbox 71e56577daadeb2dcb129039e33a89740d1cf3f492859e2751e62aa86dc2e05d in task-service has been cleanup successfully" Apr 30 13:51:25.286969 containerd[1523]: time="2025-04-30T13:51:25.285479967Z" level=info msg="TearDown network for sandbox \"71e56577daadeb2dcb129039e33a89740d1cf3f492859e2751e62aa86dc2e05d\" successfully" Apr 30 13:51:25.286969 containerd[1523]: time="2025-04-30T13:51:25.285504124Z" level=info msg="StopPodSandbox for \"71e56577daadeb2dcb129039e33a89740d1cf3f492859e2751e62aa86dc2e05d\" returns successfully" Apr 30 13:51:25.286925 systemd[1]: run-netns-cni\x2da7ad53fd\x2dbf7c\x2d6993\x2d7950\x2dbb32be5cbdfe.mount: Deactivated successfully. 
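The run-netns-cni\x2d....mount and run-containerd-...-shm.mount units that systemd deactivates after each failure are transient mount units for the per-sandbox network namespace and shm mounts. In such unit names, bare "-" acts as the path separator and "\x2d" escapes a literal "-" inside a component. A small decoder illustrating the scheme; from a shell, systemd-escape --unescape --path does the same once the .mount suffix is stripped:

```go
package main

import (
	"fmt"
	"log"
	"strconv"
	"strings"
)

// decodeMountUnit turns a systemd mount unit name as printed in the journal
// back into the path it mounts: "-" separates path components and "\x2d"
// escapes a literal "-". Illustrative only.
func decodeMountUnit(unit string) (string, error) {
	name := strings.TrimSuffix(unit, ".mount")
	var b strings.Builder
	b.WriteByte('/')
	for i := 0; i < len(name); {
		if strings.HasPrefix(name[i:], `\x`) && i+4 <= len(name) {
			v, err := strconv.ParseUint(name[i+2:i+4], 16, 8)
			if err != nil {
				return "", fmt.Errorf("bad escape in %q: %w", unit, err)
			}
			b.WriteByte(byte(v))
			i += 4
			continue
		}
		if name[i] == '-' {
			b.WriteByte('/')
		} else {
			b.WriteByte(name[i])
		}
		i++
	}
	return b.String(), nil
}

func main() {
	unit := `run-netns-cni\x2da7ad53fd\x2dbf7c\x2d6993\x2d7950\x2dbb32be5cbdfe.mount`
	path, err := decodeMountUnit(unit)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(path) // /run/netns/cni-a7ad53fd-bf7c-6993-7950-bb32be5cbdfe
}
```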
Apr 30 13:51:25.289023 containerd[1523]: time="2025-04-30T13:51:25.288495474Z" level=info msg="StopPodSandbox for \"614bc0708b949fb00c15df5cb9340340c816f249e04be2448ca3a58d074d6160\"" Apr 30 13:51:25.289023 containerd[1523]: time="2025-04-30T13:51:25.288629863Z" level=info msg="TearDown network for sandbox \"614bc0708b949fb00c15df5cb9340340c816f249e04be2448ca3a58d074d6160\" successfully" Apr 30 13:51:25.289023 containerd[1523]: time="2025-04-30T13:51:25.288650649Z" level=info msg="StopPodSandbox for \"614bc0708b949fb00c15df5cb9340340c816f249e04be2448ca3a58d074d6160\" returns successfully" Apr 30 13:51:25.289719 containerd[1523]: time="2025-04-30T13:51:25.289501377Z" level=info msg="StopPodSandbox for \"d771dfdfbb8401cef38150c82c665425c4e5471fe6f6b3a035260b7d565ec4c9\"" Apr 30 13:51:25.289719 containerd[1523]: time="2025-04-30T13:51:25.289630365Z" level=info msg="TearDown network for sandbox \"d771dfdfbb8401cef38150c82c665425c4e5471fe6f6b3a035260b7d565ec4c9\" successfully" Apr 30 13:51:25.289719 containerd[1523]: time="2025-04-30T13:51:25.289651656Z" level=info msg="StopPodSandbox for \"d771dfdfbb8401cef38150c82c665425c4e5471fe6f6b3a035260b7d565ec4c9\" returns successfully" Apr 30 13:51:25.290953 kubelet[1949]: I0430 13:51:25.290890 1949 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9cfa30319da15dc4e493619ffb0863ded08f7e93103bf0dd7e75433d1e033334" Apr 30 13:51:25.291952 containerd[1523]: time="2025-04-30T13:51:25.291916309Z" level=info msg="StopPodSandbox for \"9cfa30319da15dc4e493619ffb0863ded08f7e93103bf0dd7e75433d1e033334\"" Apr 30 13:51:25.292361 containerd[1523]: time="2025-04-30T13:51:25.292331629Z" level=info msg="Ensure that sandbox 9cfa30319da15dc4e493619ffb0863ded08f7e93103bf0dd7e75433d1e033334 in task-service has been cleanup successfully" Apr 30 13:51:25.293188 containerd[1523]: time="2025-04-30T13:51:25.293159613Z" level=info msg="TearDown network for sandbox \"9cfa30319da15dc4e493619ffb0863ded08f7e93103bf0dd7e75433d1e033334\" successfully" Apr 30 13:51:25.293326 containerd[1523]: time="2025-04-30T13:51:25.293300150Z" level=info msg="StopPodSandbox for \"9cfa30319da15dc4e493619ffb0863ded08f7e93103bf0dd7e75433d1e033334\" returns successfully" Apr 30 13:51:25.293867 containerd[1523]: time="2025-04-30T13:51:25.293484288Z" level=info msg="StopPodSandbox for \"41c79e6807c93a65a01b5ef1f80a8a291b44ab3ee7a60a4be3efa65a15a4abbf\"" Apr 30 13:51:25.293867 containerd[1523]: time="2025-04-30T13:51:25.293627679Z" level=info msg="TearDown network for sandbox \"41c79e6807c93a65a01b5ef1f80a8a291b44ab3ee7a60a4be3efa65a15a4abbf\" successfully" Apr 30 13:51:25.293867 containerd[1523]: time="2025-04-30T13:51:25.293647520Z" level=info msg="StopPodSandbox for \"41c79e6807c93a65a01b5ef1f80a8a291b44ab3ee7a60a4be3efa65a15a4abbf\" returns successfully" Apr 30 13:51:25.296268 containerd[1523]: time="2025-04-30T13:51:25.295675179Z" level=info msg="StopPodSandbox for \"860edaba9b516d8008c4a48eb9d70aa925fca42fb644d29f1b0b448042ff0b74\"" Apr 30 13:51:25.296268 containerd[1523]: time="2025-04-30T13:51:25.295784741Z" level=info msg="TearDown network for sandbox \"860edaba9b516d8008c4a48eb9d70aa925fca42fb644d29f1b0b448042ff0b74\" successfully" Apr 30 13:51:25.296268 containerd[1523]: time="2025-04-30T13:51:25.295803477Z" level=info msg="StopPodSandbox for \"860edaba9b516d8008c4a48eb9d70aa925fca42fb644d29f1b0b448042ff0b74\" returns successfully" Apr 30 13:51:25.296268 containerd[1523]: time="2025-04-30T13:51:25.295891353Z" level=info msg="StopPodSandbox for 
\"3e66a5a10cbd1f710081f764148374940e7f2f09161d1911af647241f9616981\"" Apr 30 13:51:25.296268 containerd[1523]: time="2025-04-30T13:51:25.295985946Z" level=info msg="TearDown network for sandbox \"3e66a5a10cbd1f710081f764148374940e7f2f09161d1911af647241f9616981\" successfully" Apr 30 13:51:25.296268 containerd[1523]: time="2025-04-30T13:51:25.296003742Z" level=info msg="StopPodSandbox for \"3e66a5a10cbd1f710081f764148374940e7f2f09161d1911af647241f9616981\" returns successfully" Apr 30 13:51:25.296638 systemd[1]: run-netns-cni\x2d73c7a1a5\x2d8755\x2d0508\x2d30f0\x2d4ca086648a7f.mount: Deactivated successfully. Apr 30 13:51:25.299227 containerd[1523]: time="2025-04-30T13:51:25.299189956Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-mmvl2,Uid:322528ec-993b-406b-907f-286ef9aa3c04,Namespace:default,Attempt:2,}" Apr 30 13:51:25.303590 containerd[1523]: time="2025-04-30T13:51:25.303514119Z" level=info msg="StopPodSandbox for \"9c1f03a9d938ad3ecc36d7bf2f515183722dd6878ba8b44c01e7be1fec30f830\"" Apr 30 13:51:25.303942 containerd[1523]: time="2025-04-30T13:51:25.303686832Z" level=info msg="TearDown network for sandbox \"9c1f03a9d938ad3ecc36d7bf2f515183722dd6878ba8b44c01e7be1fec30f830\" successfully" Apr 30 13:51:25.303942 containerd[1523]: time="2025-04-30T13:51:25.303780376Z" level=info msg="StopPodSandbox for \"9c1f03a9d938ad3ecc36d7bf2f515183722dd6878ba8b44c01e7be1fec30f830\" returns successfully" Apr 30 13:51:25.311666 containerd[1523]: time="2025-04-30T13:51:25.311323305Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-zjldz,Uid:2ac90e2f-0177-416e-9891-f89efa94c902,Namespace:calico-system,Attempt:6,}" Apr 30 13:51:25.508662 containerd[1523]: time="2025-04-30T13:51:25.508476158Z" level=error msg="Failed to destroy network for sandbox \"dff0c840e77d1408661453f5e3a7a4080e3954c004e9bc06e9a7c21213ee16e4\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 13:51:25.511263 containerd[1523]: time="2025-04-30T13:51:25.510089748Z" level=error msg="encountered an error cleaning up failed sandbox \"dff0c840e77d1408661453f5e3a7a4080e3954c004e9bc06e9a7c21213ee16e4\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 13:51:25.511263 containerd[1523]: time="2025-04-30T13:51:25.510177639Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-mmvl2,Uid:322528ec-993b-406b-907f-286ef9aa3c04,Namespace:default,Attempt:2,} failed, error" error="failed to setup network for sandbox \"dff0c840e77d1408661453f5e3a7a4080e3954c004e9bc06e9a7c21213ee16e4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 13:51:25.512383 kubelet[1949]: E0430 13:51:25.511972 1949 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dff0c840e77d1408661453f5e3a7a4080e3954c004e9bc06e9a7c21213ee16e4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 13:51:25.512383 
kubelet[1949]: E0430 13:51:25.512089 1949 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dff0c840e77d1408661453f5e3a7a4080e3954c004e9bc06e9a7c21213ee16e4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-8587fbcb89-mmvl2" Apr 30 13:51:25.512383 kubelet[1949]: E0430 13:51:25.512140 1949 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dff0c840e77d1408661453f5e3a7a4080e3954c004e9bc06e9a7c21213ee16e4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-8587fbcb89-mmvl2" Apr 30 13:51:25.512694 kubelet[1949]: E0430 13:51:25.512298 1949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-8587fbcb89-mmvl2_default(322528ec-993b-406b-907f-286ef9aa3c04)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-8587fbcb89-mmvl2_default(322528ec-993b-406b-907f-286ef9aa3c04)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"dff0c840e77d1408661453f5e3a7a4080e3954c004e9bc06e9a7c21213ee16e4\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-8587fbcb89-mmvl2" podUID="322528ec-993b-406b-907f-286ef9aa3c04" Apr 30 13:51:25.541891 containerd[1523]: time="2025-04-30T13:51:25.540685474Z" level=error msg="Failed to destroy network for sandbox \"38e33c66d84cf46acaab61b4053a3baa21b1edf0c1362b439e0b213f5a1a82f3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 13:51:25.541891 containerd[1523]: time="2025-04-30T13:51:25.541659617Z" level=error msg="encountered an error cleaning up failed sandbox \"38e33c66d84cf46acaab61b4053a3baa21b1edf0c1362b439e0b213f5a1a82f3\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 13:51:25.541891 containerd[1523]: time="2025-04-30T13:51:25.541741734Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-zjldz,Uid:2ac90e2f-0177-416e-9891-f89efa94c902,Namespace:calico-system,Attempt:6,} failed, error" error="failed to setup network for sandbox \"38e33c66d84cf46acaab61b4053a3baa21b1edf0c1362b439e0b213f5a1a82f3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 13:51:25.543403 kubelet[1949]: E0430 13:51:25.542806 1949 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"38e33c66d84cf46acaab61b4053a3baa21b1edf0c1362b439e0b213f5a1a82f3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container 
is running and has mounted /var/lib/calico/" Apr 30 13:51:25.543403 kubelet[1949]: E0430 13:51:25.542908 1949 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"38e33c66d84cf46acaab61b4053a3baa21b1edf0c1362b439e0b213f5a1a82f3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-zjldz" Apr 30 13:51:25.543403 kubelet[1949]: E0430 13:51:25.542955 1949 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"38e33c66d84cf46acaab61b4053a3baa21b1edf0c1362b439e0b213f5a1a82f3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-zjldz" Apr 30 13:51:25.543645 kubelet[1949]: E0430 13:51:25.543023 1949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-zjldz_calico-system(2ac90e2f-0177-416e-9891-f89efa94c902)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-zjldz_calico-system(2ac90e2f-0177-416e-9891-f89efa94c902)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"38e33c66d84cf46acaab61b4053a3baa21b1edf0c1362b439e0b213f5a1a82f3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-zjldz" podUID="2ac90e2f-0177-416e-9891-f89efa94c902" Apr 30 13:51:26.022392 kubelet[1949]: E0430 13:51:26.022158 1949 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Apr 30 13:51:26.250647 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-dff0c840e77d1408661453f5e3a7a4080e3954c004e9bc06e9a7c21213ee16e4-shm.mount: Deactivated successfully. Apr 30 13:51:26.300364 kubelet[1949]: I0430 13:51:26.298746 1949 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="38e33c66d84cf46acaab61b4053a3baa21b1edf0c1362b439e0b213f5a1a82f3" Apr 30 13:51:26.300927 containerd[1523]: time="2025-04-30T13:51:26.300608455Z" level=info msg="StopPodSandbox for \"38e33c66d84cf46acaab61b4053a3baa21b1edf0c1362b439e0b213f5a1a82f3\"" Apr 30 13:51:26.300927 containerd[1523]: time="2025-04-30T13:51:26.300909841Z" level=info msg="Ensure that sandbox 38e33c66d84cf46acaab61b4053a3baa21b1edf0c1362b439e0b213f5a1a82f3 in task-service has been cleanup successfully" Apr 30 13:51:26.305542 containerd[1523]: time="2025-04-30T13:51:26.303330152Z" level=info msg="TearDown network for sandbox \"38e33c66d84cf46acaab61b4053a3baa21b1edf0c1362b439e0b213f5a1a82f3\" successfully" Apr 30 13:51:26.305542 containerd[1523]: time="2025-04-30T13:51:26.303365317Z" level=info msg="StopPodSandbox for \"38e33c66d84cf46acaab61b4053a3baa21b1edf0c1362b439e0b213f5a1a82f3\" returns successfully" Apr 30 13:51:26.304187 systemd[1]: run-netns-cni\x2d413f0d2d\x2de24b\x2d83a3\x2daca9\x2d6fa9e1d7441c.mount: Deactivated successfully. 
Apr 30 13:51:26.307504 containerd[1523]: time="2025-04-30T13:51:26.307457508Z" level=info msg="StopPodSandbox for \"71e56577daadeb2dcb129039e33a89740d1cf3f492859e2751e62aa86dc2e05d\"" Apr 30 13:51:26.307766 containerd[1523]: time="2025-04-30T13:51:26.307738588Z" level=info msg="TearDown network for sandbox \"71e56577daadeb2dcb129039e33a89740d1cf3f492859e2751e62aa86dc2e05d\" successfully" Apr 30 13:51:26.307883 containerd[1523]: time="2025-04-30T13:51:26.307859529Z" level=info msg="StopPodSandbox for \"71e56577daadeb2dcb129039e33a89740d1cf3f492859e2751e62aa86dc2e05d\" returns successfully" Apr 30 13:51:26.310496 containerd[1523]: time="2025-04-30T13:51:26.310454975Z" level=info msg="StopPodSandbox for \"614bc0708b949fb00c15df5cb9340340c816f249e04be2448ca3a58d074d6160\"" Apr 30 13:51:26.310779 containerd[1523]: time="2025-04-30T13:51:26.310753028Z" level=info msg="TearDown network for sandbox \"614bc0708b949fb00c15df5cb9340340c816f249e04be2448ca3a58d074d6160\" successfully" Apr 30 13:51:26.311067 containerd[1523]: time="2025-04-30T13:51:26.310871987Z" level=info msg="StopPodSandbox for \"614bc0708b949fb00c15df5cb9340340c816f249e04be2448ca3a58d074d6160\" returns successfully" Apr 30 13:51:26.311644 containerd[1523]: time="2025-04-30T13:51:26.311613965Z" level=info msg="StopPodSandbox for \"d771dfdfbb8401cef38150c82c665425c4e5471fe6f6b3a035260b7d565ec4c9\"" Apr 30 13:51:26.312009 containerd[1523]: time="2025-04-30T13:51:26.311826269Z" level=info msg="TearDown network for sandbox \"d771dfdfbb8401cef38150c82c665425c4e5471fe6f6b3a035260b7d565ec4c9\" successfully" Apr 30 13:51:26.312009 containerd[1523]: time="2025-04-30T13:51:26.311850785Z" level=info msg="StopPodSandbox for \"d771dfdfbb8401cef38150c82c665425c4e5471fe6f6b3a035260b7d565ec4c9\" returns successfully" Apr 30 13:51:26.312789 containerd[1523]: time="2025-04-30T13:51:26.312481951Z" level=info msg="StopPodSandbox for \"41c79e6807c93a65a01b5ef1f80a8a291b44ab3ee7a60a4be3efa65a15a4abbf\"" Apr 30 13:51:26.312789 containerd[1523]: time="2025-04-30T13:51:26.312606065Z" level=info msg="TearDown network for sandbox \"41c79e6807c93a65a01b5ef1f80a8a291b44ab3ee7a60a4be3efa65a15a4abbf\" successfully" Apr 30 13:51:26.312789 containerd[1523]: time="2025-04-30T13:51:26.312626725Z" level=info msg="StopPodSandbox for \"41c79e6807c93a65a01b5ef1f80a8a291b44ab3ee7a60a4be3efa65a15a4abbf\" returns successfully" Apr 30 13:51:26.314859 kubelet[1949]: I0430 13:51:26.313114 1949 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dff0c840e77d1408661453f5e3a7a4080e3954c004e9bc06e9a7c21213ee16e4" Apr 30 13:51:26.315004 containerd[1523]: time="2025-04-30T13:51:26.314371955Z" level=info msg="StopPodSandbox for \"dff0c840e77d1408661453f5e3a7a4080e3954c004e9bc06e9a7c21213ee16e4\"" Apr 30 13:51:26.315004 containerd[1523]: time="2025-04-30T13:51:26.314671681Z" level=info msg="Ensure that sandbox dff0c840e77d1408661453f5e3a7a4080e3954c004e9bc06e9a7c21213ee16e4 in task-service has been cleanup successfully" Apr 30 13:51:26.317957 systemd[1]: run-netns-cni\x2de0eed4dd\x2d93b3\x2dd0d2\x2dff6c\x2dff997eccdc90.mount: Deactivated successfully. 
Apr 30 13:51:26.318790 containerd[1523]: time="2025-04-30T13:51:26.318749401Z" level=info msg="TearDown network for sandbox \"dff0c840e77d1408661453f5e3a7a4080e3954c004e9bc06e9a7c21213ee16e4\" successfully" Apr 30 13:51:26.318918 containerd[1523]: time="2025-04-30T13:51:26.318893565Z" level=info msg="StopPodSandbox for \"dff0c840e77d1408661453f5e3a7a4080e3954c004e9bc06e9a7c21213ee16e4\" returns successfully" Apr 30 13:51:26.319206 containerd[1523]: time="2025-04-30T13:51:26.319175844Z" level=info msg="StopPodSandbox for \"3e66a5a10cbd1f710081f764148374940e7f2f09161d1911af647241f9616981\"" Apr 30 13:51:26.319971 containerd[1523]: time="2025-04-30T13:51:26.319944141Z" level=info msg="TearDown network for sandbox \"3e66a5a10cbd1f710081f764148374940e7f2f09161d1911af647241f9616981\" successfully" Apr 30 13:51:26.320090 containerd[1523]: time="2025-04-30T13:51:26.320066217Z" level=info msg="StopPodSandbox for \"3e66a5a10cbd1f710081f764148374940e7f2f09161d1911af647241f9616981\" returns successfully" Apr 30 13:51:26.320756 containerd[1523]: time="2025-04-30T13:51:26.320727050Z" level=info msg="StopPodSandbox for \"9cfa30319da15dc4e493619ffb0863ded08f7e93103bf0dd7e75433d1e033334\"" Apr 30 13:51:26.321171 containerd[1523]: time="2025-04-30T13:51:26.321145208Z" level=info msg="TearDown network for sandbox \"9cfa30319da15dc4e493619ffb0863ded08f7e93103bf0dd7e75433d1e033334\" successfully" Apr 30 13:51:26.322388 containerd[1523]: time="2025-04-30T13:51:26.322361917Z" level=info msg="StopPodSandbox for \"9cfa30319da15dc4e493619ffb0863ded08f7e93103bf0dd7e75433d1e033334\" returns successfully" Apr 30 13:51:26.322671 containerd[1523]: time="2025-04-30T13:51:26.322643033Z" level=info msg="StopPodSandbox for \"9c1f03a9d938ad3ecc36d7bf2f515183722dd6878ba8b44c01e7be1fec30f830\"" Apr 30 13:51:26.322881 containerd[1523]: time="2025-04-30T13:51:26.322856311Z" level=info msg="TearDown network for sandbox \"9c1f03a9d938ad3ecc36d7bf2f515183722dd6878ba8b44c01e7be1fec30f830\" successfully" Apr 30 13:51:26.322993 containerd[1523]: time="2025-04-30T13:51:26.322970291Z" level=info msg="StopPodSandbox for \"9c1f03a9d938ad3ecc36d7bf2f515183722dd6878ba8b44c01e7be1fec30f830\" returns successfully" Apr 30 13:51:26.324158 containerd[1523]: time="2025-04-30T13:51:26.323667487Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-zjldz,Uid:2ac90e2f-0177-416e-9891-f89efa94c902,Namespace:calico-system,Attempt:7,}" Apr 30 13:51:26.334856 containerd[1523]: time="2025-04-30T13:51:26.334793948Z" level=info msg="StopPodSandbox for \"860edaba9b516d8008c4a48eb9d70aa925fca42fb644d29f1b0b448042ff0b74\"" Apr 30 13:51:26.335224 containerd[1523]: time="2025-04-30T13:51:26.335195542Z" level=info msg="TearDown network for sandbox \"860edaba9b516d8008c4a48eb9d70aa925fca42fb644d29f1b0b448042ff0b74\" successfully" Apr 30 13:51:26.335367 containerd[1523]: time="2025-04-30T13:51:26.335341444Z" level=info msg="StopPodSandbox for \"860edaba9b516d8008c4a48eb9d70aa925fca42fb644d29f1b0b448042ff0b74\" returns successfully" Apr 30 13:51:26.355673 containerd[1523]: time="2025-04-30T13:51:26.355612950Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-mmvl2,Uid:322528ec-993b-406b-907f-286ef9aa3c04,Namespace:default,Attempt:3,}" Apr 30 13:51:26.484630 containerd[1523]: time="2025-04-30T13:51:26.484455657Z" level=error msg="Failed to destroy network for sandbox \"21537eb610e76972cdbd866141af9c9a1e90ba00497b3f9a2d095306ebff2ab1\"" error="plugin type=\"calico\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 13:51:26.486214 containerd[1523]: time="2025-04-30T13:51:26.485969223Z" level=error msg="encountered an error cleaning up failed sandbox \"21537eb610e76972cdbd866141af9c9a1e90ba00497b3f9a2d095306ebff2ab1\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 13:51:26.486214 containerd[1523]: time="2025-04-30T13:51:26.486064436Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-zjldz,Uid:2ac90e2f-0177-416e-9891-f89efa94c902,Namespace:calico-system,Attempt:7,} failed, error" error="failed to setup network for sandbox \"21537eb610e76972cdbd866141af9c9a1e90ba00497b3f9a2d095306ebff2ab1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 13:51:26.486801 kubelet[1949]: E0430 13:51:26.486705 1949 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"21537eb610e76972cdbd866141af9c9a1e90ba00497b3f9a2d095306ebff2ab1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 13:51:26.486939 kubelet[1949]: E0430 13:51:26.486829 1949 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"21537eb610e76972cdbd866141af9c9a1e90ba00497b3f9a2d095306ebff2ab1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-zjldz" Apr 30 13:51:26.486939 kubelet[1949]: E0430 13:51:26.486878 1949 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"21537eb610e76972cdbd866141af9c9a1e90ba00497b3f9a2d095306ebff2ab1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-zjldz" Apr 30 13:51:26.487041 kubelet[1949]: E0430 13:51:26.486967 1949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-zjldz_calico-system(2ac90e2f-0177-416e-9891-f89efa94c902)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-zjldz_calico-system(2ac90e2f-0177-416e-9891-f89efa94c902)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"21537eb610e76972cdbd866141af9c9a1e90ba00497b3f9a2d095306ebff2ab1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-zjldz" podUID="2ac90e2f-0177-416e-9891-f89efa94c902" Apr 30 13:51:26.526868 containerd[1523]: time="2025-04-30T13:51:26.526675944Z" level=error msg="Failed to destroy network for sandbox \"925fba2b0ce7e301fbfb908a332313b38e5ca97545febe140dbe5d1fbefe4ff6\"" error="plugin 
type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 13:51:26.528871 containerd[1523]: time="2025-04-30T13:51:26.528613333Z" level=error msg="encountered an error cleaning up failed sandbox \"925fba2b0ce7e301fbfb908a332313b38e5ca97545febe140dbe5d1fbefe4ff6\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 13:51:26.528871 containerd[1523]: time="2025-04-30T13:51:26.528716512Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-mmvl2,Uid:322528ec-993b-406b-907f-286ef9aa3c04,Namespace:default,Attempt:3,} failed, error" error="failed to setup network for sandbox \"925fba2b0ce7e301fbfb908a332313b38e5ca97545febe140dbe5d1fbefe4ff6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 13:51:26.529631 kubelet[1949]: E0430 13:51:26.529472 1949 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"925fba2b0ce7e301fbfb908a332313b38e5ca97545febe140dbe5d1fbefe4ff6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 13:51:26.530164 kubelet[1949]: E0430 13:51:26.529822 1949 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"925fba2b0ce7e301fbfb908a332313b38e5ca97545febe140dbe5d1fbefe4ff6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-8587fbcb89-mmvl2" Apr 30 13:51:26.530164 kubelet[1949]: E0430 13:51:26.530024 1949 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"925fba2b0ce7e301fbfb908a332313b38e5ca97545febe140dbe5d1fbefe4ff6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-8587fbcb89-mmvl2" Apr 30 13:51:26.530958 kubelet[1949]: E0430 13:51:26.530132 1949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-8587fbcb89-mmvl2_default(322528ec-993b-406b-907f-286ef9aa3c04)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-8587fbcb89-mmvl2_default(322528ec-993b-406b-907f-286ef9aa3c04)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"925fba2b0ce7e301fbfb908a332313b38e5ca97545febe140dbe5d1fbefe4ff6\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-8587fbcb89-mmvl2" podUID="322528ec-993b-406b-907f-286ef9aa3c04" Apr 30 13:51:27.023026 kubelet[1949]: E0430 13:51:27.022627 1949 file_linux.go:61] "Unable to read config path" err="path does not exist, 
ignoring" path="/etc/kubernetes/manifests" Apr 30 13:51:27.253898 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-21537eb610e76972cdbd866141af9c9a1e90ba00497b3f9a2d095306ebff2ab1-shm.mount: Deactivated successfully. Apr 30 13:51:27.321530 kubelet[1949]: I0430 13:51:27.321347 1949 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="21537eb610e76972cdbd866141af9c9a1e90ba00497b3f9a2d095306ebff2ab1" Apr 30 13:51:27.325755 containerd[1523]: time="2025-04-30T13:51:27.324080820Z" level=info msg="StopPodSandbox for \"21537eb610e76972cdbd866141af9c9a1e90ba00497b3f9a2d095306ebff2ab1\"" Apr 30 13:51:27.325755 containerd[1523]: time="2025-04-30T13:51:27.324607579Z" level=info msg="Ensure that sandbox 21537eb610e76972cdbd866141af9c9a1e90ba00497b3f9a2d095306ebff2ab1 in task-service has been cleanup successfully" Apr 30 13:51:27.328664 containerd[1523]: time="2025-04-30T13:51:27.327643526Z" level=info msg="TearDown network for sandbox \"21537eb610e76972cdbd866141af9c9a1e90ba00497b3f9a2d095306ebff2ab1\" successfully" Apr 30 13:51:27.328664 containerd[1523]: time="2025-04-30T13:51:27.327684791Z" level=info msg="StopPodSandbox for \"21537eb610e76972cdbd866141af9c9a1e90ba00497b3f9a2d095306ebff2ab1\" returns successfully" Apr 30 13:51:27.328054 systemd[1]: run-netns-cni\x2d7bfb7ab2\x2d5056\x2dca17\x2d3cc7\x2dd6f7163bfb37.mount: Deactivated successfully. Apr 30 13:51:27.332787 containerd[1523]: time="2025-04-30T13:51:27.330913629Z" level=info msg="StopPodSandbox for \"38e33c66d84cf46acaab61b4053a3baa21b1edf0c1362b439e0b213f5a1a82f3\"" Apr 30 13:51:27.332787 containerd[1523]: time="2025-04-30T13:51:27.331060570Z" level=info msg="TearDown network for sandbox \"38e33c66d84cf46acaab61b4053a3baa21b1edf0c1362b439e0b213f5a1a82f3\" successfully" Apr 30 13:51:27.332787 containerd[1523]: time="2025-04-30T13:51:27.331083277Z" level=info msg="StopPodSandbox for \"38e33c66d84cf46acaab61b4053a3baa21b1edf0c1362b439e0b213f5a1a82f3\" returns successfully" Apr 30 13:51:27.332787 containerd[1523]: time="2025-04-30T13:51:27.331792846Z" level=info msg="StopPodSandbox for \"71e56577daadeb2dcb129039e33a89740d1cf3f492859e2751e62aa86dc2e05d\"" Apr 30 13:51:27.332787 containerd[1523]: time="2025-04-30T13:51:27.331894343Z" level=info msg="TearDown network for sandbox \"71e56577daadeb2dcb129039e33a89740d1cf3f492859e2751e62aa86dc2e05d\" successfully" Apr 30 13:51:27.332787 containerd[1523]: time="2025-04-30T13:51:27.331912119Z" level=info msg="StopPodSandbox for \"71e56577daadeb2dcb129039e33a89740d1cf3f492859e2751e62aa86dc2e05d\" returns successfully" Apr 30 13:51:27.333980 containerd[1523]: time="2025-04-30T13:51:27.333687719Z" level=info msg="StopPodSandbox for \"614bc0708b949fb00c15df5cb9340340c816f249e04be2448ca3a58d074d6160\"" Apr 30 13:51:27.333980 containerd[1523]: time="2025-04-30T13:51:27.333803213Z" level=info msg="TearDown network for sandbox \"614bc0708b949fb00c15df5cb9340340c816f249e04be2448ca3a58d074d6160\" successfully" Apr 30 13:51:27.333980 containerd[1523]: time="2025-04-30T13:51:27.333822285Z" level=info msg="StopPodSandbox for \"614bc0708b949fb00c15df5cb9340340c816f249e04be2448ca3a58d074d6160\" returns successfully" Apr 30 13:51:27.335195 containerd[1523]: time="2025-04-30T13:51:27.334546085Z" level=info msg="StopPodSandbox for \"d771dfdfbb8401cef38150c82c665425c4e5471fe6f6b3a035260b7d565ec4c9\"" Apr 30 13:51:27.335195 containerd[1523]: time="2025-04-30T13:51:27.334672474Z" level=info msg="TearDown network for sandbox 
\"d771dfdfbb8401cef38150c82c665425c4e5471fe6f6b3a035260b7d565ec4c9\" successfully" Apr 30 13:51:27.335195 containerd[1523]: time="2025-04-30T13:51:27.334691185Z" level=info msg="StopPodSandbox for \"d771dfdfbb8401cef38150c82c665425c4e5471fe6f6b3a035260b7d565ec4c9\" returns successfully" Apr 30 13:51:27.336546 containerd[1523]: time="2025-04-30T13:51:27.336510255Z" level=info msg="StopPodSandbox for \"41c79e6807c93a65a01b5ef1f80a8a291b44ab3ee7a60a4be3efa65a15a4abbf\"" Apr 30 13:51:27.338095 containerd[1523]: time="2025-04-30T13:51:27.336724496Z" level=info msg="TearDown network for sandbox \"41c79e6807c93a65a01b5ef1f80a8a291b44ab3ee7a60a4be3efa65a15a4abbf\" successfully" Apr 30 13:51:27.338095 containerd[1523]: time="2025-04-30T13:51:27.336794963Z" level=info msg="StopPodSandbox for \"41c79e6807c93a65a01b5ef1f80a8a291b44ab3ee7a60a4be3efa65a15a4abbf\" returns successfully" Apr 30 13:51:27.338405 kubelet[1949]: I0430 13:51:27.337076 1949 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="925fba2b0ce7e301fbfb908a332313b38e5ca97545febe140dbe5d1fbefe4ff6" Apr 30 13:51:27.338495 containerd[1523]: time="2025-04-30T13:51:27.338318716Z" level=info msg="StopPodSandbox for \"925fba2b0ce7e301fbfb908a332313b38e5ca97545febe140dbe5d1fbefe4ff6\"" Apr 30 13:51:27.338837 containerd[1523]: time="2025-04-30T13:51:27.338639159Z" level=info msg="StopPodSandbox for \"3e66a5a10cbd1f710081f764148374940e7f2f09161d1911af647241f9616981\"" Apr 30 13:51:27.338837 containerd[1523]: time="2025-04-30T13:51:27.338708206Z" level=info msg="Ensure that sandbox 925fba2b0ce7e301fbfb908a332313b38e5ca97545febe140dbe5d1fbefe4ff6 in task-service has been cleanup successfully" Apr 30 13:51:27.338837 containerd[1523]: time="2025-04-30T13:51:27.338748612Z" level=info msg="TearDown network for sandbox \"3e66a5a10cbd1f710081f764148374940e7f2f09161d1911af647241f9616981\" successfully" Apr 30 13:51:27.338837 containerd[1523]: time="2025-04-30T13:51:27.338768558Z" level=info msg="StopPodSandbox for \"3e66a5a10cbd1f710081f764148374940e7f2f09161d1911af647241f9616981\" returns successfully" Apr 30 13:51:27.345391 containerd[1523]: time="2025-04-30T13:51:27.343433173Z" level=info msg="TearDown network for sandbox \"925fba2b0ce7e301fbfb908a332313b38e5ca97545febe140dbe5d1fbefe4ff6\" successfully" Apr 30 13:51:27.345391 containerd[1523]: time="2025-04-30T13:51:27.343559268Z" level=info msg="StopPodSandbox for \"925fba2b0ce7e301fbfb908a332313b38e5ca97545febe140dbe5d1fbefe4ff6\" returns successfully" Apr 30 13:51:27.346569 systemd[1]: run-netns-cni\x2d1fd8d528\x2d0b05\x2d482a\x2dc97c\x2db4001b4c9df8.mount: Deactivated successfully. 
Apr 30 13:51:27.350490 containerd[1523]: time="2025-04-30T13:51:27.350125291Z" level=info msg="StopPodSandbox for \"dff0c840e77d1408661453f5e3a7a4080e3954c004e9bc06e9a7c21213ee16e4\"" Apr 30 13:51:27.350490 containerd[1523]: time="2025-04-30T13:51:27.350363650Z" level=info msg="TearDown network for sandbox \"dff0c840e77d1408661453f5e3a7a4080e3954c004e9bc06e9a7c21213ee16e4\" successfully" Apr 30 13:51:27.350490 containerd[1523]: time="2025-04-30T13:51:27.350385583Z" level=info msg="StopPodSandbox for \"dff0c840e77d1408661453f5e3a7a4080e3954c004e9bc06e9a7c21213ee16e4\" returns successfully" Apr 30 13:51:27.350731 containerd[1523]: time="2025-04-30T13:51:27.350125288Z" level=info msg="StopPodSandbox for \"9c1f03a9d938ad3ecc36d7bf2f515183722dd6878ba8b44c01e7be1fec30f830\"" Apr 30 13:51:27.350731 containerd[1523]: time="2025-04-30T13:51:27.350636769Z" level=info msg="TearDown network for sandbox \"9c1f03a9d938ad3ecc36d7bf2f515183722dd6878ba8b44c01e7be1fec30f830\" successfully" Apr 30 13:51:27.350731 containerd[1523]: time="2025-04-30T13:51:27.350655417Z" level=info msg="StopPodSandbox for \"9c1f03a9d938ad3ecc36d7bf2f515183722dd6878ba8b44c01e7be1fec30f830\" returns successfully" Apr 30 13:51:27.352793 containerd[1523]: time="2025-04-30T13:51:27.351559479Z" level=info msg="StopPodSandbox for \"9cfa30319da15dc4e493619ffb0863ded08f7e93103bf0dd7e75433d1e033334\"" Apr 30 13:51:27.352793 containerd[1523]: time="2025-04-30T13:51:27.351684614Z" level=info msg="TearDown network for sandbox \"9cfa30319da15dc4e493619ffb0863ded08f7e93103bf0dd7e75433d1e033334\" successfully" Apr 30 13:51:27.352793 containerd[1523]: time="2025-04-30T13:51:27.351704404Z" level=info msg="StopPodSandbox for \"9cfa30319da15dc4e493619ffb0863ded08f7e93103bf0dd7e75433d1e033334\" returns successfully" Apr 30 13:51:27.352793 containerd[1523]: time="2025-04-30T13:51:27.351854771Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-zjldz,Uid:2ac90e2f-0177-416e-9891-f89efa94c902,Namespace:calico-system,Attempt:8,}" Apr 30 13:51:27.353094 containerd[1523]: time="2025-04-30T13:51:27.352916046Z" level=info msg="StopPodSandbox for \"860edaba9b516d8008c4a48eb9d70aa925fca42fb644d29f1b0b448042ff0b74\"" Apr 30 13:51:27.353094 containerd[1523]: time="2025-04-30T13:51:27.353038730Z" level=info msg="TearDown network for sandbox \"860edaba9b516d8008c4a48eb9d70aa925fca42fb644d29f1b0b448042ff0b74\" successfully" Apr 30 13:51:27.353094 containerd[1523]: time="2025-04-30T13:51:27.353058617Z" level=info msg="StopPodSandbox for \"860edaba9b516d8008c4a48eb9d70aa925fca42fb644d29f1b0b448042ff0b74\" returns successfully" Apr 30 13:51:27.353815 containerd[1523]: time="2025-04-30T13:51:27.353709253Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-mmvl2,Uid:322528ec-993b-406b-907f-286ef9aa3c04,Namespace:default,Attempt:4,}" Apr 30 13:51:27.527615 containerd[1523]: time="2025-04-30T13:51:27.526558949Z" level=error msg="Failed to destroy network for sandbox \"b6c661df0a1e14fea601f34409396b0a3939c559c8cc9ff5c6df2690c55b475e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 13:51:27.528810 containerd[1523]: time="2025-04-30T13:51:27.528554518Z" level=error msg="encountered an error cleaning up failed sandbox \"b6c661df0a1e14fea601f34409396b0a3939c559c8cc9ff5c6df2690c55b475e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed 
(delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 13:51:27.528810 containerd[1523]: time="2025-04-30T13:51:27.528699160Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-zjldz,Uid:2ac90e2f-0177-416e-9891-f89efa94c902,Namespace:calico-system,Attempt:8,} failed, error" error="failed to setup network for sandbox \"b6c661df0a1e14fea601f34409396b0a3939c559c8cc9ff5c6df2690c55b475e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 13:51:27.528953 containerd[1523]: time="2025-04-30T13:51:27.528870619Z" level=error msg="Failed to destroy network for sandbox \"a53d76da60d71b00b2409af9b524e1c351775eab7090504c9f5ca692745bed81\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 13:51:27.529591 containerd[1523]: time="2025-04-30T13:51:27.529411381Z" level=error msg="encountered an error cleaning up failed sandbox \"a53d76da60d71b00b2409af9b524e1c351775eab7090504c9f5ca692745bed81\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 13:51:27.529591 containerd[1523]: time="2025-04-30T13:51:27.529487600Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-mmvl2,Uid:322528ec-993b-406b-907f-286ef9aa3c04,Namespace:default,Attempt:4,} failed, error" error="failed to setup network for sandbox \"a53d76da60d71b00b2409af9b524e1c351775eab7090504c9f5ca692745bed81\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 13:51:27.531240 kubelet[1949]: E0430 13:51:27.530792 1949 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a53d76da60d71b00b2409af9b524e1c351775eab7090504c9f5ca692745bed81\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 13:51:27.531240 kubelet[1949]: E0430 13:51:27.530824 1949 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b6c661df0a1e14fea601f34409396b0a3939c559c8cc9ff5c6df2690c55b475e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Apr 30 13:51:27.531240 kubelet[1949]: E0430 13:51:27.530896 1949 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b6c661df0a1e14fea601f34409396b0a3939c559c8cc9ff5c6df2690c55b475e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-zjldz" Apr 30 13:51:27.531240 kubelet[1949]: E0430 13:51:27.530896 1949 kuberuntime_sandbox.go:72] "Failed to 
create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a53d76da60d71b00b2409af9b524e1c351775eab7090504c9f5ca692745bed81\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-8587fbcb89-mmvl2" Apr 30 13:51:27.531766 kubelet[1949]: E0430 13:51:27.530937 1949 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b6c661df0a1e14fea601f34409396b0a3939c559c8cc9ff5c6df2690c55b475e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-zjldz" Apr 30 13:51:27.531766 kubelet[1949]: E0430 13:51:27.530943 1949 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a53d76da60d71b00b2409af9b524e1c351775eab7090504c9f5ca692745bed81\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-8587fbcb89-mmvl2" Apr 30 13:51:27.531766 kubelet[1949]: E0430 13:51:27.531098 1949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-8587fbcb89-mmvl2_default(322528ec-993b-406b-907f-286ef9aa3c04)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-8587fbcb89-mmvl2_default(322528ec-993b-406b-907f-286ef9aa3c04)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a53d76da60d71b00b2409af9b524e1c351775eab7090504c9f5ca692745bed81\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-8587fbcb89-mmvl2" podUID="322528ec-993b-406b-907f-286ef9aa3c04" Apr 30 13:51:27.531952 kubelet[1949]: E0430 13:51:27.531152 1949 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-zjldz_calico-system(2ac90e2f-0177-416e-9891-f89efa94c902)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-zjldz_calico-system(2ac90e2f-0177-416e-9891-f89efa94c902)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b6c661df0a1e14fea601f34409396b0a3939c559c8cc9ff5c6df2690c55b475e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-zjldz" podUID="2ac90e2f-0177-416e-9891-f89efa94c902" Apr 30 13:51:27.744540 containerd[1523]: time="2025-04-30T13:51:27.744475156Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 13:51:27.745656 containerd[1523]: time="2025-04-30T13:51:27.745597983Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.3: active requests=0, bytes read=144068748" Apr 30 13:51:27.747565 containerd[1523]: time="2025-04-30T13:51:27.746222140Z" level=info msg="ImageCreate event 
name:\"sha256:042163432abcec06b8077b24973b223a5f4cfdb35d85c3816f5d07a13d51afae\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 13:51:27.764605 containerd[1523]: time="2025-04-30T13:51:27.764561413Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:750e267b4f8217e0ca9e4107228370190d1a2499b72112ad04370ab9b4553916\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 13:51:27.765616 containerd[1523]: time="2025-04-30T13:51:27.765573832Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.3\" with image id \"sha256:042163432abcec06b8077b24973b223a5f4cfdb35d85c3816f5d07a13d51afae\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/node@sha256:750e267b4f8217e0ca9e4107228370190d1a2499b72112ad04370ab9b4553916\", size \"144068610\" in 9.564474062s" Apr 30 13:51:27.765742 containerd[1523]: time="2025-04-30T13:51:27.765639565Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.3\" returns image reference \"sha256:042163432abcec06b8077b24973b223a5f4cfdb35d85c3816f5d07a13d51afae\"" Apr 30 13:51:27.796156 containerd[1523]: time="2025-04-30T13:51:27.795983443Z" level=info msg="CreateContainer within sandbox \"ecbd7aaf1945f40f5295deba34cc6655e6aba7646acde11eb1a8dc4f6b36444e\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Apr 30 13:51:27.815475 containerd[1523]: time="2025-04-30T13:51:27.815222200Z" level=info msg="CreateContainer within sandbox \"ecbd7aaf1945f40f5295deba34cc6655e6aba7646acde11eb1a8dc4f6b36444e\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"7c824f98ed91299b0f329cb08f2dac8c230b8178d823726d853b3857a2b35054\"" Apr 30 13:51:27.816570 containerd[1523]: time="2025-04-30T13:51:27.816536327Z" level=info msg="StartContainer for \"7c824f98ed91299b0f329cb08f2dac8c230b8178d823726d853b3857a2b35054\"" Apr 30 13:51:27.918559 systemd[1]: Started cri-containerd-7c824f98ed91299b0f329cb08f2dac8c230b8178d823726d853b3857a2b35054.scope - libcontainer container 7c824f98ed91299b0f329cb08f2dac8c230b8178d823726d853b3857a2b35054. Apr 30 13:51:28.010436 containerd[1523]: time="2025-04-30T13:51:28.010102149Z" level=info msg="StartContainer for \"7c824f98ed91299b0f329cb08f2dac8c230b8178d823726d853b3857a2b35054\" returns successfully" Apr 30 13:51:28.023998 kubelet[1949]: E0430 13:51:28.023905 1949 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Apr 30 13:51:28.074272 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Apr 30 13:51:28.074467 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Apr 30 13:51:28.254151 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-a53d76da60d71b00b2409af9b524e1c351775eab7090504c9f5ca692745bed81-shm.mount: Deactivated successfully. Apr 30 13:51:28.254359 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-b6c661df0a1e14fea601f34409396b0a3939c559c8cc9ff5c6df2690c55b475e-shm.mount: Deactivated successfully. Apr 30 13:51:28.254486 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount870813094.mount: Deactivated successfully. 
Apr 30 13:51:28.345439 kubelet[1949]: I0430 13:51:28.345017 1949 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b6c661df0a1e14fea601f34409396b0a3939c559c8cc9ff5c6df2690c55b475e" Apr 30 13:51:28.347104 containerd[1523]: time="2025-04-30T13:51:28.347060648Z" level=info msg="StopPodSandbox for \"b6c661df0a1e14fea601f34409396b0a3939c559c8cc9ff5c6df2690c55b475e\"" Apr 30 13:51:28.348272 containerd[1523]: time="2025-04-30T13:51:28.348010996Z" level=info msg="Ensure that sandbox b6c661df0a1e14fea601f34409396b0a3939c559c8cc9ff5c6df2690c55b475e in task-service has been cleanup successfully" Apr 30 13:51:28.350439 containerd[1523]: time="2025-04-30T13:51:28.350315440Z" level=info msg="TearDown network for sandbox \"b6c661df0a1e14fea601f34409396b0a3939c559c8cc9ff5c6df2690c55b475e\" successfully" Apr 30 13:51:28.350439 containerd[1523]: time="2025-04-30T13:51:28.350344599Z" level=info msg="StopPodSandbox for \"b6c661df0a1e14fea601f34409396b0a3939c559c8cc9ff5c6df2690c55b475e\" returns successfully" Apr 30 13:51:28.351061 systemd[1]: run-netns-cni\x2d85f33518\x2d6c91\x2d892f\x2d1da2\x2d07af0232b202.mount: Deactivated successfully. Apr 30 13:51:28.351563 containerd[1523]: time="2025-04-30T13:51:28.351275927Z" level=info msg="StopPodSandbox for \"21537eb610e76972cdbd866141af9c9a1e90ba00497b3f9a2d095306ebff2ab1\"" Apr 30 13:51:28.351563 containerd[1523]: time="2025-04-30T13:51:28.351403192Z" level=info msg="TearDown network for sandbox \"21537eb610e76972cdbd866141af9c9a1e90ba00497b3f9a2d095306ebff2ab1\" successfully" Apr 30 13:51:28.351563 containerd[1523]: time="2025-04-30T13:51:28.351422710Z" level=info msg="StopPodSandbox for \"21537eb610e76972cdbd866141af9c9a1e90ba00497b3f9a2d095306ebff2ab1\" returns successfully" Apr 30 13:51:28.352270 containerd[1523]: time="2025-04-30T13:51:28.352071509Z" level=info msg="StopPodSandbox for \"38e33c66d84cf46acaab61b4053a3baa21b1edf0c1362b439e0b213f5a1a82f3\"" Apr 30 13:51:28.353313 containerd[1523]: time="2025-04-30T13:51:28.352579694Z" level=info msg="TearDown network for sandbox \"38e33c66d84cf46acaab61b4053a3baa21b1edf0c1362b439e0b213f5a1a82f3\" successfully" Apr 30 13:51:28.353313 containerd[1523]: time="2025-04-30T13:51:28.352674006Z" level=info msg="StopPodSandbox for \"38e33c66d84cf46acaab61b4053a3baa21b1edf0c1362b439e0b213f5a1a82f3\" returns successfully" Apr 30 13:51:28.354444 containerd[1523]: time="2025-04-30T13:51:28.353998791Z" level=info msg="StopPodSandbox for \"71e56577daadeb2dcb129039e33a89740d1cf3f492859e2751e62aa86dc2e05d\"" Apr 30 13:51:28.354444 containerd[1523]: time="2025-04-30T13:51:28.354104041Z" level=info msg="TearDown network for sandbox \"71e56577daadeb2dcb129039e33a89740d1cf3f492859e2751e62aa86dc2e05d\" successfully" Apr 30 13:51:28.354444 containerd[1523]: time="2025-04-30T13:51:28.354122339Z" level=info msg="StopPodSandbox for \"71e56577daadeb2dcb129039e33a89740d1cf3f492859e2751e62aa86dc2e05d\" returns successfully" Apr 30 13:51:28.355904 containerd[1523]: time="2025-04-30T13:51:28.355685510Z" level=info msg="StopPodSandbox for \"614bc0708b949fb00c15df5cb9340340c816f249e04be2448ca3a58d074d6160\"" Apr 30 13:51:28.355904 containerd[1523]: time="2025-04-30T13:51:28.355816068Z" level=info msg="TearDown network for sandbox \"614bc0708b949fb00c15df5cb9340340c816f249e04be2448ca3a58d074d6160\" successfully" Apr 30 13:51:28.355904 containerd[1523]: time="2025-04-30T13:51:28.355836014Z" level=info msg="StopPodSandbox for \"614bc0708b949fb00c15df5cb9340340c816f249e04be2448ca3a58d074d6160\" returns 
successfully" Apr 30 13:51:28.356682 kubelet[1949]: I0430 13:51:28.356279 1949 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a53d76da60d71b00b2409af9b524e1c351775eab7090504c9f5ca692745bed81" Apr 30 13:51:28.356912 containerd[1523]: time="2025-04-30T13:51:28.356460662Z" level=info msg="StopPodSandbox for \"d771dfdfbb8401cef38150c82c665425c4e5471fe6f6b3a035260b7d565ec4c9\"" Apr 30 13:51:28.356912 containerd[1523]: time="2025-04-30T13:51:28.356577606Z" level=info msg="TearDown network for sandbox \"d771dfdfbb8401cef38150c82c665425c4e5471fe6f6b3a035260b7d565ec4c9\" successfully" Apr 30 13:51:28.356912 containerd[1523]: time="2025-04-30T13:51:28.356598421Z" level=info msg="StopPodSandbox for \"d771dfdfbb8401cef38150c82c665425c4e5471fe6f6b3a035260b7d565ec4c9\" returns successfully" Apr 30 13:51:28.357851 containerd[1523]: time="2025-04-30T13:51:28.357181412Z" level=info msg="StopPodSandbox for \"41c79e6807c93a65a01b5ef1f80a8a291b44ab3ee7a60a4be3efa65a15a4abbf\"" Apr 30 13:51:28.357851 containerd[1523]: time="2025-04-30T13:51:28.357303897Z" level=info msg="TearDown network for sandbox \"41c79e6807c93a65a01b5ef1f80a8a291b44ab3ee7a60a4be3efa65a15a4abbf\" successfully" Apr 30 13:51:28.357851 containerd[1523]: time="2025-04-30T13:51:28.357322935Z" level=info msg="StopPodSandbox for \"41c79e6807c93a65a01b5ef1f80a8a291b44ab3ee7a60a4be3efa65a15a4abbf\" returns successfully" Apr 30 13:51:28.358720 containerd[1523]: time="2025-04-30T13:51:28.358403662Z" level=info msg="StopPodSandbox for \"a53d76da60d71b00b2409af9b524e1c351775eab7090504c9f5ca692745bed81\"" Apr 30 13:51:28.358842 containerd[1523]: time="2025-04-30T13:51:28.358709805Z" level=info msg="Ensure that sandbox a53d76da60d71b00b2409af9b524e1c351775eab7090504c9f5ca692745bed81 in task-service has been cleanup successfully" Apr 30 13:51:28.359187 containerd[1523]: time="2025-04-30T13:51:28.359141079Z" level=info msg="StopPodSandbox for \"3e66a5a10cbd1f710081f764148374940e7f2f09161d1911af647241f9616981\"" Apr 30 13:51:28.359287 containerd[1523]: time="2025-04-30T13:51:28.359271364Z" level=info msg="TearDown network for sandbox \"3e66a5a10cbd1f710081f764148374940e7f2f09161d1911af647241f9616981\" successfully" Apr 30 13:51:28.359363 containerd[1523]: time="2025-04-30T13:51:28.359291911Z" level=info msg="StopPodSandbox for \"3e66a5a10cbd1f710081f764148374940e7f2f09161d1911af647241f9616981\" returns successfully" Apr 30 13:51:28.363066 systemd[1]: run-netns-cni\x2dc4ceebc0\x2d8a19\x2db53a\x2d1d07\x2de2af40b8ab5f.mount: Deactivated successfully. 
Apr 30 13:51:28.364424 containerd[1523]: time="2025-04-30T13:51:28.363406635Z" level=info msg="StopPodSandbox for \"9c1f03a9d938ad3ecc36d7bf2f515183722dd6878ba8b44c01e7be1fec30f830\"" Apr 30 13:51:28.364424 containerd[1523]: time="2025-04-30T13:51:28.363523120Z" level=info msg="TearDown network for sandbox \"a53d76da60d71b00b2409af9b524e1c351775eab7090504c9f5ca692745bed81\" successfully" Apr 30 13:51:28.364424 containerd[1523]: time="2025-04-30T13:51:28.363545851Z" level=info msg="TearDown network for sandbox \"9c1f03a9d938ad3ecc36d7bf2f515183722dd6878ba8b44c01e7be1fec30f830\" successfully" Apr 30 13:51:28.364424 containerd[1523]: time="2025-04-30T13:51:28.363567893Z" level=info msg="StopPodSandbox for \"9c1f03a9d938ad3ecc36d7bf2f515183722dd6878ba8b44c01e7be1fec30f830\" returns successfully" Apr 30 13:51:28.364424 containerd[1523]: time="2025-04-30T13:51:28.363547776Z" level=info msg="StopPodSandbox for \"a53d76da60d71b00b2409af9b524e1c351775eab7090504c9f5ca692745bed81\" returns successfully" Apr 30 13:51:28.364680 containerd[1523]: time="2025-04-30T13:51:28.364610602Z" level=info msg="StopPodSandbox for \"925fba2b0ce7e301fbfb908a332313b38e5ca97545febe140dbe5d1fbefe4ff6\"" Apr 30 13:51:28.365967 containerd[1523]: time="2025-04-30T13:51:28.364792087Z" level=info msg="TearDown network for sandbox \"925fba2b0ce7e301fbfb908a332313b38e5ca97545febe140dbe5d1fbefe4ff6\" successfully" Apr 30 13:51:28.365967 containerd[1523]: time="2025-04-30T13:51:28.364818535Z" level=info msg="StopPodSandbox for \"925fba2b0ce7e301fbfb908a332313b38e5ca97545febe140dbe5d1fbefe4ff6\" returns successfully" Apr 30 13:51:28.365967 containerd[1523]: time="2025-04-30T13:51:28.364875411Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-zjldz,Uid:2ac90e2f-0177-416e-9891-f89efa94c902,Namespace:calico-system,Attempt:9,}" Apr 30 13:51:28.366636 containerd[1523]: time="2025-04-30T13:51:28.366355406Z" level=info msg="StopPodSandbox for \"dff0c840e77d1408661453f5e3a7a4080e3954c004e9bc06e9a7c21213ee16e4\"" Apr 30 13:51:28.366636 containerd[1523]: time="2025-04-30T13:51:28.366458387Z" level=info msg="TearDown network for sandbox \"dff0c840e77d1408661453f5e3a7a4080e3954c004e9bc06e9a7c21213ee16e4\" successfully" Apr 30 13:51:28.366636 containerd[1523]: time="2025-04-30T13:51:28.366477556Z" level=info msg="StopPodSandbox for \"dff0c840e77d1408661453f5e3a7a4080e3954c004e9bc06e9a7c21213ee16e4\" returns successfully" Apr 30 13:51:28.367700 containerd[1523]: time="2025-04-30T13:51:28.367667809Z" level=info msg="StopPodSandbox for \"9cfa30319da15dc4e493619ffb0863ded08f7e93103bf0dd7e75433d1e033334\"" Apr 30 13:51:28.367903 containerd[1523]: time="2025-04-30T13:51:28.367876377Z" level=info msg="TearDown network for sandbox \"9cfa30319da15dc4e493619ffb0863ded08f7e93103bf0dd7e75433d1e033334\" successfully" Apr 30 13:51:28.368067 containerd[1523]: time="2025-04-30T13:51:28.367991480Z" level=info msg="StopPodSandbox for \"9cfa30319da15dc4e493619ffb0863ded08f7e93103bf0dd7e75433d1e033334\" returns successfully" Apr 30 13:51:28.370475 containerd[1523]: time="2025-04-30T13:51:28.370274963Z" level=info msg="StopPodSandbox for \"860edaba9b516d8008c4a48eb9d70aa925fca42fb644d29f1b0b448042ff0b74\"" Apr 30 13:51:28.370475 containerd[1523]: time="2025-04-30T13:51:28.370379725Z" level=info msg="TearDown network for sandbox \"860edaba9b516d8008c4a48eb9d70aa925fca42fb644d29f1b0b448042ff0b74\" successfully" Apr 30 13:51:28.370475 containerd[1523]: time="2025-04-30T13:51:28.370400697Z" level=info msg="StopPodSandbox for 
\"860edaba9b516d8008c4a48eb9d70aa925fca42fb644d29f1b0b448042ff0b74\" returns successfully" Apr 30 13:51:28.371233 containerd[1523]: time="2025-04-30T13:51:28.371161852Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-mmvl2,Uid:322528ec-993b-406b-907f-286ef9aa3c04,Namespace:default,Attempt:5,}" Apr 30 13:51:28.418364 kubelet[1949]: I0430 13:51:28.410055 1949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-mtbz5" podStartSLOduration=3.160309603 podStartE2EDuration="24.410005563s" podCreationTimestamp="2025-04-30 13:51:04 +0000 UTC" firstStartedPulling="2025-04-30 13:51:06.517186441 +0000 UTC m=+3.068591976" lastFinishedPulling="2025-04-30 13:51:27.766882393 +0000 UTC m=+24.318287936" observedRunningTime="2025-04-30 13:51:28.409746883 +0000 UTC m=+24.961152440" watchObservedRunningTime="2025-04-30 13:51:28.410005563 +0000 UTC m=+24.961411100" Apr 30 13:51:28.708566 systemd-networkd[1453]: calia8299a634ea: Link UP Apr 30 13:51:28.708960 systemd-networkd[1453]: calia8299a634ea: Gained carrier Apr 30 13:51:28.727850 containerd[1523]: 2025-04-30 13:51:28.512 [INFO][3007] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Apr 30 13:51:28.727850 containerd[1523]: 2025-04-30 13:51:28.562 [INFO][3007] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.230.17.190-k8s-csi--node--driver--zjldz-eth0 csi-node-driver- calico-system 2ac90e2f-0177-416e-9891-f89efa94c902 1089 0 2025-04-30 13:51:04 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:5bcd8f69 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s 10.230.17.190 csi-node-driver-zjldz eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calia8299a634ea [] []}} ContainerID="51b21953412233efe99616c8612c5768000b8943ee2e7cd7df6521a4197112f4" Namespace="calico-system" Pod="csi-node-driver-zjldz" WorkloadEndpoint="10.230.17.190-k8s-csi--node--driver--zjldz-" Apr 30 13:51:28.727850 containerd[1523]: 2025-04-30 13:51:28.562 [INFO][3007] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="51b21953412233efe99616c8612c5768000b8943ee2e7cd7df6521a4197112f4" Namespace="calico-system" Pod="csi-node-driver-zjldz" WorkloadEndpoint="10.230.17.190-k8s-csi--node--driver--zjldz-eth0" Apr 30 13:51:28.727850 containerd[1523]: 2025-04-30 13:51:28.619 [INFO][3050] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="51b21953412233efe99616c8612c5768000b8943ee2e7cd7df6521a4197112f4" HandleID="k8s-pod-network.51b21953412233efe99616c8612c5768000b8943ee2e7cd7df6521a4197112f4" Workload="10.230.17.190-k8s-csi--node--driver--zjldz-eth0" Apr 30 13:51:28.727850 containerd[1523]: 2025-04-30 13:51:28.638 [INFO][3050] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="51b21953412233efe99616c8612c5768000b8943ee2e7cd7df6521a4197112f4" HandleID="k8s-pod-network.51b21953412233efe99616c8612c5768000b8943ee2e7cd7df6521a4197112f4" Workload="10.230.17.190-k8s-csi--node--driver--zjldz-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000051ba0), Attrs:map[string]string{"namespace":"calico-system", "node":"10.230.17.190", "pod":"csi-node-driver-zjldz", "timestamp":"2025-04-30 13:51:28.619704808 +0000 UTC"}, Hostname:"10.230.17.190", 
IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Apr 30 13:51:28.727850 containerd[1523]: 2025-04-30 13:51:28.639 [INFO][3050] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 13:51:28.727850 containerd[1523]: 2025-04-30 13:51:28.639 [INFO][3050] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 13:51:28.727850 containerd[1523]: 2025-04-30 13:51:28.639 [INFO][3050] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.230.17.190' Apr 30 13:51:28.727850 containerd[1523]: 2025-04-30 13:51:28.644 [INFO][3050] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.51b21953412233efe99616c8612c5768000b8943ee2e7cd7df6521a4197112f4" host="10.230.17.190" Apr 30 13:51:28.727850 containerd[1523]: 2025-04-30 13:51:28.654 [INFO][3050] ipam/ipam.go 372: Looking up existing affinities for host host="10.230.17.190" Apr 30 13:51:28.727850 containerd[1523]: 2025-04-30 13:51:28.661 [INFO][3050] ipam/ipam.go 489: Trying affinity for 192.168.101.64/26 host="10.230.17.190" Apr 30 13:51:28.727850 containerd[1523]: 2025-04-30 13:51:28.665 [INFO][3050] ipam/ipam.go 155: Attempting to load block cidr=192.168.101.64/26 host="10.230.17.190" Apr 30 13:51:28.727850 containerd[1523]: 2025-04-30 13:51:28.669 [INFO][3050] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.101.64/26 host="10.230.17.190" Apr 30 13:51:28.727850 containerd[1523]: 2025-04-30 13:51:28.669 [INFO][3050] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.101.64/26 handle="k8s-pod-network.51b21953412233efe99616c8612c5768000b8943ee2e7cd7df6521a4197112f4" host="10.230.17.190" Apr 30 13:51:28.727850 containerd[1523]: 2025-04-30 13:51:28.671 [INFO][3050] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.51b21953412233efe99616c8612c5768000b8943ee2e7cd7df6521a4197112f4 Apr 30 13:51:28.727850 containerd[1523]: 2025-04-30 13:51:28.682 [INFO][3050] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.101.64/26 handle="k8s-pod-network.51b21953412233efe99616c8612c5768000b8943ee2e7cd7df6521a4197112f4" host="10.230.17.190" Apr 30 13:51:28.727850 containerd[1523]: 2025-04-30 13:51:28.690 [INFO][3050] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.101.65/26] block=192.168.101.64/26 handle="k8s-pod-network.51b21953412233efe99616c8612c5768000b8943ee2e7cd7df6521a4197112f4" host="10.230.17.190" Apr 30 13:51:28.727850 containerd[1523]: 2025-04-30 13:51:28.690 [INFO][3050] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.101.65/26] handle="k8s-pod-network.51b21953412233efe99616c8612c5768000b8943ee2e7cd7df6521a4197112f4" host="10.230.17.190" Apr 30 13:51:28.727850 containerd[1523]: 2025-04-30 13:51:28.690 [INFO][3050] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Apr 30 13:51:28.727850 containerd[1523]: 2025-04-30 13:51:28.690 [INFO][3050] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.101.65/26] IPv6=[] ContainerID="51b21953412233efe99616c8612c5768000b8943ee2e7cd7df6521a4197112f4" HandleID="k8s-pod-network.51b21953412233efe99616c8612c5768000b8943ee2e7cd7df6521a4197112f4" Workload="10.230.17.190-k8s-csi--node--driver--zjldz-eth0" Apr 30 13:51:28.730156 containerd[1523]: 2025-04-30 13:51:28.693 [INFO][3007] cni-plugin/k8s.go 386: Populated endpoint ContainerID="51b21953412233efe99616c8612c5768000b8943ee2e7cd7df6521a4197112f4" Namespace="calico-system" Pod="csi-node-driver-zjldz" WorkloadEndpoint="10.230.17.190-k8s-csi--node--driver--zjldz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.230.17.190-k8s-csi--node--driver--zjldz-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"2ac90e2f-0177-416e-9891-f89efa94c902", ResourceVersion:"1089", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 13, 51, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"5bcd8f69", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.230.17.190", ContainerID:"", Pod:"csi-node-driver-zjldz", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.101.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calia8299a634ea", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 13:51:28.730156 containerd[1523]: 2025-04-30 13:51:28.694 [INFO][3007] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.101.65/32] ContainerID="51b21953412233efe99616c8612c5768000b8943ee2e7cd7df6521a4197112f4" Namespace="calico-system" Pod="csi-node-driver-zjldz" WorkloadEndpoint="10.230.17.190-k8s-csi--node--driver--zjldz-eth0" Apr 30 13:51:28.730156 containerd[1523]: 2025-04-30 13:51:28.694 [INFO][3007] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia8299a634ea ContainerID="51b21953412233efe99616c8612c5768000b8943ee2e7cd7df6521a4197112f4" Namespace="calico-system" Pod="csi-node-driver-zjldz" WorkloadEndpoint="10.230.17.190-k8s-csi--node--driver--zjldz-eth0" Apr 30 13:51:28.730156 containerd[1523]: 2025-04-30 13:51:28.711 [INFO][3007] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="51b21953412233efe99616c8612c5768000b8943ee2e7cd7df6521a4197112f4" Namespace="calico-system" Pod="csi-node-driver-zjldz" WorkloadEndpoint="10.230.17.190-k8s-csi--node--driver--zjldz-eth0" Apr 30 13:51:28.730156 containerd[1523]: 2025-04-30 13:51:28.711 [INFO][3007] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="51b21953412233efe99616c8612c5768000b8943ee2e7cd7df6521a4197112f4" Namespace="calico-system" Pod="csi-node-driver-zjldz" 
WorkloadEndpoint="10.230.17.190-k8s-csi--node--driver--zjldz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.230.17.190-k8s-csi--node--driver--zjldz-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"2ac90e2f-0177-416e-9891-f89efa94c902", ResourceVersion:"1089", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 13, 51, 4, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"5bcd8f69", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.230.17.190", ContainerID:"51b21953412233efe99616c8612c5768000b8943ee2e7cd7df6521a4197112f4", Pod:"csi-node-driver-zjldz", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.101.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calia8299a634ea", MAC:"66:a9:dd:41:83:5d", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 13:51:28.730156 containerd[1523]: 2025-04-30 13:51:28.725 [INFO][3007] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="51b21953412233efe99616c8612c5768000b8943ee2e7cd7df6521a4197112f4" Namespace="calico-system" Pod="csi-node-driver-zjldz" WorkloadEndpoint="10.230.17.190-k8s-csi--node--driver--zjldz-eth0" Apr 30 13:51:28.768932 containerd[1523]: time="2025-04-30T13:51:28.768638685Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 13:51:28.769973 containerd[1523]: time="2025-04-30T13:51:28.769905087Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 13:51:28.770173 containerd[1523]: time="2025-04-30T13:51:28.769986486Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 13:51:28.770343 containerd[1523]: time="2025-04-30T13:51:28.770188032Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 13:51:28.799466 systemd[1]: Started cri-containerd-51b21953412233efe99616c8612c5768000b8943ee2e7cd7df6521a4197112f4.scope - libcontainer container 51b21953412233efe99616c8612c5768000b8943ee2e7cd7df6521a4197112f4. 
Apr 30 13:51:28.804495 systemd-networkd[1453]: cali49825da8155: Link UP Apr 30 13:51:28.805621 systemd-networkd[1453]: cali49825da8155: Gained carrier Apr 30 13:51:28.823558 containerd[1523]: 2025-04-30 13:51:28.524 [INFO][3014] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Apr 30 13:51:28.823558 containerd[1523]: 2025-04-30 13:51:28.561 [INFO][3014] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.230.17.190-k8s-nginx--deployment--8587fbcb89--mmvl2-eth0 nginx-deployment-8587fbcb89- default 322528ec-993b-406b-907f-286ef9aa3c04 1179 0 2025-04-30 13:51:23 +0000 UTC map[app:nginx pod-template-hash:8587fbcb89 projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s 10.230.17.190 nginx-deployment-8587fbcb89-mmvl2 eth0 default [] [] [kns.default ksa.default.default] cali49825da8155 [] []}} ContainerID="f20989d8902c12cef388ee0dba8f99715b16a8ba94c7d9c21144eb8173e9d46a" Namespace="default" Pod="nginx-deployment-8587fbcb89-mmvl2" WorkloadEndpoint="10.230.17.190-k8s-nginx--deployment--8587fbcb89--mmvl2-" Apr 30 13:51:28.823558 containerd[1523]: 2025-04-30 13:51:28.562 [INFO][3014] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="f20989d8902c12cef388ee0dba8f99715b16a8ba94c7d9c21144eb8173e9d46a" Namespace="default" Pod="nginx-deployment-8587fbcb89-mmvl2" WorkloadEndpoint="10.230.17.190-k8s-nginx--deployment--8587fbcb89--mmvl2-eth0" Apr 30 13:51:28.823558 containerd[1523]: 2025-04-30 13:51:28.627 [INFO][3048] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f20989d8902c12cef388ee0dba8f99715b16a8ba94c7d9c21144eb8173e9d46a" HandleID="k8s-pod-network.f20989d8902c12cef388ee0dba8f99715b16a8ba94c7d9c21144eb8173e9d46a" Workload="10.230.17.190-k8s-nginx--deployment--8587fbcb89--mmvl2-eth0" Apr 30 13:51:28.823558 containerd[1523]: 2025-04-30 13:51:28.642 [INFO][3048] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="f20989d8902c12cef388ee0dba8f99715b16a8ba94c7d9c21144eb8173e9d46a" HandleID="k8s-pod-network.f20989d8902c12cef388ee0dba8f99715b16a8ba94c7d9c21144eb8173e9d46a" Workload="10.230.17.190-k8s-nginx--deployment--8587fbcb89--mmvl2-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000051cb0), Attrs:map[string]string{"namespace":"default", "node":"10.230.17.190", "pod":"nginx-deployment-8587fbcb89-mmvl2", "timestamp":"2025-04-30 13:51:28.627810156 +0000 UTC"}, Hostname:"10.230.17.190", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Apr 30 13:51:28.823558 containerd[1523]: 2025-04-30 13:51:28.642 [INFO][3048] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 13:51:28.823558 containerd[1523]: 2025-04-30 13:51:28.690 [INFO][3048] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Apr 30 13:51:28.823558 containerd[1523]: 2025-04-30 13:51:28.691 [INFO][3048] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.230.17.190' Apr 30 13:51:28.823558 containerd[1523]: 2025-04-30 13:51:28.747 [INFO][3048] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.f20989d8902c12cef388ee0dba8f99715b16a8ba94c7d9c21144eb8173e9d46a" host="10.230.17.190" Apr 30 13:51:28.823558 containerd[1523]: 2025-04-30 13:51:28.754 [INFO][3048] ipam/ipam.go 372: Looking up existing affinities for host host="10.230.17.190" Apr 30 13:51:28.823558 containerd[1523]: 2025-04-30 13:51:28.763 [INFO][3048] ipam/ipam.go 489: Trying affinity for 192.168.101.64/26 host="10.230.17.190" Apr 30 13:51:28.823558 containerd[1523]: 2025-04-30 13:51:28.768 [INFO][3048] ipam/ipam.go 155: Attempting to load block cidr=192.168.101.64/26 host="10.230.17.190" Apr 30 13:51:28.823558 containerd[1523]: 2025-04-30 13:51:28.772 [INFO][3048] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.101.64/26 host="10.230.17.190" Apr 30 13:51:28.823558 containerd[1523]: 2025-04-30 13:51:28.772 [INFO][3048] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.101.64/26 handle="k8s-pod-network.f20989d8902c12cef388ee0dba8f99715b16a8ba94c7d9c21144eb8173e9d46a" host="10.230.17.190" Apr 30 13:51:28.823558 containerd[1523]: 2025-04-30 13:51:28.777 [INFO][3048] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.f20989d8902c12cef388ee0dba8f99715b16a8ba94c7d9c21144eb8173e9d46a Apr 30 13:51:28.823558 containerd[1523]: 2025-04-30 13:51:28.786 [INFO][3048] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.101.64/26 handle="k8s-pod-network.f20989d8902c12cef388ee0dba8f99715b16a8ba94c7d9c21144eb8173e9d46a" host="10.230.17.190" Apr 30 13:51:28.823558 containerd[1523]: 2025-04-30 13:51:28.794 [INFO][3048] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.101.66/26] block=192.168.101.64/26 handle="k8s-pod-network.f20989d8902c12cef388ee0dba8f99715b16a8ba94c7d9c21144eb8173e9d46a" host="10.230.17.190" Apr 30 13:51:28.823558 containerd[1523]: 2025-04-30 13:51:28.795 [INFO][3048] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.101.66/26] handle="k8s-pod-network.f20989d8902c12cef388ee0dba8f99715b16a8ba94c7d9c21144eb8173e9d46a" host="10.230.17.190" Apr 30 13:51:28.823558 containerd[1523]: 2025-04-30 13:51:28.795 [INFO][3048] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Apr 30 13:51:28.823558 containerd[1523]: 2025-04-30 13:51:28.795 [INFO][3048] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.101.66/26] IPv6=[] ContainerID="f20989d8902c12cef388ee0dba8f99715b16a8ba94c7d9c21144eb8173e9d46a" HandleID="k8s-pod-network.f20989d8902c12cef388ee0dba8f99715b16a8ba94c7d9c21144eb8173e9d46a" Workload="10.230.17.190-k8s-nginx--deployment--8587fbcb89--mmvl2-eth0" Apr 30 13:51:28.825120 containerd[1523]: 2025-04-30 13:51:28.798 [INFO][3014] cni-plugin/k8s.go 386: Populated endpoint ContainerID="f20989d8902c12cef388ee0dba8f99715b16a8ba94c7d9c21144eb8173e9d46a" Namespace="default" Pod="nginx-deployment-8587fbcb89-mmvl2" WorkloadEndpoint="10.230.17.190-k8s-nginx--deployment--8587fbcb89--mmvl2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.230.17.190-k8s-nginx--deployment--8587fbcb89--mmvl2-eth0", GenerateName:"nginx-deployment-8587fbcb89-", Namespace:"default", SelfLink:"", UID:"322528ec-993b-406b-907f-286ef9aa3c04", ResourceVersion:"1179", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 13, 51, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"8587fbcb89", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.230.17.190", ContainerID:"", Pod:"nginx-deployment-8587fbcb89-mmvl2", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.101.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali49825da8155", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 13:51:28.825120 containerd[1523]: 2025-04-30 13:51:28.798 [INFO][3014] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.101.66/32] ContainerID="f20989d8902c12cef388ee0dba8f99715b16a8ba94c7d9c21144eb8173e9d46a" Namespace="default" Pod="nginx-deployment-8587fbcb89-mmvl2" WorkloadEndpoint="10.230.17.190-k8s-nginx--deployment--8587fbcb89--mmvl2-eth0" Apr 30 13:51:28.825120 containerd[1523]: 2025-04-30 13:51:28.798 [INFO][3014] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali49825da8155 ContainerID="f20989d8902c12cef388ee0dba8f99715b16a8ba94c7d9c21144eb8173e9d46a" Namespace="default" Pod="nginx-deployment-8587fbcb89-mmvl2" WorkloadEndpoint="10.230.17.190-k8s-nginx--deployment--8587fbcb89--mmvl2-eth0" Apr 30 13:51:28.825120 containerd[1523]: 2025-04-30 13:51:28.807 [INFO][3014] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f20989d8902c12cef388ee0dba8f99715b16a8ba94c7d9c21144eb8173e9d46a" Namespace="default" Pod="nginx-deployment-8587fbcb89-mmvl2" WorkloadEndpoint="10.230.17.190-k8s-nginx--deployment--8587fbcb89--mmvl2-eth0" Apr 30 13:51:28.825120 containerd[1523]: 2025-04-30 13:51:28.810 [INFO][3014] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="f20989d8902c12cef388ee0dba8f99715b16a8ba94c7d9c21144eb8173e9d46a" Namespace="default" Pod="nginx-deployment-8587fbcb89-mmvl2" WorkloadEndpoint="10.230.17.190-k8s-nginx--deployment--8587fbcb89--mmvl2-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.230.17.190-k8s-nginx--deployment--8587fbcb89--mmvl2-eth0", GenerateName:"nginx-deployment-8587fbcb89-", Namespace:"default", SelfLink:"", UID:"322528ec-993b-406b-907f-286ef9aa3c04", ResourceVersion:"1179", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 13, 51, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"8587fbcb89", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.230.17.190", ContainerID:"f20989d8902c12cef388ee0dba8f99715b16a8ba94c7d9c21144eb8173e9d46a", Pod:"nginx-deployment-8587fbcb89-mmvl2", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.101.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali49825da8155", MAC:"76:e9:0d:79:2b:f1", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 13:51:28.825120 containerd[1523]: 2025-04-30 13:51:28.821 [INFO][3014] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="f20989d8902c12cef388ee0dba8f99715b16a8ba94c7d9c21144eb8173e9d46a" Namespace="default" Pod="nginx-deployment-8587fbcb89-mmvl2" WorkloadEndpoint="10.230.17.190-k8s-nginx--deployment--8587fbcb89--mmvl2-eth0" Apr 30 13:51:28.856908 containerd[1523]: time="2025-04-30T13:51:28.856742904Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-zjldz,Uid:2ac90e2f-0177-416e-9891-f89efa94c902,Namespace:calico-system,Attempt:9,} returns sandbox id \"51b21953412233efe99616c8612c5768000b8943ee2e7cd7df6521a4197112f4\"" Apr 30 13:51:28.859502 containerd[1523]: time="2025-04-30T13:51:28.859470729Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.3\"" Apr 30 13:51:28.869846 containerd[1523]: time="2025-04-30T13:51:28.869706601Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 13:51:28.870084 containerd[1523]: time="2025-04-30T13:51:28.869806117Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 13:51:28.870084 containerd[1523]: time="2025-04-30T13:51:28.869826055Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 13:51:28.870383 containerd[1523]: time="2025-04-30T13:51:28.870215448Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 13:51:28.899490 systemd[1]: Started cri-containerd-f20989d8902c12cef388ee0dba8f99715b16a8ba94c7d9c21144eb8173e9d46a.scope - libcontainer container f20989d8902c12cef388ee0dba8f99715b16a8ba94c7d9c21144eb8173e9d46a. 
Apr 30 13:51:28.961790 containerd[1523]: time="2025-04-30T13:51:28.961579790Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-mmvl2,Uid:322528ec-993b-406b-907f-286ef9aa3c04,Namespace:default,Attempt:5,} returns sandbox id \"f20989d8902c12cef388ee0dba8f99715b16a8ba94c7d9c21144eb8173e9d46a\"" Apr 30 13:51:29.025041 kubelet[1949]: E0430 13:51:29.024944 1949 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Apr 30 13:51:29.408710 systemd[1]: run-containerd-runc-k8s.io-7c824f98ed91299b0f329cb08f2dac8c230b8178d823726d853b3857a2b35054-runc.WJBjgv.mount: Deactivated successfully. Apr 30 13:51:29.867521 systemd-networkd[1453]: cali49825da8155: Gained IPv6LL Apr 30 13:51:29.902283 kernel: bpftool[3303]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Apr 30 13:51:30.025948 kubelet[1949]: E0430 13:51:30.025854 1949 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Apr 30 13:51:30.221397 systemd-networkd[1453]: vxlan.calico: Link UP Apr 30 13:51:30.221921 systemd-networkd[1453]: vxlan.calico: Gained carrier Apr 30 13:51:30.251485 systemd-networkd[1453]: calia8299a634ea: Gained IPv6LL Apr 30 13:51:30.751843 containerd[1523]: time="2025-04-30T13:51:30.751427588Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 13:51:30.757379 containerd[1523]: time="2025-04-30T13:51:30.757294562Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.3: active requests=0, bytes read=7912898" Apr 30 13:51:30.758818 containerd[1523]: time="2025-04-30T13:51:30.758742237Z" level=info msg="ImageCreate event name:\"sha256:4c37db5645f4075f8b8170eea8f14e340cb13550e0a392962f1f211ded741505\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 13:51:30.763344 containerd[1523]: time="2025-04-30T13:51:30.762961958Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:72455a36febc7c56ec8881007f4805caed5764026a0694e4f86a2503209b2d31\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 13:51:30.764136 containerd[1523]: time="2025-04-30T13:51:30.764089929Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.3\" with image id \"sha256:4c37db5645f4075f8b8170eea8f14e340cb13550e0a392962f1f211ded741505\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:72455a36febc7c56ec8881007f4805caed5764026a0694e4f86a2503209b2d31\", size \"9405520\" in 1.904424963s" Apr 30 13:51:30.764220 containerd[1523]: time="2025-04-30T13:51:30.764145366Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.3\" returns image reference \"sha256:4c37db5645f4075f8b8170eea8f14e340cb13550e0a392962f1f211ded741505\"" Apr 30 13:51:30.771047 containerd[1523]: time="2025-04-30T13:51:30.770139291Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Apr 30 13:51:30.771500 containerd[1523]: time="2025-04-30T13:51:30.771467596Z" level=info msg="CreateContainer within sandbox \"51b21953412233efe99616c8612c5768000b8943ee2e7cd7df6521a4197112f4\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Apr 30 13:51:30.796790 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2448334154.mount: Deactivated successfully. 
Apr 30 13:51:30.806275 containerd[1523]: time="2025-04-30T13:51:30.803788666Z" level=info msg="CreateContainer within sandbox \"51b21953412233efe99616c8612c5768000b8943ee2e7cd7df6521a4197112f4\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"aa47c3a1dc6d66b4b3150056b9ea34ab9447278e4f747bd7271172caf9b0d4bf\"" Apr 30 13:51:30.807300 containerd[1523]: time="2025-04-30T13:51:30.807223526Z" level=info msg="StartContainer for \"aa47c3a1dc6d66b4b3150056b9ea34ab9447278e4f747bd7271172caf9b0d4bf\"" Apr 30 13:51:30.860704 systemd[1]: Started cri-containerd-aa47c3a1dc6d66b4b3150056b9ea34ab9447278e4f747bd7271172caf9b0d4bf.scope - libcontainer container aa47c3a1dc6d66b4b3150056b9ea34ab9447278e4f747bd7271172caf9b0d4bf. Apr 30 13:51:30.912342 containerd[1523]: time="2025-04-30T13:51:30.912271302Z" level=info msg="StartContainer for \"aa47c3a1dc6d66b4b3150056b9ea34ab9447278e4f747bd7271172caf9b0d4bf\" returns successfully" Apr 30 13:51:31.026528 kubelet[1949]: E0430 13:51:31.026288 1949 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Apr 30 13:51:31.978636 systemd-networkd[1453]: vxlan.calico: Gained IPv6LL Apr 30 13:51:32.026793 kubelet[1949]: E0430 13:51:32.026654 1949 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Apr 30 13:51:33.027861 kubelet[1949]: E0430 13:51:33.027160 1949 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Apr 30 13:51:34.027920 kubelet[1949]: E0430 13:51:34.027487 1949 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Apr 30 13:51:35.028313 kubelet[1949]: E0430 13:51:35.028137 1949 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Apr 30 13:51:35.387545 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4211558406.mount: Deactivated successfully. 
Apr 30 13:51:36.029271 kubelet[1949]: E0430 13:51:36.029153 1949 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Apr 30 13:51:37.030183 kubelet[1949]: E0430 13:51:37.030110 1949 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Apr 30 13:51:37.219143 containerd[1523]: time="2025-04-30T13:51:37.217363097Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 13:51:37.219143 containerd[1523]: time="2025-04-30T13:51:37.218624875Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=73306276" Apr 30 13:51:37.219143 containerd[1523]: time="2025-04-30T13:51:37.219025197Z" level=info msg="ImageCreate event name:\"sha256:244abd08b283a396de679587fab5dec3f2b427a1cc0ada5b813839fcb187f9b8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 13:51:37.222878 containerd[1523]: time="2025-04-30T13:51:37.222845757Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx@sha256:727fa1dd2cee1ccca9e775e517739b20d5d47bd36b6b5bde8aa708de1348532b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 13:51:37.225272 containerd[1523]: time="2025-04-30T13:51:37.224494501Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:244abd08b283a396de679587fab5dec3f2b427a1cc0ada5b813839fcb187f9b8\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:727fa1dd2cee1ccca9e775e517739b20d5d47bd36b6b5bde8aa708de1348532b\", size \"73306154\" in 6.454295983s" Apr 30 13:51:37.225272 containerd[1523]: time="2025-04-30T13:51:37.224549062Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:244abd08b283a396de679587fab5dec3f2b427a1cc0ada5b813839fcb187f9b8\"" Apr 30 13:51:37.231288 containerd[1523]: time="2025-04-30T13:51:37.230323446Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\"" Apr 30 13:51:37.242793 containerd[1523]: time="2025-04-30T13:51:37.242577728Z" level=info msg="CreateContainer within sandbox \"f20989d8902c12cef388ee0dba8f99715b16a8ba94c7d9c21144eb8173e9d46a\" for container &ContainerMetadata{Name:nginx,Attempt:0,}" Apr 30 13:51:37.264027 containerd[1523]: time="2025-04-30T13:51:37.263958426Z" level=info msg="CreateContainer within sandbox \"f20989d8902c12cef388ee0dba8f99715b16a8ba94c7d9c21144eb8173e9d46a\" for &ContainerMetadata{Name:nginx,Attempt:0,} returns container id \"ea086eb653165480339562ac8b1145eeb0aa4302a0ee3efece877458b58a08e7\"" Apr 30 13:51:37.267684 containerd[1523]: time="2025-04-30T13:51:37.267637077Z" level=info msg="StartContainer for \"ea086eb653165480339562ac8b1145eeb0aa4302a0ee3efece877458b58a08e7\"" Apr 30 13:51:37.322691 systemd[1]: Started cri-containerd-ea086eb653165480339562ac8b1145eeb0aa4302a0ee3efece877458b58a08e7.scope - libcontainer container ea086eb653165480339562ac8b1145eeb0aa4302a0ee3efece877458b58a08e7. 
Apr 30 13:51:37.382824 containerd[1523]: time="2025-04-30T13:51:37.382744028Z" level=info msg="StartContainer for \"ea086eb653165480339562ac8b1145eeb0aa4302a0ee3efece877458b58a08e7\" returns successfully" Apr 30 13:51:37.445577 kubelet[1949]: I0430 13:51:37.445377 1949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nginx-deployment-8587fbcb89-mmvl2" podStartSLOduration=6.1802082 podStartE2EDuration="14.44532363s" podCreationTimestamp="2025-04-30 13:51:23 +0000 UTC" firstStartedPulling="2025-04-30 13:51:28.964199926 +0000 UTC m=+25.515605463" lastFinishedPulling="2025-04-30 13:51:37.229315353 +0000 UTC m=+33.780720893" observedRunningTime="2025-04-30 13:51:37.444959383 +0000 UTC m=+33.996364931" watchObservedRunningTime="2025-04-30 13:51:37.44532363 +0000 UTC m=+33.996729174" Apr 30 13:51:38.030610 kubelet[1949]: E0430 13:51:38.030532 1949 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Apr 30 13:51:39.031866 kubelet[1949]: E0430 13:51:39.031717 1949 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Apr 30 13:51:39.140327 containerd[1523]: time="2025-04-30T13:51:39.138343424Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 13:51:39.141566 containerd[1523]: time="2025-04-30T13:51:39.141481535Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3: active requests=0, bytes read=13991773" Apr 30 13:51:39.144023 containerd[1523]: time="2025-04-30T13:51:39.143957153Z" level=info msg="ImageCreate event name:\"sha256:e909e2ccf54404290b577fbddd190d036984deed184001767f820b0dddf77fd9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 13:51:39.148936 containerd[1523]: time="2025-04-30T13:51:39.148319579Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:3f15090a9bb45773d1fd019455ec3d3f3746f3287c35d8013e497b38d8237324\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 13:51:39.148936 containerd[1523]: time="2025-04-30T13:51:39.148552793Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\" with image id \"sha256:e909e2ccf54404290b577fbddd190d036984deed184001767f820b0dddf77fd9\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:3f15090a9bb45773d1fd019455ec3d3f3746f3287c35d8013e497b38d8237324\", size \"15484347\" in 1.918181789s" Apr 30 13:51:39.148936 containerd[1523]: time="2025-04-30T13:51:39.148622287Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.3\" returns image reference \"sha256:e909e2ccf54404290b577fbddd190d036984deed184001767f820b0dddf77fd9\"" Apr 30 13:51:39.153931 containerd[1523]: time="2025-04-30T13:51:39.153873564Z" level=info msg="CreateContainer within sandbox \"51b21953412233efe99616c8612c5768000b8943ee2e7cd7df6521a4197112f4\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Apr 30 13:51:39.178286 containerd[1523]: time="2025-04-30T13:51:39.178165213Z" level=info msg="CreateContainer within sandbox \"51b21953412233efe99616c8612c5768000b8943ee2e7cd7df6521a4197112f4\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id 
\"a29baf20f1172291ea6b12072404cd7736b9f5c22a1ae3837ec1dfca59412e48\"" Apr 30 13:51:39.181288 containerd[1523]: time="2025-04-30T13:51:39.179279236Z" level=info msg="StartContainer for \"a29baf20f1172291ea6b12072404cd7736b9f5c22a1ae3837ec1dfca59412e48\"" Apr 30 13:51:39.237029 systemd[1]: Started cri-containerd-a29baf20f1172291ea6b12072404cd7736b9f5c22a1ae3837ec1dfca59412e48.scope - libcontainer container a29baf20f1172291ea6b12072404cd7736b9f5c22a1ae3837ec1dfca59412e48. Apr 30 13:51:39.295118 containerd[1523]: time="2025-04-30T13:51:39.294373051Z" level=info msg="StartContainer for \"a29baf20f1172291ea6b12072404cd7736b9f5c22a1ae3837ec1dfca59412e48\" returns successfully" Apr 30 13:51:39.461612 kubelet[1949]: I0430 13:51:39.461457 1949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-zjldz" podStartSLOduration=25.169659739 podStartE2EDuration="35.461406558s" podCreationTimestamp="2025-04-30 13:51:04 +0000 UTC" firstStartedPulling="2025-04-30 13:51:28.859054818 +0000 UTC m=+25.410460354" lastFinishedPulling="2025-04-30 13:51:39.150801628 +0000 UTC m=+35.702207173" observedRunningTime="2025-04-30 13:51:39.460211981 +0000 UTC m=+36.011617578" watchObservedRunningTime="2025-04-30 13:51:39.461406558 +0000 UTC m=+36.012812116" Apr 30 13:51:40.033044 kubelet[1949]: E0430 13:51:40.032924 1949 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Apr 30 13:51:40.161873 kubelet[1949]: I0430 13:51:40.161701 1949 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Apr 30 13:51:40.161873 kubelet[1949]: I0430 13:51:40.161788 1949 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Apr 30 13:51:41.033948 kubelet[1949]: E0430 13:51:41.033855 1949 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Apr 30 13:51:42.035068 kubelet[1949]: E0430 13:51:42.034970 1949 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Apr 30 13:51:43.036364 kubelet[1949]: E0430 13:51:43.036222 1949 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Apr 30 13:51:44.004193 kubelet[1949]: E0430 13:51:44.004109 1949 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Apr 30 13:51:44.036977 kubelet[1949]: E0430 13:51:44.036882 1949 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Apr 30 13:51:45.037317 kubelet[1949]: E0430 13:51:45.037153 1949 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Apr 30 13:51:46.038407 kubelet[1949]: E0430 13:51:46.038314 1949 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Apr 30 13:51:46.156686 systemd[1]: Created slice kubepods-besteffort-pod2dc32b8f_6020_4753_81de_87b3d97f6d1d.slice - libcontainer container kubepods-besteffort-pod2dc32b8f_6020_4753_81de_87b3d97f6d1d.slice. 
Apr 30 13:51:46.277789 kubelet[1949]: I0430 13:51:46.277708 1949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/2dc32b8f-6020-4753-81de-87b3d97f6d1d-data\") pod \"nfs-server-provisioner-0\" (UID: \"2dc32b8f-6020-4753-81de-87b3d97f6d1d\") " pod="default/nfs-server-provisioner-0" Apr 30 13:51:46.278287 kubelet[1949]: I0430 13:51:46.278109 1949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qxmt4\" (UniqueName: \"kubernetes.io/projected/2dc32b8f-6020-4753-81de-87b3d97f6d1d-kube-api-access-qxmt4\") pod \"nfs-server-provisioner-0\" (UID: \"2dc32b8f-6020-4753-81de-87b3d97f6d1d\") " pod="default/nfs-server-provisioner-0" Apr 30 13:51:46.463408 containerd[1523]: time="2025-04-30T13:51:46.462697974Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:2dc32b8f-6020-4753-81de-87b3d97f6d1d,Namespace:default,Attempt:0,}" Apr 30 13:51:46.641183 systemd-networkd[1453]: cali60e51b789ff: Link UP Apr 30 13:51:46.642517 systemd-networkd[1453]: cali60e51b789ff: Gained carrier Apr 30 13:51:46.657749 containerd[1523]: 2025-04-30 13:51:46.538 [INFO][3601] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.230.17.190-k8s-nfs--server--provisioner--0-eth0 nfs-server-provisioner- default 2dc32b8f-6020-4753-81de-87b3d97f6d1d 1304 0 2025-04-30 13:51:46 +0000 UTC map[app:nfs-server-provisioner apps.kubernetes.io/pod-index:0 chart:nfs-server-provisioner-1.8.0 controller-revision-hash:nfs-server-provisioner-d5cbb7f57 heritage:Helm projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:nfs-server-provisioner release:nfs-server-provisioner statefulset.kubernetes.io/pod-name:nfs-server-provisioner-0] map[] [] [] []} {k8s 10.230.17.190 nfs-server-provisioner-0 eth0 nfs-server-provisioner [] [] [kns.default ksa.default.nfs-server-provisioner] cali60e51b789ff [{nfs TCP 2049 0 } {nfs-udp UDP 2049 0 } {nlockmgr TCP 32803 0 } {nlockmgr-udp UDP 32803 0 } {mountd TCP 20048 0 } {mountd-udp UDP 20048 0 } {rquotad TCP 875 0 } {rquotad-udp UDP 875 0 } {rpcbind TCP 111 0 } {rpcbind-udp UDP 111 0 } {statd TCP 662 0 } {statd-udp UDP 662 0 }] []}} ContainerID="889628d429df656a8fe7eb266bae8691d7f7d151df2dcffcf47b2814a522df29" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.230.17.190-k8s-nfs--server--provisioner--0-" Apr 30 13:51:46.657749 containerd[1523]: 2025-04-30 13:51:46.538 [INFO][3601] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="889628d429df656a8fe7eb266bae8691d7f7d151df2dcffcf47b2814a522df29" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.230.17.190-k8s-nfs--server--provisioner--0-eth0" Apr 30 13:51:46.657749 containerd[1523]: 2025-04-30 13:51:46.582 [INFO][3612] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="889628d429df656a8fe7eb266bae8691d7f7d151df2dcffcf47b2814a522df29" HandleID="k8s-pod-network.889628d429df656a8fe7eb266bae8691d7f7d151df2dcffcf47b2814a522df29" Workload="10.230.17.190-k8s-nfs--server--provisioner--0-eth0" Apr 30 13:51:46.657749 containerd[1523]: 2025-04-30 13:51:46.598 [INFO][3612] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="889628d429df656a8fe7eb266bae8691d7f7d151df2dcffcf47b2814a522df29" HandleID="k8s-pod-network.889628d429df656a8fe7eb266bae8691d7f7d151df2dcffcf47b2814a522df29" 
Workload="10.230.17.190-k8s-nfs--server--provisioner--0-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000384ae0), Attrs:map[string]string{"namespace":"default", "node":"10.230.17.190", "pod":"nfs-server-provisioner-0", "timestamp":"2025-04-30 13:51:46.582118568 +0000 UTC"}, Hostname:"10.230.17.190", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Apr 30 13:51:46.657749 containerd[1523]: 2025-04-30 13:51:46.598 [INFO][3612] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Apr 30 13:51:46.657749 containerd[1523]: 2025-04-30 13:51:46.598 [INFO][3612] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Apr 30 13:51:46.657749 containerd[1523]: 2025-04-30 13:51:46.598 [INFO][3612] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.230.17.190' Apr 30 13:51:46.657749 containerd[1523]: 2025-04-30 13:51:46.602 [INFO][3612] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.889628d429df656a8fe7eb266bae8691d7f7d151df2dcffcf47b2814a522df29" host="10.230.17.190" Apr 30 13:51:46.657749 containerd[1523]: 2025-04-30 13:51:46.607 [INFO][3612] ipam/ipam.go 372: Looking up existing affinities for host host="10.230.17.190" Apr 30 13:51:46.657749 containerd[1523]: 2025-04-30 13:51:46.614 [INFO][3612] ipam/ipam.go 489: Trying affinity for 192.168.101.64/26 host="10.230.17.190" Apr 30 13:51:46.657749 containerd[1523]: 2025-04-30 13:51:46.616 [INFO][3612] ipam/ipam.go 155: Attempting to load block cidr=192.168.101.64/26 host="10.230.17.190" Apr 30 13:51:46.657749 containerd[1523]: 2025-04-30 13:51:46.619 [INFO][3612] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.101.64/26 host="10.230.17.190" Apr 30 13:51:46.657749 containerd[1523]: 2025-04-30 13:51:46.619 [INFO][3612] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.101.64/26 handle="k8s-pod-network.889628d429df656a8fe7eb266bae8691d7f7d151df2dcffcf47b2814a522df29" host="10.230.17.190" Apr 30 13:51:46.657749 containerd[1523]: 2025-04-30 13:51:46.622 [INFO][3612] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.889628d429df656a8fe7eb266bae8691d7f7d151df2dcffcf47b2814a522df29 Apr 30 13:51:46.657749 containerd[1523]: 2025-04-30 13:51:46.627 [INFO][3612] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.101.64/26 handle="k8s-pod-network.889628d429df656a8fe7eb266bae8691d7f7d151df2dcffcf47b2814a522df29" host="10.230.17.190" Apr 30 13:51:46.657749 containerd[1523]: 2025-04-30 13:51:46.634 [INFO][3612] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.101.67/26] block=192.168.101.64/26 handle="k8s-pod-network.889628d429df656a8fe7eb266bae8691d7f7d151df2dcffcf47b2814a522df29" host="10.230.17.190" Apr 30 13:51:46.657749 containerd[1523]: 2025-04-30 13:51:46.634 [INFO][3612] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.101.67/26] handle="k8s-pod-network.889628d429df656a8fe7eb266bae8691d7f7d151df2dcffcf47b2814a522df29" host="10.230.17.190" Apr 30 13:51:46.657749 containerd[1523]: 2025-04-30 13:51:46.634 [INFO][3612] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Apr 30 13:51:46.657749 containerd[1523]: 2025-04-30 13:51:46.634 [INFO][3612] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.101.67/26] IPv6=[] ContainerID="889628d429df656a8fe7eb266bae8691d7f7d151df2dcffcf47b2814a522df29" HandleID="k8s-pod-network.889628d429df656a8fe7eb266bae8691d7f7d151df2dcffcf47b2814a522df29" Workload="10.230.17.190-k8s-nfs--server--provisioner--0-eth0" Apr 30 13:51:46.658994 containerd[1523]: 2025-04-30 13:51:46.636 [INFO][3601] cni-plugin/k8s.go 386: Populated endpoint ContainerID="889628d429df656a8fe7eb266bae8691d7f7d151df2dcffcf47b2814a522df29" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.230.17.190-k8s-nfs--server--provisioner--0-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.230.17.190-k8s-nfs--server--provisioner--0-eth0", GenerateName:"nfs-server-provisioner-", Namespace:"default", SelfLink:"", UID:"2dc32b8f-6020-4753-81de-87b3d97f6d1d", ResourceVersion:"1304", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 13, 51, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nfs-server-provisioner", "apps.kubernetes.io/pod-index":"0", "chart":"nfs-server-provisioner-1.8.0", "controller-revision-hash":"nfs-server-provisioner-d5cbb7f57", "heritage":"Helm", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"nfs-server-provisioner", "release":"nfs-server-provisioner", "statefulset.kubernetes.io/pod-name":"nfs-server-provisioner-0"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.230.17.190", ContainerID:"", Pod:"nfs-server-provisioner-0", Endpoint:"eth0", ServiceAccountName:"nfs-server-provisioner", IPNetworks:[]string{"192.168.101.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.nfs-server-provisioner"}, InterfaceName:"cali60e51b789ff", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"nfs", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nfs-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x6f, HostPort:0x0, 
HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x296, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x296, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 13:51:46.658994 containerd[1523]: 2025-04-30 13:51:46.636 [INFO][3601] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.101.67/32] ContainerID="889628d429df656a8fe7eb266bae8691d7f7d151df2dcffcf47b2814a522df29" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.230.17.190-k8s-nfs--server--provisioner--0-eth0" Apr 30 13:51:46.658994 containerd[1523]: 2025-04-30 13:51:46.636 [INFO][3601] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali60e51b789ff ContainerID="889628d429df656a8fe7eb266bae8691d7f7d151df2dcffcf47b2814a522df29" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.230.17.190-k8s-nfs--server--provisioner--0-eth0" Apr 30 13:51:46.658994 containerd[1523]: 2025-04-30 13:51:46.641 [INFO][3601] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="889628d429df656a8fe7eb266bae8691d7f7d151df2dcffcf47b2814a522df29" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.230.17.190-k8s-nfs--server--provisioner--0-eth0" Apr 30 13:51:46.659465 containerd[1523]: 2025-04-30 13:51:46.643 [INFO][3601] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="889628d429df656a8fe7eb266bae8691d7f7d151df2dcffcf47b2814a522df29" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.230.17.190-k8s-nfs--server--provisioner--0-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.230.17.190-k8s-nfs--server--provisioner--0-eth0", GenerateName:"nfs-server-provisioner-", Namespace:"default", SelfLink:"", UID:"2dc32b8f-6020-4753-81de-87b3d97f6d1d", ResourceVersion:"1304", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 13, 51, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nfs-server-provisioner", "apps.kubernetes.io/pod-index":"0", "chart":"nfs-server-provisioner-1.8.0", "controller-revision-hash":"nfs-server-provisioner-d5cbb7f57", "heritage":"Helm", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"nfs-server-provisioner", "release":"nfs-server-provisioner", "statefulset.kubernetes.io/pod-name":"nfs-server-provisioner-0"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.230.17.190", ContainerID:"889628d429df656a8fe7eb266bae8691d7f7d151df2dcffcf47b2814a522df29", Pod:"nfs-server-provisioner-0", Endpoint:"eth0", ServiceAccountName:"nfs-server-provisioner", IPNetworks:[]string{"192.168.101.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.nfs-server-provisioner"}, InterfaceName:"cali60e51b789ff", MAC:"02:ec:3b:70:5b:5c", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"nfs", 
Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nfs-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x296, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x296, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Apr 30 13:51:46.659465 containerd[1523]: 2025-04-30 13:51:46.654 [INFO][3601] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="889628d429df656a8fe7eb266bae8691d7f7d151df2dcffcf47b2814a522df29" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="10.230.17.190-k8s-nfs--server--provisioner--0-eth0" Apr 30 13:51:46.694374 containerd[1523]: time="2025-04-30T13:51:46.694098131Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Apr 30 13:51:46.694374 containerd[1523]: time="2025-04-30T13:51:46.694215397Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Apr 30 13:51:46.694807 containerd[1523]: time="2025-04-30T13:51:46.694235016Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 13:51:46.695476 containerd[1523]: time="2025-04-30T13:51:46.695190069Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Apr 30 13:51:46.736533 systemd[1]: Started cri-containerd-889628d429df656a8fe7eb266bae8691d7f7d151df2dcffcf47b2814a522df29.scope - libcontainer container 889628d429df656a8fe7eb266bae8691d7f7d151df2dcffcf47b2814a522df29. 
Apr 30 13:51:46.800320 containerd[1523]: time="2025-04-30T13:51:46.800268280Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:2dc32b8f-6020-4753-81de-87b3d97f6d1d,Namespace:default,Attempt:0,} returns sandbox id \"889628d429df656a8fe7eb266bae8691d7f7d151df2dcffcf47b2814a522df29\"" Apr 30 13:51:46.803130 containerd[1523]: time="2025-04-30T13:51:46.802899395Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\"" Apr 30 13:51:47.039592 kubelet[1949]: E0430 13:51:47.039210 1949 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Apr 30 13:51:48.040412 kubelet[1949]: E0430 13:51:48.040056 1949 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Apr 30 13:51:48.235559 systemd-networkd[1453]: cali60e51b789ff: Gained IPv6LL Apr 30 13:51:49.040817 kubelet[1949]: E0430 13:51:49.040317 1949 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Apr 30 13:51:50.042612 kubelet[1949]: E0430 13:51:50.042422 1949 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Apr 30 13:51:50.127837 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1873896150.mount: Deactivated successfully. Apr 30 13:51:51.043150 kubelet[1949]: E0430 13:51:51.043077 1949 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Apr 30 13:51:52.043896 kubelet[1949]: E0430 13:51:52.043685 1949 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Apr 30 13:51:53.046358 kubelet[1949]: E0430 13:51:53.044764 1949 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Apr 30 13:51:53.247280 containerd[1523]: time="2025-04-30T13:51:53.245561533Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 13:51:53.248207 containerd[1523]: time="2025-04-30T13:51:53.248165315Z" level=info msg="stop pulling image registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8: active requests=0, bytes read=91039414" Apr 30 13:51:53.275359 containerd[1523]: time="2025-04-30T13:51:53.275290646Z" level=info msg="ImageCreate event name:\"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 13:51:53.279114 containerd[1523]: time="2025-04-30T13:51:53.279079744Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Apr 30 13:51:53.280804 containerd[1523]: time="2025-04-30T13:51:53.280757203Z" level=info msg="Pulled image \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" with image id \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\", repo tag \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\", repo digest \"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\", size \"91036984\" in 6.477777322s" Apr 30 13:51:53.280916 containerd[1523]: time="2025-04-30T13:51:53.280819108Z" level=info msg="PullImage 
\"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" returns image reference \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\"" Apr 30 13:51:53.286569 containerd[1523]: time="2025-04-30T13:51:53.286528903Z" level=info msg="CreateContainer within sandbox \"889628d429df656a8fe7eb266bae8691d7f7d151df2dcffcf47b2814a522df29\" for container &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,}" Apr 30 13:51:53.309043 containerd[1523]: time="2025-04-30T13:51:53.308900785Z" level=info msg="CreateContainer within sandbox \"889628d429df656a8fe7eb266bae8691d7f7d151df2dcffcf47b2814a522df29\" for &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,} returns container id \"56a4c5a1d0d17feca2b0c752c3522740be35a198c93f6a2e74818e5e242f957f\"" Apr 30 13:51:53.310350 containerd[1523]: time="2025-04-30T13:51:53.310212137Z" level=info msg="StartContainer for \"56a4c5a1d0d17feca2b0c752c3522740be35a198c93f6a2e74818e5e242f957f\"" Apr 30 13:51:53.374372 systemd[1]: Started cri-containerd-56a4c5a1d0d17feca2b0c752c3522740be35a198c93f6a2e74818e5e242f957f.scope - libcontainer container 56a4c5a1d0d17feca2b0c752c3522740be35a198c93f6a2e74818e5e242f957f. Apr 30 13:51:53.451792 containerd[1523]: time="2025-04-30T13:51:53.451707728Z" level=info msg="StartContainer for \"56a4c5a1d0d17feca2b0c752c3522740be35a198c93f6a2e74818e5e242f957f\" returns successfully" Apr 30 13:51:53.600032 kubelet[1949]: I0430 13:51:53.597415 1949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nfs-server-provisioner-0" podStartSLOduration=1.116938173 podStartE2EDuration="7.59735407s" podCreationTimestamp="2025-04-30 13:51:46 +0000 UTC" firstStartedPulling="2025-04-30 13:51:46.802510132 +0000 UTC m=+43.353915668" lastFinishedPulling="2025-04-30 13:51:53.282926027 +0000 UTC m=+49.834331565" observedRunningTime="2025-04-30 13:51:53.596496365 +0000 UTC m=+50.147901928" watchObservedRunningTime="2025-04-30 13:51:53.59735407 +0000 UTC m=+50.148759618" Apr 30 13:51:54.045630 kubelet[1949]: E0430 13:51:54.045515 1949 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Apr 30 13:51:55.046032 kubelet[1949]: E0430 13:51:55.045938 1949 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Apr 30 13:51:56.047163 kubelet[1949]: E0430 13:51:56.047065 1949 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Apr 30 13:51:57.048191 kubelet[1949]: E0430 13:51:57.048113 1949 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Apr 30 13:51:58.048918 kubelet[1949]: E0430 13:51:58.048853 1949 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Apr 30 13:51:59.049636 kubelet[1949]: E0430 13:51:59.049526 1949 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Apr 30 13:52:00.050338 kubelet[1949]: E0430 13:52:00.050177 1949 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Apr 30 13:52:01.050595 kubelet[1949]: E0430 13:52:01.050494 1949 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Apr 30 13:52:02.051106 kubelet[1949]: E0430 13:52:02.051028 1949 file_linux.go:61] "Unable to read config path" 
err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Apr 30 13:52:03.052282 kubelet[1949]: E0430 13:52:03.052163 1949 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Apr 30 13:52:03.758543 systemd[1]: Created slice kubepods-besteffort-podaa65456c_df96_4bee_ac43_cf46dd417190.slice - libcontainer container kubepods-besteffort-podaa65456c_df96_4bee_ac43_cf46dd417190.slice. Apr 30 13:52:03.886891 systemd[1]: Started sshd@9-10.230.17.190:22-51.178.189.133:50248.service - OpenSSH per-connection server daemon (51.178.189.133:50248). Apr 30 13:52:03.913332 kubelet[1949]: I0430 13:52:03.912586 1949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mc9bb\" (UniqueName: \"kubernetes.io/projected/aa65456c-df96-4bee-ac43-cf46dd417190-kube-api-access-mc9bb\") pod \"test-pod-1\" (UID: \"aa65456c-df96-4bee-ac43-cf46dd417190\") " pod="default/test-pod-1" Apr 30 13:52:03.913332 kubelet[1949]: I0430 13:52:03.912688 1949 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-fb205e4f-4b93-4423-be0b-f86024331294\" (UniqueName: \"kubernetes.io/nfs/aa65456c-df96-4bee-ac43-cf46dd417190-pvc-fb205e4f-4b93-4423-be0b-f86024331294\") pod \"test-pod-1\" (UID: \"aa65456c-df96-4bee-ac43-cf46dd417190\") " pod="default/test-pod-1" Apr 30 13:52:03.995553 sshd[3787]: Invalid user from 51.178.189.133 port 50248 Apr 30 13:52:04.004076 kubelet[1949]: E0430 13:52:04.003983 1949 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Apr 30 13:52:04.037436 containerd[1523]: time="2025-04-30T13:52:04.036650667Z" level=info msg="StopPodSandbox for \"860edaba9b516d8008c4a48eb9d70aa925fca42fb644d29f1b0b448042ff0b74\"" Apr 30 13:52:04.037436 containerd[1523]: time="2025-04-30T13:52:04.036987798Z" level=info msg="TearDown network for sandbox \"860edaba9b516d8008c4a48eb9d70aa925fca42fb644d29f1b0b448042ff0b74\" successfully" Apr 30 13:52:04.037436 containerd[1523]: time="2025-04-30T13:52:04.037013366Z" level=info msg="StopPodSandbox for \"860edaba9b516d8008c4a48eb9d70aa925fca42fb644d29f1b0b448042ff0b74\" returns successfully" Apr 30 13:52:04.046970 containerd[1523]: time="2025-04-30T13:52:04.046706174Z" level=info msg="RemovePodSandbox for \"860edaba9b516d8008c4a48eb9d70aa925fca42fb644d29f1b0b448042ff0b74\"" Apr 30 13:52:04.052928 kubelet[1949]: E0430 13:52:04.052439 1949 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Apr 30 13:52:04.056172 containerd[1523]: time="2025-04-30T13:52:04.055409175Z" level=info msg="Forcibly stopping sandbox \"860edaba9b516d8008c4a48eb9d70aa925fca42fb644d29f1b0b448042ff0b74\"" Apr 30 13:52:04.056172 containerd[1523]: time="2025-04-30T13:52:04.055571010Z" level=info msg="TearDown network for sandbox \"860edaba9b516d8008c4a48eb9d70aa925fca42fb644d29f1b0b448042ff0b74\" successfully" Apr 30 13:52:04.067634 kernel: FS-Cache: Loaded Apr 30 13:52:04.076895 containerd[1523]: time="2025-04-30T13:52:04.076838987Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"860edaba9b516d8008c4a48eb9d70aa925fca42fb644d29f1b0b448042ff0b74\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 30 13:52:04.077448 containerd[1523]: time="2025-04-30T13:52:04.077137588Z" level=info msg="RemovePodSandbox \"860edaba9b516d8008c4a48eb9d70aa925fca42fb644d29f1b0b448042ff0b74\" returns successfully" Apr 30 13:52:04.078024 containerd[1523]: time="2025-04-30T13:52:04.077796419Z" level=info msg="StopPodSandbox for \"9cfa30319da15dc4e493619ffb0863ded08f7e93103bf0dd7e75433d1e033334\"" Apr 30 13:52:04.078024 containerd[1523]: time="2025-04-30T13:52:04.077927190Z" level=info msg="TearDown network for sandbox \"9cfa30319da15dc4e493619ffb0863ded08f7e93103bf0dd7e75433d1e033334\" successfully" Apr 30 13:52:04.078024 containerd[1523]: time="2025-04-30T13:52:04.077946869Z" level=info msg="StopPodSandbox for \"9cfa30319da15dc4e493619ffb0863ded08f7e93103bf0dd7e75433d1e033334\" returns successfully" Apr 30 13:52:04.079884 containerd[1523]: time="2025-04-30T13:52:04.078650167Z" level=info msg="RemovePodSandbox for \"9cfa30319da15dc4e493619ffb0863ded08f7e93103bf0dd7e75433d1e033334\"" Apr 30 13:52:04.079884 containerd[1523]: time="2025-04-30T13:52:04.078683759Z" level=info msg="Forcibly stopping sandbox \"9cfa30319da15dc4e493619ffb0863ded08f7e93103bf0dd7e75433d1e033334\"" Apr 30 13:52:04.079884 containerd[1523]: time="2025-04-30T13:52:04.078794962Z" level=info msg="TearDown network for sandbox \"9cfa30319da15dc4e493619ffb0863ded08f7e93103bf0dd7e75433d1e033334\" successfully" Apr 30 13:52:04.081463 containerd[1523]: time="2025-04-30T13:52:04.081430680Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"9cfa30319da15dc4e493619ffb0863ded08f7e93103bf0dd7e75433d1e033334\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Apr 30 13:52:04.081653 containerd[1523]: time="2025-04-30T13:52:04.081624953Z" level=info msg="RemovePodSandbox \"9cfa30319da15dc4e493619ffb0863ded08f7e93103bf0dd7e75433d1e033334\" returns successfully" Apr 30 13:52:04.082176 containerd[1523]: time="2025-04-30T13:52:04.082148585Z" level=info msg="StopPodSandbox for \"dff0c840e77d1408661453f5e3a7a4080e3954c004e9bc06e9a7c21213ee16e4\"" Apr 30 13:52:04.082446 containerd[1523]: time="2025-04-30T13:52:04.082419772Z" level=info msg="TearDown network for sandbox \"dff0c840e77d1408661453f5e3a7a4080e3954c004e9bc06e9a7c21213ee16e4\" successfully" Apr 30 13:52:04.082570 containerd[1523]: time="2025-04-30T13:52:04.082546316Z" level=info msg="StopPodSandbox for \"dff0c840e77d1408661453f5e3a7a4080e3954c004e9bc06e9a7c21213ee16e4\" returns successfully" Apr 30 13:52:04.083011 containerd[1523]: time="2025-04-30T13:52:04.082982197Z" level=info msg="RemovePodSandbox for \"dff0c840e77d1408661453f5e3a7a4080e3954c004e9bc06e9a7c21213ee16e4\"" Apr 30 13:52:04.083287 containerd[1523]: time="2025-04-30T13:52:04.083231126Z" level=info msg="Forcibly stopping sandbox \"dff0c840e77d1408661453f5e3a7a4080e3954c004e9bc06e9a7c21213ee16e4\"" Apr 30 13:52:04.083483 containerd[1523]: time="2025-04-30T13:52:04.083440644Z" level=info msg="TearDown network for sandbox \"dff0c840e77d1408661453f5e3a7a4080e3954c004e9bc06e9a7c21213ee16e4\" successfully" Apr 30 13:52:04.085815 containerd[1523]: time="2025-04-30T13:52:04.085783838Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"dff0c840e77d1408661453f5e3a7a4080e3954c004e9bc06e9a7c21213ee16e4\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 30 13:52:04.086071 containerd[1523]: time="2025-04-30T13:52:04.085963011Z" level=info msg="RemovePodSandbox \"dff0c840e77d1408661453f5e3a7a4080e3954c004e9bc06e9a7c21213ee16e4\" returns successfully" Apr 30 13:52:04.086531 containerd[1523]: time="2025-04-30T13:52:04.086314830Z" level=info msg="StopPodSandbox for \"925fba2b0ce7e301fbfb908a332313b38e5ca97545febe140dbe5d1fbefe4ff6\"" Apr 30 13:52:04.086531 containerd[1523]: time="2025-04-30T13:52:04.086416250Z" level=info msg="TearDown network for sandbox \"925fba2b0ce7e301fbfb908a332313b38e5ca97545febe140dbe5d1fbefe4ff6\" successfully" Apr 30 13:52:04.086531 containerd[1523]: time="2025-04-30T13:52:04.086437190Z" level=info msg="StopPodSandbox for \"925fba2b0ce7e301fbfb908a332313b38e5ca97545febe140dbe5d1fbefe4ff6\" returns successfully" Apr 30 13:52:04.087172 containerd[1523]: time="2025-04-30T13:52:04.086990020Z" level=info msg="RemovePodSandbox for \"925fba2b0ce7e301fbfb908a332313b38e5ca97545febe140dbe5d1fbefe4ff6\"" Apr 30 13:52:04.087172 containerd[1523]: time="2025-04-30T13:52:04.087020881Z" level=info msg="Forcibly stopping sandbox \"925fba2b0ce7e301fbfb908a332313b38e5ca97545febe140dbe5d1fbefe4ff6\"" Apr 30 13:52:04.087172 containerd[1523]: time="2025-04-30T13:52:04.087098918Z" level=info msg="TearDown network for sandbox \"925fba2b0ce7e301fbfb908a332313b38e5ca97545febe140dbe5d1fbefe4ff6\" successfully" Apr 30 13:52:04.097184 containerd[1523]: time="2025-04-30T13:52:04.097028945Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"925fba2b0ce7e301fbfb908a332313b38e5ca97545febe140dbe5d1fbefe4ff6\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Apr 30 13:52:04.097184 containerd[1523]: time="2025-04-30T13:52:04.097082125Z" level=info msg="RemovePodSandbox \"925fba2b0ce7e301fbfb908a332313b38e5ca97545febe140dbe5d1fbefe4ff6\" returns successfully" Apr 30 13:52:04.098785 containerd[1523]: time="2025-04-30T13:52:04.098576528Z" level=info msg="StopPodSandbox for \"a53d76da60d71b00b2409af9b524e1c351775eab7090504c9f5ca692745bed81\"" Apr 30 13:52:04.098785 containerd[1523]: time="2025-04-30T13:52:04.098701912Z" level=info msg="TearDown network for sandbox \"a53d76da60d71b00b2409af9b524e1c351775eab7090504c9f5ca692745bed81\" successfully" Apr 30 13:52:04.098785 containerd[1523]: time="2025-04-30T13:52:04.098720813Z" level=info msg="StopPodSandbox for \"a53d76da60d71b00b2409af9b524e1c351775eab7090504c9f5ca692745bed81\" returns successfully" Apr 30 13:52:04.099584 containerd[1523]: time="2025-04-30T13:52:04.099369671Z" level=info msg="RemovePodSandbox for \"a53d76da60d71b00b2409af9b524e1c351775eab7090504c9f5ca692745bed81\"" Apr 30 13:52:04.099584 containerd[1523]: time="2025-04-30T13:52:04.099402551Z" level=info msg="Forcibly stopping sandbox \"a53d76da60d71b00b2409af9b524e1c351775eab7090504c9f5ca692745bed81\"" Apr 30 13:52:04.099584 containerd[1523]: time="2025-04-30T13:52:04.099503377Z" level=info msg="TearDown network for sandbox \"a53d76da60d71b00b2409af9b524e1c351775eab7090504c9f5ca692745bed81\" successfully" Apr 30 13:52:04.102923 containerd[1523]: time="2025-04-30T13:52:04.102297785Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a53d76da60d71b00b2409af9b524e1c351775eab7090504c9f5ca692745bed81\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 30 13:52:04.102923 containerd[1523]: time="2025-04-30T13:52:04.102342287Z" level=info msg="RemovePodSandbox \"a53d76da60d71b00b2409af9b524e1c351775eab7090504c9f5ca692745bed81\" returns successfully" Apr 30 13:52:04.102923 containerd[1523]: time="2025-04-30T13:52:04.102687653Z" level=info msg="StopPodSandbox for \"9c1f03a9d938ad3ecc36d7bf2f515183722dd6878ba8b44c01e7be1fec30f830\"" Apr 30 13:52:04.102923 containerd[1523]: time="2025-04-30T13:52:04.102791372Z" level=info msg="TearDown network for sandbox \"9c1f03a9d938ad3ecc36d7bf2f515183722dd6878ba8b44c01e7be1fec30f830\" successfully" Apr 30 13:52:04.102923 containerd[1523]: time="2025-04-30T13:52:04.102809348Z" level=info msg="StopPodSandbox for \"9c1f03a9d938ad3ecc36d7bf2f515183722dd6878ba8b44c01e7be1fec30f830\" returns successfully" Apr 30 13:52:04.104267 containerd[1523]: time="2025-04-30T13:52:04.103730079Z" level=info msg="RemovePodSandbox for \"9c1f03a9d938ad3ecc36d7bf2f515183722dd6878ba8b44c01e7be1fec30f830\"" Apr 30 13:52:04.104267 containerd[1523]: time="2025-04-30T13:52:04.103761172Z" level=info msg="Forcibly stopping sandbox \"9c1f03a9d938ad3ecc36d7bf2f515183722dd6878ba8b44c01e7be1fec30f830\"" Apr 30 13:52:04.104267 containerd[1523]: time="2025-04-30T13:52:04.103840699Z" level=info msg="TearDown network for sandbox \"9c1f03a9d938ad3ecc36d7bf2f515183722dd6878ba8b44c01e7be1fec30f830\" successfully" Apr 30 13:52:04.106592 containerd[1523]: time="2025-04-30T13:52:04.106556799Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"9c1f03a9d938ad3ecc36d7bf2f515183722dd6878ba8b44c01e7be1fec30f830\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Apr 30 13:52:04.107023 containerd[1523]: time="2025-04-30T13:52:04.106768843Z" level=info msg="RemovePodSandbox \"9c1f03a9d938ad3ecc36d7bf2f515183722dd6878ba8b44c01e7be1fec30f830\" returns successfully" Apr 30 13:52:04.107451 containerd[1523]: time="2025-04-30T13:52:04.107218108Z" level=info msg="StopPodSandbox for \"3e66a5a10cbd1f710081f764148374940e7f2f09161d1911af647241f9616981\"" Apr 30 13:52:04.107451 containerd[1523]: time="2025-04-30T13:52:04.107368165Z" level=info msg="TearDown network for sandbox \"3e66a5a10cbd1f710081f764148374940e7f2f09161d1911af647241f9616981\" successfully" Apr 30 13:52:04.107451 containerd[1523]: time="2025-04-30T13:52:04.107387620Z" level=info msg="StopPodSandbox for \"3e66a5a10cbd1f710081f764148374940e7f2f09161d1911af647241f9616981\" returns successfully" Apr 30 13:52:04.108429 containerd[1523]: time="2025-04-30T13:52:04.108071395Z" level=info msg="RemovePodSandbox for \"3e66a5a10cbd1f710081f764148374940e7f2f09161d1911af647241f9616981\"" Apr 30 13:52:04.108429 containerd[1523]: time="2025-04-30T13:52:04.108225513Z" level=info msg="Forcibly stopping sandbox \"3e66a5a10cbd1f710081f764148374940e7f2f09161d1911af647241f9616981\"" Apr 30 13:52:04.108429 containerd[1523]: time="2025-04-30T13:52:04.108344797Z" level=info msg="TearDown network for sandbox \"3e66a5a10cbd1f710081f764148374940e7f2f09161d1911af647241f9616981\" successfully" Apr 30 13:52:04.111202 containerd[1523]: time="2025-04-30T13:52:04.110941843Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"3e66a5a10cbd1f710081f764148374940e7f2f09161d1911af647241f9616981\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 30 13:52:04.111202 containerd[1523]: time="2025-04-30T13:52:04.110987837Z" level=info msg="RemovePodSandbox \"3e66a5a10cbd1f710081f764148374940e7f2f09161d1911af647241f9616981\" returns successfully" Apr 30 13:52:04.111842 containerd[1523]: time="2025-04-30T13:52:04.111652629Z" level=info msg="StopPodSandbox for \"41c79e6807c93a65a01b5ef1f80a8a291b44ab3ee7a60a4be3efa65a15a4abbf\"" Apr 30 13:52:04.111842 containerd[1523]: time="2025-04-30T13:52:04.111759759Z" level=info msg="TearDown network for sandbox \"41c79e6807c93a65a01b5ef1f80a8a291b44ab3ee7a60a4be3efa65a15a4abbf\" successfully" Apr 30 13:52:04.111842 containerd[1523]: time="2025-04-30T13:52:04.111777815Z" level=info msg="StopPodSandbox for \"41c79e6807c93a65a01b5ef1f80a8a291b44ab3ee7a60a4be3efa65a15a4abbf\" returns successfully" Apr 30 13:52:04.112721 containerd[1523]: time="2025-04-30T13:52:04.112494537Z" level=info msg="RemovePodSandbox for \"41c79e6807c93a65a01b5ef1f80a8a291b44ab3ee7a60a4be3efa65a15a4abbf\"" Apr 30 13:52:04.112721 containerd[1523]: time="2025-04-30T13:52:04.112526039Z" level=info msg="Forcibly stopping sandbox \"41c79e6807c93a65a01b5ef1f80a8a291b44ab3ee7a60a4be3efa65a15a4abbf\"" Apr 30 13:52:04.112721 containerd[1523]: time="2025-04-30T13:52:04.112626181Z" level=info msg="TearDown network for sandbox \"41c79e6807c93a65a01b5ef1f80a8a291b44ab3ee7a60a4be3efa65a15a4abbf\" successfully" Apr 30 13:52:04.116179 containerd[1523]: time="2025-04-30T13:52:04.115633814Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"41c79e6807c93a65a01b5ef1f80a8a291b44ab3ee7a60a4be3efa65a15a4abbf\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Apr 30 13:52:04.116179 containerd[1523]: time="2025-04-30T13:52:04.115683792Z" level=info msg="RemovePodSandbox \"41c79e6807c93a65a01b5ef1f80a8a291b44ab3ee7a60a4be3efa65a15a4abbf\" returns successfully" Apr 30 13:52:04.116179 containerd[1523]: time="2025-04-30T13:52:04.115994279Z" level=info msg="StopPodSandbox for \"d771dfdfbb8401cef38150c82c665425c4e5471fe6f6b3a035260b7d565ec4c9\"" Apr 30 13:52:04.116179 containerd[1523]: time="2025-04-30T13:52:04.116095672Z" level=info msg="TearDown network for sandbox \"d771dfdfbb8401cef38150c82c665425c4e5471fe6f6b3a035260b7d565ec4c9\" successfully" Apr 30 13:52:04.116179 containerd[1523]: time="2025-04-30T13:52:04.116115630Z" level=info msg="StopPodSandbox for \"d771dfdfbb8401cef38150c82c665425c4e5471fe6f6b3a035260b7d565ec4c9\" returns successfully" Apr 30 13:52:04.117101 containerd[1523]: time="2025-04-30T13:52:04.116981603Z" level=info msg="RemovePodSandbox for \"d771dfdfbb8401cef38150c82c665425c4e5471fe6f6b3a035260b7d565ec4c9\"" Apr 30 13:52:04.117101 containerd[1523]: time="2025-04-30T13:52:04.117052680Z" level=info msg="Forcibly stopping sandbox \"d771dfdfbb8401cef38150c82c665425c4e5471fe6f6b3a035260b7d565ec4c9\"" Apr 30 13:52:04.117682 containerd[1523]: time="2025-04-30T13:52:04.117459056Z" level=info msg="TearDown network for sandbox \"d771dfdfbb8401cef38150c82c665425c4e5471fe6f6b3a035260b7d565ec4c9\" successfully" Apr 30 13:52:04.120214 containerd[1523]: time="2025-04-30T13:52:04.120074559Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d771dfdfbb8401cef38150c82c665425c4e5471fe6f6b3a035260b7d565ec4c9\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 30 13:52:04.120214 containerd[1523]: time="2025-04-30T13:52:04.120130123Z" level=info msg="RemovePodSandbox \"d771dfdfbb8401cef38150c82c665425c4e5471fe6f6b3a035260b7d565ec4c9\" returns successfully" Apr 30 13:52:04.120886 containerd[1523]: time="2025-04-30T13:52:04.120660032Z" level=info msg="StopPodSandbox for \"614bc0708b949fb00c15df5cb9340340c816f249e04be2448ca3a58d074d6160\"" Apr 30 13:52:04.120886 containerd[1523]: time="2025-04-30T13:52:04.120765534Z" level=info msg="TearDown network for sandbox \"614bc0708b949fb00c15df5cb9340340c816f249e04be2448ca3a58d074d6160\" successfully" Apr 30 13:52:04.120886 containerd[1523]: time="2025-04-30T13:52:04.120784037Z" level=info msg="StopPodSandbox for \"614bc0708b949fb00c15df5cb9340340c816f249e04be2448ca3a58d074d6160\" returns successfully" Apr 30 13:52:04.122299 containerd[1523]: time="2025-04-30T13:52:04.121509457Z" level=info msg="RemovePodSandbox for \"614bc0708b949fb00c15df5cb9340340c816f249e04be2448ca3a58d074d6160\"" Apr 30 13:52:04.122299 containerd[1523]: time="2025-04-30T13:52:04.121542375Z" level=info msg="Forcibly stopping sandbox \"614bc0708b949fb00c15df5cb9340340c816f249e04be2448ca3a58d074d6160\"" Apr 30 13:52:04.122299 containerd[1523]: time="2025-04-30T13:52:04.121645865Z" level=info msg="TearDown network for sandbox \"614bc0708b949fb00c15df5cb9340340c816f249e04be2448ca3a58d074d6160\" successfully" Apr 30 13:52:04.124467 containerd[1523]: time="2025-04-30T13:52:04.124436498Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"614bc0708b949fb00c15df5cb9340340c816f249e04be2448ca3a58d074d6160\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Apr 30 13:52:04.124722 containerd[1523]: time="2025-04-30T13:52:04.124694296Z" level=info msg="RemovePodSandbox \"614bc0708b949fb00c15df5cb9340340c816f249e04be2448ca3a58d074d6160\" returns successfully" Apr 30 13:52:04.125354 containerd[1523]: time="2025-04-30T13:52:04.125327198Z" level=info msg="StopPodSandbox for \"71e56577daadeb2dcb129039e33a89740d1cf3f492859e2751e62aa86dc2e05d\"" Apr 30 13:52:04.125592 containerd[1523]: time="2025-04-30T13:52:04.125566160Z" level=info msg="TearDown network for sandbox \"71e56577daadeb2dcb129039e33a89740d1cf3f492859e2751e62aa86dc2e05d\" successfully" Apr 30 13:52:04.125808 containerd[1523]: time="2025-04-30T13:52:04.125721362Z" level=info msg="StopPodSandbox for \"71e56577daadeb2dcb129039e33a89740d1cf3f492859e2751e62aa86dc2e05d\" returns successfully" Apr 30 13:52:04.126305 containerd[1523]: time="2025-04-30T13:52:04.126157121Z" level=info msg="RemovePodSandbox for \"71e56577daadeb2dcb129039e33a89740d1cf3f492859e2751e62aa86dc2e05d\"" Apr 30 13:52:04.126305 containerd[1523]: time="2025-04-30T13:52:04.126190940Z" level=info msg="Forcibly stopping sandbox \"71e56577daadeb2dcb129039e33a89740d1cf3f492859e2751e62aa86dc2e05d\"" Apr 30 13:52:04.126856 containerd[1523]: time="2025-04-30T13:52:04.126649115Z" level=info msg="TearDown network for sandbox \"71e56577daadeb2dcb129039e33a89740d1cf3f492859e2751e62aa86dc2e05d\" successfully" Apr 30 13:52:04.129478 containerd[1523]: time="2025-04-30T13:52:04.129309897Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"71e56577daadeb2dcb129039e33a89740d1cf3f492859e2751e62aa86dc2e05d\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Apr 30 13:52:04.129478 containerd[1523]: time="2025-04-30T13:52:04.129438974Z" level=info msg="RemovePodSandbox \"71e56577daadeb2dcb129039e33a89740d1cf3f492859e2751e62aa86dc2e05d\" returns successfully"
Apr 30 13:52:04.130293 containerd[1523]: time="2025-04-30T13:52:04.130066371Z" level=info msg="StopPodSandbox for \"38e33c66d84cf46acaab61b4053a3baa21b1edf0c1362b439e0b213f5a1a82f3\""
Apr 30 13:52:04.130543 containerd[1523]: time="2025-04-30T13:52:04.130402177Z" level=info msg="TearDown network for sandbox \"38e33c66d84cf46acaab61b4053a3baa21b1edf0c1362b439e0b213f5a1a82f3\" successfully"
Apr 30 13:52:04.130865 containerd[1523]: time="2025-04-30T13:52:04.130423598Z" level=info msg="StopPodSandbox for \"38e33c66d84cf46acaab61b4053a3baa21b1edf0c1362b439e0b213f5a1a82f3\" returns successfully"
Apr 30 13:52:04.131792 containerd[1523]: time="2025-04-30T13:52:04.131747950Z" level=info msg="RemovePodSandbox for \"38e33c66d84cf46acaab61b4053a3baa21b1edf0c1362b439e0b213f5a1a82f3\""
Apr 30 13:52:04.131982 containerd[1523]: time="2025-04-30T13:52:04.131868058Z" level=info msg="Forcibly stopping sandbox \"38e33c66d84cf46acaab61b4053a3baa21b1edf0c1362b439e0b213f5a1a82f3\""
Apr 30 13:52:04.133067 containerd[1523]: time="2025-04-30T13:52:04.132129550Z" level=info msg="TearDown network for sandbox \"38e33c66d84cf46acaab61b4053a3baa21b1edf0c1362b439e0b213f5a1a82f3\" successfully"
Apr 30 13:52:04.134741 containerd[1523]: time="2025-04-30T13:52:04.134676778Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"38e33c66d84cf46acaab61b4053a3baa21b1edf0c1362b439e0b213f5a1a82f3\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Apr 30 13:52:04.135013 containerd[1523]: time="2025-04-30T13:52:04.134951504Z" level=info msg="RemovePodSandbox \"38e33c66d84cf46acaab61b4053a3baa21b1edf0c1362b439e0b213f5a1a82f3\" returns successfully"
Apr 30 13:52:04.135793 containerd[1523]: time="2025-04-30T13:52:04.135752895Z" level=info msg="StopPodSandbox for \"21537eb610e76972cdbd866141af9c9a1e90ba00497b3f9a2d095306ebff2ab1\""
Apr 30 13:52:04.136519 containerd[1523]: time="2025-04-30T13:52:04.136459454Z" level=info msg="TearDown network for sandbox \"21537eb610e76972cdbd866141af9c9a1e90ba00497b3f9a2d095306ebff2ab1\" successfully"
Apr 30 13:52:04.136931 containerd[1523]: time="2025-04-30T13:52:04.136880802Z" level=info msg="StopPodSandbox for \"21537eb610e76972cdbd866141af9c9a1e90ba00497b3f9a2d095306ebff2ab1\" returns successfully"
Apr 30 13:52:04.141053 containerd[1523]: time="2025-04-30T13:52:04.140030330Z" level=info msg="RemovePodSandbox for \"21537eb610e76972cdbd866141af9c9a1e90ba00497b3f9a2d095306ebff2ab1\""
Apr 30 13:52:04.141053 containerd[1523]: time="2025-04-30T13:52:04.140065213Z" level=info msg="Forcibly stopping sandbox \"21537eb610e76972cdbd866141af9c9a1e90ba00497b3f9a2d095306ebff2ab1\""
Apr 30 13:52:04.141053 containerd[1523]: time="2025-04-30T13:52:04.140149645Z" level=info msg="TearDown network for sandbox \"21537eb610e76972cdbd866141af9c9a1e90ba00497b3f9a2d095306ebff2ab1\" successfully"
Apr 30 13:52:04.143396 containerd[1523]: time="2025-04-30T13:52:04.143353369Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"21537eb610e76972cdbd866141af9c9a1e90ba00497b3f9a2d095306ebff2ab1\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Apr 30 13:52:04.143587 containerd[1523]: time="2025-04-30T13:52:04.143558515Z" level=info msg="RemovePodSandbox \"21537eb610e76972cdbd866141af9c9a1e90ba00497b3f9a2d095306ebff2ab1\" returns successfully"
Apr 30 13:52:04.151145 containerd[1523]: time="2025-04-30T13:52:04.151105682Z" level=info msg="StopPodSandbox for \"b6c661df0a1e14fea601f34409396b0a3939c559c8cc9ff5c6df2690c55b475e\""
Apr 30 13:52:04.154364 kernel: RPC: Registered named UNIX socket transport module.
Apr 30 13:52:04.154477 kernel: RPC: Registered udp transport module.
Apr 30 13:52:04.154508 kernel: RPC: Registered tcp transport module.
Apr 30 13:52:04.156227 kernel: RPC: Registered tcp-with-tls transport module.
Apr 30 13:52:04.156316 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module.
Apr 30 13:52:04.156368 containerd[1523]: time="2025-04-30T13:52:04.156060462Z" level=info msg="TearDown network for sandbox \"b6c661df0a1e14fea601f34409396b0a3939c559c8cc9ff5c6df2690c55b475e\" successfully"
Apr 30 13:52:04.156368 containerd[1523]: time="2025-04-30T13:52:04.156086179Z" level=info msg="StopPodSandbox for \"b6c661df0a1e14fea601f34409396b0a3939c559c8cc9ff5c6df2690c55b475e\" returns successfully"
Apr 30 13:52:04.161424 containerd[1523]: time="2025-04-30T13:52:04.159482836Z" level=info msg="RemovePodSandbox for \"b6c661df0a1e14fea601f34409396b0a3939c559c8cc9ff5c6df2690c55b475e\""
Apr 30 13:52:04.161424 containerd[1523]: time="2025-04-30T13:52:04.159521744Z" level=info msg="Forcibly stopping sandbox \"b6c661df0a1e14fea601f34409396b0a3939c559c8cc9ff5c6df2690c55b475e\""
Apr 30 13:52:04.161424 containerd[1523]: time="2025-04-30T13:52:04.159626758Z" level=info msg="TearDown network for sandbox \"b6c661df0a1e14fea601f34409396b0a3939c559c8cc9ff5c6df2690c55b475e\" successfully"
Apr 30 13:52:04.174064 containerd[1523]: time="2025-04-30T13:52:04.174010756Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b6c661df0a1e14fea601f34409396b0a3939c559c8cc9ff5c6df2690c55b475e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Apr 30 13:52:04.174283 containerd[1523]: time="2025-04-30T13:52:04.174075306Z" level=info msg="RemovePodSandbox \"b6c661df0a1e14fea601f34409396b0a3939c559c8cc9ff5c6df2690c55b475e\" returns successfully"
Apr 30 13:52:04.436938 kernel: NFS: Registering the id_resolver key type
Apr 30 13:52:04.437147 kernel: Key type id_resolver registered
Apr 30 13:52:04.438268 kernel: Key type id_legacy registered
Apr 30 13:52:04.487918 nfsidmap[3807]: nss_getpwnam: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'gb1.brightbox.com'
Apr 30 13:52:04.495622 nfsidmap[3810]: nss_name_to_gid: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain 'gb1.brightbox.com'
Apr 30 13:52:04.669134 containerd[1523]: time="2025-04-30T13:52:04.669046564Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:aa65456c-df96-4bee-ac43-cf46dd417190,Namespace:default,Attempt:0,}"
Apr 30 13:52:04.893015 systemd-networkd[1453]: cali5ec59c6bf6e: Link UP
Apr 30 13:52:04.893964 systemd-networkd[1453]: cali5ec59c6bf6e: Gained carrier
Apr 30 13:52:04.913151 containerd[1523]: 2025-04-30 13:52:04.763 [INFO][3814] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {10.230.17.190-k8s-test--pod--1-eth0 default aa65456c-df96-4bee-ac43-cf46dd417190 1367 0 2025-04-30 13:51:48 +0000 UTC map[projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s 10.230.17.190 test-pod-1 eth0 default [] [] [kns.default ksa.default.default] cali5ec59c6bf6e [] []}} ContainerID="2a538f1f804ef3249634837b0d2cb2a0200e17c05bb488771ae6c1364e7112f7" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.230.17.190-k8s-test--pod--1-"
Apr 30 13:52:04.913151 containerd[1523]: 2025-04-30 13:52:04.763 [INFO][3814] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="2a538f1f804ef3249634837b0d2cb2a0200e17c05bb488771ae6c1364e7112f7" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.230.17.190-k8s-test--pod--1-eth0"
Apr 30 13:52:04.913151 containerd[1523]: 2025-04-30 13:52:04.813 [INFO][3825] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2a538f1f804ef3249634837b0d2cb2a0200e17c05bb488771ae6c1364e7112f7" HandleID="k8s-pod-network.2a538f1f804ef3249634837b0d2cb2a0200e17c05bb488771ae6c1364e7112f7" Workload="10.230.17.190-k8s-test--pod--1-eth0"
Apr 30 13:52:04.913151 containerd[1523]: 2025-04-30 13:52:04.829 [INFO][3825] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="2a538f1f804ef3249634837b0d2cb2a0200e17c05bb488771ae6c1364e7112f7" HandleID="k8s-pod-network.2a538f1f804ef3249634837b0d2cb2a0200e17c05bb488771ae6c1364e7112f7" Workload="10.230.17.190-k8s-test--pod--1-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003ba360), Attrs:map[string]string{"namespace":"default", "node":"10.230.17.190", "pod":"test-pod-1", "timestamp":"2025-04-30 13:52:04.813085462 +0000 UTC"}, Hostname:"10.230.17.190", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Apr 30 13:52:04.913151 containerd[1523]: 2025-04-30 13:52:04.830 [INFO][3825] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Apr 30 13:52:04.913151 containerd[1523]: 2025-04-30 13:52:04.830 [INFO][3825] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Apr 30 13:52:04.913151 containerd[1523]: 2025-04-30 13:52:04.830 [INFO][3825] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '10.230.17.190'
Apr 30 13:52:04.913151 containerd[1523]: 2025-04-30 13:52:04.843 [INFO][3825] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.2a538f1f804ef3249634837b0d2cb2a0200e17c05bb488771ae6c1364e7112f7" host="10.230.17.190"
Apr 30 13:52:04.913151 containerd[1523]: 2025-04-30 13:52:04.850 [INFO][3825] ipam/ipam.go 372: Looking up existing affinities for host host="10.230.17.190"
Apr 30 13:52:04.913151 containerd[1523]: 2025-04-30 13:52:04.859 [INFO][3825] ipam/ipam.go 489: Trying affinity for 192.168.101.64/26 host="10.230.17.190"
Apr 30 13:52:04.913151 containerd[1523]: 2025-04-30 13:52:04.862 [INFO][3825] ipam/ipam.go 155: Attempting to load block cidr=192.168.101.64/26 host="10.230.17.190"
Apr 30 13:52:04.913151 containerd[1523]: 2025-04-30 13:52:04.866 [INFO][3825] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.101.64/26 host="10.230.17.190"
Apr 30 13:52:04.913151 containerd[1523]: 2025-04-30 13:52:04.866 [INFO][3825] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.101.64/26 handle="k8s-pod-network.2a538f1f804ef3249634837b0d2cb2a0200e17c05bb488771ae6c1364e7112f7" host="10.230.17.190"
Apr 30 13:52:04.913151 containerd[1523]: 2025-04-30 13:52:04.869 [INFO][3825] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.2a538f1f804ef3249634837b0d2cb2a0200e17c05bb488771ae6c1364e7112f7
Apr 30 13:52:04.913151 containerd[1523]: 2025-04-30 13:52:04.875 [INFO][3825] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.101.64/26 handle="k8s-pod-network.2a538f1f804ef3249634837b0d2cb2a0200e17c05bb488771ae6c1364e7112f7" host="10.230.17.190"
Apr 30 13:52:04.913151 containerd[1523]: 2025-04-30 13:52:04.886 [INFO][3825] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.101.68/26] block=192.168.101.64/26 handle="k8s-pod-network.2a538f1f804ef3249634837b0d2cb2a0200e17c05bb488771ae6c1364e7112f7" host="10.230.17.190"
Apr 30 13:52:04.913151 containerd[1523]: 2025-04-30 13:52:04.886 [INFO][3825] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.101.68/26] handle="k8s-pod-network.2a538f1f804ef3249634837b0d2cb2a0200e17c05bb488771ae6c1364e7112f7" host="10.230.17.190"
Apr 30 13:52:04.913151 containerd[1523]: 2025-04-30 13:52:04.886 [INFO][3825] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Apr 30 13:52:04.913151 containerd[1523]: 2025-04-30 13:52:04.886 [INFO][3825] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.101.68/26] IPv6=[] ContainerID="2a538f1f804ef3249634837b0d2cb2a0200e17c05bb488771ae6c1364e7112f7" HandleID="k8s-pod-network.2a538f1f804ef3249634837b0d2cb2a0200e17c05bb488771ae6c1364e7112f7" Workload="10.230.17.190-k8s-test--pod--1-eth0"
Apr 30 13:52:04.913151 containerd[1523]: 2025-04-30 13:52:04.888 [INFO][3814] cni-plugin/k8s.go 386: Populated endpoint ContainerID="2a538f1f804ef3249634837b0d2cb2a0200e17c05bb488771ae6c1364e7112f7" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.230.17.190-k8s-test--pod--1-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.230.17.190-k8s-test--pod--1-eth0", GenerateName:"", Namespace:"default", SelfLink:"", UID:"aa65456c-df96-4bee-ac43-cf46dd417190", ResourceVersion:"1367", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 13, 51, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.230.17.190", ContainerID:"", Pod:"test-pod-1", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.101.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali5ec59c6bf6e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Apr 30 13:52:04.914353 containerd[1523]: 2025-04-30 13:52:04.888 [INFO][3814] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.101.68/32] ContainerID="2a538f1f804ef3249634837b0d2cb2a0200e17c05bb488771ae6c1364e7112f7" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.230.17.190-k8s-test--pod--1-eth0"
Apr 30 13:52:04.914353 containerd[1523]: 2025-04-30 13:52:04.888 [INFO][3814] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5ec59c6bf6e ContainerID="2a538f1f804ef3249634837b0d2cb2a0200e17c05bb488771ae6c1364e7112f7" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.230.17.190-k8s-test--pod--1-eth0"
Apr 30 13:52:04.914353 containerd[1523]: 2025-04-30 13:52:04.894 [INFO][3814] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="2a538f1f804ef3249634837b0d2cb2a0200e17c05bb488771ae6c1364e7112f7" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.230.17.190-k8s-test--pod--1-eth0"
Apr 30 13:52:04.914353 containerd[1523]: 2025-04-30 13:52:04.895 [INFO][3814] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="2a538f1f804ef3249634837b0d2cb2a0200e17c05bb488771ae6c1364e7112f7" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.230.17.190-k8s-test--pod--1-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"10.230.17.190-k8s-test--pod--1-eth0", GenerateName:"", Namespace:"default", SelfLink:"", UID:"aa65456c-df96-4bee-ac43-cf46dd417190", ResourceVersion:"1367", Generation:0, CreationTimestamp:time.Date(2025, time.April, 30, 13, 51, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"10.230.17.190", ContainerID:"2a538f1f804ef3249634837b0d2cb2a0200e17c05bb488771ae6c1364e7112f7", Pod:"test-pod-1", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.101.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali5ec59c6bf6e", MAC:"ea:50:ec:1e:47:89", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Apr 30 13:52:04.914353 containerd[1523]: 2025-04-30 13:52:04.907 [INFO][3814] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="2a538f1f804ef3249634837b0d2cb2a0200e17c05bb488771ae6c1364e7112f7" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="10.230.17.190-k8s-test--pod--1-eth0"
Apr 30 13:52:04.953135 containerd[1523]: time="2025-04-30T13:52:04.952700113Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Apr 30 13:52:04.953135 containerd[1523]: time="2025-04-30T13:52:04.952810321Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Apr 30 13:52:04.953135 containerd[1523]: time="2025-04-30T13:52:04.952834175Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 30 13:52:04.953135 containerd[1523]: time="2025-04-30T13:52:04.952967650Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Apr 30 13:52:04.981492 systemd[1]: Started cri-containerd-2a538f1f804ef3249634837b0d2cb2a0200e17c05bb488771ae6c1364e7112f7.scope - libcontainer container 2a538f1f804ef3249634837b0d2cb2a0200e17c05bb488771ae6c1364e7112f7.
Apr 30 13:52:05.048923 containerd[1523]: time="2025-04-30T13:52:05.048760819Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:aa65456c-df96-4bee-ac43-cf46dd417190,Namespace:default,Attempt:0,} returns sandbox id \"2a538f1f804ef3249634837b0d2cb2a0200e17c05bb488771ae6c1364e7112f7\""
Apr 30 13:52:05.053301 kubelet[1949]: E0430 13:52:05.053238 1949 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Apr 30 13:52:05.060630 containerd[1523]: time="2025-04-30T13:52:05.059410916Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\""
Apr 30 13:52:05.612198 containerd[1523]: time="2025-04-30T13:52:05.611685337Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=61"
Apr 30 13:52:05.614269 containerd[1523]: time="2025-04-30T13:52:05.613897207Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Apr 30 13:52:05.615921 containerd[1523]: time="2025-04-30T13:52:05.615882206Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:244abd08b283a396de679587fab5dec3f2b427a1cc0ada5b813839fcb187f9b8\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:727fa1dd2cee1ccca9e775e517739b20d5d47bd36b6b5bde8aa708de1348532b\", size \"73306154\" in 556.408053ms"
Apr 30 13:52:05.616162 containerd[1523]: time="2025-04-30T13:52:05.616054412Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:244abd08b283a396de679587fab5dec3f2b427a1cc0ada5b813839fcb187f9b8\""
Apr 30 13:52:05.620274 containerd[1523]: time="2025-04-30T13:52:05.619553402Z" level=info msg="CreateContainer within sandbox \"2a538f1f804ef3249634837b0d2cb2a0200e17c05bb488771ae6c1364e7112f7\" for container &ContainerMetadata{Name:test,Attempt:0,}"
Apr 30 13:52:05.653963 containerd[1523]: time="2025-04-30T13:52:05.653879724Z" level=info msg="CreateContainer within sandbox \"2a538f1f804ef3249634837b0d2cb2a0200e17c05bb488771ae6c1364e7112f7\" for &ContainerMetadata{Name:test,Attempt:0,} returns container id \"09cc6ab9010a4fcc1fe133e22adb5a3660b8a38e7010953846b3d9a36a562e77\""
Apr 30 13:52:05.655161 containerd[1523]: time="2025-04-30T13:52:05.655044165Z" level=info msg="StartContainer for \"09cc6ab9010a4fcc1fe133e22adb5a3660b8a38e7010953846b3d9a36a562e77\""
Apr 30 13:52:05.705589 systemd[1]: Started cri-containerd-09cc6ab9010a4fcc1fe133e22adb5a3660b8a38e7010953846b3d9a36a562e77.scope - libcontainer container 09cc6ab9010a4fcc1fe133e22adb5a3660b8a38e7010953846b3d9a36a562e77.
Apr 30 13:52:05.748867 containerd[1523]: time="2025-04-30T13:52:05.748724927Z" level=info msg="StartContainer for \"09cc6ab9010a4fcc1fe133e22adb5a3660b8a38e7010953846b3d9a36a562e77\" returns successfully"
Apr 30 13:52:06.036788 systemd[1]: run-containerd-runc-k8s.io-09cc6ab9010a4fcc1fe133e22adb5a3660b8a38e7010953846b3d9a36a562e77-runc.F78LSr.mount: Deactivated successfully.
Apr 30 13:52:06.054333 kubelet[1949]: E0430 13:52:06.054282 1949 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Apr 30 13:52:06.282783 systemd-networkd[1453]: cali5ec59c6bf6e: Gained IPv6LL
Apr 30 13:52:06.662581 kubelet[1949]: I0430 13:52:06.662405 1949 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/test-pod-1" podStartSLOduration=18.102376459 podStartE2EDuration="18.662338855s" podCreationTimestamp="2025-04-30 13:51:48 +0000 UTC" firstStartedPulling="2025-04-30 13:52:05.057956112 +0000 UTC m=+61.609361662" lastFinishedPulling="2025-04-30 13:52:05.617918517 +0000 UTC m=+62.169324058" observedRunningTime="2025-04-30 13:52:06.661522823 +0000 UTC m=+63.212928374" watchObservedRunningTime="2025-04-30 13:52:06.662338855 +0000 UTC m=+63.213744405"
Apr 30 13:52:07.054747 kubelet[1949]: E0430 13:52:07.054678 1949 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Apr 30 13:52:08.055326 kubelet[1949]: E0430 13:52:08.055216 1949 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Apr 30 13:52:09.055944 kubelet[1949]: E0430 13:52:09.055871 1949 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Apr 30 13:52:10.056829 kubelet[1949]: E0430 13:52:10.056757 1949 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Apr 30 13:52:11.057423 kubelet[1949]: E0430 13:52:11.057329 1949 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Apr 30 13:52:11.853937 sshd[3787]: Connection closed by invalid user 51.178.189.133 port 50248 [preauth]
Apr 30 13:52:11.855943 systemd[1]: sshd@9-10.230.17.190:22-51.178.189.133:50248.service: Deactivated successfully.
Apr 30 13:52:12.058466 kubelet[1949]: E0430 13:52:12.058370 1949 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Apr 30 13:52:13.059292 kubelet[1949]: E0430 13:52:13.059163 1949 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Apr 30 13:52:14.059839 kubelet[1949]: E0430 13:52:14.059763 1949 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Apr 30 13:52:15.060580 kubelet[1949]: E0430 13:52:15.060459 1949 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"