Feb 13 16:19:54.934400 kernel: Linux version 6.6.71-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p1) 13.3.1 20240614, GNU ld (Gentoo 2.42 p6) 2.42.0) #1 SMP PREEMPT_DYNAMIC Thu Feb 13 13:54:58 -00 2025
Feb 13 16:19:54.934448 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=cd73eba291b8356dfc2c39f651cabef9206685f772c8949188fd366788d672c2
Feb 13 16:19:54.934470 kernel: BIOS-provided physical RAM map:
Feb 13 16:19:54.934485 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Feb 13 16:19:54.934499 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Feb 13 16:19:54.934513 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Feb 13 16:19:54.934531 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdafff] usable
Feb 13 16:19:54.934543 kernel: BIOS-e820: [mem 0x000000007ffdb000-0x000000007fffffff] reserved
Feb 13 16:19:54.934554 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Feb 13 16:19:54.934568 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Feb 13 16:19:54.934579 kernel: NX (Execute Disable) protection: active
Feb 13 16:19:54.934589 kernel: APIC: Static calls initialized
Feb 13 16:19:54.934609 kernel: SMBIOS 2.8 present.
Feb 13 16:19:54.934621 kernel: DMI: DigitalOcean Droplet/Droplet, BIOS 20171212 12/12/2017
Feb 13 16:19:54.934634 kernel: Hypervisor detected: KVM
Feb 13 16:19:54.934650 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Feb 13 16:19:54.934664 kernel: kvm-clock: using sched offset of 3156892204 cycles
Feb 13 16:19:54.934676 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Feb 13 16:19:54.934687 kernel: tsc: Detected 2494.138 MHz processor
Feb 13 16:19:54.934697 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Feb 13 16:19:54.934708 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Feb 13 16:19:54.934720 kernel: last_pfn = 0x7ffdb max_arch_pfn = 0x400000000
Feb 13 16:19:54.934735 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Feb 13 16:19:54.934747 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Feb 13 16:19:54.934764 kernel: ACPI: Early table checksum verification disabled
Feb 13 16:19:54.934777 kernel: ACPI: RSDP 0x00000000000F5950 000014 (v00 BOCHS )
Feb 13 16:19:54.934787 kernel: ACPI: RSDT 0x000000007FFE1986 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 16:19:54.934794 kernel: ACPI: FACP 0x000000007FFE176A 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 16:19:54.934803 kernel: ACPI: DSDT 0x000000007FFE0040 00172A (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 16:19:54.934810 kernel: ACPI: FACS 0x000000007FFE0000 000040
Feb 13 16:19:54.934818 kernel: ACPI: APIC 0x000000007FFE17DE 000080 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 16:19:54.934826 kernel: ACPI: HPET 0x000000007FFE185E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 16:19:54.934834 kernel: ACPI: SRAT 0x000000007FFE1896 0000C8 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 16:19:54.934844 kernel: ACPI: WAET 0x000000007FFE195E 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 16:19:54.934852 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe176a-0x7ffe17dd]
Feb 13 16:19:54.934860 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffe0040-0x7ffe1769]
Feb 13 16:19:54.934868 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffe0000-0x7ffe003f]
Feb 13 16:19:54.934875 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe17de-0x7ffe185d]
Feb 13 16:19:54.934883 kernel: ACPI: Reserving HPET table memory at [mem 0x7ffe185e-0x7ffe1895]
Feb 13 16:19:54.934891 kernel: ACPI: Reserving SRAT table memory at [mem 0x7ffe1896-0x7ffe195d]
Feb 13 16:19:54.934905 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe195e-0x7ffe1985]
Feb 13 16:19:54.934913 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Feb 13 16:19:54.934921 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Feb 13 16:19:54.934930 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff]
Feb 13 16:19:54.934938 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff]
Feb 13 16:19:54.934949 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0x7ffdafff] -> [mem 0x00000000-0x7ffdafff]
Feb 13 16:19:54.934959 kernel: NODE_DATA(0) allocated [mem 0x7ffd5000-0x7ffdafff]
Feb 13 16:19:54.935009 kernel: Zone ranges:
Feb 13 16:19:54.935021 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Feb 13 16:19:54.935035 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffdafff]
Feb 13 16:19:54.935049 kernel: Normal empty
Feb 13 16:19:54.935061 kernel: Movable zone start for each node
Feb 13 16:19:54.935074 kernel: Early memory node ranges
Feb 13 16:19:54.935082 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Feb 13 16:19:54.935091 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdafff]
Feb 13 16:19:54.935099 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffdafff]
Feb 13 16:19:54.935119 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Feb 13 16:19:54.935131 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Feb 13 16:19:54.935148 kernel: On node 0, zone DMA32: 37 pages in unavailable ranges
Feb 13 16:19:54.935158 kernel: ACPI: PM-Timer IO Port: 0x608
Feb 13 16:19:54.935167 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Feb 13 16:19:54.935175 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Feb 13 16:19:54.935184 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Feb 13 16:19:54.935192 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Feb 13 16:19:54.935200 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Feb 13 16:19:54.935216 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Feb 13 16:19:54.935225 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Feb 13 16:19:54.935234 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Feb 13 16:19:54.935242 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Feb 13 16:19:54.935250 kernel: TSC deadline timer available
Feb 13 16:19:54.935259 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Feb 13 16:19:54.935279 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Feb 13 16:19:54.935288 kernel: [mem 0x80000000-0xfeffbfff] available for PCI devices
Feb 13 16:19:54.935525 kernel: Booting paravirtualized kernel on KVM
Feb 13 16:19:54.935534 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Feb 13 16:19:54.935547 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Feb 13 16:19:54.935555 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u1048576
Feb 13 16:19:54.935564 kernel: pcpu-alloc: s197032 r8192 d32344 u1048576 alloc=1*2097152
Feb 13 16:19:54.935572 kernel: pcpu-alloc: [0] 0 1
Feb 13 16:19:54.935580 kernel: kvm-guest: PV spinlocks disabled, no host support
Feb 13 16:19:54.935591 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=cd73eba291b8356dfc2c39f651cabef9206685f772c8949188fd366788d672c2
Feb 13 16:19:54.935600 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Feb 13 16:19:54.935611 kernel: random: crng init done
Feb 13 16:19:54.935619 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Feb 13 16:19:54.935628 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Feb 13 16:19:54.935636 kernel: Fallback order for Node 0: 0
Feb 13 16:19:54.935644 kernel: Built 1 zonelists, mobility grouping on. Total pages: 515803
Feb 13 16:19:54.935653 kernel: Policy zone: DMA32
Feb 13 16:19:54.935662 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Feb 13 16:19:54.935670 kernel: Memory: 1971200K/2096612K available (12288K kernel code, 2299K rwdata, 22736K rodata, 42976K init, 2216K bss, 125152K reserved, 0K cma-reserved)
Feb 13 16:19:54.935679 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Feb 13 16:19:54.935690 kernel: Kernel/User page tables isolation: enabled
Feb 13 16:19:54.935698 kernel: ftrace: allocating 37920 entries in 149 pages
Feb 13 16:19:54.935707 kernel: ftrace: allocated 149 pages with 4 groups
Feb 13 16:19:54.935715 kernel: Dynamic Preempt: voluntary
Feb 13 16:19:54.935723 kernel: rcu: Preemptible hierarchical RCU implementation.
Feb 13 16:19:54.935733 kernel: rcu: RCU event tracing is enabled.
Feb 13 16:19:54.935741 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Feb 13 16:19:54.935750 kernel: Trampoline variant of Tasks RCU enabled.
Feb 13 16:19:54.935758 kernel: Rude variant of Tasks RCU enabled.
Feb 13 16:19:54.935767 kernel: Tracing variant of Tasks RCU enabled.
Feb 13 16:19:54.935778 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Feb 13 16:19:54.935787 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Feb 13 16:19:54.935795 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Feb 13 16:19:54.935803 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Feb 13 16:19:54.935814 kernel: Console: colour VGA+ 80x25
Feb 13 16:19:54.935822 kernel: printk: console [tty0] enabled
Feb 13 16:19:54.935831 kernel: printk: console [ttyS0] enabled
Feb 13 16:19:54.935839 kernel: ACPI: Core revision 20230628
Feb 13 16:19:54.935847 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Feb 13 16:19:54.935858 kernel: APIC: Switch to symmetric I/O mode setup
Feb 13 16:19:54.935867 kernel: x2apic enabled
Feb 13 16:19:54.935875 kernel: APIC: Switched APIC routing to: physical x2apic
Feb 13 16:19:54.935883 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Feb 13 16:19:54.935892 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x23f39838d43, max_idle_ns: 440795267131 ns
Feb 13 16:19:54.935901 kernel: Calibrating delay loop (skipped) preset value.. 4988.27 BogoMIPS (lpj=2494138)
Feb 13 16:19:54.935909 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Feb 13 16:19:54.935917 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Feb 13 16:19:54.935936 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Feb 13 16:19:54.935945 kernel: Spectre V2 : Mitigation: Retpolines
Feb 13 16:19:54.935954 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Feb 13 16:19:54.935965 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Feb 13 16:19:54.935974 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls
Feb 13 16:19:54.935982 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Feb 13 16:19:54.935991 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Feb 13 16:19:54.936000 kernel: MDS: Mitigation: Clear CPU buffers
Feb 13 16:19:54.936009 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Feb 13 16:19:54.936022 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Feb 13 16:19:54.936031 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Feb 13 16:19:54.936040 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Feb 13 16:19:54.936049 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Feb 13 16:19:54.936058 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
Feb 13 16:19:54.936067 kernel: Freeing SMP alternatives memory: 32K
Feb 13 16:19:54.936075 kernel: pid_max: default: 32768 minimum: 301
Feb 13 16:19:54.936084 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Feb 13 16:19:54.936096 kernel: landlock: Up and running.
Feb 13 16:19:54.936105 kernel: SELinux: Initializing.
Feb 13 16:19:54.936114 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Feb 13 16:19:54.936123 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Feb 13 16:19:54.936132 kernel: smpboot: CPU0: Intel DO-Regular (family: 0x6, model: 0x4f, stepping: 0x1)
Feb 13 16:19:54.936141 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Feb 13 16:19:54.936150 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Feb 13 16:19:54.936159 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Feb 13 16:19:54.936168 kernel: Performance Events: unsupported p6 CPU model 79 no PMU driver, software events only.
Feb 13 16:19:54.936180 kernel: signal: max sigframe size: 1776
Feb 13 16:19:54.936188 kernel: rcu: Hierarchical SRCU implementation.
Feb 13 16:19:54.936200 kernel: rcu: Max phase no-delay instances is 400.
Feb 13 16:19:54.936210 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Feb 13 16:19:54.936218 kernel: smp: Bringing up secondary CPUs ...
Feb 13 16:19:54.936227 kernel: smpboot: x86: Booting SMP configuration:
Feb 13 16:19:54.936236 kernel: .... node #0, CPUs: #1
Feb 13 16:19:54.936244 kernel: smp: Brought up 1 node, 2 CPUs
Feb 13 16:19:54.936255 kernel: smpboot: Max logical packages: 1
Feb 13 16:19:54.938328 kernel: smpboot: Total of 2 processors activated (9976.55 BogoMIPS)
Feb 13 16:19:54.938342 kernel: devtmpfs: initialized
Feb 13 16:19:54.938352 kernel: x86/mm: Memory block size: 128MB
Feb 13 16:19:54.938367 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Feb 13 16:19:54.938376 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Feb 13 16:19:54.938385 kernel: pinctrl core: initialized pinctrl subsystem
Feb 13 16:19:54.938394 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Feb 13 16:19:54.938403 kernel: audit: initializing netlink subsys (disabled)
Feb 13 16:19:54.938412 kernel: thermal_sys: Registered thermal governor 'step_wise'
Feb 13 16:19:54.938429 kernel: thermal_sys: Registered thermal governor 'user_space'
Feb 13 16:19:54.938438 kernel: audit: type=2000 audit(1739463593.857:1): state=initialized audit_enabled=0 res=1
Feb 13 16:19:54.938447 kernel: cpuidle: using governor menu
Feb 13 16:19:54.938456 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Feb 13 16:19:54.938464 kernel: dca service started, version 1.12.1
Feb 13 16:19:54.938473 kernel: PCI: Using configuration type 1 for base access
Feb 13 16:19:54.938482 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Feb 13 16:19:54.938491 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Feb 13 16:19:54.938500 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Feb 13 16:19:54.938512 kernel: ACPI: Added _OSI(Module Device)
Feb 13 16:19:54.938521 kernel: ACPI: Added _OSI(Processor Device)
Feb 13 16:19:54.938530 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Feb 13 16:19:54.938538 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Feb 13 16:19:54.938547 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Feb 13 16:19:54.938556 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Feb 13 16:19:54.938565 kernel: ACPI: Interpreter enabled
Feb 13 16:19:54.938574 kernel: ACPI: PM: (supports S0 S5)
Feb 13 16:19:54.938582 kernel: ACPI: Using IOAPIC for interrupt routing
Feb 13 16:19:54.938595 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Feb 13 16:19:54.938608 kernel: PCI: Using E820 reservations for host bridge windows
Feb 13 16:19:54.938622 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Feb 13 16:19:54.938635 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Feb 13 16:19:54.938910 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Feb 13 16:19:54.939091 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI]
Feb 13 16:19:54.939218 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
Feb 13 16:19:54.939237 kernel: acpiphp: Slot [3] registered
Feb 13 16:19:54.939247 kernel: acpiphp: Slot [4] registered
Feb 13 16:19:54.939256 kernel: acpiphp: Slot [5] registered
Feb 13 16:19:54.940482 kernel: acpiphp: Slot [6] registered
Feb 13 16:19:54.940507 kernel: acpiphp: Slot [7] registered
Feb 13 16:19:54.940524 kernel: acpiphp: Slot [8] registered
Feb 13 16:19:54.940535 kernel: acpiphp: Slot [9] registered
Feb 13 16:19:54.940544 kernel: acpiphp: Slot [10] registered
Feb 13 16:19:54.940553 kernel: acpiphp: Slot [11] registered
Feb 13 16:19:54.940569 kernel: acpiphp: Slot [12] registered
Feb 13 16:19:54.940578 kernel: acpiphp: Slot [13] registered
Feb 13 16:19:54.940587 kernel: acpiphp: Slot [14] registered
Feb 13 16:19:54.940596 kernel: acpiphp: Slot [15] registered
Feb 13 16:19:54.940604 kernel: acpiphp: Slot [16] registered
Feb 13 16:19:54.940613 kernel: acpiphp: Slot [17] registered
Feb 13 16:19:54.940623 kernel: acpiphp: Slot [18] registered
Feb 13 16:19:54.940638 kernel: acpiphp: Slot [19] registered
Feb 13 16:19:54.940655 kernel: acpiphp: Slot [20] registered
Feb 13 16:19:54.940669 kernel: acpiphp: Slot [21] registered
Feb 13 16:19:54.940687 kernel: acpiphp: Slot [22] registered
Feb 13 16:19:54.940701 kernel: acpiphp: Slot [23] registered
Feb 13 16:19:54.940715 kernel: acpiphp: Slot [24] registered
Feb 13 16:19:54.940724 kernel: acpiphp: Slot [25] registered
Feb 13 16:19:54.940733 kernel: acpiphp: Slot [26] registered
Feb 13 16:19:54.940742 kernel: acpiphp: Slot [27] registered
Feb 13 16:19:54.940751 kernel: acpiphp: Slot [28] registered
Feb 13 16:19:54.940760 kernel: acpiphp: Slot [29] registered
Feb 13 16:19:54.940769 kernel: acpiphp: Slot [30] registered
Feb 13 16:19:54.940780 kernel: acpiphp: Slot [31] registered
Feb 13 16:19:54.940789 kernel: PCI host bridge to bus 0000:00
Feb 13 16:19:54.940961 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Feb 13 16:19:54.941100 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Feb 13 16:19:54.941227 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Feb 13 16:19:54.942404 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window]
Feb 13 16:19:54.942510 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x17fffffff window]
Feb 13 16:19:54.942598 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Feb 13 16:19:54.942785 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Feb 13 16:19:54.942909 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100
Feb 13 16:19:54.943056 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180
Feb 13 16:19:54.943186 kernel: pci 0000:00:01.1: reg 0x20: [io 0xc1e0-0xc1ef]
Feb 13 16:19:54.944371 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7]
Feb 13 16:19:54.944498 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io 0x03f6]
Feb 13 16:19:54.944610 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177]
Feb 13 16:19:54.944729 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io 0x0376]
Feb 13 16:19:54.944852 kernel: pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300
Feb 13 16:19:54.944991 kernel: pci 0000:00:01.2: reg 0x20: [io 0xc180-0xc19f]
Feb 13 16:19:54.945107 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000
Feb 13 16:19:54.945234 kernel: pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI
Feb 13 16:19:54.946527 kernel: pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB
Feb 13 16:19:54.946676 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000
Feb 13 16:19:54.946787 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfe000000-0xfe7fffff pref]
Feb 13 16:19:54.946887 kernel: pci 0000:00:02.0: reg 0x18: [mem 0xfe800000-0xfe803fff 64bit pref]
Feb 13 16:19:54.946997 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfebf0000-0xfebf0fff]
Feb 13 16:19:54.947153 kernel: pci 0000:00:02.0: reg 0x30: [mem 0xfebe0000-0xfebeffff pref]
Feb 13 16:19:54.947314 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Feb 13 16:19:54.947494 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000
Feb 13 16:19:54.947642 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc1a0-0xc1bf]
Feb 13 16:19:54.947747 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebf1000-0xfebf1fff]
Feb 13 16:19:54.947882 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe804000-0xfe807fff 64bit pref]
Feb 13 16:19:54.948057 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Feb 13 16:19:54.948204 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc1c0-0xc1df]
Feb 13 16:19:54.950459 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebf2000-0xfebf2fff]
Feb 13 16:19:54.950648 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe808000-0xfe80bfff 64bit pref]
Feb 13 16:19:54.950943 kernel: pci 0000:00:05.0: [1af4:1004] type 00 class 0x010000
Feb 13 16:19:54.951138 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc100-0xc13f]
Feb 13 16:19:54.951444 kernel: pci 0000:00:05.0: reg 0x14: [mem 0xfebf3000-0xfebf3fff]
Feb 13 16:19:54.951583 kernel: pci 0000:00:05.0: reg 0x20: [mem 0xfe80c000-0xfe80ffff 64bit pref]
Feb 13 16:19:54.951699 kernel: pci 0000:00:06.0: [1af4:1001] type 00 class 0x010000
Feb 13 16:19:54.951794 kernel: pci 0000:00:06.0: reg 0x10: [io 0xc000-0xc07f]
Feb 13 16:19:54.951896 kernel: pci 0000:00:06.0: reg 0x14: [mem 0xfebf4000-0xfebf4fff]
Feb 13 16:19:54.952003 kernel: pci 0000:00:06.0: reg 0x20: [mem 0xfe810000-0xfe813fff 64bit pref]
Feb 13 16:19:54.952107 kernel: pci 0000:00:07.0: [1af4:1001] type 00 class 0x010000
Feb 13 16:19:54.952206 kernel: pci 0000:00:07.0: reg 0x10: [io 0xc080-0xc0ff]
Feb 13 16:19:54.953876 kernel: pci 0000:00:07.0: reg 0x14: [mem 0xfebf5000-0xfebf5fff]
Feb 13 16:19:54.953994 kernel: pci 0000:00:07.0: reg 0x20: [mem 0xfe814000-0xfe817fff 64bit pref]
Feb 13 16:19:54.954115 kernel: pci 0000:00:08.0: [1af4:1002] type 00 class 0x00ff00
Feb 13 16:19:54.954219 kernel: pci 0000:00:08.0: reg 0x10: [io 0xc140-0xc17f]
Feb 13 16:19:54.954365 kernel: pci 0000:00:08.0: reg 0x20: [mem 0xfe818000-0xfe81bfff 64bit pref]
Feb 13 16:19:54.954382 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Feb 13 16:19:54.954392 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Feb 13 16:19:54.954401 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Feb 13 16:19:54.954410 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Feb 13 16:19:54.954419 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Feb 13 16:19:54.954434 kernel: iommu: Default domain type: Translated
Feb 13 16:19:54.954442 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Feb 13 16:19:54.954451 kernel: PCI: Using ACPI for IRQ routing
Feb 13 16:19:54.954461 kernel: PCI: pci_cache_line_size set to 64 bytes
Feb 13 16:19:54.954470 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Feb 13 16:19:54.954479 kernel: e820: reserve RAM buffer [mem 0x7ffdb000-0x7fffffff]
Feb 13 16:19:54.954580 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Feb 13 16:19:54.954672 kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Feb 13 16:19:54.954774 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Feb 13 16:19:54.954786 kernel: vgaarb: loaded
Feb 13 16:19:54.954795 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Feb 13 16:19:54.954804 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Feb 13 16:19:54.954814 kernel: clocksource: Switched to clocksource kvm-clock
Feb 13 16:19:54.954823 kernel: VFS: Disk quotas dquot_6.6.0
Feb 13 16:19:54.954832 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Feb 13 16:19:54.954841 kernel: pnp: PnP ACPI init
Feb 13 16:19:54.954850 kernel: pnp: PnP ACPI: found 4 devices
Feb 13 16:19:54.954862 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Feb 13 16:19:54.954871 kernel: NET: Registered PF_INET protocol family
Feb 13 16:19:54.954880 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Feb 13 16:19:54.954889 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Feb 13 16:19:54.954898 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Feb 13 16:19:54.954907 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Feb 13 16:19:54.954916 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
Feb 13 16:19:54.954924 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Feb 13 16:19:54.954933 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Feb 13 16:19:54.954945 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Feb 13 16:19:54.954953 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Feb 13 16:19:54.954962 kernel: NET: Registered PF_XDP protocol family
Feb 13 16:19:54.955087 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Feb 13 16:19:54.955175 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Feb 13 16:19:54.955259 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Feb 13 16:19:54.955407 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window]
Feb 13 16:19:54.955490 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x17fffffff window]
Feb 13 16:19:54.955603 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Feb 13 16:19:54.955704 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Feb 13 16:19:54.955722 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Feb 13 16:19:54.955820 kernel: pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x7a0 took 30756 usecs
Feb 13 16:19:54.955832 kernel: PCI: CLS 0 bytes, default 64
Feb 13 16:19:54.955841 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Feb 13 16:19:54.955850 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x23f39838d43, max_idle_ns: 440795267131 ns
Feb 13 16:19:54.955859 kernel: Initialise system trusted keyrings
Feb 13 16:19:54.955872 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
Feb 13 16:19:54.955881 kernel: Key type asymmetric registered
Feb 13 16:19:54.955889 kernel: Asymmetric key parser 'x509' registered
Feb 13 16:19:54.955898 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Feb 13 16:19:54.955907 kernel: io scheduler mq-deadline registered
Feb 13 16:19:54.955916 kernel: io scheduler kyber registered
Feb 13 16:19:54.955925 kernel: io scheduler bfq registered
Feb 13 16:19:54.955934 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Feb 13 16:19:54.955943 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10
Feb 13 16:19:54.955952 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Feb 13 16:19:54.955964 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Feb 13 16:19:54.955972 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Feb 13 16:19:54.955982 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Feb 13 16:19:54.955991 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Feb 13 16:19:54.955999 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Feb 13 16:19:54.956008 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Feb 13 16:19:54.956017 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Feb 13 16:19:54.956142 kernel: rtc_cmos 00:03: RTC can wake from S4
Feb 13 16:19:54.956249 kernel: rtc_cmos 00:03: registered as rtc0
Feb 13 16:19:54.956359 kernel: rtc_cmos 00:03: setting system clock to 2025-02-13T16:19:54 UTC (1739463594)
Feb 13 16:19:54.956445 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram
Feb 13 16:19:54.956457 kernel: intel_pstate: CPU model not supported
Feb 13 16:19:54.956475 kernel: NET: Registered PF_INET6 protocol family
Feb 13 16:19:54.956483 kernel: Segment Routing with IPv6
Feb 13 16:19:54.956492 kernel: In-situ OAM (IOAM) with IPv6
Feb 13 16:19:54.956501 kernel: NET: Registered PF_PACKET protocol family
Feb 13 16:19:54.956514 kernel: Key type dns_resolver registered
Feb 13 16:19:54.956523 kernel: IPI shorthand broadcast: enabled
Feb 13 16:19:54.956532 kernel: sched_clock: Marking stable (947004984, 90090402)->(1057498810, -20403424)
Feb 13 16:19:54.956545 kernel: registered taskstats version 1
Feb 13 16:19:54.956554 kernel: Loading compiled-in X.509 certificates
Feb 13 16:19:54.956563 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.71-flatcar: 9ec780e1db69d46be90bbba73ae62b0106e27ae0'
Feb 13 16:19:54.956572 kernel: Key type .fscrypt registered
Feb 13 16:19:54.956580 kernel: Key type fscrypt-provisioning registered
Feb 13 16:19:54.956589 kernel: ima: No TPM chip found, activating TPM-bypass!
Feb 13 16:19:54.956601 kernel: ima: Allocated hash algorithm: sha1
Feb 13 16:19:54.956610 kernel: ima: No architecture policies found
Feb 13 16:19:54.956619 kernel: clk: Disabling unused clocks
Feb 13 16:19:54.956627 kernel: Freeing unused kernel image (initmem) memory: 42976K
Feb 13 16:19:54.956636 kernel: Write protecting the kernel read-only data: 36864k
Feb 13 16:19:54.956663 kernel: Freeing unused kernel image (rodata/data gap) memory: 1840K
Feb 13 16:19:54.956676 kernel: Run /init as init process
Feb 13 16:19:54.956685 kernel: with arguments:
Feb 13 16:19:54.956707 kernel: /init
Feb 13 16:19:54.956721 kernel: with environment:
Feb 13 16:19:54.956730 kernel: HOME=/
Feb 13 16:19:54.956741 kernel: TERM=linux
Feb 13 16:19:54.956757 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Feb 13 16:19:54.956774 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Feb 13 16:19:54.956791 systemd[1]: Detected virtualization kvm.
Feb 13 16:19:54.956805 systemd[1]: Detected architecture x86-64.
Feb 13 16:19:54.956818 systemd[1]: Running in initrd.
Feb 13 16:19:54.956836 systemd[1]: No hostname configured, using default hostname.
Feb 13 16:19:54.956850 systemd[1]: Hostname set to .
Feb 13 16:19:54.956864 systemd[1]: Initializing machine ID from VM UUID.
Feb 13 16:19:54.956877 systemd[1]: Queued start job for default target initrd.target.
Feb 13 16:19:54.956891 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Feb 13 16:19:54.956906 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Feb 13 16:19:54.956921 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Feb 13 16:19:54.956936 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Feb 13 16:19:54.956955 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Feb 13 16:19:54.956969 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Feb 13 16:19:54.956986 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Feb 13 16:19:54.957001 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Feb 13 16:19:54.957017 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Feb 13 16:19:54.957034 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Feb 13 16:19:54.957047 systemd[1]: Reached target paths.target - Path Units.
Feb 13 16:19:54.957057 systemd[1]: Reached target slices.target - Slice Units.
Feb 13 16:19:54.957067 systemd[1]: Reached target swap.target - Swaps.
Feb 13 16:19:54.957079 systemd[1]: Reached target timers.target - Timer Units.
Feb 13 16:19:54.957089 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Feb 13 16:19:54.957099 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Feb 13 16:19:54.957112 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Feb 13 16:19:54.957122 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Feb 13 16:19:54.957132 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Feb 13 16:19:54.957142 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Feb 13 16:19:54.957152 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Feb 13 16:19:54.957161 systemd[1]: Reached target sockets.target - Socket Units.
Feb 13 16:19:54.957171 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Feb 13 16:19:54.957181 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Feb 13 16:19:54.957193 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Feb 13 16:19:54.957203 systemd[1]: Starting systemd-fsck-usr.service...
Feb 13 16:19:54.957213 systemd[1]: Starting systemd-journald.service - Journal Service...
Feb 13 16:19:54.957223 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Feb 13 16:19:54.957232 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 16:19:54.957338 systemd-journald[184]: Collecting audit messages is disabled.
Feb 13 16:19:54.957367 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Feb 13 16:19:54.957377 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Feb 13 16:19:54.957387 systemd[1]: Finished systemd-fsck-usr.service.
Feb 13 16:19:54.957397 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Feb 13 16:19:54.957411 systemd-journald[184]: Journal started
Feb 13 16:19:54.957432 systemd-journald[184]: Runtime Journal (/run/log/journal/85a3a80e9a8347f99402c3c17765fd12) is 4.9M, max 39.3M, 34.4M free.
Feb 13 16:19:54.962103 systemd-modules-load[185]: Inserted module 'overlay'
Feb 13 16:19:54.992306 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Feb 13 16:19:54.992374 systemd[1]: Started systemd-journald.service - Journal Service.
Feb 13 16:19:54.993347 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 16:19:55.000299 kernel: Bridge firewalling registered
Feb 13 16:19:55.000621 systemd-modules-load[185]: Inserted module 'br_netfilter'
Feb 13 16:19:55.002782 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 16:19:55.006145 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Feb 13 16:19:55.006886 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Feb 13 16:19:55.011238 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Feb 13 16:19:55.018500 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Feb 13 16:19:55.024584 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Feb 13 16:19:55.032975 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Feb 13 16:19:55.044548 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 16:19:55.050630 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Feb 13 16:19:55.052078 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Feb 13 16:19:55.053378 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Feb 13 16:19:55.062482 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Feb 13 16:19:55.079641 dracut-cmdline[213]: dracut-dracut-053
Feb 13 16:19:55.087290 dracut-cmdline[213]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=cd73eba291b8356dfc2c39f651cabef9206685f772c8949188fd366788d672c2
Feb 13 16:19:55.098320 systemd-resolved[218]: Positive Trust Anchors:
Feb 13 16:19:55.098336 systemd-resolved[218]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Feb 13 16:19:55.098372 systemd-resolved[218]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Feb 13 16:19:55.101204 systemd-resolved[218]: Defaulting to hostname 'linux'.
Feb 13 16:19:55.102507 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Feb 13 16:19:55.103015 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Feb 13 16:19:55.187320 kernel: SCSI subsystem initialized
Feb 13 16:19:55.201312 kernel: Loading iSCSI transport class v2.0-870.
Feb 13 16:19:55.213314 kernel: iscsi: registered transport (tcp)
Feb 13 16:19:55.235333 kernel: iscsi: registered transport (qla4xxx)
Feb 13 16:19:55.235434 kernel: QLogic iSCSI HBA Driver
Feb 13 16:19:55.287533 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Feb 13 16:19:55.293496 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Feb 13 16:19:55.323642 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Feb 13 16:19:55.323730 kernel: device-mapper: uevent: version 1.0.3
Feb 13 16:19:55.325279 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Feb 13 16:19:55.368361 kernel: raid6: avx2x4 gen() 17011 MB/s
Feb 13 16:19:55.385324 kernel: raid6: avx2x2 gen() 17276 MB/s
Feb 13 16:19:55.402439 kernel: raid6: avx2x1 gen() 13278 MB/s
Feb 13 16:19:55.402522 kernel: raid6: using algorithm avx2x2 gen() 17276 MB/s
Feb 13 16:19:55.420505 kernel: raid6: .... xor() 18854 MB/s, rmw enabled
Feb 13 16:19:55.420596 kernel: raid6: using avx2x2 recovery algorithm
Feb 13 16:19:55.442326 kernel: xor: automatically using best checksumming function avx
Feb 13 16:19:55.616333 kernel: Btrfs loaded, zoned=no, fsverity=no
Feb 13 16:19:55.629810 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Feb 13 16:19:55.636524 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Feb 13 16:19:55.663146 systemd-udevd[401]: Using default interface naming scheme 'v255'.
Feb 13 16:19:55.669170 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Feb 13 16:19:55.680865 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Feb 13 16:19:55.695896 dracut-pre-trigger[407]: rd.md=0: removing MD RAID activation
Feb 13 16:19:55.733936 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Feb 13 16:19:55.745659 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Feb 13 16:19:55.828331 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Feb 13 16:19:55.835926 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Feb 13 16:19:55.866205 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Feb 13 16:19:55.868799 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Feb 13 16:19:55.870195 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Feb 13 16:19:55.871674 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Feb 13 16:19:55.877442 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Feb 13 16:19:55.909343 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Feb 13 16:19:55.931327 kernel: scsi host0: Virtio SCSI HBA
Feb 13 16:19:55.936349 kernel: virtio_blk virtio4: 1/0/0 default/read/poll queues
Feb 13 16:19:56.042254 kernel: cryptd: max_cpu_qlen set to 1000
Feb 13 16:19:56.043698 kernel: ACPI: bus type USB registered
Feb 13 16:19:56.043717 kernel: usbcore: registered new interface driver usbfs
Feb 13 16:19:56.043734 kernel: usbcore: registered new interface driver hub
Feb 13 16:19:56.043752 kernel: usbcore: registered new device driver usb
Feb 13 16:19:56.043769 kernel: virtio_blk virtio4: [vda] 125829120 512-byte logical blocks (64.4 GB/60.0 GiB)
Feb 13 16:19:56.043926 kernel: libata version 3.00 loaded.
Feb 13 16:19:56.043948 kernel: ata_piix 0000:00:01.1: version 2.13
Feb 13 16:19:56.044112 kernel: scsi host1: ata_piix
Feb 13 16:19:56.044414 kernel: scsi host2: ata_piix
Feb 13 16:19:56.044547 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc1e0 irq 14
Feb 13 16:19:56.044561 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc1e8 irq 15
Feb 13 16:19:56.044573 kernel: AVX2 version of gcm_enc/dec engaged.
Feb 13 16:19:56.044585 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Feb 13 16:19:56.044604 kernel: GPT:9289727 != 125829119
Feb 13 16:19:56.044615 kernel: GPT:Alternate GPT header not at the end of the disk.
Feb 13 16:19:56.044627 kernel: GPT:9289727 != 125829119
Feb 13 16:19:56.044639 kernel: GPT: Use GNU Parted to correct GPT errors.
Feb 13 16:19:56.044650 kernel: uhci_hcd 0000:00:01.2: UHCI Host Controller
Feb 13 16:19:56.044802 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Feb 13 16:19:56.044816 kernel: uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1
Feb 13 16:19:56.044935 kernel: AES CTR mode by8 optimization enabled
Feb 13 16:19:56.044948 kernel: uhci_hcd 0000:00:01.2: detected 2 ports
Feb 13 16:19:56.045082 kernel: uhci_hcd 0000:00:01.2: irq 11, io port 0x0000c180
Feb 13 16:19:56.045196 kernel: hub 1-0:1.0: USB hub found
Feb 13 16:19:56.045671 kernel: hub 1-0:1.0: 2 ports detected
Feb 13 16:19:56.006068 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Feb 13 16:19:56.006186 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 16:19:56.051575 kernel: virtio_blk virtio5: 1/0/0 default/read/poll queues
Feb 13 16:19:56.054793 kernel: virtio_blk virtio5: [vdb] 932 512-byte logical blocks (477 kB/466 KiB)
Feb 13 16:19:56.008361 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 16:19:56.008930 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Feb 13 16:19:56.009123 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 16:19:56.011433 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 16:19:56.019343 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 16:19:56.097531 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 16:19:56.106512 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 16:19:56.128943 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 16:19:56.201442 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by (udev-worker) (451)
Feb 13 16:19:56.213298 kernel: BTRFS: device fsid 966d6124-9067-4089-b000-5e99065fe7e2 devid 1 transid 37 /dev/vda3 scanned by (udev-worker) (445)
Feb 13 16:19:56.224932 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Feb 13 16:19:56.232032 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Feb 13 16:19:56.238520 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Feb 13 16:19:56.244501 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Feb 13 16:19:56.245133 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Feb 13 16:19:56.256756 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Feb 13 16:19:56.264333 disk-uuid[549]: Primary Header is updated.
Feb 13 16:19:56.264333 disk-uuid[549]: Secondary Entries is updated.
Feb 13 16:19:56.264333 disk-uuid[549]: Secondary Header is updated.
Feb 13 16:19:56.276323 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Feb 13 16:19:57.290305 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Feb 13 16:19:57.291157 disk-uuid[550]: The operation has completed successfully.
Feb 13 16:19:57.343937 systemd[1]: disk-uuid.service: Deactivated successfully.
Feb 13 16:19:57.344073 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Feb 13 16:19:57.352495 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Feb 13 16:19:57.357396 sh[561]: Success
Feb 13 16:19:57.374362 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
Feb 13 16:19:57.445052 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Feb 13 16:19:57.454460 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Feb 13 16:19:57.458998 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Feb 13 16:19:57.488307 kernel: BTRFS info (device dm-0): first mount of filesystem 966d6124-9067-4089-b000-5e99065fe7e2
Feb 13 16:19:57.488386 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Feb 13 16:19:57.491118 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Feb 13 16:19:57.491201 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Feb 13 16:19:57.492865 kernel: BTRFS info (device dm-0): using free space tree
Feb 13 16:19:57.502241 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Feb 13 16:19:57.503734 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Feb 13 16:19:57.512568 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Feb 13 16:19:57.515978 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Feb 13 16:19:57.527663 kernel: BTRFS info (device vda6): first mount of filesystem 83f602a1-06be-4b8b-b461-5e4f70db8da1
Feb 13 16:19:57.527722 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Feb 13 16:19:57.528847 kernel: BTRFS info (device vda6): using free space tree
Feb 13 16:19:57.533288 kernel: BTRFS info (device vda6): auto enabling async discard
Feb 13 16:19:57.545246 systemd[1]: mnt-oem.mount: Deactivated successfully.
Feb 13 16:19:57.547328 kernel: BTRFS info (device vda6): last unmount of filesystem 83f602a1-06be-4b8b-b461-5e4f70db8da1
Feb 13 16:19:57.553237 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Feb 13 16:19:57.560558 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Feb 13 16:19:57.664107 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Feb 13 16:19:57.671757 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Feb 13 16:19:57.713570 ignition[649]: Ignition 2.20.0
Feb 13 16:19:57.713582 ignition[649]: Stage: fetch-offline
Feb 13 16:19:57.713635 ignition[649]: no configs at "/usr/lib/ignition/base.d"
Feb 13 16:19:57.713646 ignition[649]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Feb 13 16:19:57.713753 ignition[649]: parsed url from cmdline: ""
Feb 13 16:19:57.713757 ignition[649]: no config URL provided
Feb 13 16:19:57.713762 ignition[649]: reading system config file "/usr/lib/ignition/user.ign"
Feb 13 16:19:57.713771 ignition[649]: no config at "/usr/lib/ignition/user.ign"
Feb 13 16:19:57.713777 ignition[649]: failed to fetch config: resource requires networking
Feb 13 16:19:57.718378 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Feb 13 16:19:57.714008 ignition[649]: Ignition finished successfully
Feb 13 16:19:57.718991 systemd-networkd[747]: lo: Link UP
Feb 13 16:19:57.718998 systemd-networkd[747]: lo: Gained carrier
Feb 13 16:19:57.722082 systemd-networkd[747]: Enumeration completed
Feb 13 16:19:57.722502 systemd-networkd[747]: eth0: found matching network '/usr/lib/systemd/network/yy-digitalocean.network', based on potentially unpredictable interface name.
Feb 13 16:19:57.722506 systemd-networkd[747]: eth0: Configuring with /usr/lib/systemd/network/yy-digitalocean.network.
Feb 13 16:19:57.723511 systemd-networkd[747]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 16:19:57.723516 systemd-networkd[747]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network.
Feb 13 16:19:57.723735 systemd[1]: Started systemd-networkd.service - Network Configuration.
Feb 13 16:19:57.724499 systemd-networkd[747]: eth0: Link UP
Feb 13 16:19:57.724505 systemd-networkd[747]: eth0: Gained carrier
Feb 13 16:19:57.724536 systemd-networkd[747]: eth0: found matching network '/usr/lib/systemd/network/yy-digitalocean.network', based on potentially unpredictable interface name.
Feb 13 16:19:57.724663 systemd[1]: Reached target network.target - Network.
Feb 13 16:19:57.732892 systemd-networkd[747]: eth1: Link UP
Feb 13 16:19:57.732905 systemd-networkd[747]: eth1: Gained carrier
Feb 13 16:19:57.732923 systemd-networkd[747]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 16:19:57.733830 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Feb 13 16:19:57.747377 systemd-networkd[747]: eth0: DHCPv4 address 146.190.141.99/20, gateway 146.190.128.1 acquired from 169.254.169.253
Feb 13 16:19:57.752405 systemd-networkd[747]: eth1: DHCPv4 address 10.124.0.20/20 acquired from 169.254.169.253
Feb 13 16:19:57.754390 ignition[755]: Ignition 2.20.0
Feb 13 16:19:57.754403 ignition[755]: Stage: fetch
Feb 13 16:19:57.754620 ignition[755]: no configs at "/usr/lib/ignition/base.d"
Feb 13 16:19:57.754633 ignition[755]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Feb 13 16:19:57.754739 ignition[755]: parsed url from cmdline: ""
Feb 13 16:19:57.754743 ignition[755]: no config URL provided
Feb 13 16:19:57.754748 ignition[755]: reading system config file "/usr/lib/ignition/user.ign"
Feb 13 16:19:57.754758 ignition[755]: no config at "/usr/lib/ignition/user.ign"
Feb 13 16:19:57.754782 ignition[755]: GET http://169.254.169.254/metadata/v1/user-data: attempt #1
Feb 13 16:19:57.772053 ignition[755]: GET result: OK
Feb 13 16:19:57.772298 ignition[755]: parsing config with SHA512: 50d1ecd57a68f0717cd82ae5f559e3bfdb28cd1929bd2c840b88c4d0768018bdea574df68fd08c03d50bd05af7c86b654b3d7d355a78143bdf677f4bd5dce514
Feb 13 16:19:57.776974 unknown[755]: fetched base config from "system"
Feb 13 16:19:57.776988 unknown[755]: fetched base config from "system"
Feb 13 16:19:57.777000 unknown[755]: fetched user config from "digitalocean"
Feb 13 16:19:57.778826 ignition[755]: fetch: fetch complete
Feb 13 16:19:57.778838 ignition[755]: fetch: fetch passed
Feb 13 16:19:57.778908 ignition[755]: Ignition finished successfully
Feb 13 16:19:57.781650 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Feb 13 16:19:57.785568 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Feb 13 16:19:57.809195 ignition[762]: Ignition 2.20.0
Feb 13 16:19:57.809898 ignition[762]: Stage: kargs
Feb 13 16:19:57.810112 ignition[762]: no configs at "/usr/lib/ignition/base.d"
Feb 13 16:19:57.810124 ignition[762]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Feb 13 16:19:57.810867 ignition[762]: kargs: kargs passed
Feb 13 16:19:57.810927 ignition[762]: Ignition finished successfully
Feb 13 16:19:57.812880 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Feb 13 16:19:57.820589 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Feb 13 16:19:57.843281 ignition[768]: Ignition 2.20.0
Feb 13 16:19:57.843293 ignition[768]: Stage: disks
Feb 13 16:19:57.843509 ignition[768]: no configs at "/usr/lib/ignition/base.d"
Feb 13 16:19:57.843520 ignition[768]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Feb 13 16:19:57.844349 ignition[768]: disks: disks passed
Feb 13 16:19:57.845623 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Feb 13 16:19:57.844410 ignition[768]: Ignition finished successfully
Feb 13 16:19:57.851129 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Feb 13 16:19:57.852384 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Feb 13 16:19:57.853175 systemd[1]: Reached target local-fs.target - Local File Systems.
Feb 13 16:19:57.854256 systemd[1]: Reached target sysinit.target - System Initialization.
Feb 13 16:19:57.855108 systemd[1]: Reached target basic.target - Basic System.
Feb 13 16:19:57.865510 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Feb 13 16:19:57.884204 systemd-fsck[777]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Feb 13 16:19:57.887233 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Feb 13 16:19:57.893462 systemd[1]: Mounting sysroot.mount - /sysroot...
Feb 13 16:19:57.998292 kernel: EXT4-fs (vda9): mounted filesystem 85ed0b0d-7f0f-4eeb-80d8-6213e9fcc55d r/w with ordered data mode. Quota mode: none.
Feb 13 16:19:57.998877 systemd[1]: Mounted sysroot.mount - /sysroot.
Feb 13 16:19:57.999853 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Feb 13 16:19:58.012530 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Feb 13 16:19:58.016424 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Feb 13 16:19:58.017874 systemd[1]: Starting flatcar-afterburn-network.service - Flatcar Afterburn network service...
Feb 13 16:19:58.029328 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 scanned by mount (785)
Feb 13 16:19:58.030623 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Feb 13 16:19:58.031483 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Feb 13 16:19:58.031522 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Feb 13 16:19:58.041307 kernel: BTRFS info (device vda6): first mount of filesystem 83f602a1-06be-4b8b-b461-5e4f70db8da1
Feb 13 16:19:58.043955 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Feb 13 16:19:58.044058 kernel: BTRFS info (device vda6): using free space tree
Feb 13 16:19:58.053330 kernel: BTRFS info (device vda6): auto enabling async discard
Feb 13 16:19:58.055156 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Feb 13 16:19:58.066848 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Feb 13 16:19:58.071398 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Feb 13 16:19:58.128556 coreos-metadata[787]: Feb 13 16:19:58.127 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1
Feb 13 16:19:58.139748 coreos-metadata[788]: Feb 13 16:19:58.139 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1
Feb 13 16:19:58.142458 coreos-metadata[787]: Feb 13 16:19:58.142 INFO Fetch successful
Feb 13 16:19:58.149220 systemd[1]: flatcar-afterburn-network.service: Deactivated successfully.
Feb 13 16:19:58.150549 initrd-setup-root[815]: cut: /sysroot/etc/passwd: No such file or directory
Feb 13 16:19:58.149359 systemd[1]: Finished flatcar-afterburn-network.service - Flatcar Afterburn network service.
Feb 13 16:19:58.152945 coreos-metadata[788]: Feb 13 16:19:58.152 INFO Fetch successful
Feb 13 16:19:58.157662 coreos-metadata[788]: Feb 13 16:19:58.157 INFO wrote hostname ci-4152.2.1-4-8892aa3964 to /sysroot/etc/hostname
Feb 13 16:19:58.159330 initrd-setup-root[823]: cut: /sysroot/etc/group: No such file or directory
Feb 13 16:19:58.159965 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Feb 13 16:19:58.167917 initrd-setup-root[831]: cut: /sysroot/etc/shadow: No such file or directory
Feb 13 16:19:58.175707 initrd-setup-root[838]: cut: /sysroot/etc/gshadow: No such file or directory
Feb 13 16:19:58.286717 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Feb 13 16:19:58.292467 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Feb 13 16:19:58.306589 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Feb 13 16:19:58.321319 kernel: BTRFS info (device vda6): last unmount of filesystem 83f602a1-06be-4b8b-b461-5e4f70db8da1
Feb 13 16:19:58.335546 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Feb 13 16:19:58.353024 ignition[907]: INFO : Ignition 2.20.0
Feb 13 16:19:58.355300 ignition[907]: INFO : Stage: mount
Feb 13 16:19:58.355300 ignition[907]: INFO : no configs at "/usr/lib/ignition/base.d"
Feb 13 16:19:58.355300 ignition[907]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Feb 13 16:19:58.357734 ignition[907]: INFO : mount: mount passed
Feb 13 16:19:58.357734 ignition[907]: INFO : Ignition finished successfully
Feb 13 16:19:58.359246 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Feb 13 16:19:58.365546 systemd[1]: Starting ignition-files.service - Ignition (files)...
Feb 13 16:19:58.488751 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Feb 13 16:19:58.504679 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Feb 13 16:19:58.515305 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/vda6 scanned by mount (917)
Feb 13 16:19:58.519278 kernel: BTRFS info (device vda6): first mount of filesystem 83f602a1-06be-4b8b-b461-5e4f70db8da1
Feb 13 16:19:58.519385 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Feb 13 16:19:58.519400 kernel: BTRFS info (device vda6): using free space tree
Feb 13 16:19:58.524594 kernel: BTRFS info (device vda6): auto enabling async discard
Feb 13 16:19:58.526419 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Feb 13 16:19:58.565305 ignition[934]: INFO : Ignition 2.20.0
Feb 13 16:19:58.565305 ignition[934]: INFO : Stage: files
Feb 13 16:19:58.566649 ignition[934]: INFO : no configs at "/usr/lib/ignition/base.d"
Feb 13 16:19:58.566649 ignition[934]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Feb 13 16:19:58.568075 ignition[934]: DEBUG : files: compiled without relabeling support, skipping
Feb 13 16:19:58.568750 ignition[934]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Feb 13 16:19:58.568750 ignition[934]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Feb 13 16:19:58.573128 ignition[934]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Feb 13 16:19:58.573991 ignition[934]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Feb 13 16:19:58.574699 ignition[934]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Feb 13 16:19:58.574115 unknown[934]: wrote ssh authorized keys file for user: core
Feb 13 16:19:58.576796 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/home/core/install.sh"
Feb 13 16:19:58.577539 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/home/core/install.sh"
Feb 13 16:19:58.577539 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/etc/flatcar/update.conf"
Feb 13 16:19:58.579358 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Feb 13 16:19:58.579358 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
Feb 13 16:19:58.579358 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
Feb 13 16:19:58.579358 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
Feb 13 16:19:58.579358 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-x86-64.raw: attempt #1
Feb 13 16:19:59.056569 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET result: OK
Feb 13 16:19:59.081696 systemd-networkd[747]: eth1: Gained IPv6LL
Feb 13 16:19:59.335094 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"
Feb 13 16:19:59.336573 ignition[934]: INFO : files: createResultFile: createFiles: op(7): [started] writing file "/sysroot/etc/.ignition-result.json"
Feb 13 16:19:59.336573 ignition[934]: INFO : files: createResultFile: createFiles: op(7): [finished] writing file "/sysroot/etc/.ignition-result.json"
Feb 13 16:19:59.336573 ignition[934]: INFO : files: files passed
Feb 13 16:19:59.336573 ignition[934]: INFO : Ignition finished successfully
Feb 13 16:19:59.336929 systemd[1]: Finished ignition-files.service - Ignition (files).
Feb 13 16:19:59.343512 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Feb 13 16:19:59.347777 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Feb 13 16:19:59.358903 systemd[1]: ignition-quench.service: Deactivated successfully.
Feb 13 16:19:59.359105 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Feb 13 16:19:59.371368 initrd-setup-root-after-ignition[962]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Feb 13 16:19:59.371368 initrd-setup-root-after-ignition[962]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Feb 13 16:19:59.372718 initrd-setup-root-after-ignition[966]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Feb 13 16:19:59.372917 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Feb 13 16:19:59.374854 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Feb 13 16:19:59.380548 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Feb 13 16:19:59.425571 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Feb 13 16:19:59.425714 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Feb 13 16:19:59.426650 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Feb 13 16:19:59.427216 systemd[1]: Reached target initrd.target - Initrd Default Target.
Feb 13 16:19:59.428147 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Feb 13 16:19:59.429339 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Feb 13 16:19:59.450260 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Feb 13 16:19:59.458544 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Feb 13 16:19:59.469290 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Feb 13 16:19:59.469863 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Feb 13 16:19:59.470522 systemd[1]: Stopped target timers.target - Timer Units.
Feb 13 16:19:59.471347 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Feb 13 16:19:59.471612 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Feb 13 16:19:59.472438 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Feb 13 16:19:59.472915 systemd[1]: Stopped target basic.target - Basic System.
Feb 13 16:19:59.473615 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Feb 13 16:19:59.474391 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Feb 13 16:19:59.475127 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Feb 13 16:19:59.475878 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Feb 13 16:19:59.476684 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Feb 13 16:19:59.477417 systemd[1]: Stopped target sysinit.target - System Initialization.
Feb 13 16:19:59.478158 systemd[1]: Stopped target local-fs.target - Local File Systems.
Feb 13 16:19:59.478857 systemd[1]: Stopped target swap.target - Swaps.
Feb 13 16:19:59.479545 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Feb 13 16:19:59.479675 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Feb 13 16:19:59.480791 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Feb 13 16:19:59.481529 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Feb 13 16:19:59.482204 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Feb 13 16:19:59.482339 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Feb 13 16:19:59.482790 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Feb 13 16:19:59.482909 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Feb 13 16:19:59.483898 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Feb 13 16:19:59.484056 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Feb 13 16:19:59.484799 systemd[1]: ignition-files.service: Deactivated successfully.
Feb 13 16:19:59.484897 systemd[1]: Stopped ignition-files.service - Ignition (files).
Feb 13 16:19:59.485689 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Feb 13 16:19:59.485920 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Feb 13 16:19:59.495534 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Feb 13 16:19:59.499529 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Feb 13 16:19:59.500386 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Feb 13 16:19:59.500577 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Feb 13 16:19:59.501677 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Feb 13 16:19:59.501794 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Feb 13 16:19:59.509145 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Feb 13 16:19:59.510228 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Feb 13 16:19:59.519373 ignition[986]: INFO : Ignition 2.20.0
Feb 13 16:19:59.519373 ignition[986]: INFO : Stage: umount
Feb 13 16:19:59.521410 ignition[986]: INFO : no configs at "/usr/lib/ignition/base.d"
Feb 13 16:19:59.521410 ignition[986]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Feb 13 16:19:59.521410 ignition[986]: INFO : umount: umount passed
Feb 13 16:19:59.521410 ignition[986]: INFO : Ignition finished successfully
Feb 13 16:19:59.522183 systemd[1]: ignition-mount.service: Deactivated successfully.
Feb 13 16:19:59.522307 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Feb 13 16:19:59.525042 systemd[1]: ignition-disks.service: Deactivated successfully.
Feb 13 16:19:59.525183 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Feb 13 16:19:59.526636 systemd[1]: ignition-kargs.service: Deactivated successfully.
Feb 13 16:19:59.526696 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Feb 13 16:19:59.527146 systemd[1]: ignition-fetch.service: Deactivated successfully.
Feb 13 16:19:59.527198 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Feb 13 16:19:59.527589 systemd[1]: Stopped target network.target - Network.
Feb 13 16:19:59.527859 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Feb 13 16:19:59.527904 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Feb 13 16:19:59.530374 systemd-networkd[747]: eth0: Gained IPv6LL
Feb 13 16:19:59.530782 systemd[1]: Stopped target paths.target - Path Units.
Feb 13 16:19:59.531550 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Feb 13 16:19:59.532009 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Feb 13 16:19:59.534345 systemd[1]: Stopped target slices.target - Slice Units.
Feb 13 16:19:59.534773 systemd[1]: Stopped target sockets.target - Socket Units.
Feb 13 16:19:59.535165 systemd[1]: iscsid.socket: Deactivated successfully.
Feb 13 16:19:59.535216 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Feb 13 16:19:59.535675 systemd[1]: iscsiuio.socket: Deactivated successfully.
Feb 13 16:19:59.535731 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Feb 13 16:19:59.536097 systemd[1]: ignition-setup.service: Deactivated successfully.
Feb 13 16:19:59.536148 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Feb 13 16:19:59.538939 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Feb 13 16:19:59.539037 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Feb 13 16:19:59.557458 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Feb 13 16:19:59.557862 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Feb 13 16:19:59.561509 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Feb 13 16:19:59.562412 systemd-networkd[747]: eth0: DHCPv6 lease lost
Feb 13 16:19:59.564136 systemd[1]: sysroot-boot.service: Deactivated successfully.
Feb 13 16:19:59.564239 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Feb 13 16:19:59.568139 systemd[1]: systemd-resolved.service: Deactivated successfully.
Feb 13 16:19:59.568255 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Feb 13 16:19:59.568366 systemd-networkd[747]: eth1: DHCPv6 lease lost
Feb 13 16:19:59.571019 systemd[1]: systemd-networkd.service: Deactivated successfully.
Feb 13 16:19:59.571186 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Feb 13 16:19:59.574069 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Feb 13 16:19:59.574130 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Feb 13 16:19:59.574898 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Feb 13 16:19:59.575020 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Feb 13 16:19:59.580428 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Feb 13 16:19:59.581535 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Feb 13 16:19:59.582072 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Feb 13 16:19:59.583326 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Feb 13 16:19:59.583383 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Feb 13 16:19:59.584326 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Feb 13 16:19:59.584375 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Feb 13 16:19:59.584988 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Feb 13 16:19:59.585026 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Feb 13 16:19:59.585856 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Feb 13 16:19:59.597884 systemd[1]: network-cleanup.service: Deactivated successfully.
Feb 13 16:19:59.598008 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Feb 13 16:19:59.599741 systemd[1]: systemd-udevd.service: Deactivated successfully.
Feb 13 16:19:59.599894 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Feb 13 16:19:59.601218 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Feb 13 16:19:59.601351 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Feb 13 16:19:59.602029 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Feb 13 16:19:59.602065 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Feb 13 16:19:59.602847 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Feb 13 16:19:59.602893 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Feb 13 16:19:59.603974 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Feb 13 16:19:59.604019 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Feb 13 16:19:59.604685 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Feb 13 16:19:59.604730 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 16:19:59.610575 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Feb 13 16:19:59.611656 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Feb 13 16:19:59.611743 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Feb 13 16:19:59.612243 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Feb 13 16:19:59.612303 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Feb 13 16:19:59.612733 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Feb 13 16:19:59.612778 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Feb 13 16:19:59.613153 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Feb 13 16:19:59.613196 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 16:19:59.618786 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Feb 13 16:19:59.618887 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Feb 13 16:19:59.619951 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Feb 13 16:19:59.622577 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Feb 13 16:19:59.641956 systemd[1]: Switching root.
Feb 13 16:19:59.671114 systemd-journald[184]: Journal stopped
Feb 13 16:20:01.397074 systemd-journald[184]: Received SIGTERM from PID 1 (systemd).
Feb 13 16:20:01.397210 kernel: SELinux: policy capability network_peer_controls=1
Feb 13 16:20:01.397238 kernel: SELinux: policy capability open_perms=1
Feb 13 16:20:01.397258 kernel: SELinux: policy capability extended_socket_class=1
Feb 13 16:20:01.397317 kernel: SELinux: policy capability always_check_network=0
Feb 13 16:20:01.398489 kernel: SELinux: policy capability cgroup_seclabel=1
Feb 13 16:20:01.398533 kernel: SELinux: policy capability nnp_nosuid_transition=1
Feb 13 16:20:01.398553 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Feb 13 16:20:01.398580 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Feb 13 16:20:01.398608 kernel: audit: type=1403 audit(1739463599.847:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Feb 13 16:20:01.398633 systemd[1]: Successfully loaded SELinux policy in 45.952ms.
Feb 13 16:20:01.398669 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 18.455ms.
Feb 13 16:20:01.398703 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Feb 13 16:20:01.398724 systemd[1]: Detected virtualization kvm.
Feb 13 16:20:01.398744 systemd[1]: Detected architecture x86-64.
Feb 13 16:20:01.398769 systemd[1]: Detected first boot.
Feb 13 16:20:01.398804 systemd[1]: Hostname set to .
Feb 13 16:20:01.398829 systemd[1]: Initializing machine ID from VM UUID.
Feb 13 16:20:01.398853 zram_generator::config[1028]: No configuration found.
Feb 13 16:20:01.398878 systemd[1]: Populated /etc with preset unit settings.
Feb 13 16:20:01.398900 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Feb 13 16:20:01.398920 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Feb 13 16:20:01.398944 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Feb 13 16:20:01.405674 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Feb 13 16:20:01.405747 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Feb 13 16:20:01.405785 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Feb 13 16:20:01.405810 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Feb 13 16:20:01.405832 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Feb 13 16:20:01.405854 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Feb 13 16:20:01.405879 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Feb 13 16:20:01.405908 systemd[1]: Created slice user.slice - User and Session Slice.
Feb 13 16:20:01.405930 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Feb 13 16:20:01.405957 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Feb 13 16:20:01.405979 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Feb 13 16:20:01.406007 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Feb 13 16:20:01.406032 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Feb 13 16:20:01.406054 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Feb 13 16:20:01.406077 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Feb 13 16:20:01.406096 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Feb 13 16:20:01.406115 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Feb 13 16:20:01.406162 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Feb 13 16:20:01.406193 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Feb 13 16:20:01.406215 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Feb 13 16:20:01.406237 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Feb 13 16:20:01.406300 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Feb 13 16:20:01.406325 systemd[1]: Reached target slices.target - Slice Units.
Feb 13 16:20:01.406346 systemd[1]: Reached target swap.target - Swaps.
Feb 13 16:20:01.406370 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Feb 13 16:20:01.406391 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Feb 13 16:20:01.406420 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Feb 13 16:20:01.406445 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Feb 13 16:20:01.406465 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Feb 13 16:20:01.406490 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Feb 13 16:20:01.406514 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Feb 13 16:20:01.406546 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Feb 13 16:20:01.406570 systemd[1]: Mounting media.mount - External Media Directory...
Feb 13 16:20:01.406592 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 13 16:20:01.406613 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Feb 13 16:20:01.406636 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Feb 13 16:20:01.406656 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Feb 13 16:20:01.406680 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Feb 13 16:20:01.406706 systemd[1]: Reached target machines.target - Containers.
Feb 13 16:20:01.406730 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Feb 13 16:20:01.406754 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Feb 13 16:20:01.406777 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Feb 13 16:20:01.406798 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Feb 13 16:20:01.406823 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Feb 13 16:20:01.406843 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Feb 13 16:20:01.406863 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Feb 13 16:20:01.406881 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Feb 13 16:20:01.406902 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Feb 13 16:20:01.406923 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Feb 13 16:20:01.406944 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Feb 13 16:20:01.406983 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Feb 13 16:20:01.407010 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Feb 13 16:20:01.407030 systemd[1]: Stopped systemd-fsck-usr.service.
Feb 13 16:20:01.407055 systemd[1]: Starting systemd-journald.service - Journal Service...
Feb 13 16:20:01.407073 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Feb 13 16:20:01.407092 kernel: loop: module loaded
Feb 13 16:20:01.407118 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Feb 13 16:20:01.407138 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Feb 13 16:20:01.407158 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Feb 13 16:20:01.407177 systemd[1]: verity-setup.service: Deactivated successfully.
Feb 13 16:20:01.407197 systemd[1]: Stopped verity-setup.service.
Feb 13 16:20:01.407224 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 13 16:20:01.407244 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Feb 13 16:20:01.407284 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Feb 13 16:20:01.407304 systemd[1]: Mounted media.mount - External Media Directory.
Feb 13 16:20:01.407323 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Feb 13 16:20:01.407348 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Feb 13 16:20:01.407371 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Feb 13 16:20:01.407397 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Feb 13 16:20:01.407424 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Feb 13 16:20:01.407446 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Feb 13 16:20:01.407560 systemd-journald[1097]: Collecting audit messages is disabled.
Feb 13 16:20:01.407610 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Feb 13 16:20:01.407632 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Feb 13 16:20:01.407651 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Feb 13 16:20:01.407674 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Feb 13 16:20:01.407693 systemd[1]: modprobe@loop.service: Deactivated successfully.
Feb 13 16:20:01.407711 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Feb 13 16:20:01.407744 systemd-journald[1097]: Journal started
Feb 13 16:20:01.407787 systemd-journald[1097]: Runtime Journal (/run/log/journal/85a3a80e9a8347f99402c3c17765fd12) is 4.9M, max 39.3M, 34.4M free.
Feb 13 16:20:00.850487 systemd[1]: Queued start job for default target multi-user.target.
Feb 13 16:20:00.882024 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Feb 13 16:20:00.891874 systemd[1]: systemd-journald.service: Deactivated successfully.
Feb 13 16:20:01.411365 systemd[1]: Started systemd-journald.service - Journal Service.
Feb 13 16:20:01.438392 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Feb 13 16:20:01.454865 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Feb 13 16:20:01.501478 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Feb 13 16:20:01.516300 systemd[1]: Reached target network-pre.target - Preparation for Network.
Feb 13 16:20:01.547826 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Feb 13 16:20:01.549083 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Feb 13 16:20:01.549158 systemd[1]: Reached target local-fs.target - Local File Systems.
Feb 13 16:20:01.559820 kernel: fuse: init (API version 7.39)
Feb 13 16:20:01.559144 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Feb 13 16:20:01.579867 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Feb 13 16:20:01.585645 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Feb 13 16:20:01.588344 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Feb 13 16:20:01.601565 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Feb 13 16:20:01.608547 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Feb 13 16:20:01.609649 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Feb 13 16:20:01.630598 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Feb 13 16:20:01.631542 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Feb 13 16:20:01.662359 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Feb 13 16:20:01.665578 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Feb 13 16:20:01.669687 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Feb 13 16:20:01.674553 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Feb 13 16:20:01.675377 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Feb 13 16:20:01.678793 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Feb 13 16:20:01.680013 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Feb 13 16:20:01.711651 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Feb 13 16:20:01.778925 kernel: ACPI: bus type drm_connector registered
Feb 13 16:20:01.768068 systemd[1]: modprobe@drm.service: Deactivated successfully.
Feb 13 16:20:01.768363 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Feb 13 16:20:01.806400 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Feb 13 16:20:01.893849 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Feb 13 16:20:01.895709 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Feb 13 16:20:01.898343 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Feb 13 16:20:01.921779 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Feb 13 16:20:01.937337 systemd-journald[1097]: Time spent on flushing to /var/log/journal/85a3a80e9a8347f99402c3c17765fd12 is 240.090ms for 974 entries.
Feb 13 16:20:01.937337 systemd-journald[1097]: System Journal (/var/log/journal/85a3a80e9a8347f99402c3c17765fd12) is 8.0M, max 195.6M, 187.6M free.
Feb 13 16:20:02.227789 systemd-journald[1097]: Received client request to flush runtime journal.
Feb 13 16:20:02.229801 kernel: loop0: detected capacity change from 0 to 140992
Feb 13 16:20:02.243357 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Feb 13 16:20:02.243568 kernel: loop1: detected capacity change from 0 to 8
Feb 13 16:20:02.243618 kernel: loop2: detected capacity change from 0 to 205544
Feb 13 16:20:02.055866 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Feb 13 16:20:02.058336 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Feb 13 16:20:02.069164 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Feb 13 16:20:02.121905 systemd-tmpfiles[1142]: ACLs are not supported, ignoring.
Feb 13 16:20:02.121931 systemd-tmpfiles[1142]: ACLs are not supported, ignoring.
Feb 13 16:20:02.198253 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Feb 13 16:20:02.212941 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Feb 13 16:20:02.248399 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Feb 13 16:20:02.269955 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Feb 13 16:20:02.295119 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Feb 13 16:20:02.346920 kernel: loop3: detected capacity change from 0 to 138184
Feb 13 16:20:02.357673 udevadm[1169]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Feb 13 16:20:02.421724 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Feb 13 16:20:02.434730 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Feb 13 16:20:02.474347 kernel: loop4: detected capacity change from 0 to 140992
Feb 13 16:20:02.545294 kernel: loop5: detected capacity change from 0 to 8
Feb 13 16:20:02.559302 kernel: loop6: detected capacity change from 0 to 205544
Feb 13 16:20:02.581456 systemd-tmpfiles[1173]: ACLs are not supported, ignoring.
Feb 13 16:20:02.581489 systemd-tmpfiles[1173]: ACLs are not supported, ignoring.
Feb 13 16:20:02.610351 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Feb 13 16:20:02.646937 kernel: loop7: detected capacity change from 0 to 138184
Feb 13 16:20:02.685678 (sd-merge)[1174]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-digitalocean'.
Feb 13 16:20:02.686569 (sd-merge)[1174]: Merged extensions into '/usr'.
Feb 13 16:20:02.698835 systemd[1]: Reloading requested from client PID 1141 ('systemd-sysext') (unit systemd-sysext.service)...
Feb 13 16:20:02.698862 systemd[1]: Reloading...
Feb 13 16:20:02.987317 zram_generator::config[1202]: No configuration found.
Feb 13 16:20:03.205333 ldconfig[1133]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Feb 13 16:20:03.363569 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb 13 16:20:03.462106 systemd[1]: Reloading finished in 762 ms.
Feb 13 16:20:03.518948 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Feb 13 16:20:03.520699 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Feb 13 16:20:03.538706 systemd[1]: Starting ensure-sysext.service...
Feb 13 16:20:03.551532 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Feb 13 16:20:03.574314 systemd[1]: Reloading requested from client PID 1245 ('systemctl') (unit ensure-sysext.service)...
Feb 13 16:20:03.576345 systemd[1]: Reloading...
Feb 13 16:20:03.635073 systemd-tmpfiles[1246]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Feb 13 16:20:03.640626 systemd-tmpfiles[1246]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Feb 13 16:20:03.643127 systemd-tmpfiles[1246]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Feb 13 16:20:03.643874 systemd-tmpfiles[1246]: ACLs are not supported, ignoring.
Feb 13 16:20:03.644127 systemd-tmpfiles[1246]: ACLs are not supported, ignoring.
Feb 13 16:20:03.659875 systemd-tmpfiles[1246]: Detected autofs mount point /boot during canonicalization of boot.
Feb 13 16:20:03.659900 systemd-tmpfiles[1246]: Skipping /boot
Feb 13 16:20:03.703302 zram_generator::config[1269]: No configuration found.
Feb 13 16:20:03.732897 systemd-tmpfiles[1246]: Detected autofs mount point /boot during canonicalization of boot.
Feb 13 16:20:03.732920 systemd-tmpfiles[1246]: Skipping /boot
Feb 13 16:20:03.944753 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb 13 16:20:04.041882 systemd[1]: Reloading finished in 465 ms.
Feb 13 16:20:04.065580 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Feb 13 16:20:04.073954 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Feb 13 16:20:04.101857 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Feb 13 16:20:04.121418 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Feb 13 16:20:04.130843 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Feb 13 16:20:04.144467 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Feb 13 16:20:04.156670 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Feb 13 16:20:04.167924 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Feb 13 16:20:04.174931 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 13 16:20:04.175337 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Feb 13 16:20:04.181446 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Feb 13 16:20:04.191819 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Feb 13 16:20:04.198826 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Feb 13 16:20:04.199720 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Feb 13 16:20:04.199930 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 13 16:20:04.210304 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Feb 13 16:20:04.217037 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 13 16:20:04.217625 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 16:20:04.217858 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 16:20:04.218010 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 13 16:20:04.218899 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 13 16:20:04.221561 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 13 16:20:04.239942 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 13 16:20:04.240409 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 16:20:04.251815 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 13 16:20:04.263670 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Feb 13 16:20:04.266006 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 16:20:04.267206 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). 
Feb 13 16:20:04.279890 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Feb 13 16:20:04.283883 systemd[1]: Finished ensure-sysext.service. Feb 13 16:20:04.289167 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 13 16:20:04.290467 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Feb 13 16:20:04.293219 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 13 16:20:04.294610 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Feb 13 16:20:04.307038 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 13 16:20:04.322155 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Feb 13 16:20:04.322825 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Feb 13 16:20:04.323580 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 13 16:20:04.323892 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 13 16:20:04.333870 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Feb 13 16:20:04.335954 systemd[1]: modprobe@drm.service: Deactivated successfully. Feb 13 16:20:04.337413 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Feb 13 16:20:04.344324 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Feb 13 16:20:04.362564 systemd-udevd[1323]: Using default interface naming scheme 'v255'. Feb 13 16:20:04.401031 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Feb 13 16:20:04.410665 systemd[1]: Starting systemd-update-done.service - Update is Completed... 
Feb 13 16:20:04.448874 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Feb 13 16:20:04.457574 augenrules[1360]: No rules Feb 13 16:20:04.464357 systemd[1]: Starting systemd-networkd.service - Network Configuration... Feb 13 16:20:04.466002 systemd[1]: audit-rules.service: Deactivated successfully. Feb 13 16:20:04.467474 systemd[1]: Finished audit-rules.service - Load Audit Rules. Feb 13 16:20:04.468656 systemd[1]: Finished systemd-update-done.service - Update is Completed. Feb 13 16:20:04.471229 systemd[1]: Started systemd-userdbd.service - User Database Manager. Feb 13 16:20:04.602382 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (1364) Feb 13 16:20:04.696497 systemd[1]: Mounting media-configdrive.mount - /media/configdrive... Feb 13 16:20:04.697003 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 13 16:20:04.697251 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 16:20:04.715649 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 13 16:20:04.727596 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Feb 13 16:20:04.733670 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Feb 13 16:20:04.736547 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 16:20:04.736628 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). 
Feb 13 16:20:04.736657 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 13 16:20:04.737448 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 13 16:20:04.737663 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 13 16:20:04.759505 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Feb 13 16:20:04.792064 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 13 16:20:04.795605 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Feb 13 16:20:04.797474 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 13 16:20:04.807822 systemd-networkd[1367]: lo: Link UP Feb 13 16:20:04.817474 systemd-networkd[1367]: lo: Gained carrier Feb 13 16:20:04.825551 systemd-networkd[1367]: Enumeration completed Feb 13 16:20:04.825813 systemd[1]: Started systemd-networkd.service - Network Configuration. Feb 13 16:20:04.826108 systemd-networkd[1367]: eth0: Configuring with /run/systemd/network/10-66:b3:75:b2:5e:5d.network. Feb 13 16:20:04.830153 kernel: ISO 9660 Extensions: RRIP_1991A Feb 13 16:20:04.829467 systemd-networkd[1367]: eth1: Configuring with /run/systemd/network/10-42:19:7c:2c:81:bc.network. Feb 13 16:20:04.830528 systemd-networkd[1367]: eth0: Link UP Feb 13 16:20:04.830612 systemd-networkd[1367]: eth0: Gained carrier Feb 13 16:20:04.835760 systemd-networkd[1367]: eth1: Link UP Feb 13 16:20:04.836045 systemd-networkd[1367]: eth1: Gained carrier Feb 13 16:20:04.839718 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Feb 13 16:20:04.846767 systemd[1]: Mounted media-configdrive.mount - /media/configdrive. 
Feb 13 16:20:04.850044 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 13 16:20:04.850341 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Feb 13 16:20:04.882429 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Feb 13 16:20:04.902343 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Feb 13 16:20:04.903161 systemd[1]: Reached target time-set.target - System Time Set. Feb 13 16:20:04.912419 systemd-resolved[1321]: Positive Trust Anchors: Feb 13 16:20:04.912442 systemd-resolved[1321]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 13 16:20:04.912496 systemd-resolved[1321]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Feb 13 16:20:04.923456 systemd-resolved[1321]: Using system hostname 'ci-4152.2.1-4-8892aa3964'. Feb 13 16:20:04.927249 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Feb 13 16:20:04.928011 systemd[1]: Reached target network.target - Network. Feb 13 16:20:04.930460 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Feb 13 16:20:04.988013 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Feb 13 16:20:04.998660 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... 
Feb 13 16:20:05.026624 systemd-timesyncd[1342]: Contacted time server 23.150.40.242:123 (0.flatcar.pool.ntp.org). Feb 13 16:20:05.026760 systemd-timesyncd[1342]: Initial clock synchronization to Thu 2025-02-13 16:20:05.316740 UTC. Feb 13 16:20:05.033298 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0 Feb 13 16:20:05.029993 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Feb 13 16:20:05.095084 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Feb 13 16:20:05.095184 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2 Feb 13 16:20:05.106315 kernel: ACPI: button: Power Button [PWRF] Feb 13 16:20:05.173787 kernel: mousedev: PS/2 mouse device common for all mice Feb 13 16:20:05.184048 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 16:20:05.190337 kernel: [drm] pci: virtio-vga detected at 0000:00:02.0 Feb 13 16:20:05.192325 kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console Feb 13 16:20:05.219644 kernel: Console: switching to colour dummy device 80x25 Feb 13 16:20:05.219761 kernel: [drm] features: -virgl +edid -resource_blob -host_visible Feb 13 16:20:05.219787 kernel: [drm] features: -context_init Feb 13 16:20:05.226086 kernel: [drm] number of scanouts: 1 Feb 13 16:20:05.236035 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Feb 13 16:20:05.236347 kernel: [drm] number of cap sets: 0 Feb 13 16:20:05.236297 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 16:20:05.238519 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:02.0 on minor 0 Feb 13 16:20:05.247136 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
Feb 13 16:20:05.263204 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device Feb 13 16:20:05.263526 kernel: Console: switching to colour frame buffer device 128x48 Feb 13 16:20:05.278307 kernel: virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device Feb 13 16:20:05.304138 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Feb 13 16:20:05.304493 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 16:20:05.322732 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 16:20:05.462682 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 16:20:05.481391 kernel: EDAC MC: Ver: 3.0.0 Feb 13 16:20:05.515702 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Feb 13 16:20:05.525934 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Feb 13 16:20:05.558321 lvm[1435]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 13 16:20:05.599234 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Feb 13 16:20:05.600893 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Feb 13 16:20:05.601054 systemd[1]: Reached target sysinit.target - System Initialization. Feb 13 16:20:05.601367 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Feb 13 16:20:05.601718 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Feb 13 16:20:05.603469 systemd[1]: Started logrotate.timer - Daily rotation of log files. Feb 13 16:20:05.604136 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Feb 13 16:20:05.605447 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. 
Feb 13 16:20:05.605596 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Feb 13 16:20:05.605648 systemd[1]: Reached target paths.target - Path Units. Feb 13 16:20:05.606088 systemd[1]: Reached target timers.target - Timer Units. Feb 13 16:20:05.610378 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Feb 13 16:20:05.613412 systemd[1]: Starting docker.socket - Docker Socket for the API... Feb 13 16:20:05.632245 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Feb 13 16:20:05.635647 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Feb 13 16:20:05.642826 systemd[1]: Listening on docker.socket - Docker Socket for the API. Feb 13 16:20:05.647889 systemd[1]: Reached target sockets.target - Socket Units. Feb 13 16:20:05.648771 systemd[1]: Reached target basic.target - Basic System. Feb 13 16:20:05.649862 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Feb 13 16:20:05.649906 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Feb 13 16:20:05.661645 systemd[1]: Starting containerd.service - containerd container runtime... Feb 13 16:20:05.666494 lvm[1439]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 13 16:20:05.678655 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Feb 13 16:20:05.695623 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Feb 13 16:20:05.707692 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Feb 13 16:20:05.725656 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... 
Feb 13 16:20:05.730092 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Feb 13 16:20:05.740608 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Feb 13 16:20:05.748482 jq[1443]: false Feb 13 16:20:05.751611 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Feb 13 16:20:05.767400 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Feb 13 16:20:05.783657 systemd[1]: Starting systemd-logind.service - User Login Management... Feb 13 16:20:05.788375 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Feb 13 16:20:05.789347 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Feb 13 16:20:05.800582 systemd[1]: Starting update-engine.service - Update Engine... Feb 13 16:20:05.813468 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Feb 13 16:20:05.822710 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Feb 13 16:20:05.823514 dbus-daemon[1442]: [system] SELinux support is enabled Feb 13 16:20:05.831651 systemd[1]: Started dbus.service - D-Bus System Message Bus. Feb 13 16:20:05.840551 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Feb 13 16:20:05.842419 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Feb 13 16:20:05.850907 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Feb 13 16:20:05.851327 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. 
Feb 13 16:20:05.869498 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Feb 13 16:20:05.869597 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Feb 13 16:20:05.891390 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Feb 13 16:20:05.891608 systemd[1]: user-configdrive.service - Load cloud-config from /media/configdrive was skipped because of an unmet condition check (ConditionKernelCommandLine=!flatcar.oem.id=digitalocean). Feb 13 16:20:05.891885 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Feb 13 16:20:05.904768 systemd[1]: motdgen.service: Deactivated successfully. Feb 13 16:20:05.905090 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. 
Feb 13 16:20:05.947150 jq[1453]: true Feb 13 16:20:05.954317 extend-filesystems[1444]: Found loop4 Feb 13 16:20:05.954317 extend-filesystems[1444]: Found loop5 Feb 13 16:20:05.954317 extend-filesystems[1444]: Found loop6 Feb 13 16:20:05.954317 extend-filesystems[1444]: Found loop7 Feb 13 16:20:05.954317 extend-filesystems[1444]: Found vda Feb 13 16:20:05.954317 extend-filesystems[1444]: Found vda1 Feb 13 16:20:05.954317 extend-filesystems[1444]: Found vda2 Feb 13 16:20:05.954317 extend-filesystems[1444]: Found vda3 Feb 13 16:20:05.954317 extend-filesystems[1444]: Found usr Feb 13 16:20:05.954317 extend-filesystems[1444]: Found vda4 Feb 13 16:20:05.954317 extend-filesystems[1444]: Found vda6 Feb 13 16:20:05.954317 extend-filesystems[1444]: Found vda7 Feb 13 16:20:05.954317 extend-filesystems[1444]: Found vda9 Feb 13 16:20:05.954317 extend-filesystems[1444]: Checking size of /dev/vda9 Feb 13 16:20:05.970947 (ntainerd)[1468]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Feb 13 16:20:06.088721 update_engine[1451]: I20250213 16:20:06.025997 1451 main.cc:92] Flatcar Update Engine starting Feb 13 16:20:06.088721 update_engine[1451]: I20250213 16:20:06.047660 1451 update_check_scheduler.cc:74] Next update check in 5m46s Feb 13 16:20:06.089047 coreos-metadata[1441]: Feb 13 16:20:05.992 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Feb 13 16:20:06.089047 coreos-metadata[1441]: Feb 13 16:20:06.008 INFO Fetch successful Feb 13 16:20:06.046593 systemd[1]: Started update-engine.service - Update Engine. Feb 13 16:20:06.063802 systemd[1]: Started locksmithd.service - Cluster reboot manager. 
Feb 13 16:20:06.089581 jq[1471]: true Feb 13 16:20:06.134390 extend-filesystems[1444]: Resized partition /dev/vda9 Feb 13 16:20:06.122579 systemd-networkd[1367]: eth0: Gained IPv6LL Feb 13 16:20:06.147754 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (1389) Feb 13 16:20:06.147804 extend-filesystems[1482]: resize2fs 1.47.1 (20-May-2024) Feb 13 16:20:06.131453 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Feb 13 16:20:06.149716 systemd[1]: Reached target network-online.target - Network is Online. Feb 13 16:20:06.172377 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 15121403 blocks Feb 13 16:20:06.173509 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 16:20:06.186707 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Feb 13 16:20:06.249667 systemd-networkd[1367]: eth1: Gained IPv6LL Feb 13 16:20:06.275124 locksmithd[1475]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Feb 13 16:20:06.293248 systemd-logind[1450]: New seat seat0. Feb 13 16:20:06.304474 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Feb 13 16:20:06.317137 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Feb 13 16:20:06.335259 systemd-logind[1450]: Watching system buttons on /dev/input/event1 (Power Button) Feb 13 16:20:06.335328 systemd-logind[1450]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Feb 13 16:20:06.335791 systemd[1]: Started systemd-logind.service - User Login Management. Feb 13 16:20:06.439439 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Feb 13 16:20:06.442866 bash[1512]: Updated "/home/core/.ssh/authorized_keys" Feb 13 16:20:06.449367 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. 
Feb 13 16:20:06.466908 systemd[1]: Starting sshkeys.service... Feb 13 16:20:06.501639 kernel: EXT4-fs (vda9): resized filesystem to 15121403 Feb 13 16:20:06.516142 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Feb 13 16:20:06.533042 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Feb 13 16:20:06.574267 extend-filesystems[1482]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Feb 13 16:20:06.574267 extend-filesystems[1482]: old_desc_blocks = 1, new_desc_blocks = 8 Feb 13 16:20:06.574267 extend-filesystems[1482]: The filesystem on /dev/vda9 is now 15121403 (4k) blocks long. Feb 13 16:20:06.608858 extend-filesystems[1444]: Resized filesystem in /dev/vda9 Feb 13 16:20:06.608858 extend-filesystems[1444]: Found vdb Feb 13 16:20:06.576791 systemd[1]: extend-filesystems.service: Deactivated successfully. Feb 13 16:20:06.577002 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Feb 13 16:20:06.718855 coreos-metadata[1521]: Feb 13 16:20:06.715 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Feb 13 16:20:06.734323 coreos-metadata[1521]: Feb 13 16:20:06.731 INFO Fetch successful Feb 13 16:20:06.756687 unknown[1521]: wrote ssh authorized keys file for user: core Feb 13 16:20:06.812227 update-ssh-keys[1528]: Updated "/home/core/.ssh/authorized_keys" Feb 13 16:20:06.813605 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Feb 13 16:20:06.829577 systemd[1]: Finished sshkeys.service. Feb 13 16:20:06.846625 sshd_keygen[1467]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Feb 13 16:20:06.932232 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. 
Feb 13 16:20:06.944544 containerd[1468]: time="2025-02-13T16:20:06.942971016Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 Feb 13 16:20:06.946514 systemd[1]: Starting issuegen.service - Generate /run/issue... Feb 13 16:20:06.991553 systemd[1]: issuegen.service: Deactivated successfully. Feb 13 16:20:06.991943 systemd[1]: Finished issuegen.service - Generate /run/issue. Feb 13 16:20:07.004163 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Feb 13 16:20:07.010814 containerd[1468]: time="2025-02-13T16:20:07.010437499Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Feb 13 16:20:07.015814 containerd[1468]: time="2025-02-13T16:20:07.015736095Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.71-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Feb 13 16:20:07.018334 containerd[1468]: time="2025-02-13T16:20:07.016504068Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Feb 13 16:20:07.018334 containerd[1468]: time="2025-02-13T16:20:07.016591191Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Feb 13 16:20:07.018334 containerd[1468]: time="2025-02-13T16:20:07.016911691Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Feb 13 16:20:07.018334 containerd[1468]: time="2025-02-13T16:20:07.016948302Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Feb 13 16:20:07.018334 containerd[1468]: time="2025-02-13T16:20:07.017057702Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." 
error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 16:20:07.018334 containerd[1468]: time="2025-02-13T16:20:07.017090696Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Feb 13 16:20:07.018938 containerd[1468]: time="2025-02-13T16:20:07.018894829Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 16:20:07.019051 containerd[1468]: time="2025-02-13T16:20:07.019029379Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Feb 13 16:20:07.019141 containerd[1468]: time="2025-02-13T16:20:07.019121750Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 16:20:07.019226 containerd[1468]: time="2025-02-13T16:20:07.019208337Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Feb 13 16:20:07.020327 containerd[1468]: time="2025-02-13T16:20:07.019524805Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Feb 13 16:20:07.021011 containerd[1468]: time="2025-02-13T16:20:07.020965828Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Feb 13 16:20:07.021932 containerd[1468]: time="2025-02-13T16:20:07.021886946Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 16:20:07.022098 containerd[1468]: time="2025-02-13T16:20:07.022072496Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Feb 13 16:20:07.023281 containerd[1468]: time="2025-02-13T16:20:07.023242076Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Feb 13 16:20:07.023592 containerd[1468]: time="2025-02-13T16:20:07.023558972Z" level=info msg="metadata content store policy set" policy=shared Feb 13 16:20:07.031055 containerd[1468]: time="2025-02-13T16:20:07.030822433Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Feb 13 16:20:07.031055 containerd[1468]: time="2025-02-13T16:20:07.030962627Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Feb 13 16:20:07.031804 containerd[1468]: time="2025-02-13T16:20:07.031363487Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Feb 13 16:20:07.031804 containerd[1468]: time="2025-02-13T16:20:07.031409357Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Feb 13 16:20:07.031804 containerd[1468]: time="2025-02-13T16:20:07.031434416Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Feb 13 16:20:07.031804 containerd[1468]: time="2025-02-13T16:20:07.031697287Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Feb 13 16:20:07.033341 containerd[1468]: time="2025-02-13T16:20:07.032497167Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." 
type=io.containerd.runtime.v2 Feb 13 16:20:07.033341 containerd[1468]: time="2025-02-13T16:20:07.032767971Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Feb 13 16:20:07.033341 containerd[1468]: time="2025-02-13T16:20:07.032801856Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Feb 13 16:20:07.033341 containerd[1468]: time="2025-02-13T16:20:07.032827685Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Feb 13 16:20:07.033341 containerd[1468]: time="2025-02-13T16:20:07.032850192Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Feb 13 16:20:07.033341 containerd[1468]: time="2025-02-13T16:20:07.032871851Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Feb 13 16:20:07.033341 containerd[1468]: time="2025-02-13T16:20:07.032893088Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Feb 13 16:20:07.033341 containerd[1468]: time="2025-02-13T16:20:07.032913965Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Feb 13 16:20:07.033341 containerd[1468]: time="2025-02-13T16:20:07.032935376Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Feb 13 16:20:07.033341 containerd[1468]: time="2025-02-13T16:20:07.032954715Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Feb 13 16:20:07.033341 containerd[1468]: time="2025-02-13T16:20:07.032973671Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." 
type=io.containerd.service.v1 Feb 13 16:20:07.033341 containerd[1468]: time="2025-02-13T16:20:07.032990995Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Feb 13 16:20:07.033341 containerd[1468]: time="2025-02-13T16:20:07.033051924Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Feb 13 16:20:07.033341 containerd[1468]: time="2025-02-13T16:20:07.033079806Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Feb 13 16:20:07.035404 containerd[1468]: time="2025-02-13T16:20:07.033101545Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Feb 13 16:20:07.035404 containerd[1468]: time="2025-02-13T16:20:07.033124674Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Feb 13 16:20:07.035404 containerd[1468]: time="2025-02-13T16:20:07.033146269Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Feb 13 16:20:07.035404 containerd[1468]: time="2025-02-13T16:20:07.033167342Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Feb 13 16:20:07.035404 containerd[1468]: time="2025-02-13T16:20:07.033184782Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Feb 13 16:20:07.035404 containerd[1468]: time="2025-02-13T16:20:07.033206966Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Feb 13 16:20:07.035404 containerd[1468]: time="2025-02-13T16:20:07.033224676Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Feb 13 16:20:07.035404 containerd[1468]: time="2025-02-13T16:20:07.033271284Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." 
type=io.containerd.grpc.v1 Feb 13 16:20:07.035404 containerd[1468]: time="2025-02-13T16:20:07.034537617Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Feb 13 16:20:07.035404 containerd[1468]: time="2025-02-13T16:20:07.034581921Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Feb 13 16:20:07.035404 containerd[1468]: time="2025-02-13T16:20:07.034607824Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Feb 13 16:20:07.035404 containerd[1468]: time="2025-02-13T16:20:07.034633654Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Feb 13 16:20:07.035404 containerd[1468]: time="2025-02-13T16:20:07.034677746Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Feb 13 16:20:07.035404 containerd[1468]: time="2025-02-13T16:20:07.034702569Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Feb 13 16:20:07.035404 containerd[1468]: time="2025-02-13T16:20:07.034719076Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Feb 13 16:20:07.036027 containerd[1468]: time="2025-02-13T16:20:07.034795747Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Feb 13 16:20:07.036027 containerd[1468]: time="2025-02-13T16:20:07.034826312Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Feb 13 16:20:07.036027 containerd[1468]: time="2025-02-13T16:20:07.034846657Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." 
type=io.containerd.internal.v1 Feb 13 16:20:07.036027 containerd[1468]: time="2025-02-13T16:20:07.034865817Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Feb 13 16:20:07.036027 containerd[1468]: time="2025-02-13T16:20:07.034927028Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Feb 13 16:20:07.036027 containerd[1468]: time="2025-02-13T16:20:07.034957777Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Feb 13 16:20:07.036027 containerd[1468]: time="2025-02-13T16:20:07.034974631Z" level=info msg="NRI interface is disabled by configuration." Feb 13 16:20:07.036027 containerd[1468]: time="2025-02-13T16:20:07.034992949Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Feb 13 16:20:07.037408 containerd[1468]: time="2025-02-13T16:20:07.036668851Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 
Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Feb 13 16:20:07.037408 containerd[1468]: time="2025-02-13T16:20:07.036762806Z" level=info msg="Connect containerd service" Feb 13 16:20:07.037408 containerd[1468]: time="2025-02-13T16:20:07.036843938Z" level=info msg="using legacy CRI server" Feb 13 16:20:07.037408 containerd[1468]: time="2025-02-13T16:20:07.036857287Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Feb 13 16:20:07.037408 containerd[1468]: 
time="2025-02-13T16:20:07.037057370Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Feb 13 16:20:07.039052 containerd[1468]: time="2025-02-13T16:20:07.038957089Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 13 16:20:07.039695 containerd[1468]: time="2025-02-13T16:20:07.039305652Z" level=info msg="Start subscribing containerd event" Feb 13 16:20:07.039695 containerd[1468]: time="2025-02-13T16:20:07.039380513Z" level=info msg="Start recovering state" Feb 13 16:20:07.039695 containerd[1468]: time="2025-02-13T16:20:07.039490384Z" level=info msg="Start event monitor" Feb 13 16:20:07.039695 containerd[1468]: time="2025-02-13T16:20:07.039515757Z" level=info msg="Start snapshots syncer" Feb 13 16:20:07.039695 containerd[1468]: time="2025-02-13T16:20:07.039531484Z" level=info msg="Start cni network conf syncer for default" Feb 13 16:20:07.039695 containerd[1468]: time="2025-02-13T16:20:07.039541992Z" level=info msg="Start streaming server" Feb 13 16:20:07.040892 containerd[1468]: time="2025-02-13T16:20:07.040721149Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Feb 13 16:20:07.040892 containerd[1468]: time="2025-02-13T16:20:07.040835329Z" level=info msg=serving... address=/run/containerd/containerd.sock Feb 13 16:20:07.042054 containerd[1468]: time="2025-02-13T16:20:07.041370712Z" level=info msg="containerd successfully booted in 0.099773s" Feb 13 16:20:07.041517 systemd[1]: Started containerd.service - containerd container runtime. Feb 13 16:20:07.067879 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Feb 13 16:20:07.084974 systemd[1]: Started getty@tty1.service - Getty on tty1. 
Feb 13 16:20:07.108699 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Feb 13 16:20:07.109934 systemd[1]: Reached target getty.target - Login Prompts. Feb 13 16:20:07.547707 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Feb 13 16:20:07.560785 systemd[1]: Started sshd@0-146.190.141.99:22-139.178.89.65:37906.service - OpenSSH per-connection server daemon (139.178.89.65:37906). Feb 13 16:20:07.697463 sshd[1553]: Accepted publickey for core from 139.178.89.65 port 37906 ssh2: RSA SHA256:AMPu2lZjn4SqDYANHPtTget7vBQBooUjf0mriNIzIUY Feb 13 16:20:07.700949 sshd-session[1553]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 16:20:07.722256 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Feb 13 16:20:07.733855 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Feb 13 16:20:07.747615 systemd-logind[1450]: New session 1 of user core. Feb 13 16:20:07.775619 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Feb 13 16:20:07.790865 systemd[1]: Starting user@500.service - User Manager for UID 500... Feb 13 16:20:07.805835 (systemd)[1557]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Feb 13 16:20:07.984054 systemd[1557]: Queued start job for default target default.target. Feb 13 16:20:07.990227 systemd[1557]: Created slice app.slice - User Application Slice. Feb 13 16:20:07.990277 systemd[1557]: Reached target paths.target - Paths. Feb 13 16:20:07.990343 systemd[1557]: Reached target timers.target - Timers. Feb 13 16:20:07.994421 systemd[1557]: Starting dbus.socket - D-Bus User Message Bus Socket... Feb 13 16:20:08.034490 systemd[1557]: Listening on dbus.socket - D-Bus User Message Bus Socket. Feb 13 16:20:08.034650 systemd[1557]: Reached target sockets.target - Sockets. Feb 13 16:20:08.034673 systemd[1557]: Reached target basic.target - Basic System. 
Feb 13 16:20:08.034732 systemd[1557]: Reached target default.target - Main User Target. Feb 13 16:20:08.034773 systemd[1557]: Startup finished in 216ms. Feb 13 16:20:08.035354 systemd[1]: Started user@500.service - User Manager for UID 500. Feb 13 16:20:08.046667 systemd[1]: Started session-1.scope - Session 1 of User core. Feb 13 16:20:08.119549 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 16:20:08.122956 systemd[1]: Reached target multi-user.target - Multi-User System. Feb 13 16:20:08.134089 (kubelet)[1572]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Feb 13 16:20:08.134841 systemd[1]: Started sshd@1-146.190.141.99:22-139.178.89.65:37922.service - OpenSSH per-connection server daemon (139.178.89.65:37922). Feb 13 16:20:08.143162 systemd[1]: Startup finished in 1.085s (kernel) + 5.142s (initrd) + 8.340s (userspace) = 14.567s. Feb 13 16:20:08.229474 sshd[1574]: Accepted publickey for core from 139.178.89.65 port 37922 ssh2: RSA SHA256:AMPu2lZjn4SqDYANHPtTget7vBQBooUjf0mriNIzIUY Feb 13 16:20:08.231887 sshd-session[1574]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 16:20:08.240947 systemd-logind[1450]: New session 2 of user core. Feb 13 16:20:08.245649 systemd[1]: Started session-2.scope - Session 2 of User core. Feb 13 16:20:08.313079 sshd[1580]: Connection closed by 139.178.89.65 port 37922 Feb 13 16:20:08.314610 sshd-session[1574]: pam_unix(sshd:session): session closed for user core Feb 13 16:20:08.328347 systemd[1]: sshd@1-146.190.141.99:22-139.178.89.65:37922.service: Deactivated successfully. Feb 13 16:20:08.332586 systemd[1]: session-2.scope: Deactivated successfully. Feb 13 16:20:08.334704 systemd-logind[1450]: Session 2 logged out. Waiting for processes to exit. 
Feb 13 16:20:08.341848 systemd[1]: Started sshd@2-146.190.141.99:22-139.178.89.65:37928.service - OpenSSH per-connection server daemon (139.178.89.65:37928). Feb 13 16:20:08.343828 systemd-logind[1450]: Removed session 2. Feb 13 16:20:08.411434 sshd[1585]: Accepted publickey for core from 139.178.89.65 port 37928 ssh2: RSA SHA256:AMPu2lZjn4SqDYANHPtTget7vBQBooUjf0mriNIzIUY Feb 13 16:20:08.413905 sshd-session[1585]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 16:20:08.424005 systemd-logind[1450]: New session 3 of user core. Feb 13 16:20:08.432684 systemd[1]: Started session-3.scope - Session 3 of User core. Feb 13 16:20:08.494845 sshd[1591]: Connection closed by 139.178.89.65 port 37928 Feb 13 16:20:08.495506 sshd-session[1585]: pam_unix(sshd:session): session closed for user core Feb 13 16:20:08.506219 systemd[1]: sshd@2-146.190.141.99:22-139.178.89.65:37928.service: Deactivated successfully. Feb 13 16:20:08.508173 systemd[1]: session-3.scope: Deactivated successfully. Feb 13 16:20:08.508889 systemd-logind[1450]: Session 3 logged out. Waiting for processes to exit. Feb 13 16:20:08.513623 systemd[1]: Started sshd@3-146.190.141.99:22-139.178.89.65:37942.service - OpenSSH per-connection server daemon (139.178.89.65:37942). Feb 13 16:20:08.516863 systemd-logind[1450]: Removed session 3. Feb 13 16:20:08.575099 sshd[1596]: Accepted publickey for core from 139.178.89.65 port 37942 ssh2: RSA SHA256:AMPu2lZjn4SqDYANHPtTget7vBQBooUjf0mriNIzIUY Feb 13 16:20:08.577018 sshd-session[1596]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 16:20:08.582365 systemd-logind[1450]: New session 4 of user core. Feb 13 16:20:08.587533 systemd[1]: Started session-4.scope - Session 4 of User core. 
Feb 13 16:20:08.656460 sshd[1598]: Connection closed by 139.178.89.65 port 37942 Feb 13 16:20:08.657543 sshd-session[1596]: pam_unix(sshd:session): session closed for user core Feb 13 16:20:08.667857 systemd[1]: sshd@3-146.190.141.99:22-139.178.89.65:37942.service: Deactivated successfully. Feb 13 16:20:08.669750 systemd[1]: session-4.scope: Deactivated successfully. Feb 13 16:20:08.671637 systemd-logind[1450]: Session 4 logged out. Waiting for processes to exit. Feb 13 16:20:08.682869 systemd[1]: Started sshd@4-146.190.141.99:22-139.178.89.65:37948.service - OpenSSH per-connection server daemon (139.178.89.65:37948). Feb 13 16:20:08.686889 systemd-logind[1450]: Removed session 4. Feb 13 16:20:08.737066 sshd[1603]: Accepted publickey for core from 139.178.89.65 port 37948 ssh2: RSA SHA256:AMPu2lZjn4SqDYANHPtTget7vBQBooUjf0mriNIzIUY Feb 13 16:20:08.738243 sshd-session[1603]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 16:20:08.745617 systemd-logind[1450]: New session 5 of user core. Feb 13 16:20:08.755639 systemd[1]: Started session-5.scope - Session 5 of User core. Feb 13 16:20:08.833578 sudo[1606]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Feb 13 16:20:08.833917 sudo[1606]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 16:20:08.846323 sudo[1606]: pam_unix(sudo:session): session closed for user root Feb 13 16:20:08.849458 sshd[1605]: Connection closed by 139.178.89.65 port 37948 Feb 13 16:20:08.852165 sshd-session[1603]: pam_unix(sshd:session): session closed for user core Feb 13 16:20:08.861610 systemd[1]: sshd@4-146.190.141.99:22-139.178.89.65:37948.service: Deactivated successfully. Feb 13 16:20:08.865673 systemd[1]: session-5.scope: Deactivated successfully. Feb 13 16:20:08.868589 systemd-logind[1450]: Session 5 logged out. Waiting for processes to exit. 
Feb 13 16:20:08.877751 systemd[1]: Started sshd@5-146.190.141.99:22-139.178.89.65:37958.service - OpenSSH per-connection server daemon (139.178.89.65:37958). Feb 13 16:20:08.880420 systemd-logind[1450]: Removed session 5. Feb 13 16:20:08.940694 sshd[1611]: Accepted publickey for core from 139.178.89.65 port 37958 ssh2: RSA SHA256:AMPu2lZjn4SqDYANHPtTget7vBQBooUjf0mriNIzIUY Feb 13 16:20:08.942982 sshd-session[1611]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 16:20:08.950421 systemd-logind[1450]: New session 6 of user core. Feb 13 16:20:08.959279 systemd[1]: Started session-6.scope - Session 6 of User core. Feb 13 16:20:09.031546 sudo[1617]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Feb 13 16:20:09.032391 sudo[1617]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 16:20:09.037880 sudo[1617]: pam_unix(sudo:session): session closed for user root Feb 13 16:20:09.045384 sudo[1616]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Feb 13 16:20:09.046074 sudo[1616]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 16:20:09.068351 kubelet[1572]: E0213 16:20:09.067734 1572 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 13 16:20:09.068099 systemd[1]: Starting audit-rules.service - Load Audit Rules... Feb 13 16:20:09.071024 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 13 16:20:09.071174 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 13 16:20:09.071698 systemd[1]: kubelet.service: Consumed 1.294s CPU time. 
Feb 13 16:20:09.113214 augenrules[1640]: No rules Feb 13 16:20:09.114087 systemd[1]: audit-rules.service: Deactivated successfully. Feb 13 16:20:09.114367 systemd[1]: Finished audit-rules.service - Load Audit Rules. Feb 13 16:20:09.117628 sudo[1616]: pam_unix(sudo:session): session closed for user root Feb 13 16:20:09.121153 sshd[1613]: Connection closed by 139.178.89.65 port 37958 Feb 13 16:20:09.122087 sshd-session[1611]: pam_unix(sshd:session): session closed for user core Feb 13 16:20:09.132850 systemd[1]: sshd@5-146.190.141.99:22-139.178.89.65:37958.service: Deactivated successfully. Feb 13 16:20:09.135259 systemd[1]: session-6.scope: Deactivated successfully. Feb 13 16:20:09.137496 systemd-logind[1450]: Session 6 logged out. Waiting for processes to exit. Feb 13 16:20:09.144813 systemd[1]: Started sshd@6-146.190.141.99:22-139.178.89.65:37970.service - OpenSSH per-connection server daemon (139.178.89.65:37970). Feb 13 16:20:09.147425 systemd-logind[1450]: Removed session 6. Feb 13 16:20:09.200124 sshd[1648]: Accepted publickey for core from 139.178.89.65 port 37970 ssh2: RSA SHA256:AMPu2lZjn4SqDYANHPtTget7vBQBooUjf0mriNIzIUY Feb 13 16:20:09.202197 sshd-session[1648]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Feb 13 16:20:09.211162 systemd-logind[1450]: New session 7 of user core. Feb 13 16:20:09.217679 systemd[1]: Started session-7.scope - Session 7 of User core. Feb 13 16:20:09.281698 sudo[1651]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Feb 13 16:20:09.282196 sudo[1651]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Feb 13 16:20:10.203174 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 16:20:10.203569 systemd[1]: kubelet.service: Consumed 1.294s CPU time. Feb 13 16:20:10.217733 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Feb 13 16:20:10.259508 systemd[1]: Reloading requested from client PID 1684 ('systemctl') (unit session-7.scope)... Feb 13 16:20:10.259528 systemd[1]: Reloading... Feb 13 16:20:10.411336 zram_generator::config[1725]: No configuration found. Feb 13 16:20:10.553440 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 13 16:20:10.638572 systemd[1]: Reloading finished in 378 ms. Feb 13 16:20:10.705776 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Feb 13 16:20:10.705864 systemd[1]: kubelet.service: Failed with result 'signal'. Feb 13 16:20:10.706501 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 16:20:10.709793 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Feb 13 16:20:10.878666 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Feb 13 16:20:10.878918 (kubelet)[1777]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Feb 13 16:20:10.956319 kubelet[1777]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 13 16:20:10.956319 kubelet[1777]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Feb 13 16:20:10.956319 kubelet[1777]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Feb 13 16:20:10.957839 kubelet[1777]: I0213 16:20:10.957700 1777 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 13 16:20:11.462776 kubelet[1777]: I0213 16:20:11.462698 1777 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" Feb 13 16:20:11.462776 kubelet[1777]: I0213 16:20:11.462741 1777 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 13 16:20:11.463132 kubelet[1777]: I0213 16:20:11.463099 1777 server.go:929] "Client rotation is on, will bootstrap in background" Feb 13 16:20:11.492597 kubelet[1777]: I0213 16:20:11.492513 1777 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 13 16:20:11.510161 kubelet[1777]: E0213 16:20:11.510021 1777 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Feb 13 16:20:11.510161 kubelet[1777]: I0213 16:20:11.510064 1777 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Feb 13 16:20:11.516314 kubelet[1777]: I0213 16:20:11.515795 1777 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Feb 13 16:20:11.516314 kubelet[1777]: I0213 16:20:11.515966 1777 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Feb 13 16:20:11.516314 kubelet[1777]: I0213 16:20:11.516193 1777 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 13 16:20:11.516739 kubelet[1777]: I0213 16:20:11.516252 1777 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"146.190.141.99","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPo
licyOptions":null,"CgroupVersion":2} Feb 13 16:20:11.517001 kubelet[1777]: I0213 16:20:11.516975 1777 topology_manager.go:138] "Creating topology manager with none policy" Feb 13 16:20:11.517086 kubelet[1777]: I0213 16:20:11.517074 1777 container_manager_linux.go:300] "Creating device plugin manager" Feb 13 16:20:11.517390 kubelet[1777]: I0213 16:20:11.517373 1777 state_mem.go:36] "Initialized new in-memory state store" Feb 13 16:20:11.519608 kubelet[1777]: I0213 16:20:11.519572 1777 kubelet.go:408] "Attempting to sync node with API server" Feb 13 16:20:11.519769 kubelet[1777]: I0213 16:20:11.519756 1777 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 13 16:20:11.519968 kubelet[1777]: I0213 16:20:11.519948 1777 kubelet.go:314] "Adding apiserver pod source" Feb 13 16:20:11.520336 kubelet[1777]: I0213 16:20:11.520044 1777 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 13 16:20:11.526921 kubelet[1777]: E0213 16:20:11.526620 1777 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 16:20:11.526921 kubelet[1777]: E0213 16:20:11.526709 1777 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 16:20:11.528155 kubelet[1777]: I0213 16:20:11.528109 1777 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Feb 13 16:20:11.530254 kubelet[1777]: I0213 16:20:11.530188 1777 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Feb 13 16:20:11.531093 kubelet[1777]: W0213 16:20:11.531029 1777 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Feb 13 16:20:11.531977 kubelet[1777]: I0213 16:20:11.531914 1777 server.go:1269] "Started kubelet" Feb 13 16:20:11.534466 kubelet[1777]: I0213 16:20:11.533502 1777 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 13 16:20:11.534466 kubelet[1777]: I0213 16:20:11.534399 1777 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Feb 13 16:20:11.536313 kubelet[1777]: I0213 16:20:11.536185 1777 server.go:460] "Adding debug handlers to kubelet server" Feb 13 16:20:11.537878 kubelet[1777]: I0213 16:20:11.537798 1777 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Feb 13 16:20:11.538368 kubelet[1777]: I0213 16:20:11.538342 1777 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 13 16:20:11.545003 kubelet[1777]: I0213 16:20:11.544948 1777 volume_manager.go:289] "Starting Kubelet Volume Manager" Feb 13 16:20:11.549590 kubelet[1777]: I0213 16:20:11.545895 1777 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Feb 13 16:20:11.549590 kubelet[1777]: I0213 16:20:11.547394 1777 reconciler.go:26] "Reconciler: start to sync state" Feb 13 16:20:11.549590 kubelet[1777]: E0213 16:20:11.546256 1777 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"146.190.141.99\" not found" Feb 13 16:20:11.549590 kubelet[1777]: I0213 16:20:11.547787 1777 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Feb 13 16:20:11.552105 kubelet[1777]: I0213 16:20:11.552033 1777 factory.go:221] Registration of the systemd container factory successfully Feb 13 16:20:11.554471 kubelet[1777]: I0213 16:20:11.552182 1777 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory 
Feb 13 16:20:11.554471 kubelet[1777]: I0213 16:20:11.553776 1777 factory.go:221] Registration of the containerd container factory successfully
Feb 13 16:20:11.568724 kubelet[1777]: E0213 16:20:11.568673 1777 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"146.190.141.99\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="200ms"
Feb 13 16:20:11.568902 kubelet[1777]: W0213 16:20:11.568839 1777 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
Feb 13 16:20:11.568982 kubelet[1777]: E0213 16:20:11.568896 1777 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
Feb 13 16:20:11.568982 kubelet[1777]: W0213 16:20:11.568969 1777 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes "146.190.141.99" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope
Feb 13 16:20:11.569061 kubelet[1777]: E0213 16:20:11.568989 1777 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes \"146.190.141.99\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
Feb 13 16:20:11.569092 kubelet[1777]: W0213 16:20:11.569072 1777 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope
Feb 13 16:20:11.569129 kubelet[1777]: E0213 16:20:11.569090 1777 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
Feb 13 16:20:11.581039 kubelet[1777]: I0213 16:20:11.577517 1777 cpu_manager.go:214] "Starting CPU manager" policy="none"
Feb 13 16:20:11.581039 kubelet[1777]: I0213 16:20:11.577545 1777 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Feb 13 16:20:11.581039 kubelet[1777]: I0213 16:20:11.577575 1777 state_mem.go:36] "Initialized new in-memory state store"
Feb 13 16:20:11.581593 kubelet[1777]: I0213 16:20:11.581542 1777 policy_none.go:49] "None policy: Start"
Feb 13 16:20:11.586713 kubelet[1777]: I0213 16:20:11.586667 1777 memory_manager.go:170] "Starting memorymanager" policy="None"
Feb 13 16:20:11.588322 kubelet[1777]: I0213 16:20:11.587855 1777 state_mem.go:35] "Initializing new in-memory state store"
Feb 13 16:20:11.598044 kubelet[1777]: E0213 16:20:11.583364 1777 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{146.190.141.99.1823d0f049135c1b default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:146.190.141.99,UID:146.190.141.99,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:146.190.141.99,},FirstTimestamp:2025-02-13 16:20:11.531877403 +0000 UTC m=+0.642845064,LastTimestamp:2025-02-13 16:20:11.531877403 +0000 UTC m=+0.642845064,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:146.190.141.99,}"
Feb 13 16:20:11.605673 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Feb 13 16:20:11.610814 kubelet[1777]: E0213 16:20:11.610653 1777 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{146.190.141.99.1823d0f04bad2689 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:146.190.141.99,UID:146.190.141.99,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node 146.190.141.99 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:146.190.141.99,},FirstTimestamp:2025-02-13 16:20:11.575510665 +0000 UTC m=+0.686478332,LastTimestamp:2025-02-13 16:20:11.575510665 +0000 UTC m=+0.686478332,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:146.190.141.99,}"
Feb 13 16:20:11.626910 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Feb 13 16:20:11.634364 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Feb 13 16:20:11.646970 kubelet[1777]: I0213 16:20:11.646910 1777 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Feb 13 16:20:11.647675 kubelet[1777]: I0213 16:20:11.647340 1777 eviction_manager.go:189] "Eviction manager: starting control loop"
Feb 13 16:20:11.647675 kubelet[1777]: I0213 16:20:11.647365 1777 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Feb 13 16:20:11.655980 kubelet[1777]: I0213 16:20:11.652808 1777 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Feb 13 16:20:11.661505 kubelet[1777]: E0213 16:20:11.661375 1777 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"146.190.141.99\" not found"
Feb 13 16:20:11.669224 kubelet[1777]: I0213 16:20:11.669159 1777 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Feb 13 16:20:11.671644 kubelet[1777]: I0213 16:20:11.671595 1777 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Feb 13 16:20:11.672343 kubelet[1777]: I0213 16:20:11.672202 1777 status_manager.go:217] "Starting to sync pod status with apiserver"
Feb 13 16:20:11.672343 kubelet[1777]: I0213 16:20:11.672251 1777 kubelet.go:2321] "Starting kubelet main sync loop"
Feb 13 16:20:11.673007 kubelet[1777]: E0213 16:20:11.672466 1777 kubelet.go:2345] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful"
Feb 13 16:20:11.749774 kubelet[1777]: I0213 16:20:11.749303 1777 kubelet_node_status.go:72] "Attempting to register node" node="146.190.141.99"
Feb 13 16:20:11.764085 kubelet[1777]: I0213 16:20:11.764029 1777 kubelet_node_status.go:75] "Successfully registered node" node="146.190.141.99"
Feb 13 16:20:11.764326 kubelet[1777]: E0213 16:20:11.764306 1777 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"146.190.141.99\": node \"146.190.141.99\" not found"
Feb 13 16:20:11.822078 kubelet[1777]: I0213 16:20:11.821072 1777 kuberuntime_manager.go:1633] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24"
Feb 13 16:20:11.822250 containerd[1468]: time="2025-02-13T16:20:11.821889138Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Feb 13 16:20:11.823645 kubelet[1777]: I0213 16:20:11.823014 1777 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24"
Feb 13 16:20:11.901200 kubelet[1777]: E0213 16:20:11.901142 1777 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"146.190.141.99\" not found"
Feb 13 16:20:11.962361 sudo[1651]: pam_unix(sudo:session): session closed for user root
Feb 13 16:20:11.966172 sshd[1650]: Connection closed by 139.178.89.65 port 37970
Feb 13 16:20:11.967693 sshd-session[1648]: pam_unix(sshd:session): session closed for user core
Feb 13 16:20:11.973260 systemd[1]: sshd@6-146.190.141.99:22-139.178.89.65:37970.service: Deactivated successfully.
Feb 13 16:20:11.976609 systemd[1]: session-7.scope: Deactivated successfully.
Feb 13 16:20:11.977990 systemd-logind[1450]: Session 7 logged out. Waiting for processes to exit.
Feb 13 16:20:11.979797 systemd-logind[1450]: Removed session 7.
Feb 13 16:20:12.002153 kubelet[1777]: E0213 16:20:12.001992 1777 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"146.190.141.99\" not found"
Feb 13 16:20:12.103128 kubelet[1777]: E0213 16:20:12.103053 1777 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"146.190.141.99\" not found"
Feb 13 16:20:12.204359 kubelet[1777]: E0213 16:20:12.204252 1777 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"146.190.141.99\" not found"
Feb 13 16:20:12.304996 kubelet[1777]: E0213 16:20:12.304845 1777 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"146.190.141.99\" not found"
Feb 13 16:20:12.405834 kubelet[1777]: E0213 16:20:12.405711 1777 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"146.190.141.99\" not found"
Feb 13 16:20:12.465618 kubelet[1777]: I0213 16:20:12.465465 1777 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials"
Feb 13 16:20:12.465769 kubelet[1777]: W0213 16:20:12.465711 1777 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.RuntimeClass ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received
Feb 13 16:20:12.465769 kubelet[1777]: W0213 16:20:12.465750 1777 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.Node ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received
Feb 13 16:20:12.527443 kubelet[1777]: I0213 16:20:12.527308 1777 apiserver.go:52] "Watching apiserver"
Feb 13 16:20:12.527443 kubelet[1777]: E0213 16:20:12.527372 1777 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 16:20:12.537620 kubelet[1777]: E0213 16:20:12.536807 1777 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-2tkx2" podUID="6b5f959f-a7e4-4515-a03d-ec0af7f0538c"
Feb 13 16:20:12.546246 systemd[1]: Created slice kubepods-besteffort-pod5a7ad843_b6cf_4400_8a90_bf43d15dd08b.slice - libcontainer container kubepods-besteffort-pod5a7ad843_b6cf_4400_8a90_bf43d15dd08b.slice.
Feb 13 16:20:12.548292 kubelet[1777]: I0213 16:20:12.548239 1777 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world"
Feb 13 16:20:12.554081 kubelet[1777]: I0213 16:20:12.553550 1777 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/95d6ff56-7378-4102-8c9b-772a90b088a4-cni-net-dir\") pod \"calico-node-q2p5s\" (UID: \"95d6ff56-7378-4102-8c9b-772a90b088a4\") " pod="calico-system/calico-node-q2p5s"
Feb 13 16:20:12.554081 kubelet[1777]: I0213 16:20:12.553625 1777 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/6b5f959f-a7e4-4515-a03d-ec0af7f0538c-varrun\") pod \"csi-node-driver-2tkx2\" (UID: \"6b5f959f-a7e4-4515-a03d-ec0af7f0538c\") " pod="calico-system/csi-node-driver-2tkx2"
Feb 13 16:20:12.554081 kubelet[1777]: I0213 16:20:12.553664 1777 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/6b5f959f-a7e4-4515-a03d-ec0af7f0538c-kubelet-dir\") pod \"csi-node-driver-2tkx2\" (UID: \"6b5f959f-a7e4-4515-a03d-ec0af7f0538c\") " pod="calico-system/csi-node-driver-2tkx2"
Feb 13 16:20:12.554081 kubelet[1777]: I0213 16:20:12.553691 1777 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/6b5f959f-a7e4-4515-a03d-ec0af7f0538c-socket-dir\") pod \"csi-node-driver-2tkx2\" (UID: \"6b5f959f-a7e4-4515-a03d-ec0af7f0538c\") " pod="calico-system/csi-node-driver-2tkx2"
Feb 13 16:20:12.554081 kubelet[1777]: I0213 16:20:12.553744 1777 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/5a7ad843-b6cf-4400-8a90-bf43d15dd08b-kube-proxy\") pod \"kube-proxy-92nl9\" (UID: \"5a7ad843-b6cf-4400-8a90-bf43d15dd08b\") " pod="kube-system/kube-proxy-92nl9"
Feb 13 16:20:12.554407 kubelet[1777]: I0213 16:20:12.553774 1777 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/95d6ff56-7378-4102-8c9b-772a90b088a4-tigera-ca-bundle\") pod \"calico-node-q2p5s\" (UID: \"95d6ff56-7378-4102-8c9b-772a90b088a4\") " pod="calico-system/calico-node-q2p5s"
Feb 13 16:20:12.554407 kubelet[1777]: I0213 16:20:12.553804 1777 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8nlvr\" (UniqueName: \"kubernetes.io/projected/95d6ff56-7378-4102-8c9b-772a90b088a4-kube-api-access-8nlvr\") pod \"calico-node-q2p5s\" (UID: \"95d6ff56-7378-4102-8c9b-772a90b088a4\") " pod="calico-system/calico-node-q2p5s"
Feb 13 16:20:12.554407 kubelet[1777]: I0213 16:20:12.553832 1777 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bpxc7\" (UniqueName: \"kubernetes.io/projected/6b5f959f-a7e4-4515-a03d-ec0af7f0538c-kube-api-access-bpxc7\") pod \"csi-node-driver-2tkx2\" (UID: \"6b5f959f-a7e4-4515-a03d-ec0af7f0538c\") " pod="calico-system/csi-node-driver-2tkx2"
Feb 13 16:20:12.554407 kubelet[1777]: I0213 16:20:12.553864 1777 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5a7ad843-b6cf-4400-8a90-bf43d15dd08b-lib-modules\") pod \"kube-proxy-92nl9\" (UID: \"5a7ad843-b6cf-4400-8a90-bf43d15dd08b\") " pod="kube-system/kube-proxy-92nl9"
Feb 13 16:20:12.554407 kubelet[1777]: I0213 16:20:12.553900 1777 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/95d6ff56-7378-4102-8c9b-772a90b088a4-cni-log-dir\") pod \"calico-node-q2p5s\" (UID: \"95d6ff56-7378-4102-8c9b-772a90b088a4\") " pod="calico-system/calico-node-q2p5s"
Feb 13 16:20:12.554733 kubelet[1777]: I0213 16:20:12.553926 1777 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/95d6ff56-7378-4102-8c9b-772a90b088a4-var-run-calico\") pod \"calico-node-q2p5s\" (UID: \"95d6ff56-7378-4102-8c9b-772a90b088a4\") " pod="calico-system/calico-node-q2p5s"
Feb 13 16:20:12.554733 kubelet[1777]: I0213 16:20:12.553954 1777 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/95d6ff56-7378-4102-8c9b-772a90b088a4-cni-bin-dir\") pod \"calico-node-q2p5s\" (UID: \"95d6ff56-7378-4102-8c9b-772a90b088a4\") " pod="calico-system/calico-node-q2p5s"
Feb 13 16:20:12.554733 kubelet[1777]: I0213 16:20:12.553981 1777 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/95d6ff56-7378-4102-8c9b-772a90b088a4-flexvol-driver-host\") pod \"calico-node-q2p5s\" (UID: \"95d6ff56-7378-4102-8c9b-772a90b088a4\") " pod="calico-system/calico-node-q2p5s"
Feb 13 16:20:12.554733 kubelet[1777]: I0213 16:20:12.554027 1777 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/6b5f959f-a7e4-4515-a03d-ec0af7f0538c-registration-dir\") pod \"csi-node-driver-2tkx2\" (UID: \"6b5f959f-a7e4-4515-a03d-ec0af7f0538c\") " pod="calico-system/csi-node-driver-2tkx2"
Feb 13 16:20:12.554733 kubelet[1777]: I0213 16:20:12.554065 1777 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5a7ad843-b6cf-4400-8a90-bf43d15dd08b-xtables-lock\") pod \"kube-proxy-92nl9\" (UID: \"5a7ad843-b6cf-4400-8a90-bf43d15dd08b\") " pod="kube-system/kube-proxy-92nl9"
Feb 13 16:20:12.554927 kubelet[1777]: I0213 16:20:12.554094 1777 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sdlmg\" (UniqueName: \"kubernetes.io/projected/5a7ad843-b6cf-4400-8a90-bf43d15dd08b-kube-api-access-sdlmg\") pod \"kube-proxy-92nl9\" (UID: \"5a7ad843-b6cf-4400-8a90-bf43d15dd08b\") " pod="kube-system/kube-proxy-92nl9"
Feb 13 16:20:12.554927 kubelet[1777]: I0213 16:20:12.554120 1777 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/95d6ff56-7378-4102-8c9b-772a90b088a4-lib-modules\") pod \"calico-node-q2p5s\" (UID: \"95d6ff56-7378-4102-8c9b-772a90b088a4\") " pod="calico-system/calico-node-q2p5s"
Feb 13 16:20:12.554927 kubelet[1777]: I0213 16:20:12.554146 1777 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/95d6ff56-7378-4102-8c9b-772a90b088a4-policysync\") pod \"calico-node-q2p5s\" (UID: \"95d6ff56-7378-4102-8c9b-772a90b088a4\") " pod="calico-system/calico-node-q2p5s"
Feb 13 16:20:12.554927 kubelet[1777]: I0213 16:20:12.554195 1777 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/95d6ff56-7378-4102-8c9b-772a90b088a4-node-certs\") pod \"calico-node-q2p5s\" (UID: \"95d6ff56-7378-4102-8c9b-772a90b088a4\") " pod="calico-system/calico-node-q2p5s"
Feb 13 16:20:12.554927 kubelet[1777]: I0213 16:20:12.554222 1777 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/95d6ff56-7378-4102-8c9b-772a90b088a4-var-lib-calico\") pod \"calico-node-q2p5s\" (UID: \"95d6ff56-7378-4102-8c9b-772a90b088a4\") " pod="calico-system/calico-node-q2p5s"
Feb 13 16:20:12.556322 kubelet[1777]: I0213 16:20:12.554248 1777 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/95d6ff56-7378-4102-8c9b-772a90b088a4-xtables-lock\") pod \"calico-node-q2p5s\" (UID: \"95d6ff56-7378-4102-8c9b-772a90b088a4\") " pod="calico-system/calico-node-q2p5s"
Feb 13 16:20:12.561944 systemd[1]: Created slice kubepods-besteffort-pod95d6ff56_7378_4102_8c9b_772a90b088a4.slice - libcontainer container kubepods-besteffort-pod95d6ff56_7378_4102_8c9b_772a90b088a4.slice.
Feb 13 16:20:12.662448 kubelet[1777]: E0213 16:20:12.662402 1777 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 16:20:12.662448 kubelet[1777]: W0213 16:20:12.662441 1777 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 16:20:12.662706 kubelet[1777]: E0213 16:20:12.662504 1777 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 16:20:12.666642 kubelet[1777]: E0213 16:20:12.666608 1777 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 16:20:12.666642 kubelet[1777]: W0213 16:20:12.666634 1777 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 16:20:12.666843 kubelet[1777]: E0213 16:20:12.666750 1777 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 16:20:12.667085 kubelet[1777]: E0213 16:20:12.667067 1777 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 16:20:12.667134 kubelet[1777]: W0213 16:20:12.667084 1777 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 16:20:12.667196 kubelet[1777]: E0213 16:20:12.667179 1777 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 16:20:12.667506 kubelet[1777]: E0213 16:20:12.667484 1777 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 16:20:12.667506 kubelet[1777]: W0213 16:20:12.667501 1777 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 16:20:12.667738 kubelet[1777]: E0213 16:20:12.667645 1777 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 16:20:12.667816 kubelet[1777]: E0213 16:20:12.667753 1777 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 16:20:12.667816 kubelet[1777]: W0213 16:20:12.667766 1777 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 16:20:12.667816 kubelet[1777]: E0213 16:20:12.667784 1777 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 16:20:12.668094 kubelet[1777]: E0213 16:20:12.668079 1777 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 16:20:12.668140 kubelet[1777]: W0213 16:20:12.668095 1777 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 16:20:12.668140 kubelet[1777]: E0213 16:20:12.668111 1777 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 16:20:12.712534 kubelet[1777]: E0213 16:20:12.712419 1777 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 16:20:12.712534 kubelet[1777]: W0213 16:20:12.712445 1777 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 16:20:12.712534 kubelet[1777]: E0213 16:20:12.712469 1777 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 16:20:12.728533 kubelet[1777]: E0213 16:20:12.728318 1777 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 16:20:12.728533 kubelet[1777]: W0213 16:20:12.728353 1777 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 16:20:12.728533 kubelet[1777]: E0213 16:20:12.728402 1777 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 16:20:12.729247 kubelet[1777]: E0213 16:20:12.729113 1777 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 16:20:12.729247 kubelet[1777]: W0213 16:20:12.729146 1777 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 16:20:12.729247 kubelet[1777]: E0213 16:20:12.729167 1777 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 16:20:12.859210 kubelet[1777]: E0213 16:20:12.858346 1777 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Feb 13 16:20:12.860717 containerd[1468]: time="2025-02-13T16:20:12.859917512Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-92nl9,Uid:5a7ad843-b6cf-4400-8a90-bf43d15dd08b,Namespace:kube-system,Attempt:0,}"
Feb 13 16:20:12.867449 kubelet[1777]: E0213 16:20:12.866716 1777 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Feb 13 16:20:12.868147 containerd[1468]: time="2025-02-13T16:20:12.868098856Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-q2p5s,Uid:95d6ff56-7378-4102-8c9b-772a90b088a4,Namespace:calico-system,Attempt:0,}"
Feb 13 16:20:13.419958 containerd[1468]: time="2025-02-13T16:20:13.419868812Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Feb 13 16:20:13.421310 containerd[1468]: time="2025-02-13T16:20:13.421246953Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056"
Feb 13 16:20:13.423101 containerd[1468]: time="2025-02-13T16:20:13.422902800Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Feb 13 16:20:13.425065 containerd[1468]: time="2025-02-13T16:20:13.424974182Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Feb 13 16:20:13.425065 containerd[1468]: time="2025-02-13T16:20:13.425036010Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Feb 13 16:20:13.430762 containerd[1468]: time="2025-02-13T16:20:13.429081763Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Feb 13 16:20:13.432100 containerd[1468]: time="2025-02-13T16:20:13.432056527Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 571.961523ms"
Feb 13 16:20:13.434738 containerd[1468]: time="2025-02-13T16:20:13.434675872Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 565.716931ms"
Feb 13 16:20:13.539706 kubelet[1777]: E0213 16:20:13.539658 1777 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 16:20:13.569850 containerd[1468]: time="2025-02-13T16:20:13.569716217Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 13 16:20:13.571011 containerd[1468]: time="2025-02-13T16:20:13.570928157Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 13 16:20:13.571131 containerd[1468]: time="2025-02-13T16:20:13.571026165Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 16:20:13.572151 containerd[1468]: time="2025-02-13T16:20:13.572056104Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 16:20:13.574896 containerd[1468]: time="2025-02-13T16:20:13.574571350Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 13 16:20:13.574896 containerd[1468]: time="2025-02-13T16:20:13.574635203Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 13 16:20:13.574896 containerd[1468]: time="2025-02-13T16:20:13.574657985Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 16:20:13.576593 containerd[1468]: time="2025-02-13T16:20:13.576518693Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 16:20:13.658539 systemd[1]: Started cri-containerd-2fda0acaa01c908128d89f28207ff12c3343352ee0df6023912fd596b062b46d.scope - libcontainer container 2fda0acaa01c908128d89f28207ff12c3343352ee0df6023912fd596b062b46d.
Feb 13 16:20:13.660458 systemd[1]: Started cri-containerd-e466c0f56fde722f4dbfc544c7c81edb32e55ca97d9b58643db973dc6015f286.scope - libcontainer container e466c0f56fde722f4dbfc544c7c81edb32e55ca97d9b58643db973dc6015f286.
Feb 13 16:20:13.670107 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3130444181.mount: Deactivated successfully.
Feb 13 16:20:13.715398 containerd[1468]: time="2025-02-13T16:20:13.715359736Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-92nl9,Uid:5a7ad843-b6cf-4400-8a90-bf43d15dd08b,Namespace:kube-system,Attempt:0,} returns sandbox id \"2fda0acaa01c908128d89f28207ff12c3343352ee0df6023912fd596b062b46d\""
Feb 13 16:20:13.719527 kubelet[1777]: E0213 16:20:13.719387 1777 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Feb 13 16:20:13.721660 containerd[1468]: time="2025-02-13T16:20:13.721606539Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.6\""
Feb 13 16:20:13.731203 containerd[1468]: time="2025-02-13T16:20:13.731154178Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-q2p5s,Uid:95d6ff56-7378-4102-8c9b-772a90b088a4,Namespace:calico-system,Attempt:0,} returns sandbox id \"e466c0f56fde722f4dbfc544c7c81edb32e55ca97d9b58643db973dc6015f286\""
Feb 13 16:20:13.733416 kubelet[1777]: E0213 16:20:13.733181 1777 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Feb 13 16:20:14.541680 kubelet[1777]: E0213 16:20:14.541383 1777 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 16:20:14.674073 kubelet[1777]: E0213 16:20:14.673464 1777 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-2tkx2" podUID="6b5f959f-a7e4-4515-a03d-ec0af7f0538c"
Feb 13 16:20:14.841112 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2453754276.mount: Deactivated successfully.
Feb 13 16:20:15.541860 kubelet[1777]: E0213 16:20:15.541764 1777 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 16:20:15.576252 containerd[1468]: time="2025-02-13T16:20:15.574783707Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 16:20:15.576252 containerd[1468]: time="2025-02-13T16:20:15.575851640Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.6: active requests=0, bytes read=30229108"
Feb 13 16:20:15.576252 containerd[1468]: time="2025-02-13T16:20:15.576165388Z" level=info msg="ImageCreate event name:\"sha256:d2448f015605e48efb6b06ceaba0cb6d48bfd82e5d30ba357a9bd78c8566348a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 16:20:15.578498 containerd[1468]: time="2025-02-13T16:20:15.578441612Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:e72a4bc769f10b56ffdfe2cdb21d84d49d9bc194b3658648207998a5bd924b72\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 16:20:15.579676 containerd[1468]: time="2025-02-13T16:20:15.579603254Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.6\" with image id \"sha256:d2448f015605e48efb6b06ceaba0cb6d48bfd82e5d30ba357a9bd78c8566348a\", repo tag \"registry.k8s.io/kube-proxy:v1.31.6\", repo digest \"registry.k8s.io/kube-proxy@sha256:e72a4bc769f10b56ffdfe2cdb21d84d49d9bc194b3658648207998a5bd924b72\", size \"30228127\" in 1.85793552s"
Feb 13 16:20:15.579900 containerd[1468]: time="2025-02-13T16:20:15.579857474Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.6\" returns image reference \"sha256:d2448f015605e48efb6b06ceaba0cb6d48bfd82e5d30ba357a9bd78c8566348a\""
Feb 13 16:20:15.582231 containerd[1468]: time="2025-02-13T16:20:15.582133025Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\""
Feb 13 16:20:15.584058 containerd[1468]: time="2025-02-13T16:20:15.584002091Z" level=info msg="CreateContainer within sandbox \"2fda0acaa01c908128d89f28207ff12c3343352ee0df6023912fd596b062b46d\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Feb 13 16:20:15.612806 containerd[1468]: time="2025-02-13T16:20:15.612726906Z" level=info msg="CreateContainer within sandbox \"2fda0acaa01c908128d89f28207ff12c3343352ee0df6023912fd596b062b46d\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"ead7ad3a6ee9e60c419c5a7563693a402d06d477e374e90dc77ea95cb496fa29\""
Feb 13 16:20:15.614329 containerd[1468]: time="2025-02-13T16:20:15.614176191Z" level=info msg="StartContainer for \"ead7ad3a6ee9e60c419c5a7563693a402d06d477e374e90dc77ea95cb496fa29\""
Feb 13 16:20:15.671970 systemd[1]: Started cri-containerd-ead7ad3a6ee9e60c419c5a7563693a402d06d477e374e90dc77ea95cb496fa29.scope - libcontainer container ead7ad3a6ee9e60c419c5a7563693a402d06d477e374e90dc77ea95cb496fa29.
Feb 13 16:20:15.732843 containerd[1468]: time="2025-02-13T16:20:15.732684855Z" level=info msg="StartContainer for \"ead7ad3a6ee9e60c419c5a7563693a402d06d477e374e90dc77ea95cb496fa29\" returns successfully" Feb 13 16:20:16.542645 kubelet[1777]: E0213 16:20:16.542560 1777 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 16:20:16.672707 kubelet[1777]: E0213 16:20:16.672583 1777 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-2tkx2" podUID="6b5f959f-a7e4-4515-a03d-ec0af7f0538c" Feb 13 16:20:16.694300 kubelet[1777]: E0213 16:20:16.694242 1777 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Feb 13 16:20:16.780610 kubelet[1777]: E0213 16:20:16.780380 1777 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 16:20:16.780610 kubelet[1777]: W0213 16:20:16.780424 1777 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 16:20:16.780610 kubelet[1777]: E0213 16:20:16.780459 1777 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 16:20:16.781124 kubelet[1777]: E0213 16:20:16.780756 1777 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 16:20:16.781124 kubelet[1777]: W0213 16:20:16.780773 1777 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 16:20:16.781124 kubelet[1777]: E0213 16:20:16.780793 1777 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 16:20:16.781594 kubelet[1777]: E0213 16:20:16.781422 1777 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 16:20:16.781594 kubelet[1777]: W0213 16:20:16.781452 1777 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 16:20:16.781594 kubelet[1777]: E0213 16:20:16.781469 1777 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 16:20:16.781862 kubelet[1777]: E0213 16:20:16.781845 1777 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 16:20:16.781954 kubelet[1777]: W0213 16:20:16.781940 1777 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 16:20:16.782176 kubelet[1777]: E0213 16:20:16.782029 1777 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 16:20:16.782381 kubelet[1777]: E0213 16:20:16.782364 1777 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 16:20:16.782603 kubelet[1777]: W0213 16:20:16.782456 1777 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 16:20:16.782603 kubelet[1777]: E0213 16:20:16.782477 1777 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 16:20:16.782792 kubelet[1777]: E0213 16:20:16.782778 1777 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 16:20:16.782983 kubelet[1777]: W0213 16:20:16.782862 1777 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 16:20:16.782983 kubelet[1777]: E0213 16:20:16.782881 1777 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 16:20:16.783149 kubelet[1777]: E0213 16:20:16.783136 1777 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 16:20:16.783351 kubelet[1777]: W0213 16:20:16.783216 1777 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 16:20:16.783476 kubelet[1777]: E0213 16:20:16.783454 1777 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 16:20:16.783865 kubelet[1777]: E0213 16:20:16.783847 1777 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 16:20:16.783975 kubelet[1777]: W0213 16:20:16.783961 1777 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 16:20:16.784162 kubelet[1777]: E0213 16:20:16.784037 1777 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 16:20:16.784593 kubelet[1777]: E0213 16:20:16.784450 1777 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 16:20:16.784593 kubelet[1777]: W0213 16:20:16.784466 1777 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 16:20:16.784593 kubelet[1777]: E0213 16:20:16.784482 1777 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 16:20:16.784831 kubelet[1777]: E0213 16:20:16.784818 1777 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 16:20:16.784907 kubelet[1777]: W0213 16:20:16.784893 1777 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 16:20:16.784978 kubelet[1777]: E0213 16:20:16.784967 1777 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 16:20:16.785510 kubelet[1777]: E0213 16:20:16.785490 1777 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 16:20:16.785760 kubelet[1777]: W0213 16:20:16.785618 1777 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 16:20:16.785760 kubelet[1777]: E0213 16:20:16.785640 1777 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 16:20:16.785952 kubelet[1777]: E0213 16:20:16.785938 1777 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 16:20:16.786263 kubelet[1777]: W0213 16:20:16.786092 1777 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 16:20:16.786263 kubelet[1777]: E0213 16:20:16.786117 1777 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 16:20:16.786630 kubelet[1777]: E0213 16:20:16.786490 1777 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 16:20:16.786630 kubelet[1777]: W0213 16:20:16.786506 1777 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 16:20:16.786630 kubelet[1777]: E0213 16:20:16.786520 1777 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 16:20:16.786859 kubelet[1777]: E0213 16:20:16.786834 1777 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 16:20:16.786959 kubelet[1777]: W0213 16:20:16.786943 1777 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 16:20:16.787040 kubelet[1777]: E0213 16:20:16.787027 1777 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 16:20:16.787561 kubelet[1777]: E0213 16:20:16.787539 1777 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 16:20:16.787821 kubelet[1777]: W0213 16:20:16.787662 1777 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 16:20:16.787821 kubelet[1777]: E0213 16:20:16.787696 1777 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 16:20:16.788019 kubelet[1777]: E0213 16:20:16.788003 1777 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 16:20:16.788112 kubelet[1777]: W0213 16:20:16.788098 1777 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 16:20:16.788198 kubelet[1777]: E0213 16:20:16.788184 1777 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 16:20:16.788691 kubelet[1777]: E0213 16:20:16.788670 1777 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 16:20:16.788912 kubelet[1777]: W0213 16:20:16.788779 1777 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 16:20:16.788912 kubelet[1777]: E0213 16:20:16.788799 1777 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 16:20:16.789090 kubelet[1777]: E0213 16:20:16.789076 1777 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 16:20:16.789197 kubelet[1777]: W0213 16:20:16.789180 1777 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 16:20:16.789320 kubelet[1777]: E0213 16:20:16.789260 1777 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 16:20:16.789836 kubelet[1777]: E0213 16:20:16.789679 1777 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 16:20:16.789836 kubelet[1777]: W0213 16:20:16.789701 1777 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 16:20:16.789836 kubelet[1777]: E0213 16:20:16.789718 1777 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 16:20:16.790070 kubelet[1777]: E0213 16:20:16.790051 1777 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 16:20:16.790167 kubelet[1777]: W0213 16:20:16.790151 1777 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 16:20:16.790240 kubelet[1777]: E0213 16:20:16.790227 1777 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 16:20:16.791590 kubelet[1777]: E0213 16:20:16.791558 1777 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 16:20:16.791590 kubelet[1777]: W0213 16:20:16.791588 1777 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 16:20:16.791741 kubelet[1777]: E0213 16:20:16.791615 1777 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 16:20:16.792073 kubelet[1777]: E0213 16:20:16.792040 1777 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 16:20:16.792073 kubelet[1777]: W0213 16:20:16.792061 1777 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 16:20:16.792073 kubelet[1777]: E0213 16:20:16.792088 1777 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 16:20:16.792492 kubelet[1777]: E0213 16:20:16.792476 1777 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 16:20:16.792543 kubelet[1777]: W0213 16:20:16.792497 1777 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 16:20:16.792584 kubelet[1777]: E0213 16:20:16.792541 1777 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 16:20:16.793068 kubelet[1777]: E0213 16:20:16.792955 1777 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 16:20:16.793068 kubelet[1777]: W0213 16:20:16.792978 1777 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 16:20:16.793068 kubelet[1777]: E0213 16:20:16.793015 1777 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 16:20:16.795306 kubelet[1777]: E0213 16:20:16.793436 1777 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 16:20:16.795306 kubelet[1777]: W0213 16:20:16.793459 1777 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 16:20:16.795306 kubelet[1777]: E0213 16:20:16.793562 1777 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 16:20:16.795306 kubelet[1777]: E0213 16:20:16.793865 1777 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 16:20:16.795306 kubelet[1777]: W0213 16:20:16.793879 1777 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 16:20:16.795306 kubelet[1777]: E0213 16:20:16.793918 1777 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 16:20:16.795306 kubelet[1777]: E0213 16:20:16.794237 1777 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 16:20:16.795306 kubelet[1777]: W0213 16:20:16.794254 1777 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 16:20:16.795306 kubelet[1777]: E0213 16:20:16.794308 1777 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 16:20:16.795306 kubelet[1777]: E0213 16:20:16.794619 1777 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 16:20:16.796024 kubelet[1777]: W0213 16:20:16.794632 1777 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 16:20:16.796024 kubelet[1777]: E0213 16:20:16.794716 1777 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 16:20:16.796024 kubelet[1777]: E0213 16:20:16.795131 1777 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 16:20:16.796024 kubelet[1777]: W0213 16:20:16.795145 1777 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 16:20:16.796024 kubelet[1777]: E0213 16:20:16.795243 1777 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 16:20:16.796253 kubelet[1777]: E0213 16:20:16.796220 1777 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 16:20:16.796322 kubelet[1777]: W0213 16:20:16.796267 1777 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 16:20:16.796363 kubelet[1777]: E0213 16:20:16.796334 1777 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 16:20:16.797163 kubelet[1777]: E0213 16:20:16.797127 1777 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 16:20:16.797163 kubelet[1777]: W0213 16:20:16.797156 1777 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 16:20:16.797460 kubelet[1777]: E0213 16:20:16.797214 1777 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 16:20:16.797609 kubelet[1777]: E0213 16:20:16.797584 1777 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 16:20:16.797609 kubelet[1777]: W0213 16:20:16.797607 1777 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 16:20:16.797703 kubelet[1777]: E0213 16:20:16.797632 1777 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 16:20:17.328470 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4262860982.mount: Deactivated successfully. Feb 13 16:20:17.543723 kubelet[1777]: E0213 16:20:17.543596 1777 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 16:20:17.697033 kubelet[1777]: E0213 16:20:17.696675 1777 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Feb 13 16:20:17.699856 kubelet[1777]: E0213 16:20:17.699814 1777 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 16:20:17.700170 kubelet[1777]: W0213 16:20:17.700137 1777 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 16:20:17.700358 kubelet[1777]: E0213 16:20:17.700333 1777 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 16:20:17.702307 kubelet[1777]: E0213 16:20:17.702090 1777 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 16:20:17.702307 kubelet[1777]: W0213 16:20:17.702121 1777 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 16:20:17.702307 kubelet[1777]: E0213 16:20:17.702147 1777 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 16:20:17.702596 kubelet[1777]: E0213 16:20:17.702577 1777 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 16:20:17.702813 kubelet[1777]: W0213 16:20:17.702647 1777 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 16:20:17.702813 kubelet[1777]: E0213 16:20:17.702667 1777 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 16:20:17.703180 kubelet[1777]: E0213 16:20:17.702992 1777 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 16:20:17.703180 kubelet[1777]: W0213 16:20:17.703006 1777 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 16:20:17.703180 kubelet[1777]: E0213 16:20:17.703032 1777 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 16:20:17.703644 kubelet[1777]: E0213 16:20:17.703455 1777 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 16:20:17.703644 kubelet[1777]: W0213 16:20:17.703606 1777 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 16:20:17.704015 kubelet[1777]: E0213 16:20:17.703837 1777 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 16:20:17.704509 kubelet[1777]: E0213 16:20:17.704390 1777 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 16:20:17.704509 kubelet[1777]: W0213 16:20:17.704405 1777 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 16:20:17.704509 kubelet[1777]: E0213 16:20:17.704421 1777 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 16:20:17.705573 kubelet[1777]: E0213 16:20:17.705244 1777 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 16:20:17.705573 kubelet[1777]: W0213 16:20:17.705287 1777 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 16:20:17.705573 kubelet[1777]: E0213 16:20:17.705333 1777 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 16:20:17.706067 kubelet[1777]: E0213 16:20:17.705831 1777 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 16:20:17.706067 kubelet[1777]: W0213 16:20:17.705858 1777 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 16:20:17.706067 kubelet[1777]: E0213 16:20:17.705875 1777 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 16:20:17.706571 kubelet[1777]: E0213 16:20:17.706414 1777 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 16:20:17.706571 kubelet[1777]: W0213 16:20:17.706498 1777 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 16:20:17.706571 kubelet[1777]: E0213 16:20:17.706512 1777 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 16:20:17.707586 kubelet[1777]: E0213 16:20:17.707407 1777 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 16:20:17.707586 kubelet[1777]: W0213 16:20:17.707426 1777 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 16:20:17.707586 kubelet[1777]: E0213 16:20:17.707443 1777 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 16:20:17.708093 kubelet[1777]: E0213 16:20:17.707797 1777 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 16:20:17.708093 kubelet[1777]: W0213 16:20:17.707811 1777 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 16:20:17.708093 kubelet[1777]: E0213 16:20:17.707824 1777 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 16:20:17.708607 kubelet[1777]: E0213 16:20:17.708337 1777 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 16:20:17.708607 kubelet[1777]: W0213 16:20:17.708355 1777 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 16:20:17.708607 kubelet[1777]: E0213 16:20:17.708372 1777 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 16:20:17.709193 kubelet[1777]: E0213 16:20:17.709026 1777 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 16:20:17.709193 kubelet[1777]: W0213 16:20:17.709046 1777 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 16:20:17.709193 kubelet[1777]: E0213 16:20:17.709060 1777 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 16:20:17.710022 kubelet[1777]: E0213 16:20:17.709558 1777 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 16:20:17.710022 kubelet[1777]: W0213 16:20:17.709578 1777 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 16:20:17.710022 kubelet[1777]: E0213 16:20:17.709595 1777 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 16:20:17.710672 kubelet[1777]: E0213 16:20:17.710514 1777 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 16:20:17.710672 kubelet[1777]: W0213 16:20:17.710535 1777 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 16:20:17.710672 kubelet[1777]: E0213 16:20:17.710554 1777 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 16:20:17.711389 kubelet[1777]: E0213 16:20:17.711163 1777 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 16:20:17.711389 kubelet[1777]: W0213 16:20:17.711198 1777 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 16:20:17.711389 kubelet[1777]: E0213 16:20:17.711215 1777 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 16:20:17.712425 kubelet[1777]: E0213 16:20:17.712202 1777 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 16:20:17.712425 kubelet[1777]: W0213 16:20:17.712223 1777 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 16:20:17.712425 kubelet[1777]: E0213 16:20:17.712242 1777 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 16:20:17.712882 kubelet[1777]: E0213 16:20:17.712697 1777 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 16:20:17.712882 kubelet[1777]: W0213 16:20:17.712720 1777 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 16:20:17.712882 kubelet[1777]: E0213 16:20:17.712734 1777 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 16:20:17.713872 kubelet[1777]: E0213 16:20:17.713504 1777 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 16:20:17.713872 kubelet[1777]: W0213 16:20:17.713524 1777 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 16:20:17.713872 kubelet[1777]: E0213 16:20:17.713541 1777 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 16:20:17.714364 kubelet[1777]: E0213 16:20:17.714340 1777 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 16:20:17.714593 kubelet[1777]: W0213 16:20:17.714481 1777 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 16:20:17.714593 kubelet[1777]: E0213 16:20:17.714519 1777 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 16:20:17.800501 kubelet[1777]: E0213 16:20:17.800421 1777 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 16:20:17.800501 kubelet[1777]: W0213 16:20:17.800483 1777 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 16:20:17.800767 kubelet[1777]: E0213 16:20:17.800518 1777 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 16:20:17.801035 kubelet[1777]: E0213 16:20:17.800943 1777 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 16:20:17.801035 kubelet[1777]: W0213 16:20:17.800967 1777 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 16:20:17.801035 kubelet[1777]: E0213 16:20:17.800992 1777 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 16:20:17.801378 kubelet[1777]: E0213 16:20:17.801357 1777 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 16:20:17.801378 kubelet[1777]: W0213 16:20:17.801378 1777 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 16:20:17.801461 kubelet[1777]: E0213 16:20:17.801402 1777 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 16:20:17.801896 kubelet[1777]: E0213 16:20:17.801683 1777 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 16:20:17.801896 kubelet[1777]: W0213 16:20:17.801700 1777 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 16:20:17.801896 kubelet[1777]: E0213 16:20:17.801721 1777 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 16:20:17.802306 kubelet[1777]: E0213 16:20:17.801974 1777 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 16:20:17.802306 kubelet[1777]: W0213 16:20:17.801987 1777 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 16:20:17.802306 kubelet[1777]: E0213 16:20:17.802005 1777 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 16:20:17.802400 kubelet[1777]: E0213 16:20:17.802364 1777 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 16:20:17.802400 kubelet[1777]: W0213 16:20:17.802379 1777 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 16:20:17.802400 kubelet[1777]: E0213 16:20:17.802396 1777 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 16:20:17.803191 kubelet[1777]: E0213 16:20:17.802998 1777 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 16:20:17.803191 kubelet[1777]: W0213 16:20:17.803018 1777 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 16:20:17.803191 kubelet[1777]: E0213 16:20:17.803041 1777 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 16:20:17.803788 kubelet[1777]: E0213 16:20:17.803410 1777 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 16:20:17.803788 kubelet[1777]: W0213 16:20:17.803425 1777 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 16:20:17.803788 kubelet[1777]: E0213 16:20:17.803448 1777 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 16:20:17.803788 kubelet[1777]: E0213 16:20:17.803704 1777 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 16:20:17.804348 kubelet[1777]: W0213 16:20:17.804142 1777 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 16:20:17.804348 kubelet[1777]: E0213 16:20:17.804225 1777 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 16:20:17.804696 kubelet[1777]: E0213 16:20:17.804530 1777 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 16:20:17.804696 kubelet[1777]: W0213 16:20:17.804545 1777 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 16:20:17.804696 kubelet[1777]: E0213 16:20:17.804581 1777 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 16:20:17.805562 kubelet[1777]: E0213 16:20:17.805310 1777 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 16:20:17.805562 kubelet[1777]: W0213 16:20:17.805340 1777 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 16:20:17.805562 kubelet[1777]: E0213 16:20:17.805362 1777 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 16:20:17.806259 kubelet[1777]: E0213 16:20:17.806115 1777 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 16:20:17.806259 kubelet[1777]: W0213 16:20:17.806229 1777 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 16:20:17.806259 kubelet[1777]: E0213 16:20:17.806250 1777 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 16:20:17.809827 containerd[1468]: time="2025-02-13T16:20:17.809718270Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 16:20:17.812515 containerd[1468]: time="2025-02-13T16:20:17.812429231Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1: active requests=0, bytes read=6855343" Feb 13 16:20:17.813418 containerd[1468]: time="2025-02-13T16:20:17.813360495Z" level=info msg="ImageCreate event name:\"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 16:20:17.819646 containerd[1468]: time="2025-02-13T16:20:17.819140909Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 16:20:17.820768 containerd[1468]: time="2025-02-13T16:20:17.820695622Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" with image id \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\", repo tag 
\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\", size \"6855165\" in 2.238510229s" Feb 13 16:20:17.820768 containerd[1468]: time="2025-02-13T16:20:17.820763915Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" returns image reference \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\"" Feb 13 16:20:17.826528 containerd[1468]: time="2025-02-13T16:20:17.826467387Z" level=info msg="CreateContainer within sandbox \"e466c0f56fde722f4dbfc544c7c81edb32e55ca97d9b58643db973dc6015f286\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Feb 13 16:20:17.848870 containerd[1468]: time="2025-02-13T16:20:17.848679220Z" level=info msg="CreateContainer within sandbox \"e466c0f56fde722f4dbfc544c7c81edb32e55ca97d9b58643db973dc6015f286\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"9d9fd8de18797d9917f9a94a584379ee90c2832c7cf8b65abd835a7d33b21e00\"" Feb 13 16:20:17.851307 containerd[1468]: time="2025-02-13T16:20:17.849791122Z" level=info msg="StartContainer for \"9d9fd8de18797d9917f9a94a584379ee90c2832c7cf8b65abd835a7d33b21e00\"" Feb 13 16:20:17.903123 systemd[1]: Started cri-containerd-9d9fd8de18797d9917f9a94a584379ee90c2832c7cf8b65abd835a7d33b21e00.scope - libcontainer container 9d9fd8de18797d9917f9a94a584379ee90c2832c7cf8b65abd835a7d33b21e00. Feb 13 16:20:17.959808 containerd[1468]: time="2025-02-13T16:20:17.959647834Z" level=info msg="StartContainer for \"9d9fd8de18797d9917f9a94a584379ee90c2832c7cf8b65abd835a7d33b21e00\" returns successfully" Feb 13 16:20:17.991017 systemd[1]: cri-containerd-9d9fd8de18797d9917f9a94a584379ee90c2832c7cf8b65abd835a7d33b21e00.scope: Deactivated successfully. 
Feb 13 16:20:18.118924 containerd[1468]: time="2025-02-13T16:20:18.118617243Z" level=info msg="shim disconnected" id=9d9fd8de18797d9917f9a94a584379ee90c2832c7cf8b65abd835a7d33b21e00 namespace=k8s.io Feb 13 16:20:18.118924 containerd[1468]: time="2025-02-13T16:20:18.118695551Z" level=warning msg="cleaning up after shim disconnected" id=9d9fd8de18797d9917f9a94a584379ee90c2832c7cf8b65abd835a7d33b21e00 namespace=k8s.io Feb 13 16:20:18.118924 containerd[1468]: time="2025-02-13T16:20:18.118709187Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 16:20:18.257663 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9d9fd8de18797d9917f9a94a584379ee90c2832c7cf8b65abd835a7d33b21e00-rootfs.mount: Deactivated successfully. Feb 13 16:20:18.544212 kubelet[1777]: E0213 16:20:18.543953 1777 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 16:20:18.672756 kubelet[1777]: E0213 16:20:18.672659 1777 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-2tkx2" podUID="6b5f959f-a7e4-4515-a03d-ec0af7f0538c" Feb 13 16:20:18.701708 kubelet[1777]: E0213 16:20:18.701655 1777 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Feb 13 16:20:18.702932 containerd[1468]: time="2025-02-13T16:20:18.702860737Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\"" Feb 13 16:20:18.860414 kubelet[1777]: I0213 16:20:18.860158 1777 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-92nl9" podStartSLOduration=4.999819482 podStartE2EDuration="6.86012923s" podCreationTimestamp="2025-02-13 16:20:12 +0000 UTC" 
firstStartedPulling="2025-02-13 16:20:13.720833473 +0000 UTC m=+2.831801126" lastFinishedPulling="2025-02-13 16:20:15.581143232 +0000 UTC m=+4.692110874" observedRunningTime="2025-02-13 16:20:16.725027917 +0000 UTC m=+5.835995584" watchObservedRunningTime="2025-02-13 16:20:18.86012923 +0000 UTC m=+7.971096892" Feb 13 16:20:19.049782 systemd-resolved[1321]: Using degraded feature set UDP instead of UDP+EDNS0 for DNS server 67.207.67.3. Feb 13 16:20:19.544869 kubelet[1777]: E0213 16:20:19.544798 1777 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 16:20:20.545122 kubelet[1777]: E0213 16:20:20.545023 1777 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 16:20:20.673451 kubelet[1777]: E0213 16:20:20.672953 1777 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-2tkx2" podUID="6b5f959f-a7e4-4515-a03d-ec0af7f0538c" Feb 13 16:20:21.545419 kubelet[1777]: E0213 16:20:21.545253 1777 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 16:20:22.122757 systemd-resolved[1321]: Using degraded feature set UDP instead of UDP+EDNS0 for DNS server 67.207.67.2. 
Feb 13 16:20:22.547106 kubelet[1777]: E0213 16:20:22.546572 1777 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 16:20:22.673893 kubelet[1777]: E0213 16:20:22.673451 1777 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-2tkx2" podUID="6b5f959f-a7e4-4515-a03d-ec0af7f0538c" Feb 13 16:20:23.547943 kubelet[1777]: E0213 16:20:23.547717 1777 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 16:20:23.591956 containerd[1468]: time="2025-02-13T16:20:23.591862924Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 16:20:23.593080 containerd[1468]: time="2025-02-13T16:20:23.593022774Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.1: active requests=0, bytes read=96154154" Feb 13 16:20:23.593692 containerd[1468]: time="2025-02-13T16:20:23.593373194Z" level=info msg="ImageCreate event name:\"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 16:20:23.596547 containerd[1468]: time="2025-02-13T16:20:23.596494952Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 16:20:23.597671 containerd[1468]: time="2025-02-13T16:20:23.597628556Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.1\" with image id \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.1\", repo digest 
\"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\", size \"97647238\" in 4.894696961s" Feb 13 16:20:23.597671 containerd[1468]: time="2025-02-13T16:20:23.597663956Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\" returns image reference \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\"" Feb 13 16:20:23.601593 containerd[1468]: time="2025-02-13T16:20:23.601549867Z" level=info msg="CreateContainer within sandbox \"e466c0f56fde722f4dbfc544c7c81edb32e55ca97d9b58643db973dc6015f286\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Feb 13 16:20:23.622651 containerd[1468]: time="2025-02-13T16:20:23.622484109Z" level=info msg="CreateContainer within sandbox \"e466c0f56fde722f4dbfc544c7c81edb32e55ca97d9b58643db973dc6015f286\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"0bf074aa88991f0d4a8b4c24682cade735a3076815ca459fcf6501e0425da456\"" Feb 13 16:20:23.625044 containerd[1468]: time="2025-02-13T16:20:23.623394001Z" level=info msg="StartContainer for \"0bf074aa88991f0d4a8b4c24682cade735a3076815ca459fcf6501e0425da456\"" Feb 13 16:20:23.675556 systemd[1]: Started cri-containerd-0bf074aa88991f0d4a8b4c24682cade735a3076815ca459fcf6501e0425da456.scope - libcontainer container 0bf074aa88991f0d4a8b4c24682cade735a3076815ca459fcf6501e0425da456. Feb 13 16:20:23.727943 containerd[1468]: time="2025-02-13T16:20:23.727522190Z" level=info msg="StartContainer for \"0bf074aa88991f0d4a8b4c24682cade735a3076815ca459fcf6501e0425da456\" returns successfully" Feb 13 16:20:24.436488 systemd[1]: cri-containerd-0bf074aa88991f0d4a8b4c24682cade735a3076815ca459fcf6501e0425da456.scope: Deactivated successfully. Feb 13 16:20:24.465905 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0bf074aa88991f0d4a8b4c24682cade735a3076815ca459fcf6501e0425da456-rootfs.mount: Deactivated successfully. 
Feb 13 16:20:24.526988 containerd[1468]: time="2025-02-13T16:20:24.526614752Z" level=info msg="shim disconnected" id=0bf074aa88991f0d4a8b4c24682cade735a3076815ca459fcf6501e0425da456 namespace=k8s.io Feb 13 16:20:24.526988 containerd[1468]: time="2025-02-13T16:20:24.526686109Z" level=warning msg="cleaning up after shim disconnected" id=0bf074aa88991f0d4a8b4c24682cade735a3076815ca459fcf6501e0425da456 namespace=k8s.io Feb 13 16:20:24.526988 containerd[1468]: time="2025-02-13T16:20:24.526695108Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 16:20:24.530375 kubelet[1777]: I0213 16:20:24.529588 1777 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Feb 13 16:20:24.548914 kubelet[1777]: E0213 16:20:24.548855 1777 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 16:20:24.680838 systemd[1]: Created slice kubepods-besteffort-pod6b5f959f_a7e4_4515_a03d_ec0af7f0538c.slice - libcontainer container kubepods-besteffort-pod6b5f959f_a7e4_4515_a03d_ec0af7f0538c.slice. 
Feb 13 16:20:24.684849 containerd[1468]: time="2025-02-13T16:20:24.684330495Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-2tkx2,Uid:6b5f959f-a7e4-4515-a03d-ec0af7f0538c,Namespace:calico-system,Attempt:0,}" Feb 13 16:20:24.728636 kubelet[1777]: E0213 16:20:24.727049 1777 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Feb 13 16:20:24.731137 containerd[1468]: time="2025-02-13T16:20:24.730815142Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\"" Feb 13 16:20:24.767033 containerd[1468]: time="2025-02-13T16:20:24.766973346Z" level=error msg="Failed to destroy network for sandbox \"613977172e3b6e5b9170fe9dbb7a3e6916ed20a773f6ffe12c1aa7c3af75c1ff\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 16:20:24.769001 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-613977172e3b6e5b9170fe9dbb7a3e6916ed20a773f6ffe12c1aa7c3af75c1ff-shm.mount: Deactivated successfully. 
Feb 13 16:20:24.769804 containerd[1468]: time="2025-02-13T16:20:24.769751603Z" level=error msg="encountered an error cleaning up failed sandbox \"613977172e3b6e5b9170fe9dbb7a3e6916ed20a773f6ffe12c1aa7c3af75c1ff\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 16:20:24.770194 containerd[1468]: time="2025-02-13T16:20:24.769844049Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-2tkx2,Uid:6b5f959f-a7e4-4515-a03d-ec0af7f0538c,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"613977172e3b6e5b9170fe9dbb7a3e6916ed20a773f6ffe12c1aa7c3af75c1ff\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 16:20:24.770237 kubelet[1777]: E0213 16:20:24.770062 1777 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"613977172e3b6e5b9170fe9dbb7a3e6916ed20a773f6ffe12c1aa7c3af75c1ff\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 16:20:24.770237 kubelet[1777]: E0213 16:20:24.770136 1777 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"613977172e3b6e5b9170fe9dbb7a3e6916ed20a773f6ffe12c1aa7c3af75c1ff\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-2tkx2" Feb 13 16:20:24.770237 kubelet[1777]: E0213 16:20:24.770157 1777 
kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"613977172e3b6e5b9170fe9dbb7a3e6916ed20a773f6ffe12c1aa7c3af75c1ff\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-2tkx2" Feb 13 16:20:24.770373 kubelet[1777]: E0213 16:20:24.770209 1777 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-2tkx2_calico-system(6b5f959f-a7e4-4515-a03d-ec0af7f0538c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-2tkx2_calico-system(6b5f959f-a7e4-4515-a03d-ec0af7f0538c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"613977172e3b6e5b9170fe9dbb7a3e6916ed20a773f6ffe12c1aa7c3af75c1ff\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-2tkx2" podUID="6b5f959f-a7e4-4515-a03d-ec0af7f0538c" Feb 13 16:20:25.389190 systemd[1]: Created slice kubepods-besteffort-pod71a71e38_f3cb_4eae_b44c_ca79644fd5ec.slice - libcontainer container kubepods-besteffort-pod71a71e38_f3cb_4eae_b44c_ca79644fd5ec.slice. 
Feb 13 16:20:25.549603 kubelet[1777]: E0213 16:20:25.549490 1777 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 16:20:25.555563 kubelet[1777]: I0213 16:20:25.555500 1777 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rt8lk\" (UniqueName: \"kubernetes.io/projected/71a71e38-f3cb-4eae-b44c-ca79644fd5ec-kube-api-access-rt8lk\") pod \"nginx-deployment-8587fbcb89-ffj9b\" (UID: \"71a71e38-f3cb-4eae-b44c-ca79644fd5ec\") " pod="default/nginx-deployment-8587fbcb89-ffj9b" Feb 13 16:20:25.692855 containerd[1468]: time="2025-02-13T16:20:25.692723519Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-ffj9b,Uid:71a71e38-f3cb-4eae-b44c-ca79644fd5ec,Namespace:default,Attempt:0,}" Feb 13 16:20:25.731544 kubelet[1777]: I0213 16:20:25.730694 1777 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="613977172e3b6e5b9170fe9dbb7a3e6916ed20a773f6ffe12c1aa7c3af75c1ff" Feb 13 16:20:25.732482 containerd[1468]: time="2025-02-13T16:20:25.731994500Z" level=info msg="StopPodSandbox for \"613977172e3b6e5b9170fe9dbb7a3e6916ed20a773f6ffe12c1aa7c3af75c1ff\"" Feb 13 16:20:25.732482 containerd[1468]: time="2025-02-13T16:20:25.732303782Z" level=info msg="Ensure that sandbox 613977172e3b6e5b9170fe9dbb7a3e6916ed20a773f6ffe12c1aa7c3af75c1ff in task-service has been cleanup successfully" Feb 13 16:20:25.734601 containerd[1468]: time="2025-02-13T16:20:25.734564959Z" level=info msg="TearDown network for sandbox \"613977172e3b6e5b9170fe9dbb7a3e6916ed20a773f6ffe12c1aa7c3af75c1ff\" successfully" Feb 13 16:20:25.734723 containerd[1468]: time="2025-02-13T16:20:25.734711092Z" level=info msg="StopPodSandbox for \"613977172e3b6e5b9170fe9dbb7a3e6916ed20a773f6ffe12c1aa7c3af75c1ff\" returns successfully" Feb 13 16:20:25.735371 systemd[1]: run-netns-cni\x2d0810a02b\x2ddf30\x2dd625\x2d37c4\x2d27cb5003dc45.mount: 
Deactivated successfully. Feb 13 16:20:25.737736 containerd[1468]: time="2025-02-13T16:20:25.737695447Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-2tkx2,Uid:6b5f959f-a7e4-4515-a03d-ec0af7f0538c,Namespace:calico-system,Attempt:1,}" Feb 13 16:20:25.807892 containerd[1468]: time="2025-02-13T16:20:25.807847393Z" level=error msg="Failed to destroy network for sandbox \"3f3546388bfc3a39b64c3decef81428518c51260b78591eea1db426a73c657bc\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 16:20:25.808417 containerd[1468]: time="2025-02-13T16:20:25.808362694Z" level=error msg="encountered an error cleaning up failed sandbox \"3f3546388bfc3a39b64c3decef81428518c51260b78591eea1db426a73c657bc\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 16:20:25.808730 containerd[1468]: time="2025-02-13T16:20:25.808586768Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-ffj9b,Uid:71a71e38-f3cb-4eae-b44c-ca79644fd5ec,Namespace:default,Attempt:0,} failed, error" error="failed to setup network for sandbox \"3f3546388bfc3a39b64c3decef81428518c51260b78591eea1db426a73c657bc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 16:20:25.809043 kubelet[1777]: E0213 16:20:25.809004 1777 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3f3546388bfc3a39b64c3decef81428518c51260b78591eea1db426a73c657bc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 16:20:25.809492 kubelet[1777]: E0213 16:20:25.809340 1777 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3f3546388bfc3a39b64c3decef81428518c51260b78591eea1db426a73c657bc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-8587fbcb89-ffj9b" Feb 13 16:20:25.809492 kubelet[1777]: E0213 16:20:25.809373 1777 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3f3546388bfc3a39b64c3decef81428518c51260b78591eea1db426a73c657bc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-8587fbcb89-ffj9b" Feb 13 16:20:25.809492 kubelet[1777]: E0213 16:20:25.809442 1777 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-8587fbcb89-ffj9b_default(71a71e38-f3cb-4eae-b44c-ca79644fd5ec)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-8587fbcb89-ffj9b_default(71a71e38-f3cb-4eae-b44c-ca79644fd5ec)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3f3546388bfc3a39b64c3decef81428518c51260b78591eea1db426a73c657bc\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-8587fbcb89-ffj9b" podUID="71a71e38-f3cb-4eae-b44c-ca79644fd5ec" Feb 13 16:20:25.836852 containerd[1468]: time="2025-02-13T16:20:25.836781480Z" level=error 
msg="Failed to destroy network for sandbox \"2ba7ed0ce9016048fae37c48969f0ca85bfd4730ba82e1f3ea9ed27c3550356d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 16:20:25.837321 containerd[1468]: time="2025-02-13T16:20:25.837288721Z" level=error msg="encountered an error cleaning up failed sandbox \"2ba7ed0ce9016048fae37c48969f0ca85bfd4730ba82e1f3ea9ed27c3550356d\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 16:20:25.837395 containerd[1468]: time="2025-02-13T16:20:25.837360146Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-2tkx2,Uid:6b5f959f-a7e4-4515-a03d-ec0af7f0538c,Namespace:calico-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"2ba7ed0ce9016048fae37c48969f0ca85bfd4730ba82e1f3ea9ed27c3550356d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 16:20:25.838117 kubelet[1777]: E0213 16:20:25.837618 1777 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2ba7ed0ce9016048fae37c48969f0ca85bfd4730ba82e1f3ea9ed27c3550356d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 16:20:25.838117 kubelet[1777]: E0213 16:20:25.837702 1777 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2ba7ed0ce9016048fae37c48969f0ca85bfd4730ba82e1f3ea9ed27c3550356d\": 
plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-2tkx2" Feb 13 16:20:25.838117 kubelet[1777]: E0213 16:20:25.837732 1777 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2ba7ed0ce9016048fae37c48969f0ca85bfd4730ba82e1f3ea9ed27c3550356d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-2tkx2" Feb 13 16:20:25.838295 kubelet[1777]: E0213 16:20:25.837809 1777 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-2tkx2_calico-system(6b5f959f-a7e4-4515-a03d-ec0af7f0538c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-2tkx2_calico-system(6b5f959f-a7e4-4515-a03d-ec0af7f0538c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2ba7ed0ce9016048fae37c48969f0ca85bfd4730ba82e1f3ea9ed27c3550356d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-2tkx2" podUID="6b5f959f-a7e4-4515-a03d-ec0af7f0538c" Feb 13 16:20:26.550010 kubelet[1777]: E0213 16:20:26.549933 1777 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 16:20:26.702193 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-2ba7ed0ce9016048fae37c48969f0ca85bfd4730ba82e1f3ea9ed27c3550356d-shm.mount: Deactivated successfully. 
Feb 13 16:20:26.702418 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-3f3546388bfc3a39b64c3decef81428518c51260b78591eea1db426a73c657bc-shm.mount: Deactivated successfully. Feb 13 16:20:26.734335 kubelet[1777]: I0213 16:20:26.734307 1777 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2ba7ed0ce9016048fae37c48969f0ca85bfd4730ba82e1f3ea9ed27c3550356d" Feb 13 16:20:26.735495 containerd[1468]: time="2025-02-13T16:20:26.735448451Z" level=info msg="StopPodSandbox for \"2ba7ed0ce9016048fae37c48969f0ca85bfd4730ba82e1f3ea9ed27c3550356d\"" Feb 13 16:20:26.736166 containerd[1468]: time="2025-02-13T16:20:26.736131785Z" level=info msg="Ensure that sandbox 2ba7ed0ce9016048fae37c48969f0ca85bfd4730ba82e1f3ea9ed27c3550356d in task-service has been cleanup successfully" Feb 13 16:20:26.738730 containerd[1468]: time="2025-02-13T16:20:26.738401681Z" level=info msg="TearDown network for sandbox \"2ba7ed0ce9016048fae37c48969f0ca85bfd4730ba82e1f3ea9ed27c3550356d\" successfully" Feb 13 16:20:26.738730 containerd[1468]: time="2025-02-13T16:20:26.738437633Z" level=info msg="StopPodSandbox for \"2ba7ed0ce9016048fae37c48969f0ca85bfd4730ba82e1f3ea9ed27c3550356d\" returns successfully" Feb 13 16:20:26.739678 systemd[1]: run-netns-cni\x2d870a2a99\x2da974\x2dea13\x2db748\x2d14f17b297de4.mount: Deactivated successfully. 
Feb 13 16:20:26.742977 containerd[1468]: time="2025-02-13T16:20:26.741729024Z" level=info msg="StopPodSandbox for \"613977172e3b6e5b9170fe9dbb7a3e6916ed20a773f6ffe12c1aa7c3af75c1ff\"" Feb 13 16:20:26.742977 containerd[1468]: time="2025-02-13T16:20:26.741834545Z" level=info msg="TearDown network for sandbox \"613977172e3b6e5b9170fe9dbb7a3e6916ed20a773f6ffe12c1aa7c3af75c1ff\" successfully" Feb 13 16:20:26.742977 containerd[1468]: time="2025-02-13T16:20:26.741844557Z" level=info msg="StopPodSandbox for \"613977172e3b6e5b9170fe9dbb7a3e6916ed20a773f6ffe12c1aa7c3af75c1ff\" returns successfully" Feb 13 16:20:26.743710 containerd[1468]: time="2025-02-13T16:20:26.743680127Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-2tkx2,Uid:6b5f959f-a7e4-4515-a03d-ec0af7f0538c,Namespace:calico-system,Attempt:2,}" Feb 13 16:20:26.752672 kubelet[1777]: I0213 16:20:26.751979 1777 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3f3546388bfc3a39b64c3decef81428518c51260b78591eea1db426a73c657bc" Feb 13 16:20:26.753758 containerd[1468]: time="2025-02-13T16:20:26.753613733Z" level=info msg="StopPodSandbox for \"3f3546388bfc3a39b64c3decef81428518c51260b78591eea1db426a73c657bc\"" Feb 13 16:20:26.754403 containerd[1468]: time="2025-02-13T16:20:26.754352458Z" level=info msg="Ensure that sandbox 3f3546388bfc3a39b64c3decef81428518c51260b78591eea1db426a73c657bc in task-service has been cleanup successfully" Feb 13 16:20:26.754818 containerd[1468]: time="2025-02-13T16:20:26.754784949Z" level=info msg="TearDown network for sandbox \"3f3546388bfc3a39b64c3decef81428518c51260b78591eea1db426a73c657bc\" successfully" Feb 13 16:20:26.755629 containerd[1468]: time="2025-02-13T16:20:26.755561148Z" level=info msg="StopPodSandbox for \"3f3546388bfc3a39b64c3decef81428518c51260b78591eea1db426a73c657bc\" returns successfully" Feb 13 16:20:26.758085 systemd[1]: run-netns-cni\x2d9735a3b1\x2d815c\x2df1b0\x2dfb1a\x2d4c97ed3dc71d.mount: Deactivated 
successfully. Feb 13 16:20:26.762009 containerd[1468]: time="2025-02-13T16:20:26.761399554Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-ffj9b,Uid:71a71e38-f3cb-4eae-b44c-ca79644fd5ec,Namespace:default,Attempt:1,}" Feb 13 16:20:26.928058 containerd[1468]: time="2025-02-13T16:20:26.927867648Z" level=error msg="Failed to destroy network for sandbox \"c43870bcade70f08d3f38c52fd559a83579f7ef3c7b0d2550a3102bc13421a51\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 16:20:26.930599 containerd[1468]: time="2025-02-13T16:20:26.928403977Z" level=error msg="encountered an error cleaning up failed sandbox \"c43870bcade70f08d3f38c52fd559a83579f7ef3c7b0d2550a3102bc13421a51\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 16:20:26.930599 containerd[1468]: time="2025-02-13T16:20:26.928498225Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-2tkx2,Uid:6b5f959f-a7e4-4515-a03d-ec0af7f0538c,Namespace:calico-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"c43870bcade70f08d3f38c52fd559a83579f7ef3c7b0d2550a3102bc13421a51\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 16:20:26.930599 containerd[1468]: time="2025-02-13T16:20:26.928868493Z" level=error msg="Failed to destroy network for sandbox \"eb321de64d01befa6251e4b2f21589949b7a30e6252f7a73010cc23e201ca1d8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has 
mounted /var/lib/calico/" Feb 13 16:20:26.930599 containerd[1468]: time="2025-02-13T16:20:26.929241471Z" level=error msg="encountered an error cleaning up failed sandbox \"eb321de64d01befa6251e4b2f21589949b7a30e6252f7a73010cc23e201ca1d8\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 16:20:26.930599 containerd[1468]: time="2025-02-13T16:20:26.929333499Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-ffj9b,Uid:71a71e38-f3cb-4eae-b44c-ca79644fd5ec,Namespace:default,Attempt:1,} failed, error" error="failed to setup network for sandbox \"eb321de64d01befa6251e4b2f21589949b7a30e6252f7a73010cc23e201ca1d8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 16:20:26.930889 kubelet[1777]: E0213 16:20:26.928750 1777 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c43870bcade70f08d3f38c52fd559a83579f7ef3c7b0d2550a3102bc13421a51\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 16:20:26.930889 kubelet[1777]: E0213 16:20:26.928829 1777 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c43870bcade70f08d3f38c52fd559a83579f7ef3c7b0d2550a3102bc13421a51\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-2tkx2" Feb 13 16:20:26.930889 kubelet[1777]: E0213 
16:20:26.928859 1777 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c43870bcade70f08d3f38c52fd559a83579f7ef3c7b0d2550a3102bc13421a51\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-2tkx2" Feb 13 16:20:26.930986 kubelet[1777]: E0213 16:20:26.928918 1777 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-2tkx2_calico-system(6b5f959f-a7e4-4515-a03d-ec0af7f0538c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-2tkx2_calico-system(6b5f959f-a7e4-4515-a03d-ec0af7f0538c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c43870bcade70f08d3f38c52fd559a83579f7ef3c7b0d2550a3102bc13421a51\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-2tkx2" podUID="6b5f959f-a7e4-4515-a03d-ec0af7f0538c" Feb 13 16:20:26.930986 kubelet[1777]: E0213 16:20:26.930471 1777 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"eb321de64d01befa6251e4b2f21589949b7a30e6252f7a73010cc23e201ca1d8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 16:20:26.930986 kubelet[1777]: E0213 16:20:26.930521 1777 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"eb321de64d01befa6251e4b2f21589949b7a30e6252f7a73010cc23e201ca1d8\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-8587fbcb89-ffj9b" Feb 13 16:20:26.931122 kubelet[1777]: E0213 16:20:26.930551 1777 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"eb321de64d01befa6251e4b2f21589949b7a30e6252f7a73010cc23e201ca1d8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-8587fbcb89-ffj9b" Feb 13 16:20:26.931346 kubelet[1777]: E0213 16:20:26.931214 1777 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-8587fbcb89-ffj9b_default(71a71e38-f3cb-4eae-b44c-ca79644fd5ec)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-8587fbcb89-ffj9b_default(71a71e38-f3cb-4eae-b44c-ca79644fd5ec)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"eb321de64d01befa6251e4b2f21589949b7a30e6252f7a73010cc23e201ca1d8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-8587fbcb89-ffj9b" podUID="71a71e38-f3cb-4eae-b44c-ca79644fd5ec" Feb 13 16:20:27.550446 kubelet[1777]: E0213 16:20:27.550392 1777 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 16:20:27.712223 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-c43870bcade70f08d3f38c52fd559a83579f7ef3c7b0d2550a3102bc13421a51-shm.mount: Deactivated successfully. 
Feb 13 16:20:27.759822 kubelet[1777]: I0213 16:20:27.759611 1777 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c43870bcade70f08d3f38c52fd559a83579f7ef3c7b0d2550a3102bc13421a51" Feb 13 16:20:27.762338 containerd[1468]: time="2025-02-13T16:20:27.762029651Z" level=info msg="StopPodSandbox for \"c43870bcade70f08d3f38c52fd559a83579f7ef3c7b0d2550a3102bc13421a51\"" Feb 13 16:20:27.762338 containerd[1468]: time="2025-02-13T16:20:27.762232623Z" level=info msg="Ensure that sandbox c43870bcade70f08d3f38c52fd559a83579f7ef3c7b0d2550a3102bc13421a51 in task-service has been cleanup successfully" Feb 13 16:20:27.765791 kubelet[1777]: I0213 16:20:27.765710 1777 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="eb321de64d01befa6251e4b2f21589949b7a30e6252f7a73010cc23e201ca1d8" Feb 13 16:20:27.767649 containerd[1468]: time="2025-02-13T16:20:27.767244229Z" level=info msg="StopPodSandbox for \"eb321de64d01befa6251e4b2f21589949b7a30e6252f7a73010cc23e201ca1d8\"" Feb 13 16:20:27.769445 containerd[1468]: time="2025-02-13T16:20:27.767675349Z" level=info msg="Ensure that sandbox eb321de64d01befa6251e4b2f21589949b7a30e6252f7a73010cc23e201ca1d8 in task-service has been cleanup successfully" Feb 13 16:20:27.769445 containerd[1468]: time="2025-02-13T16:20:27.768001825Z" level=info msg="TearDown network for sandbox \"c43870bcade70f08d3f38c52fd559a83579f7ef3c7b0d2550a3102bc13421a51\" successfully" Feb 13 16:20:27.769445 containerd[1468]: time="2025-02-13T16:20:27.768028761Z" level=info msg="StopPodSandbox for \"c43870bcade70f08d3f38c52fd559a83579f7ef3c7b0d2550a3102bc13421a51\" returns successfully" Feb 13 16:20:27.770255 systemd[1]: run-netns-cni\x2df295d752\x2dc4aa\x2de4e9\x2dfef8\x2d841e22259c2b.mount: Deactivated successfully. 
Feb 13 16:20:27.775872 containerd[1468]: time="2025-02-13T16:20:27.771339212Z" level=info msg="StopPodSandbox for \"2ba7ed0ce9016048fae37c48969f0ca85bfd4730ba82e1f3ea9ed27c3550356d\"" Feb 13 16:20:27.775872 containerd[1468]: time="2025-02-13T16:20:27.771404718Z" level=info msg="TearDown network for sandbox \"eb321de64d01befa6251e4b2f21589949b7a30e6252f7a73010cc23e201ca1d8\" successfully" Feb 13 16:20:27.775872 containerd[1468]: time="2025-02-13T16:20:27.771423254Z" level=info msg="StopPodSandbox for \"eb321de64d01befa6251e4b2f21589949b7a30e6252f7a73010cc23e201ca1d8\" returns successfully" Feb 13 16:20:27.775872 containerd[1468]: time="2025-02-13T16:20:27.771617312Z" level=info msg="TearDown network for sandbox \"2ba7ed0ce9016048fae37c48969f0ca85bfd4730ba82e1f3ea9ed27c3550356d\" successfully" Feb 13 16:20:27.775872 containerd[1468]: time="2025-02-13T16:20:27.771629272Z" level=info msg="StopPodSandbox for \"2ba7ed0ce9016048fae37c48969f0ca85bfd4730ba82e1f3ea9ed27c3550356d\" returns successfully" Feb 13 16:20:27.775872 containerd[1468]: time="2025-02-13T16:20:27.772110313Z" level=info msg="StopPodSandbox for \"3f3546388bfc3a39b64c3decef81428518c51260b78591eea1db426a73c657bc\"" Feb 13 16:20:27.775872 containerd[1468]: time="2025-02-13T16:20:27.772225375Z" level=info msg="TearDown network for sandbox \"3f3546388bfc3a39b64c3decef81428518c51260b78591eea1db426a73c657bc\" successfully" Feb 13 16:20:27.775872 containerd[1468]: time="2025-02-13T16:20:27.772240639Z" level=info msg="StopPodSandbox for \"3f3546388bfc3a39b64c3decef81428518c51260b78591eea1db426a73c657bc\" returns successfully" Feb 13 16:20:27.775872 containerd[1468]: time="2025-02-13T16:20:27.773072646Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-ffj9b,Uid:71a71e38-f3cb-4eae-b44c-ca79644fd5ec,Namespace:default,Attempt:2,}" Feb 13 16:20:27.778158 containerd[1468]: time="2025-02-13T16:20:27.777598901Z" level=info msg="StopPodSandbox for 
\"613977172e3b6e5b9170fe9dbb7a3e6916ed20a773f6ffe12c1aa7c3af75c1ff\"" Feb 13 16:20:27.778158 containerd[1468]: time="2025-02-13T16:20:27.777698717Z" level=info msg="TearDown network for sandbox \"613977172e3b6e5b9170fe9dbb7a3e6916ed20a773f6ffe12c1aa7c3af75c1ff\" successfully" Feb 13 16:20:27.778158 containerd[1468]: time="2025-02-13T16:20:27.777710110Z" level=info msg="StopPodSandbox for \"613977172e3b6e5b9170fe9dbb7a3e6916ed20a773f6ffe12c1aa7c3af75c1ff\" returns successfully" Feb 13 16:20:27.779113 containerd[1468]: time="2025-02-13T16:20:27.779037949Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-2tkx2,Uid:6b5f959f-a7e4-4515-a03d-ec0af7f0538c,Namespace:calico-system,Attempt:3,}" Feb 13 16:20:27.780770 systemd[1]: run-netns-cni\x2d14cd2a6b\x2de058\x2d1021\x2d9727\x2d25414f416e8a.mount: Deactivated successfully. Feb 13 16:20:27.979401 containerd[1468]: time="2025-02-13T16:20:27.979196151Z" level=error msg="Failed to destroy network for sandbox \"734beb6f31192e9c686f25635dd8565bc5ee8615abe921bcfde90fedcaf1351a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 16:20:27.980657 containerd[1468]: time="2025-02-13T16:20:27.980590869Z" level=error msg="encountered an error cleaning up failed sandbox \"734beb6f31192e9c686f25635dd8565bc5ee8615abe921bcfde90fedcaf1351a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 16:20:27.981671 containerd[1468]: time="2025-02-13T16:20:27.981315718Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-2tkx2,Uid:6b5f959f-a7e4-4515-a03d-ec0af7f0538c,Namespace:calico-system,Attempt:3,} failed, error" error="failed to setup network for sandbox 
\"734beb6f31192e9c686f25635dd8565bc5ee8615abe921bcfde90fedcaf1351a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 16:20:27.982583 kubelet[1777]: E0213 16:20:27.982543 1777 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"734beb6f31192e9c686f25635dd8565bc5ee8615abe921bcfde90fedcaf1351a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 16:20:27.982940 kubelet[1777]: E0213 16:20:27.982906 1777 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"734beb6f31192e9c686f25635dd8565bc5ee8615abe921bcfde90fedcaf1351a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-2tkx2" Feb 13 16:20:27.983556 kubelet[1777]: E0213 16:20:27.983086 1777 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"734beb6f31192e9c686f25635dd8565bc5ee8615abe921bcfde90fedcaf1351a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-2tkx2" Feb 13 16:20:27.983556 kubelet[1777]: E0213 16:20:27.983252 1777 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-2tkx2_calico-system(6b5f959f-a7e4-4515-a03d-ec0af7f0538c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"csi-node-driver-2tkx2_calico-system(6b5f959f-a7e4-4515-a03d-ec0af7f0538c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"734beb6f31192e9c686f25635dd8565bc5ee8615abe921bcfde90fedcaf1351a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-2tkx2" podUID="6b5f959f-a7e4-4515-a03d-ec0af7f0538c" Feb 13 16:20:28.004625 containerd[1468]: time="2025-02-13T16:20:28.004519752Z" level=error msg="Failed to destroy network for sandbox \"be89c3b11c4a10080d447e2be30f39284bf33215f9e1b46202b6f4701cc5e8c4\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 16:20:28.005745 containerd[1468]: time="2025-02-13T16:20:28.005478403Z" level=error msg="encountered an error cleaning up failed sandbox \"be89c3b11c4a10080d447e2be30f39284bf33215f9e1b46202b6f4701cc5e8c4\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 16:20:28.005745 containerd[1468]: time="2025-02-13T16:20:28.005630235Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-ffj9b,Uid:71a71e38-f3cb-4eae-b44c-ca79644fd5ec,Namespace:default,Attempt:2,} failed, error" error="failed to setup network for sandbox \"be89c3b11c4a10080d447e2be30f39284bf33215f9e1b46202b6f4701cc5e8c4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 16:20:28.006375 kubelet[1777]: E0213 16:20:28.006079 1777 log.go:32] "RunPodSandbox from runtime service failed" 
err="rpc error: code = Unknown desc = failed to setup network for sandbox \"be89c3b11c4a10080d447e2be30f39284bf33215f9e1b46202b6f4701cc5e8c4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 16:20:28.006375 kubelet[1777]: E0213 16:20:28.006214 1777 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"be89c3b11c4a10080d447e2be30f39284bf33215f9e1b46202b6f4701cc5e8c4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-8587fbcb89-ffj9b" Feb 13 16:20:28.006375 kubelet[1777]: E0213 16:20:28.006257 1777 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"be89c3b11c4a10080d447e2be30f39284bf33215f9e1b46202b6f4701cc5e8c4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-8587fbcb89-ffj9b" Feb 13 16:20:28.006919 kubelet[1777]: E0213 16:20:28.006394 1777 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-8587fbcb89-ffj9b_default(71a71e38-f3cb-4eae-b44c-ca79644fd5ec)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-8587fbcb89-ffj9b_default(71a71e38-f3cb-4eae-b44c-ca79644fd5ec)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"be89c3b11c4a10080d447e2be30f39284bf33215f9e1b46202b6f4701cc5e8c4\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and 
has mounted /var/lib/calico/\"" pod="default/nginx-deployment-8587fbcb89-ffj9b" podUID="71a71e38-f3cb-4eae-b44c-ca79644fd5ec"
Feb 13 16:20:28.265569 systemd-resolved[1321]: Using degraded feature set TCP instead of UDP for DNS server 67.207.67.3.
Feb 13 16:20:28.552429 kubelet[1777]: E0213 16:20:28.552301 1777 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 16:20:28.703963 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-734beb6f31192e9c686f25635dd8565bc5ee8615abe921bcfde90fedcaf1351a-shm.mount: Deactivated successfully.
Feb 13 16:20:28.704651 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-be89c3b11c4a10080d447e2be30f39284bf33215f9e1b46202b6f4701cc5e8c4-shm.mount: Deactivated successfully.
Feb 13 16:20:28.773340 kubelet[1777]: I0213 16:20:28.772423 1777 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="be89c3b11c4a10080d447e2be30f39284bf33215f9e1b46202b6f4701cc5e8c4"
Feb 13 16:20:28.773519 containerd[1468]: time="2025-02-13T16:20:28.773482227Z" level=info msg="StopPodSandbox for \"be89c3b11c4a10080d447e2be30f39284bf33215f9e1b46202b6f4701cc5e8c4\""
Feb 13 16:20:28.774088 containerd[1468]: time="2025-02-13T16:20:28.773706483Z" level=info msg="Ensure that sandbox be89c3b11c4a10080d447e2be30f39284bf33215f9e1b46202b6f4701cc5e8c4 in task-service has been cleanup successfully"
Feb 13 16:20:28.774088 containerd[1468]: time="2025-02-13T16:20:28.773984528Z" level=info msg="TearDown network for sandbox \"be89c3b11c4a10080d447e2be30f39284bf33215f9e1b46202b6f4701cc5e8c4\" successfully"
Feb 13 16:20:28.774088 containerd[1468]: time="2025-02-13T16:20:28.774010614Z" level=info msg="StopPodSandbox for \"be89c3b11c4a10080d447e2be30f39284bf33215f9e1b46202b6f4701cc5e8c4\" returns successfully"
Feb 13 16:20:28.777001 containerd[1468]: time="2025-02-13T16:20:28.776937050Z" level=info msg="StopPodSandbox for \"eb321de64d01befa6251e4b2f21589949b7a30e6252f7a73010cc23e201ca1d8\""
Feb 13 16:20:28.777193 containerd[1468]: time="2025-02-13T16:20:28.777093135Z" level=info msg="TearDown network for sandbox \"eb321de64d01befa6251e4b2f21589949b7a30e6252f7a73010cc23e201ca1d8\" successfully"
Feb 13 16:20:28.777193 containerd[1468]: time="2025-02-13T16:20:28.777106891Z" level=info msg="StopPodSandbox for \"eb321de64d01befa6251e4b2f21589949b7a30e6252f7a73010cc23e201ca1d8\" returns successfully"
Feb 13 16:20:28.778239 systemd[1]: run-netns-cni\x2d20975fba\x2dcbb8\x2d0f39\x2dd966\x2d4d6590467a19.mount: Deactivated successfully.
Feb 13 16:20:28.780102 containerd[1468]: time="2025-02-13T16:20:28.779059436Z" level=info msg="StopPodSandbox for \"3f3546388bfc3a39b64c3decef81428518c51260b78591eea1db426a73c657bc\""
Feb 13 16:20:28.780102 containerd[1468]: time="2025-02-13T16:20:28.779184277Z" level=info msg="TearDown network for sandbox \"3f3546388bfc3a39b64c3decef81428518c51260b78591eea1db426a73c657bc\" successfully"
Feb 13 16:20:28.780102 containerd[1468]: time="2025-02-13T16:20:28.779197169Z" level=info msg="StopPodSandbox for \"3f3546388bfc3a39b64c3decef81428518c51260b78591eea1db426a73c657bc\" returns successfully"
Feb 13 16:20:28.782121 containerd[1468]: time="2025-02-13T16:20:28.782058630Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-ffj9b,Uid:71a71e38-f3cb-4eae-b44c-ca79644fd5ec,Namespace:default,Attempt:3,}"
Feb 13 16:20:28.789738 kubelet[1777]: I0213 16:20:28.789694 1777 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="734beb6f31192e9c686f25635dd8565bc5ee8615abe921bcfde90fedcaf1351a"
Feb 13 16:20:28.791466 containerd[1468]: time="2025-02-13T16:20:28.790816055Z" level=info msg="StopPodSandbox for \"734beb6f31192e9c686f25635dd8565bc5ee8615abe921bcfde90fedcaf1351a\""
Feb 13 16:20:28.791466 containerd[1468]: time="2025-02-13T16:20:28.791218258Z" level=info msg="Ensure that sandbox 734beb6f31192e9c686f25635dd8565bc5ee8615abe921bcfde90fedcaf1351a in task-service has been cleanup successfully"
Feb 13 16:20:28.794378 containerd[1468]: time="2025-02-13T16:20:28.791847966Z" level=info msg="TearDown network for sandbox \"734beb6f31192e9c686f25635dd8565bc5ee8615abe921bcfde90fedcaf1351a\" successfully"
Feb 13 16:20:28.794587 containerd[1468]: time="2025-02-13T16:20:28.794554393Z" level=info msg="StopPodSandbox for \"734beb6f31192e9c686f25635dd8565bc5ee8615abe921bcfde90fedcaf1351a\" returns successfully"
Feb 13 16:20:28.796471 systemd[1]: run-netns-cni\x2d5112c7c3\x2d935d\x2dfadb\x2d89e5\x2d985f9e3763c6.mount: Deactivated successfully.
Feb 13 16:20:28.798386 containerd[1468]: time="2025-02-13T16:20:28.798337821Z" level=info msg="StopPodSandbox for \"c43870bcade70f08d3f38c52fd559a83579f7ef3c7b0d2550a3102bc13421a51\""
Feb 13 16:20:28.798774 containerd[1468]: time="2025-02-13T16:20:28.798675693Z" level=info msg="TearDown network for sandbox \"c43870bcade70f08d3f38c52fd559a83579f7ef3c7b0d2550a3102bc13421a51\" successfully"
Feb 13 16:20:28.798922 containerd[1468]: time="2025-02-13T16:20:28.798902210Z" level=info msg="StopPodSandbox for \"c43870bcade70f08d3f38c52fd559a83579f7ef3c7b0d2550a3102bc13421a51\" returns successfully"
Feb 13 16:20:28.800062 containerd[1468]: time="2025-02-13T16:20:28.799968827Z" level=info msg="StopPodSandbox for \"2ba7ed0ce9016048fae37c48969f0ca85bfd4730ba82e1f3ea9ed27c3550356d\""
Feb 13 16:20:28.800201 containerd[1468]: time="2025-02-13T16:20:28.800177435Z" level=info msg="TearDown network for sandbox \"2ba7ed0ce9016048fae37c48969f0ca85bfd4730ba82e1f3ea9ed27c3550356d\" successfully"
Feb 13 16:20:28.800201 containerd[1468]: time="2025-02-13T16:20:28.800198467Z" level=info msg="StopPodSandbox for \"2ba7ed0ce9016048fae37c48969f0ca85bfd4730ba82e1f3ea9ed27c3550356d\" returns successfully"
Feb 13 16:20:28.810555 containerd[1468]: time="2025-02-13T16:20:28.810334867Z" level=info msg="StopPodSandbox for \"613977172e3b6e5b9170fe9dbb7a3e6916ed20a773f6ffe12c1aa7c3af75c1ff\""
Feb 13 16:20:28.811815 containerd[1468]: time="2025-02-13T16:20:28.811447081Z" level=info msg="TearDown network for sandbox \"613977172e3b6e5b9170fe9dbb7a3e6916ed20a773f6ffe12c1aa7c3af75c1ff\" successfully"
Feb 13 16:20:28.811815 containerd[1468]: time="2025-02-13T16:20:28.811742974Z" level=info msg="StopPodSandbox for \"613977172e3b6e5b9170fe9dbb7a3e6916ed20a773f6ffe12c1aa7c3af75c1ff\" returns successfully"
Feb 13 16:20:28.819828 containerd[1468]: time="2025-02-13T16:20:28.819641014Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-2tkx2,Uid:6b5f959f-a7e4-4515-a03d-ec0af7f0538c,Namespace:calico-system,Attempt:4,}"
Feb 13 16:20:28.954650 containerd[1468]: time="2025-02-13T16:20:28.954501073Z" level=error msg="Failed to destroy network for sandbox \"dcadda5e14f40a3626ac0a163cb613c5d3ce97a541f42fc486c2c161d79079d2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 16:20:28.955224 containerd[1468]: time="2025-02-13T16:20:28.955021853Z" level=error msg="encountered an error cleaning up failed sandbox \"dcadda5e14f40a3626ac0a163cb613c5d3ce97a541f42fc486c2c161d79079d2\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 16:20:28.955224 containerd[1468]: time="2025-02-13T16:20:28.955111303Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-ffj9b,Uid:71a71e38-f3cb-4eae-b44c-ca79644fd5ec,Namespace:default,Attempt:3,} failed, error" error="failed to setup network for sandbox \"dcadda5e14f40a3626ac0a163cb613c5d3ce97a541f42fc486c2c161d79079d2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 16:20:28.955498 kubelet[1777]: E0213 16:20:28.955449 1777 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dcadda5e14f40a3626ac0a163cb613c5d3ce97a541f42fc486c2c161d79079d2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 16:20:28.955566 kubelet[1777]: E0213 16:20:28.955539 1777 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dcadda5e14f40a3626ac0a163cb613c5d3ce97a541f42fc486c2c161d79079d2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-8587fbcb89-ffj9b"
Feb 13 16:20:28.955616 kubelet[1777]: E0213 16:20:28.955574 1777 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dcadda5e14f40a3626ac0a163cb613c5d3ce97a541f42fc486c2c161d79079d2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-8587fbcb89-ffj9b"
Feb 13 16:20:28.955672 kubelet[1777]: E0213 16:20:28.955632 1777 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-8587fbcb89-ffj9b_default(71a71e38-f3cb-4eae-b44c-ca79644fd5ec)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-8587fbcb89-ffj9b_default(71a71e38-f3cb-4eae-b44c-ca79644fd5ec)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"dcadda5e14f40a3626ac0a163cb613c5d3ce97a541f42fc486c2c161d79079d2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-8587fbcb89-ffj9b" podUID="71a71e38-f3cb-4eae-b44c-ca79644fd5ec"
Feb 13 16:20:29.004381 containerd[1468]: time="2025-02-13T16:20:29.004200280Z" level=error msg="Failed to destroy network for sandbox \"6802ea10ff859deff17a9f0915adef9b72da232cd12ff3efd891a4a9d12617d0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 16:20:29.004835 containerd[1468]: time="2025-02-13T16:20:29.004716088Z" level=error msg="encountered an error cleaning up failed sandbox \"6802ea10ff859deff17a9f0915adef9b72da232cd12ff3efd891a4a9d12617d0\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 16:20:29.005080 containerd[1468]: time="2025-02-13T16:20:29.004830977Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-2tkx2,Uid:6b5f959f-a7e4-4515-a03d-ec0af7f0538c,Namespace:calico-system,Attempt:4,} failed, error" error="failed to setup network for sandbox \"6802ea10ff859deff17a9f0915adef9b72da232cd12ff3efd891a4a9d12617d0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 16:20:29.005180 kubelet[1777]: E0213 16:20:29.005122 1777 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6802ea10ff859deff17a9f0915adef9b72da232cd12ff3efd891a4a9d12617d0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 16:20:29.005554 kubelet[1777]: E0213 16:20:29.005204 1777 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6802ea10ff859deff17a9f0915adef9b72da232cd12ff3efd891a4a9d12617d0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-2tkx2"
Feb 13 16:20:29.005554 kubelet[1777]: E0213 16:20:29.005290 1777 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6802ea10ff859deff17a9f0915adef9b72da232cd12ff3efd891a4a9d12617d0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-2tkx2"
Feb 13 16:20:29.005554 kubelet[1777]: E0213 16:20:29.005353 1777 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-2tkx2_calico-system(6b5f959f-a7e4-4515-a03d-ec0af7f0538c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-2tkx2_calico-system(6b5f959f-a7e4-4515-a03d-ec0af7f0538c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6802ea10ff859deff17a9f0915adef9b72da232cd12ff3efd891a4a9d12617d0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-2tkx2" podUID="6b5f959f-a7e4-4515-a03d-ec0af7f0538c"
Feb 13 16:20:29.553559 kubelet[1777]: E0213 16:20:29.553417 1777 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 16:20:29.703257 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-dcadda5e14f40a3626ac0a163cb613c5d3ce97a541f42fc486c2c161d79079d2-shm.mount: Deactivated successfully.
Feb 13 16:20:29.797369 kubelet[1777]: I0213 16:20:29.797314 1777 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6802ea10ff859deff17a9f0915adef9b72da232cd12ff3efd891a4a9d12617d0"
Feb 13 16:20:29.798297 containerd[1468]: time="2025-02-13T16:20:29.798137075Z" level=info msg="StopPodSandbox for \"6802ea10ff859deff17a9f0915adef9b72da232cd12ff3efd891a4a9d12617d0\""
Feb 13 16:20:29.801352 containerd[1468]: time="2025-02-13T16:20:29.799058335Z" level=info msg="Ensure that sandbox 6802ea10ff859deff17a9f0915adef9b72da232cd12ff3efd891a4a9d12617d0 in task-service has been cleanup successfully"
Feb 13 16:20:29.801914 containerd[1468]: time="2025-02-13T16:20:29.801613977Z" level=info msg="TearDown network for sandbox \"6802ea10ff859deff17a9f0915adef9b72da232cd12ff3efd891a4a9d12617d0\" successfully"
Feb 13 16:20:29.801914 containerd[1468]: time="2025-02-13T16:20:29.801665820Z" level=info msg="StopPodSandbox for \"6802ea10ff859deff17a9f0915adef9b72da232cd12ff3efd891a4a9d12617d0\" returns successfully"
Feb 13 16:20:29.802961 containerd[1468]: time="2025-02-13T16:20:29.802509555Z" level=info msg="StopPodSandbox for \"734beb6f31192e9c686f25635dd8565bc5ee8615abe921bcfde90fedcaf1351a\""
Feb 13 16:20:29.802696 systemd[1]: run-netns-cni\x2d953fb4b8\x2da33d\x2de29e\x2dac7c\x2d71b97bed8b95.mount: Deactivated successfully.
Feb 13 16:20:29.804206 containerd[1468]: time="2025-02-13T16:20:29.804102906Z" level=info msg="TearDown network for sandbox \"734beb6f31192e9c686f25635dd8565bc5ee8615abe921bcfde90fedcaf1351a\" successfully"
Feb 13 16:20:29.804508 containerd[1468]: time="2025-02-13T16:20:29.804354797Z" level=info msg="StopPodSandbox for \"734beb6f31192e9c686f25635dd8565bc5ee8615abe921bcfde90fedcaf1351a\" returns successfully"
Feb 13 16:20:29.806934 containerd[1468]: time="2025-02-13T16:20:29.806830360Z" level=info msg="StopPodSandbox for \"c43870bcade70f08d3f38c52fd559a83579f7ef3c7b0d2550a3102bc13421a51\""
Feb 13 16:20:29.807461 containerd[1468]: time="2025-02-13T16:20:29.807212733Z" level=info msg="TearDown network for sandbox \"c43870bcade70f08d3f38c52fd559a83579f7ef3c7b0d2550a3102bc13421a51\" successfully"
Feb 13 16:20:29.807461 containerd[1468]: time="2025-02-13T16:20:29.807240529Z" level=info msg="StopPodSandbox for \"c43870bcade70f08d3f38c52fd559a83579f7ef3c7b0d2550a3102bc13421a51\" returns successfully"
Feb 13 16:20:29.808074 containerd[1468]: time="2025-02-13T16:20:29.808048374Z" level=info msg="StopPodSandbox for \"2ba7ed0ce9016048fae37c48969f0ca85bfd4730ba82e1f3ea9ed27c3550356d\""
Feb 13 16:20:29.808517 containerd[1468]: time="2025-02-13T16:20:29.808292272Z" level=info msg="TearDown network for sandbox \"2ba7ed0ce9016048fae37c48969f0ca85bfd4730ba82e1f3ea9ed27c3550356d\" successfully"
Feb 13 16:20:29.808517 containerd[1468]: time="2025-02-13T16:20:29.808317331Z" level=info msg="StopPodSandbox for \"2ba7ed0ce9016048fae37c48969f0ca85bfd4730ba82e1f3ea9ed27c3550356d\" returns successfully"
Feb 13 16:20:29.808897 kubelet[1777]: I0213 16:20:29.808868 1777 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dcadda5e14f40a3626ac0a163cb613c5d3ce97a541f42fc486c2c161d79079d2"
Feb 13 16:20:29.810790 containerd[1468]: time="2025-02-13T16:20:29.810505382Z" level=info msg="StopPodSandbox for \"dcadda5e14f40a3626ac0a163cb613c5d3ce97a541f42fc486c2c161d79079d2\""
Feb 13 16:20:29.810896 containerd[1468]: time="2025-02-13T16:20:29.810799353Z" level=info msg="Ensure that sandbox dcadda5e14f40a3626ac0a163cb613c5d3ce97a541f42fc486c2c161d79079d2 in task-service has been cleanup successfully"
Feb 13 16:20:29.813301 containerd[1468]: time="2025-02-13T16:20:29.811658485Z" level=info msg="TearDown network for sandbox \"dcadda5e14f40a3626ac0a163cb613c5d3ce97a541f42fc486c2c161d79079d2\" successfully"
Feb 13 16:20:29.813301 containerd[1468]: time="2025-02-13T16:20:29.811688350Z" level=info msg="StopPodSandbox for \"dcadda5e14f40a3626ac0a163cb613c5d3ce97a541f42fc486c2c161d79079d2\" returns successfully"
Feb 13 16:20:29.813819 systemd[1]: run-netns-cni\x2da6ecaa11\x2d19f7\x2d9277\x2da9bc\x2d1c1769e2872e.mount: Deactivated successfully.
Feb 13 16:20:29.815346 containerd[1468]: time="2025-02-13T16:20:29.814015027Z" level=info msg="StopPodSandbox for \"613977172e3b6e5b9170fe9dbb7a3e6916ed20a773f6ffe12c1aa7c3af75c1ff\""
Feb 13 16:20:29.815346 containerd[1468]: time="2025-02-13T16:20:29.814159129Z" level=info msg="TearDown network for sandbox \"613977172e3b6e5b9170fe9dbb7a3e6916ed20a773f6ffe12c1aa7c3af75c1ff\" successfully"
Feb 13 16:20:29.815346 containerd[1468]: time="2025-02-13T16:20:29.814231407Z" level=info msg="StopPodSandbox for \"613977172e3b6e5b9170fe9dbb7a3e6916ed20a773f6ffe12c1aa7c3af75c1ff\" returns successfully"
Feb 13 16:20:29.815346 containerd[1468]: time="2025-02-13T16:20:29.814640636Z" level=info msg="StopPodSandbox for \"be89c3b11c4a10080d447e2be30f39284bf33215f9e1b46202b6f4701cc5e8c4\""
Feb 13 16:20:29.815346 containerd[1468]: time="2025-02-13T16:20:29.814762388Z" level=info msg="TearDown network for sandbox \"be89c3b11c4a10080d447e2be30f39284bf33215f9e1b46202b6f4701cc5e8c4\" successfully"
Feb 13 16:20:29.815346 containerd[1468]: time="2025-02-13T16:20:29.814793090Z" level=info msg="StopPodSandbox for \"be89c3b11c4a10080d447e2be30f39284bf33215f9e1b46202b6f4701cc5e8c4\" returns successfully"
Feb 13 16:20:29.817593 containerd[1468]: time="2025-02-13T16:20:29.816111341Z" level=info msg="StopPodSandbox for \"eb321de64d01befa6251e4b2f21589949b7a30e6252f7a73010cc23e201ca1d8\""
Feb 13 16:20:29.817593 containerd[1468]: time="2025-02-13T16:20:29.816223914Z" level=info msg="TearDown network for sandbox \"eb321de64d01befa6251e4b2f21589949b7a30e6252f7a73010cc23e201ca1d8\" successfully"
Feb 13 16:20:29.817593 containerd[1468]: time="2025-02-13T16:20:29.816234748Z" level=info msg="StopPodSandbox for \"eb321de64d01befa6251e4b2f21589949b7a30e6252f7a73010cc23e201ca1d8\" returns successfully"
Feb 13 16:20:29.817593 containerd[1468]: time="2025-02-13T16:20:29.816943925Z" level=info msg="StopPodSandbox for \"3f3546388bfc3a39b64c3decef81428518c51260b78591eea1db426a73c657bc\""
Feb 13 16:20:29.817593 containerd[1468]: time="2025-02-13T16:20:29.817066261Z" level=info msg="TearDown network for sandbox \"3f3546388bfc3a39b64c3decef81428518c51260b78591eea1db426a73c657bc\" successfully"
Feb 13 16:20:29.817593 containerd[1468]: time="2025-02-13T16:20:29.817085631Z" level=info msg="StopPodSandbox for \"3f3546388bfc3a39b64c3decef81428518c51260b78591eea1db426a73c657bc\" returns successfully"
Feb 13 16:20:29.818487 containerd[1468]: time="2025-02-13T16:20:29.818444026Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-ffj9b,Uid:71a71e38-f3cb-4eae-b44c-ca79644fd5ec,Namespace:default,Attempt:4,}"
Feb 13 16:20:29.819171 containerd[1468]: time="2025-02-13T16:20:29.819050400Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-2tkx2,Uid:6b5f959f-a7e4-4515-a03d-ec0af7f0538c,Namespace:calico-system,Attempt:5,}"
Feb 13 16:20:30.013881 containerd[1468]: time="2025-02-13T16:20:30.013566131Z" level=error msg="Failed to destroy network for sandbox \"88d039cb5501062b0d890c8e041305954bed80c0b95e58f0138d902284375cc5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 16:20:30.014180 containerd[1468]: time="2025-02-13T16:20:30.014043724Z" level=error msg="encountered an error cleaning up failed sandbox \"88d039cb5501062b0d890c8e041305954bed80c0b95e58f0138d902284375cc5\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 16:20:30.014180 containerd[1468]: time="2025-02-13T16:20:30.014142863Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-ffj9b,Uid:71a71e38-f3cb-4eae-b44c-ca79644fd5ec,Namespace:default,Attempt:4,} failed, error" error="failed to setup network for sandbox \"88d039cb5501062b0d890c8e041305954bed80c0b95e58f0138d902284375cc5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 16:20:30.014877 kubelet[1777]: E0213 16:20:30.014602 1777 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"88d039cb5501062b0d890c8e041305954bed80c0b95e58f0138d902284375cc5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 16:20:30.014877 kubelet[1777]: E0213 16:20:30.014700 1777 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"88d039cb5501062b0d890c8e041305954bed80c0b95e58f0138d902284375cc5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-8587fbcb89-ffj9b"
Feb 13 16:20:30.014877 kubelet[1777]: E0213 16:20:30.014734 1777 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"88d039cb5501062b0d890c8e041305954bed80c0b95e58f0138d902284375cc5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-8587fbcb89-ffj9b"
Feb 13 16:20:30.015185 kubelet[1777]: E0213 16:20:30.014808 1777 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-8587fbcb89-ffj9b_default(71a71e38-f3cb-4eae-b44c-ca79644fd5ec)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-8587fbcb89-ffj9b_default(71a71e38-f3cb-4eae-b44c-ca79644fd5ec)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"88d039cb5501062b0d890c8e041305954bed80c0b95e58f0138d902284375cc5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-8587fbcb89-ffj9b" podUID="71a71e38-f3cb-4eae-b44c-ca79644fd5ec"
Feb 13 16:20:30.017826 containerd[1468]: time="2025-02-13T16:20:30.017733006Z" level=error msg="Failed to destroy network for sandbox \"24aae865edf23d54e958f1a116534a116c01b30f8da942a8bdcd84b554ac02ce\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 16:20:30.019043 containerd[1468]: time="2025-02-13T16:20:30.018859227Z" level=error msg="encountered an error cleaning up failed sandbox \"24aae865edf23d54e958f1a116534a116c01b30f8da942a8bdcd84b554ac02ce\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 16:20:30.019043 containerd[1468]: time="2025-02-13T16:20:30.018955116Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-2tkx2,Uid:6b5f959f-a7e4-4515-a03d-ec0af7f0538c,Namespace:calico-system,Attempt:5,} failed, error" error="failed to setup network for sandbox \"24aae865edf23d54e958f1a116534a116c01b30f8da942a8bdcd84b554ac02ce\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 16:20:30.019407 kubelet[1777]: E0213 16:20:30.019321 1777 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"24aae865edf23d54e958f1a116534a116c01b30f8da942a8bdcd84b554ac02ce\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 16:20:30.020170 kubelet[1777]: E0213 16:20:30.019624 1777 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"24aae865edf23d54e958f1a116534a116c01b30f8da942a8bdcd84b554ac02ce\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-2tkx2"
Feb 13 16:20:30.020403 kubelet[1777]: E0213 16:20:30.020341 1777 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"24aae865edf23d54e958f1a116534a116c01b30f8da942a8bdcd84b554ac02ce\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-2tkx2"
Feb 13 16:20:30.020561 kubelet[1777]: E0213 16:20:30.020449 1777 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-2tkx2_calico-system(6b5f959f-a7e4-4515-a03d-ec0af7f0538c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-2tkx2_calico-system(6b5f959f-a7e4-4515-a03d-ec0af7f0538c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"24aae865edf23d54e958f1a116534a116c01b30f8da942a8bdcd84b554ac02ce\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-2tkx2" podUID="6b5f959f-a7e4-4515-a03d-ec0af7f0538c"
Feb 13 16:20:30.555474 kubelet[1777]: E0213 16:20:30.555404 1777 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 16:20:30.707393 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-88d039cb5501062b0d890c8e041305954bed80c0b95e58f0138d902284375cc5-shm.mount: Deactivated successfully.
Feb 13 16:20:30.822213 kubelet[1777]: I0213 16:20:30.821966 1777 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="24aae865edf23d54e958f1a116534a116c01b30f8da942a8bdcd84b554ac02ce"
Feb 13 16:20:30.828321 containerd[1468]: time="2025-02-13T16:20:30.825715926Z" level=info msg="StopPodSandbox for \"24aae865edf23d54e958f1a116534a116c01b30f8da942a8bdcd84b554ac02ce\""
Feb 13 16:20:30.828321 containerd[1468]: time="2025-02-13T16:20:30.826096680Z" level=info msg="Ensure that sandbox 24aae865edf23d54e958f1a116534a116c01b30f8da942a8bdcd84b554ac02ce in task-service has been cleanup successfully"
Feb 13 16:20:30.832595 systemd[1]: run-netns-cni\x2d157e0773\x2d83fc\x2d0749\x2dcac4\x2daa8c5a0647c3.mount: Deactivated successfully.
Feb 13 16:20:30.838912 containerd[1468]: time="2025-02-13T16:20:30.838225279Z" level=info msg="TearDown network for sandbox \"24aae865edf23d54e958f1a116534a116c01b30f8da942a8bdcd84b554ac02ce\" successfully"
Feb 13 16:20:30.838912 containerd[1468]: time="2025-02-13T16:20:30.838343929Z" level=info msg="StopPodSandbox for \"24aae865edf23d54e958f1a116534a116c01b30f8da942a8bdcd84b554ac02ce\" returns successfully"
Feb 13 16:20:30.840446 containerd[1468]: time="2025-02-13T16:20:30.840390166Z" level=info msg="StopPodSandbox for \"6802ea10ff859deff17a9f0915adef9b72da232cd12ff3efd891a4a9d12617d0\""
Feb 13 16:20:30.842555 containerd[1468]: time="2025-02-13T16:20:30.842483595Z" level=info msg="TearDown network for sandbox \"6802ea10ff859deff17a9f0915adef9b72da232cd12ff3efd891a4a9d12617d0\" successfully"
Feb 13 16:20:30.842786 containerd[1468]: time="2025-02-13T16:20:30.842758180Z" level=info msg="StopPodSandbox for \"6802ea10ff859deff17a9f0915adef9b72da232cd12ff3efd891a4a9d12617d0\" returns successfully"
Feb 13 16:20:30.843903 kubelet[1777]: I0213 16:20:30.843855 1777 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="88d039cb5501062b0d890c8e041305954bed80c0b95e58f0138d902284375cc5"
Feb 13 16:20:30.844661 containerd[1468]: time="2025-02-13T16:20:30.844344098Z" level=info msg="StopPodSandbox for \"734beb6f31192e9c686f25635dd8565bc5ee8615abe921bcfde90fedcaf1351a\""
Feb 13 16:20:30.844661 containerd[1468]: time="2025-02-13T16:20:30.844512612Z" level=info msg="TearDown network for sandbox \"734beb6f31192e9c686f25635dd8565bc5ee8615abe921bcfde90fedcaf1351a\" successfully"
Feb 13 16:20:30.844661 containerd[1468]: time="2025-02-13T16:20:30.844534076Z" level=info msg="StopPodSandbox for \"734beb6f31192e9c686f25635dd8565bc5ee8615abe921bcfde90fedcaf1351a\" returns successfully"
Feb 13 16:20:30.850951 containerd[1468]: time="2025-02-13T16:20:30.848649307Z" level=info msg="StopPodSandbox for \"88d039cb5501062b0d890c8e041305954bed80c0b95e58f0138d902284375cc5\""
Feb 13 16:20:30.850951 containerd[1468]: time="2025-02-13T16:20:30.848977490Z" level=info msg="Ensure that sandbox 88d039cb5501062b0d890c8e041305954bed80c0b95e58f0138d902284375cc5 in task-service has been cleanup successfully"
Feb 13 16:20:30.853960 containerd[1468]: time="2025-02-13T16:20:30.853408482Z" level=info msg="TearDown network for sandbox \"88d039cb5501062b0d890c8e041305954bed80c0b95e58f0138d902284375cc5\" successfully"
Feb 13 16:20:30.853960 containerd[1468]: time="2025-02-13T16:20:30.853478547Z" level=info msg="StopPodSandbox for \"88d039cb5501062b0d890c8e041305954bed80c0b95e58f0138d902284375cc5\" returns successfully"
Feb 13 16:20:30.853960 containerd[1468]: time="2025-02-13T16:20:30.853639278Z" level=info msg="StopPodSandbox for \"c43870bcade70f08d3f38c52fd559a83579f7ef3c7b0d2550a3102bc13421a51\""
Feb 13 16:20:30.853960 containerd[1468]: time="2025-02-13T16:20:30.853766575Z" level=info msg="TearDown network for sandbox \"c43870bcade70f08d3f38c52fd559a83579f7ef3c7b0d2550a3102bc13421a51\" successfully"
Feb 13 16:20:30.853960 containerd[1468]: time="2025-02-13T16:20:30.853786131Z" level=info msg="StopPodSandbox for \"c43870bcade70f08d3f38c52fd559a83579f7ef3c7b0d2550a3102bc13421a51\" returns successfully"
Feb 13 16:20:30.854898 containerd[1468]: time="2025-02-13T16:20:30.854149172Z" level=info msg="StopPodSandbox for \"dcadda5e14f40a3626ac0a163cb613c5d3ce97a541f42fc486c2c161d79079d2\""
Feb 13 16:20:30.854898 containerd[1468]: time="2025-02-13T16:20:30.854369203Z" level=info msg="TearDown network for sandbox \"dcadda5e14f40a3626ac0a163cb613c5d3ce97a541f42fc486c2c161d79079d2\" successfully"
Feb 13 16:20:30.854898 containerd[1468]: time="2025-02-13T16:20:30.854390379Z" level=info msg="StopPodSandbox for \"dcadda5e14f40a3626ac0a163cb613c5d3ce97a541f42fc486c2c161d79079d2\" returns successfully"
Feb 13 16:20:30.856866 containerd[1468]: time="2025-02-13T16:20:30.856562642Z" level=info msg="StopPodSandbox for \"be89c3b11c4a10080d447e2be30f39284bf33215f9e1b46202b6f4701cc5e8c4\""
Feb 13 16:20:30.856866 containerd[1468]: time="2025-02-13T16:20:30.856718780Z" level=info msg="TearDown network for sandbox \"be89c3b11c4a10080d447e2be30f39284bf33215f9e1b46202b6f4701cc5e8c4\" successfully"
Feb 13 16:20:30.856866 containerd[1468]: time="2025-02-13T16:20:30.856738740Z" level=info msg="StopPodSandbox for \"be89c3b11c4a10080d447e2be30f39284bf33215f9e1b46202b6f4701cc5e8c4\" returns successfully"
Feb 13 16:20:30.856822 systemd[1]: run-netns-cni\x2d888ab8de\x2d6a42\x2d1cc0\x2d795c\x2df201bb0ee757.mount: Deactivated successfully.
Feb 13 16:20:30.858825 containerd[1468]: time="2025-02-13T16:20:30.857572231Z" level=info msg="StopPodSandbox for \"2ba7ed0ce9016048fae37c48969f0ca85bfd4730ba82e1f3ea9ed27c3550356d\""
Feb 13 16:20:30.858825 containerd[1468]: time="2025-02-13T16:20:30.857714902Z" level=info msg="TearDown network for sandbox \"2ba7ed0ce9016048fae37c48969f0ca85bfd4730ba82e1f3ea9ed27c3550356d\" successfully"
Feb 13 16:20:30.858825 containerd[1468]: time="2025-02-13T16:20:30.857731864Z" level=info msg="StopPodSandbox for \"2ba7ed0ce9016048fae37c48969f0ca85bfd4730ba82e1f3ea9ed27c3550356d\" returns successfully"
Feb 13 16:20:30.859763 containerd[1468]: time="2025-02-13T16:20:30.859705857Z" level=info msg="StopPodSandbox for \"613977172e3b6e5b9170fe9dbb7a3e6916ed20a773f6ffe12c1aa7c3af75c1ff\""
Feb 13 16:20:30.859906 containerd[1468]: time="2025-02-13T16:20:30.859860744Z" level=info msg="TearDown network for sandbox \"613977172e3b6e5b9170fe9dbb7a3e6916ed20a773f6ffe12c1aa7c3af75c1ff\" successfully"
Feb 13 16:20:30.859906 containerd[1468]: time="2025-02-13T16:20:30.859882035Z" level=info msg="StopPodSandbox for \"613977172e3b6e5b9170fe9dbb7a3e6916ed20a773f6ffe12c1aa7c3af75c1ff\" returns successfully"
Feb 13 16:20:30.860809 containerd[1468]: time="2025-02-13T16:20:30.860020448Z" level=info msg="StopPodSandbox for \"eb321de64d01befa6251e4b2f21589949b7a30e6252f7a73010cc23e201ca1d8\""
Feb 13 16:20:30.860809 containerd[1468]: time="2025-02-13T16:20:30.860124868Z" level=info msg="TearDown network for sandbox \"eb321de64d01befa6251e4b2f21589949b7a30e6252f7a73010cc23e201ca1d8\" successfully"
Feb 13 16:20:30.860809 containerd[1468]: time="2025-02-13T16:20:30.860140377Z" level=info msg="StopPodSandbox for \"eb321de64d01befa6251e4b2f21589949b7a30e6252f7a73010cc23e201ca1d8\" returns successfully"
Feb 13 16:20:30.861790 containerd[1468]: time="2025-02-13T16:20:30.861754435Z" level=info msg="StopPodSandbox for \"3f3546388bfc3a39b64c3decef81428518c51260b78591eea1db426a73c657bc\""
Feb 13 16:20:30.862694 containerd[1468]: time="2025-02-13T16:20:30.862610692Z" level=info msg="TearDown network for sandbox \"3f3546388bfc3a39b64c3decef81428518c51260b78591eea1db426a73c657bc\" successfully"
Feb 13 16:20:30.862694 containerd[1468]: time="2025-02-13T16:20:30.862638254Z" level=info msg="StopPodSandbox for \"3f3546388bfc3a39b64c3decef81428518c51260b78591eea1db426a73c657bc\" returns successfully"
Feb 13 16:20:30.863241 containerd[1468]: time="2025-02-13T16:20:30.863075567Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-2tkx2,Uid:6b5f959f-a7e4-4515-a03d-ec0af7f0538c,Namespace:calico-system,Attempt:6,}"
Feb 13 16:20:30.864248 containerd[1468]: time="2025-02-13T16:20:30.864218835Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-ffj9b,Uid:71a71e38-f3cb-4eae-b44c-ca79644fd5ec,Namespace:default,Attempt:5,}"
Feb 13 16:20:31.074292 containerd[1468]: time="2025-02-13T16:20:31.074148674Z" level=error msg="Failed to destroy network for sandbox \"16104b3f36bd9edcd2b0081ef3846f54188478a6c44945090400f5359bbd9b1e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 16:20:31.078225 containerd[1468]: time="2025-02-13T16:20:31.078121915Z" level=error msg="encountered an error cleaning up failed sandbox \"16104b3f36bd9edcd2b0081ef3846f54188478a6c44945090400f5359bbd9b1e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 16:20:31.078871 containerd[1468]: time="2025-02-13T16:20:31.078834153Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-2tkx2,Uid:6b5f959f-a7e4-4515-a03d-ec0af7f0538c,Namespace:calico-system,Attempt:6,} failed, error" error="failed to setup network for sandbox \"16104b3f36bd9edcd2b0081ef3846f54188478a6c44945090400f5359bbd9b1e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 16:20:31.079387 kubelet[1777]: E0213 16:20:31.079342 1777 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"16104b3f36bd9edcd2b0081ef3846f54188478a6c44945090400f5359bbd9b1e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Feb 13 16:20:31.080830 kubelet[1777]: E0213 16:20:31.080783 1777 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"16104b3f36bd9edcd2b0081ef3846f54188478a6c44945090400f5359bbd9b1e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-2tkx2"
Feb 13 16:20:31.081023 kubelet[1777]: E0213 16:20:31.080971 1777 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"16104b3f36bd9edcd2b0081ef3846f54188478a6c44945090400f5359bbd9b1e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-2tkx2"
Feb 13 16:20:31.081176 kubelet[1777]: E0213 16:20:31.081137 1777 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-2tkx2_calico-system(6b5f959f-a7e4-4515-a03d-ec0af7f0538c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"csi-node-driver-2tkx2_calico-system(6b5f959f-a7e4-4515-a03d-ec0af7f0538c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"16104b3f36bd9edcd2b0081ef3846f54188478a6c44945090400f5359bbd9b1e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-2tkx2" podUID="6b5f959f-a7e4-4515-a03d-ec0af7f0538c" Feb 13 16:20:31.096106 containerd[1468]: time="2025-02-13T16:20:31.096048170Z" level=error msg="Failed to destroy network for sandbox \"fc9f595b221b02190941c881eb305c8d337ab9314b133c0f4e9f55be7aa8a37c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 16:20:31.096811 containerd[1468]: time="2025-02-13T16:20:31.096755183Z" level=error msg="encountered an error cleaning up failed sandbox \"fc9f595b221b02190941c881eb305c8d337ab9314b133c0f4e9f55be7aa8a37c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 16:20:31.097072 containerd[1468]: time="2025-02-13T16:20:31.097037652Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-ffj9b,Uid:71a71e38-f3cb-4eae-b44c-ca79644fd5ec,Namespace:default,Attempt:5,} failed, error" error="failed to setup network for sandbox \"fc9f595b221b02190941c881eb305c8d337ab9314b133c0f4e9f55be7aa8a37c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 16:20:31.097885 kubelet[1777]: E0213 16:20:31.097551 1777 log.go:32] "RunPodSandbox from runtime service failed" 
err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fc9f595b221b02190941c881eb305c8d337ab9314b133c0f4e9f55be7aa8a37c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 16:20:31.098125 kubelet[1777]: E0213 16:20:31.097861 1777 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fc9f595b221b02190941c881eb305c8d337ab9314b133c0f4e9f55be7aa8a37c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-8587fbcb89-ffj9b" Feb 13 16:20:31.098125 kubelet[1777]: E0213 16:20:31.098095 1777 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fc9f595b221b02190941c881eb305c8d337ab9314b133c0f4e9f55be7aa8a37c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-8587fbcb89-ffj9b" Feb 13 16:20:31.098857 kubelet[1777]: E0213 16:20:31.098469 1777 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-8587fbcb89-ffj9b_default(71a71e38-f3cb-4eae-b44c-ca79644fd5ec)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-8587fbcb89-ffj9b_default(71a71e38-f3cb-4eae-b44c-ca79644fd5ec)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"fc9f595b221b02190941c881eb305c8d337ab9314b133c0f4e9f55be7aa8a37c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and 
has mounted /var/lib/calico/\"" pod="default/nginx-deployment-8587fbcb89-ffj9b" podUID="71a71e38-f3cb-4eae-b44c-ca79644fd5ec" Feb 13 16:20:31.521320 kubelet[1777]: E0213 16:20:31.520983 1777 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 16:20:31.556313 kubelet[1777]: E0213 16:20:31.556115 1777 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 16:20:31.705517 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-fc9f595b221b02190941c881eb305c8d337ab9314b133c0f4e9f55be7aa8a37c-shm.mount: Deactivated successfully. Feb 13 16:20:31.705855 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-16104b3f36bd9edcd2b0081ef3846f54188478a6c44945090400f5359bbd9b1e-shm.mount: Deactivated successfully. Feb 13 16:20:31.853178 kubelet[1777]: I0213 16:20:31.852330 1777 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="16104b3f36bd9edcd2b0081ef3846f54188478a6c44945090400f5359bbd9b1e" Feb 13 16:20:31.853882 containerd[1468]: time="2025-02-13T16:20:31.853433393Z" level=info msg="StopPodSandbox for \"16104b3f36bd9edcd2b0081ef3846f54188478a6c44945090400f5359bbd9b1e\"" Feb 13 16:20:31.853882 containerd[1468]: time="2025-02-13T16:20:31.853713149Z" level=info msg="Ensure that sandbox 16104b3f36bd9edcd2b0081ef3846f54188478a6c44945090400f5359bbd9b1e in task-service has been cleanup successfully" Feb 13 16:20:31.855948 containerd[1468]: time="2025-02-13T16:20:31.855912733Z" level=info msg="TearDown network for sandbox \"16104b3f36bd9edcd2b0081ef3846f54188478a6c44945090400f5359bbd9b1e\" successfully" Feb 13 16:20:31.855948 containerd[1468]: time="2025-02-13T16:20:31.855939898Z" level=info msg="StopPodSandbox for \"16104b3f36bd9edcd2b0081ef3846f54188478a6c44945090400f5359bbd9b1e\" returns successfully" Feb 13 16:20:31.857988 containerd[1468]: time="2025-02-13T16:20:31.856221965Z" level=info 
msg="StopPodSandbox for \"24aae865edf23d54e958f1a116534a116c01b30f8da942a8bdcd84b554ac02ce\"" Feb 13 16:20:31.857988 containerd[1468]: time="2025-02-13T16:20:31.856316934Z" level=info msg="TearDown network for sandbox \"24aae865edf23d54e958f1a116534a116c01b30f8da942a8bdcd84b554ac02ce\" successfully" Feb 13 16:20:31.857988 containerd[1468]: time="2025-02-13T16:20:31.856326997Z" level=info msg="StopPodSandbox for \"24aae865edf23d54e958f1a116534a116c01b30f8da942a8bdcd84b554ac02ce\" returns successfully" Feb 13 16:20:31.857988 containerd[1468]: time="2025-02-13T16:20:31.856898923Z" level=info msg="StopPodSandbox for \"6802ea10ff859deff17a9f0915adef9b72da232cd12ff3efd891a4a9d12617d0\"" Feb 13 16:20:31.857988 containerd[1468]: time="2025-02-13T16:20:31.857313863Z" level=info msg="TearDown network for sandbox \"6802ea10ff859deff17a9f0915adef9b72da232cd12ff3efd891a4a9d12617d0\" successfully" Feb 13 16:20:31.857988 containerd[1468]: time="2025-02-13T16:20:31.857368103Z" level=info msg="StopPodSandbox for \"6802ea10ff859deff17a9f0915adef9b72da232cd12ff3efd891a4a9d12617d0\" returns successfully" Feb 13 16:20:31.860365 kubelet[1777]: I0213 16:20:31.857150 1777 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fc9f595b221b02190941c881eb305c8d337ab9314b133c0f4e9f55be7aa8a37c" Feb 13 16:20:31.859258 systemd[1]: run-netns-cni\x2de0291c08\x2d2bec\x2db5c2\x2d4d79\x2d010086fe8055.mount: Deactivated successfully. 
Feb 13 16:20:31.860789 containerd[1468]: time="2025-02-13T16:20:31.859561526Z" level=info msg="StopPodSandbox for \"fc9f595b221b02190941c881eb305c8d337ab9314b133c0f4e9f55be7aa8a37c\"" Feb 13 16:20:31.860789 containerd[1468]: time="2025-02-13T16:20:31.859654478Z" level=info msg="StopPodSandbox for \"734beb6f31192e9c686f25635dd8565bc5ee8615abe921bcfde90fedcaf1351a\"" Feb 13 16:20:31.860789 containerd[1468]: time="2025-02-13T16:20:31.859737679Z" level=info msg="TearDown network for sandbox \"734beb6f31192e9c686f25635dd8565bc5ee8615abe921bcfde90fedcaf1351a\" successfully" Feb 13 16:20:31.860789 containerd[1468]: time="2025-02-13T16:20:31.859748059Z" level=info msg="StopPodSandbox for \"734beb6f31192e9c686f25635dd8565bc5ee8615abe921bcfde90fedcaf1351a\" returns successfully" Feb 13 16:20:31.860789 containerd[1468]: time="2025-02-13T16:20:31.859821303Z" level=info msg="Ensure that sandbox fc9f595b221b02190941c881eb305c8d337ab9314b133c0f4e9f55be7aa8a37c in task-service has been cleanup successfully" Feb 13 16:20:31.865523 containerd[1468]: time="2025-02-13T16:20:31.861502642Z" level=info msg="StopPodSandbox for \"c43870bcade70f08d3f38c52fd559a83579f7ef3c7b0d2550a3102bc13421a51\"" Feb 13 16:20:31.865523 containerd[1468]: time="2025-02-13T16:20:31.861689088Z" level=info msg="TearDown network for sandbox \"c43870bcade70f08d3f38c52fd559a83579f7ef3c7b0d2550a3102bc13421a51\" successfully" Feb 13 16:20:31.865523 containerd[1468]: time="2025-02-13T16:20:31.861709117Z" level=info msg="StopPodSandbox for \"c43870bcade70f08d3f38c52fd559a83579f7ef3c7b0d2550a3102bc13421a51\" returns successfully" Feb 13 16:20:31.865523 containerd[1468]: time="2025-02-13T16:20:31.862247451Z" level=info msg="StopPodSandbox for \"2ba7ed0ce9016048fae37c48969f0ca85bfd4730ba82e1f3ea9ed27c3550356d\"" Feb 13 16:20:31.865523 containerd[1468]: time="2025-02-13T16:20:31.863440481Z" level=info msg="TearDown network for sandbox \"2ba7ed0ce9016048fae37c48969f0ca85bfd4730ba82e1f3ea9ed27c3550356d\" successfully" Feb 
13 16:20:31.865523 containerd[1468]: time="2025-02-13T16:20:31.863472129Z" level=info msg="StopPodSandbox for \"2ba7ed0ce9016048fae37c48969f0ca85bfd4730ba82e1f3ea9ed27c3550356d\" returns successfully" Feb 13 16:20:31.865523 containerd[1468]: time="2025-02-13T16:20:31.864365194Z" level=info msg="StopPodSandbox for \"613977172e3b6e5b9170fe9dbb7a3e6916ed20a773f6ffe12c1aa7c3af75c1ff\"" Feb 13 16:20:31.865523 containerd[1468]: time="2025-02-13T16:20:31.864440291Z" level=info msg="TearDown network for sandbox \"613977172e3b6e5b9170fe9dbb7a3e6916ed20a773f6ffe12c1aa7c3af75c1ff\" successfully" Feb 13 16:20:31.865523 containerd[1468]: time="2025-02-13T16:20:31.864449840Z" level=info msg="StopPodSandbox for \"613977172e3b6e5b9170fe9dbb7a3e6916ed20a773f6ffe12c1aa7c3af75c1ff\" returns successfully" Feb 13 16:20:31.863972 systemd[1]: run-netns-cni\x2d83ea3818\x2da471\x2d5246\x2d6c05\x2d5cc1ea0fba6c.mount: Deactivated successfully. Feb 13 16:20:31.866051 containerd[1468]: time="2025-02-13T16:20:31.865903938Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-2tkx2,Uid:6b5f959f-a7e4-4515-a03d-ec0af7f0538c,Namespace:calico-system,Attempt:7,}" Feb 13 16:20:31.868778 containerd[1468]: time="2025-02-13T16:20:31.868530337Z" level=info msg="TearDown network for sandbox \"fc9f595b221b02190941c881eb305c8d337ab9314b133c0f4e9f55be7aa8a37c\" successfully" Feb 13 16:20:31.868778 containerd[1468]: time="2025-02-13T16:20:31.868568087Z" level=info msg="StopPodSandbox for \"fc9f595b221b02190941c881eb305c8d337ab9314b133c0f4e9f55be7aa8a37c\" returns successfully" Feb 13 16:20:31.871528 containerd[1468]: time="2025-02-13T16:20:31.870611032Z" level=info msg="StopPodSandbox for \"88d039cb5501062b0d890c8e041305954bed80c0b95e58f0138d902284375cc5\"" Feb 13 16:20:31.871528 containerd[1468]: time="2025-02-13T16:20:31.871407369Z" level=info msg="TearDown network for sandbox \"88d039cb5501062b0d890c8e041305954bed80c0b95e58f0138d902284375cc5\" successfully" Feb 13 16:20:31.871528 
containerd[1468]: time="2025-02-13T16:20:31.871424433Z" level=info msg="StopPodSandbox for \"88d039cb5501062b0d890c8e041305954bed80c0b95e58f0138d902284375cc5\" returns successfully" Feb 13 16:20:31.872582 containerd[1468]: time="2025-02-13T16:20:31.872548638Z" level=info msg="StopPodSandbox for \"dcadda5e14f40a3626ac0a163cb613c5d3ce97a541f42fc486c2c161d79079d2\"" Feb 13 16:20:31.872728 containerd[1468]: time="2025-02-13T16:20:31.872646816Z" level=info msg="TearDown network for sandbox \"dcadda5e14f40a3626ac0a163cb613c5d3ce97a541f42fc486c2c161d79079d2\" successfully" Feb 13 16:20:31.872728 containerd[1468]: time="2025-02-13T16:20:31.872657370Z" level=info msg="StopPodSandbox for \"dcadda5e14f40a3626ac0a163cb613c5d3ce97a541f42fc486c2c161d79079d2\" returns successfully" Feb 13 16:20:31.874833 containerd[1468]: time="2025-02-13T16:20:31.874590286Z" level=info msg="StopPodSandbox for \"be89c3b11c4a10080d447e2be30f39284bf33215f9e1b46202b6f4701cc5e8c4\"" Feb 13 16:20:31.874833 containerd[1468]: time="2025-02-13T16:20:31.874758971Z" level=info msg="TearDown network for sandbox \"be89c3b11c4a10080d447e2be30f39284bf33215f9e1b46202b6f4701cc5e8c4\" successfully" Feb 13 16:20:31.874833 containerd[1468]: time="2025-02-13T16:20:31.874784396Z" level=info msg="StopPodSandbox for \"be89c3b11c4a10080d447e2be30f39284bf33215f9e1b46202b6f4701cc5e8c4\" returns successfully" Feb 13 16:20:31.876093 containerd[1468]: time="2025-02-13T16:20:31.875562100Z" level=info msg="StopPodSandbox for \"eb321de64d01befa6251e4b2f21589949b7a30e6252f7a73010cc23e201ca1d8\"" Feb 13 16:20:31.876093 containerd[1468]: time="2025-02-13T16:20:31.875676043Z" level=info msg="TearDown network for sandbox \"eb321de64d01befa6251e4b2f21589949b7a30e6252f7a73010cc23e201ca1d8\" successfully" Feb 13 16:20:31.876093 containerd[1468]: time="2025-02-13T16:20:31.875691911Z" level=info msg="StopPodSandbox for \"eb321de64d01befa6251e4b2f21589949b7a30e6252f7a73010cc23e201ca1d8\" returns successfully" Feb 13 16:20:31.876647 
containerd[1468]: time="2025-02-13T16:20:31.876619475Z" level=info msg="StopPodSandbox for \"3f3546388bfc3a39b64c3decef81428518c51260b78591eea1db426a73c657bc\"" Feb 13 16:20:31.877018 containerd[1468]: time="2025-02-13T16:20:31.876905613Z" level=info msg="TearDown network for sandbox \"3f3546388bfc3a39b64c3decef81428518c51260b78591eea1db426a73c657bc\" successfully" Feb 13 16:20:31.877018 containerd[1468]: time="2025-02-13T16:20:31.876927005Z" level=info msg="StopPodSandbox for \"3f3546388bfc3a39b64c3decef81428518c51260b78591eea1db426a73c657bc\" returns successfully" Feb 13 16:20:31.881433 containerd[1468]: time="2025-02-13T16:20:31.881330260Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-ffj9b,Uid:71a71e38-f3cb-4eae-b44c-ca79644fd5ec,Namespace:default,Attempt:6,}" Feb 13 16:20:32.057585 containerd[1468]: time="2025-02-13T16:20:32.057438925Z" level=error msg="Failed to destroy network for sandbox \"5b939ba451f3ddddad16123d74763b4b35c4080e0f768db575d065d5dbed339f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 16:20:32.058279 containerd[1468]: time="2025-02-13T16:20:32.058050529Z" level=error msg="encountered an error cleaning up failed sandbox \"5b939ba451f3ddddad16123d74763b4b35c4080e0f768db575d065d5dbed339f\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 16:20:32.058279 containerd[1468]: time="2025-02-13T16:20:32.058124918Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-2tkx2,Uid:6b5f959f-a7e4-4515-a03d-ec0af7f0538c,Namespace:calico-system,Attempt:7,} failed, error" error="failed to setup network for sandbox 
\"5b939ba451f3ddddad16123d74763b4b35c4080e0f768db575d065d5dbed339f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 16:20:32.058563 kubelet[1777]: E0213 16:20:32.058466 1777 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5b939ba451f3ddddad16123d74763b4b35c4080e0f768db575d065d5dbed339f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 16:20:32.058926 kubelet[1777]: E0213 16:20:32.058562 1777 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5b939ba451f3ddddad16123d74763b4b35c4080e0f768db575d065d5dbed339f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-2tkx2" Feb 13 16:20:32.058926 kubelet[1777]: E0213 16:20:32.058587 1777 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5b939ba451f3ddddad16123d74763b4b35c4080e0f768db575d065d5dbed339f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-2tkx2" Feb 13 16:20:32.058926 kubelet[1777]: E0213 16:20:32.058646 1777 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-2tkx2_calico-system(6b5f959f-a7e4-4515-a03d-ec0af7f0538c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"csi-node-driver-2tkx2_calico-system(6b5f959f-a7e4-4515-a03d-ec0af7f0538c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5b939ba451f3ddddad16123d74763b4b35c4080e0f768db575d065d5dbed339f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-2tkx2" podUID="6b5f959f-a7e4-4515-a03d-ec0af7f0538c" Feb 13 16:20:32.064453 containerd[1468]: time="2025-02-13T16:20:32.064193851Z" level=error msg="Failed to destroy network for sandbox \"861d58bbded04770fa3a08977c34b3e457938bc42c50b17a324c08a6604b644f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 16:20:32.065453 containerd[1468]: time="2025-02-13T16:20:32.065407152Z" level=error msg="encountered an error cleaning up failed sandbox \"861d58bbded04770fa3a08977c34b3e457938bc42c50b17a324c08a6604b644f\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 16:20:32.065812 containerd[1468]: time="2025-02-13T16:20:32.065653444Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-ffj9b,Uid:71a71e38-f3cb-4eae-b44c-ca79644fd5ec,Namespace:default,Attempt:6,} failed, error" error="failed to setup network for sandbox \"861d58bbded04770fa3a08977c34b3e457938bc42c50b17a324c08a6604b644f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 16:20:32.066258 kubelet[1777]: E0213 16:20:32.066206 1777 log.go:32] "RunPodSandbox from runtime service failed" 
err="rpc error: code = Unknown desc = failed to setup network for sandbox \"861d58bbded04770fa3a08977c34b3e457938bc42c50b17a324c08a6604b644f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 16:20:32.066626 kubelet[1777]: E0213 16:20:32.066565 1777 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"861d58bbded04770fa3a08977c34b3e457938bc42c50b17a324c08a6604b644f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-8587fbcb89-ffj9b" Feb 13 16:20:32.066690 kubelet[1777]: E0213 16:20:32.066634 1777 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"861d58bbded04770fa3a08977c34b3e457938bc42c50b17a324c08a6604b644f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-8587fbcb89-ffj9b" Feb 13 16:20:32.066732 kubelet[1777]: E0213 16:20:32.066700 1777 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-8587fbcb89-ffj9b_default(71a71e38-f3cb-4eae-b44c-ca79644fd5ec)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-8587fbcb89-ffj9b_default(71a71e38-f3cb-4eae-b44c-ca79644fd5ec)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"861d58bbded04770fa3a08977c34b3e457938bc42c50b17a324c08a6604b644f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and 
has mounted /var/lib/calico/\"" pod="default/nginx-deployment-8587fbcb89-ffj9b" podUID="71a71e38-f3cb-4eae-b44c-ca79644fd5ec" Feb 13 16:20:32.070392 containerd[1468]: time="2025-02-13T16:20:32.069546447Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 16:20:32.070392 containerd[1468]: time="2025-02-13T16:20:32.070339725Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.1: active requests=0, bytes read=142742010" Feb 13 16:20:32.070907 containerd[1468]: time="2025-02-13T16:20:32.070806487Z" level=info msg="ImageCreate event name:\"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 16:20:32.074029 containerd[1468]: time="2025-02-13T16:20:32.073964055Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 16:20:32.075920 containerd[1468]: time="2025-02-13T16:20:32.075824934Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.1\" with image id \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\", size \"142741872\" in 7.344953262s" Feb 13 16:20:32.075920 containerd[1468]: time="2025-02-13T16:20:32.075898842Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\" returns image reference \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\"" Feb 13 16:20:32.100381 containerd[1468]: time="2025-02-13T16:20:32.100179991Z" level=info msg="CreateContainer within sandbox \"e466c0f56fde722f4dbfc544c7c81edb32e55ca97d9b58643db973dc6015f286\" for container 
&ContainerMetadata{Name:calico-node,Attempt:0,}" Feb 13 16:20:32.116499 containerd[1468]: time="2025-02-13T16:20:32.116364884Z" level=info msg="CreateContainer within sandbox \"e466c0f56fde722f4dbfc544c7c81edb32e55ca97d9b58643db973dc6015f286\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"e541acb9cc210f5f907eaedec2d4064477db733c56c5418f8cf0abfec7010697\"" Feb 13 16:20:32.120405 containerd[1468]: time="2025-02-13T16:20:32.119231766Z" level=info msg="StartContainer for \"e541acb9cc210f5f907eaedec2d4064477db733c56c5418f8cf0abfec7010697\"" Feb 13 16:20:32.217919 systemd[1]: Started cri-containerd-e541acb9cc210f5f907eaedec2d4064477db733c56c5418f8cf0abfec7010697.scope - libcontainer container e541acb9cc210f5f907eaedec2d4064477db733c56c5418f8cf0abfec7010697. Feb 13 16:20:32.271723 containerd[1468]: time="2025-02-13T16:20:32.270838395Z" level=info msg="StartContainer for \"e541acb9cc210f5f907eaedec2d4064477db733c56c5418f8cf0abfec7010697\" returns successfully" Feb 13 16:20:32.369870 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Feb 13 16:20:32.370067 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Feb 13 16:20:32.556551 kubelet[1777]: E0213 16:20:32.556490 1777 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 16:20:32.709036 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-5b939ba451f3ddddad16123d74763b4b35c4080e0f768db575d065d5dbed339f-shm.mount: Deactivated successfully. Feb 13 16:20:32.709468 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3098213569.mount: Deactivated successfully. 
Feb 13 16:20:32.868537 kubelet[1777]: E0213 16:20:32.867649 1777 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Feb 13 16:20:32.876820 kubelet[1777]: I0213 16:20:32.876789 1777 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5b939ba451f3ddddad16123d74763b4b35c4080e0f768db575d065d5dbed339f" Feb 13 16:20:32.877859 containerd[1468]: time="2025-02-13T16:20:32.877823878Z" level=info msg="StopPodSandbox for \"5b939ba451f3ddddad16123d74763b4b35c4080e0f768db575d065d5dbed339f\"" Feb 13 16:20:32.878801 containerd[1468]: time="2025-02-13T16:20:32.878771563Z" level=info msg="Ensure that sandbox 5b939ba451f3ddddad16123d74763b4b35c4080e0f768db575d065d5dbed339f in task-service has been cleanup successfully" Feb 13 16:20:32.880441 containerd[1468]: time="2025-02-13T16:20:32.880373408Z" level=info msg="TearDown network for sandbox \"5b939ba451f3ddddad16123d74763b4b35c4080e0f768db575d065d5dbed339f\" successfully" Feb 13 16:20:32.881419 containerd[1468]: time="2025-02-13T16:20:32.881328393Z" level=info msg="StopPodSandbox for \"5b939ba451f3ddddad16123d74763b4b35c4080e0f768db575d065d5dbed339f\" returns successfully" Feb 13 16:20:32.882141 systemd[1]: run-netns-cni\x2d29937c58\x2d83f4\x2d0831\x2d3942\x2d3b70af77bfdb.mount: Deactivated successfully. 
Feb 13 16:20:32.883952 containerd[1468]: time="2025-02-13T16:20:32.882688581Z" level=info msg="StopPodSandbox for \"16104b3f36bd9edcd2b0081ef3846f54188478a6c44945090400f5359bbd9b1e\"" Feb 13 16:20:32.883952 containerd[1468]: time="2025-02-13T16:20:32.882822352Z" level=info msg="TearDown network for sandbox \"16104b3f36bd9edcd2b0081ef3846f54188478a6c44945090400f5359bbd9b1e\" successfully" Feb 13 16:20:32.883952 containerd[1468]: time="2025-02-13T16:20:32.882842063Z" level=info msg="StopPodSandbox for \"16104b3f36bd9edcd2b0081ef3846f54188478a6c44945090400f5359bbd9b1e\" returns successfully" Feb 13 16:20:32.884888 containerd[1468]: time="2025-02-13T16:20:32.884851342Z" level=info msg="StopPodSandbox for \"24aae865edf23d54e958f1a116534a116c01b30f8da942a8bdcd84b554ac02ce\"" Feb 13 16:20:32.885024 containerd[1468]: time="2025-02-13T16:20:32.884962542Z" level=info msg="TearDown network for sandbox \"24aae865edf23d54e958f1a116534a116c01b30f8da942a8bdcd84b554ac02ce\" successfully" Feb 13 16:20:32.885024 containerd[1468]: time="2025-02-13T16:20:32.884974130Z" level=info msg="StopPodSandbox for \"24aae865edf23d54e958f1a116534a116c01b30f8da942a8bdcd84b554ac02ce\" returns successfully" Feb 13 16:20:32.885512 containerd[1468]: time="2025-02-13T16:20:32.885488682Z" level=info msg="StopPodSandbox for \"6802ea10ff859deff17a9f0915adef9b72da232cd12ff3efd891a4a9d12617d0\"" Feb 13 16:20:32.885839 containerd[1468]: time="2025-02-13T16:20:32.885765044Z" level=info msg="TearDown network for sandbox \"6802ea10ff859deff17a9f0915adef9b72da232cd12ff3efd891a4a9d12617d0\" successfully" Feb 13 16:20:32.885839 containerd[1468]: time="2025-02-13T16:20:32.885781728Z" level=info msg="StopPodSandbox for \"6802ea10ff859deff17a9f0915adef9b72da232cd12ff3efd891a4a9d12617d0\" returns successfully" Feb 13 16:20:32.885973 kubelet[1777]: I0213 16:20:32.885936 1777 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="861d58bbded04770fa3a08977c34b3e457938bc42c50b17a324c08a6604b644f" Feb 13 16:20:32.888285 containerd[1468]: time="2025-02-13T16:20:32.887753499Z" level=info msg="StopPodSandbox for \"861d58bbded04770fa3a08977c34b3e457938bc42c50b17a324c08a6604b644f\"" Feb 13 16:20:32.888285 containerd[1468]: time="2025-02-13T16:20:32.887906295Z" level=info msg="StopPodSandbox for \"734beb6f31192e9c686f25635dd8565bc5ee8615abe921bcfde90fedcaf1351a\"" Feb 13 16:20:32.888285 containerd[1468]: time="2025-02-13T16:20:32.888026194Z" level=info msg="TearDown network for sandbox \"734beb6f31192e9c686f25635dd8565bc5ee8615abe921bcfde90fedcaf1351a\" successfully" Feb 13 16:20:32.888285 containerd[1468]: time="2025-02-13T16:20:32.888053995Z" level=info msg="StopPodSandbox for \"734beb6f31192e9c686f25635dd8565bc5ee8615abe921bcfde90fedcaf1351a\" returns successfully" Feb 13 16:20:32.888285 containerd[1468]: time="2025-02-13T16:20:32.888121942Z" level=info msg="Ensure that sandbox 861d58bbded04770fa3a08977c34b3e457938bc42c50b17a324c08a6604b644f in task-service has been cleanup successfully" Feb 13 16:20:32.891124 systemd[1]: run-netns-cni\x2d14db2df1\x2d626f\x2dd10e\x2d361a\x2d36edce9be571.mount: Deactivated successfully. 
Feb 13 16:20:32.891714 containerd[1468]: time="2025-02-13T16:20:32.891517044Z" level=info msg="StopPodSandbox for \"c43870bcade70f08d3f38c52fd559a83579f7ef3c7b0d2550a3102bc13421a51\"" Feb 13 16:20:32.893807 containerd[1468]: time="2025-02-13T16:20:32.893766839Z" level=info msg="TearDown network for sandbox \"861d58bbded04770fa3a08977c34b3e457938bc42c50b17a324c08a6604b644f\" successfully" Feb 13 16:20:32.893948 containerd[1468]: time="2025-02-13T16:20:32.893935543Z" level=info msg="StopPodSandbox for \"861d58bbded04770fa3a08977c34b3e457938bc42c50b17a324c08a6604b644f\" returns successfully" Feb 13 16:20:32.896397 containerd[1468]: time="2025-02-13T16:20:32.896354575Z" level=info msg="TearDown network for sandbox \"c43870bcade70f08d3f38c52fd559a83579f7ef3c7b0d2550a3102bc13421a51\" successfully" Feb 13 16:20:32.896716 containerd[1468]: time="2025-02-13T16:20:32.896595760Z" level=info msg="StopPodSandbox for \"c43870bcade70f08d3f38c52fd559a83579f7ef3c7b0d2550a3102bc13421a51\" returns successfully" Feb 13 16:20:32.896901 containerd[1468]: time="2025-02-13T16:20:32.896884335Z" level=info msg="StopPodSandbox for \"fc9f595b221b02190941c881eb305c8d337ab9314b133c0f4e9f55be7aa8a37c\"" Feb 13 16:20:32.897241 containerd[1468]: time="2025-02-13T16:20:32.896963490Z" level=info msg="StopPodSandbox for \"2ba7ed0ce9016048fae37c48969f0ca85bfd4730ba82e1f3ea9ed27c3550356d\"" Feb 13 16:20:32.897241 containerd[1468]: time="2025-02-13T16:20:32.897124456Z" level=info msg="TearDown network for sandbox \"fc9f595b221b02190941c881eb305c8d337ab9314b133c0f4e9f55be7aa8a37c\" successfully" Feb 13 16:20:32.897241 containerd[1468]: time="2025-02-13T16:20:32.897142429Z" level=info msg="StopPodSandbox for \"fc9f595b221b02190941c881eb305c8d337ab9314b133c0f4e9f55be7aa8a37c\" returns successfully" Feb 13 16:20:32.898131 containerd[1468]: time="2025-02-13T16:20:32.897391918Z" level=info msg="TearDown network for sandbox \"2ba7ed0ce9016048fae37c48969f0ca85bfd4730ba82e1f3ea9ed27c3550356d\" successfully" Feb 
13 16:20:32.898131 containerd[1468]: time="2025-02-13T16:20:32.897408762Z" level=info msg="StopPodSandbox for \"2ba7ed0ce9016048fae37c48969f0ca85bfd4730ba82e1f3ea9ed27c3550356d\" returns successfully" Feb 13 16:20:32.898131 containerd[1468]: time="2025-02-13T16:20:32.897623611Z" level=info msg="StopPodSandbox for \"88d039cb5501062b0d890c8e041305954bed80c0b95e58f0138d902284375cc5\"" Feb 13 16:20:32.898131 containerd[1468]: time="2025-02-13T16:20:32.897707032Z" level=info msg="TearDown network for sandbox \"88d039cb5501062b0d890c8e041305954bed80c0b95e58f0138d902284375cc5\" successfully" Feb 13 16:20:32.898131 containerd[1468]: time="2025-02-13T16:20:32.897717314Z" level=info msg="StopPodSandbox for \"88d039cb5501062b0d890c8e041305954bed80c0b95e58f0138d902284375cc5\" returns successfully" Feb 13 16:20:32.898131 containerd[1468]: time="2025-02-13T16:20:32.898037444Z" level=info msg="StopPodSandbox for \"613977172e3b6e5b9170fe9dbb7a3e6916ed20a773f6ffe12c1aa7c3af75c1ff\"" Feb 13 16:20:32.898356 containerd[1468]: time="2025-02-13T16:20:32.898155150Z" level=info msg="TearDown network for sandbox \"613977172e3b6e5b9170fe9dbb7a3e6916ed20a773f6ffe12c1aa7c3af75c1ff\" successfully" Feb 13 16:20:32.898356 containerd[1468]: time="2025-02-13T16:20:32.898165376Z" level=info msg="StopPodSandbox for \"613977172e3b6e5b9170fe9dbb7a3e6916ed20a773f6ffe12c1aa7c3af75c1ff\" returns successfully" Feb 13 16:20:32.898356 containerd[1468]: time="2025-02-13T16:20:32.898233699Z" level=info msg="StopPodSandbox for \"dcadda5e14f40a3626ac0a163cb613c5d3ce97a541f42fc486c2c161d79079d2\"" Feb 13 16:20:32.898356 containerd[1468]: time="2025-02-13T16:20:32.898312761Z" level=info msg="TearDown network for sandbox \"dcadda5e14f40a3626ac0a163cb613c5d3ce97a541f42fc486c2c161d79079d2\" successfully" Feb 13 16:20:32.898356 containerd[1468]: time="2025-02-13T16:20:32.898324110Z" level=info msg="StopPodSandbox for \"dcadda5e14f40a3626ac0a163cb613c5d3ce97a541f42fc486c2c161d79079d2\" returns successfully" Feb 13 
16:20:32.898911 containerd[1468]: time="2025-02-13T16:20:32.898882069Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-2tkx2,Uid:6b5f959f-a7e4-4515-a03d-ec0af7f0538c,Namespace:calico-system,Attempt:8,}" Feb 13 16:20:32.899384 containerd[1468]: time="2025-02-13T16:20:32.899362152Z" level=info msg="StopPodSandbox for \"be89c3b11c4a10080d447e2be30f39284bf33215f9e1b46202b6f4701cc5e8c4\"" Feb 13 16:20:32.899549 containerd[1468]: time="2025-02-13T16:20:32.899535318Z" level=info msg="TearDown network for sandbox \"be89c3b11c4a10080d447e2be30f39284bf33215f9e1b46202b6f4701cc5e8c4\" successfully" Feb 13 16:20:32.899613 containerd[1468]: time="2025-02-13T16:20:32.899602275Z" level=info msg="StopPodSandbox for \"be89c3b11c4a10080d447e2be30f39284bf33215f9e1b46202b6f4701cc5e8c4\" returns successfully" Feb 13 16:20:32.899981 containerd[1468]: time="2025-02-13T16:20:32.899952737Z" level=info msg="StopPodSandbox for \"eb321de64d01befa6251e4b2f21589949b7a30e6252f7a73010cc23e201ca1d8\"" Feb 13 16:20:32.900066 containerd[1468]: time="2025-02-13T16:20:32.900044102Z" level=info msg="TearDown network for sandbox \"eb321de64d01befa6251e4b2f21589949b7a30e6252f7a73010cc23e201ca1d8\" successfully" Feb 13 16:20:32.900119 containerd[1468]: time="2025-02-13T16:20:32.900070212Z" level=info msg="StopPodSandbox for \"eb321de64d01befa6251e4b2f21589949b7a30e6252f7a73010cc23e201ca1d8\" returns successfully" Feb 13 16:20:32.900485 containerd[1468]: time="2025-02-13T16:20:32.900461879Z" level=info msg="StopPodSandbox for \"3f3546388bfc3a39b64c3decef81428518c51260b78591eea1db426a73c657bc\"" Feb 13 16:20:32.900678 containerd[1468]: time="2025-02-13T16:20:32.900556863Z" level=info msg="TearDown network for sandbox \"3f3546388bfc3a39b64c3decef81428518c51260b78591eea1db426a73c657bc\" successfully" Feb 13 16:20:32.900678 containerd[1468]: time="2025-02-13T16:20:32.900570076Z" level=info msg="StopPodSandbox for \"3f3546388bfc3a39b64c3decef81428518c51260b78591eea1db426a73c657bc\" 
returns successfully" Feb 13 16:20:32.901341 containerd[1468]: time="2025-02-13T16:20:32.901143165Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-ffj9b,Uid:71a71e38-f3cb-4eae-b44c-ca79644fd5ec,Namespace:default,Attempt:7,}" Feb 13 16:20:32.938738 kubelet[1777]: I0213 16:20:32.938430 1777 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-q2p5s" podStartSLOduration=2.594983902 podStartE2EDuration="20.938391876s" podCreationTimestamp="2025-02-13 16:20:12 +0000 UTC" firstStartedPulling="2025-02-13 16:20:13.733830177 +0000 UTC m=+2.844797817" lastFinishedPulling="2025-02-13 16:20:32.077238149 +0000 UTC m=+21.188205791" observedRunningTime="2025-02-13 16:20:32.934447507 +0000 UTC m=+22.045415166" watchObservedRunningTime="2025-02-13 16:20:32.938391876 +0000 UTC m=+22.049359535" Feb 13 16:20:33.342778 systemd-networkd[1367]: cali8b948aa6fb7: Link UP Feb 13 16:20:33.346199 systemd-networkd[1367]: cali8b948aa6fb7: Gained carrier Feb 13 16:20:33.390203 containerd[1468]: 2025-02-13 16:20:32.995 [INFO][2827] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Feb 13 16:20:33.390203 containerd[1468]: 2025-02-13 16:20:33.091 [INFO][2827] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {146.190.141.99-k8s-csi--node--driver--2tkx2-eth0 csi-node-driver- calico-system 6b5f959f-a7e4-4515-a03d-ec0af7f0538c 944 0 2025-02-13 16:20:11 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:56747c9949 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s 146.190.141.99 csi-node-driver-2tkx2 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali8b948aa6fb7 [] []}} 
ContainerID="c168b664389bdeb8766d22f0382d059c9615dc948f563d8031a283214d2f897c" Namespace="calico-system" Pod="csi-node-driver-2tkx2" WorkloadEndpoint="146.190.141.99-k8s-csi--node--driver--2tkx2-" Feb 13 16:20:33.390203 containerd[1468]: 2025-02-13 16:20:33.091 [INFO][2827] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="c168b664389bdeb8766d22f0382d059c9615dc948f563d8031a283214d2f897c" Namespace="calico-system" Pod="csi-node-driver-2tkx2" WorkloadEndpoint="146.190.141.99-k8s-csi--node--driver--2tkx2-eth0" Feb 13 16:20:33.390203 containerd[1468]: 2025-02-13 16:20:33.161 [INFO][2857] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c168b664389bdeb8766d22f0382d059c9615dc948f563d8031a283214d2f897c" HandleID="k8s-pod-network.c168b664389bdeb8766d22f0382d059c9615dc948f563d8031a283214d2f897c" Workload="146.190.141.99-k8s-csi--node--driver--2tkx2-eth0" Feb 13 16:20:33.390203 containerd[1468]: 2025-02-13 16:20:33.209 [INFO][2857] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="c168b664389bdeb8766d22f0382d059c9615dc948f563d8031a283214d2f897c" HandleID="k8s-pod-network.c168b664389bdeb8766d22f0382d059c9615dc948f563d8031a283214d2f897c" Workload="146.190.141.99-k8s-csi--node--driver--2tkx2-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000050750), Attrs:map[string]string{"namespace":"calico-system", "node":"146.190.141.99", "pod":"csi-node-driver-2tkx2", "timestamp":"2025-02-13 16:20:33.161016553 +0000 UTC"}, Hostname:"146.190.141.99", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 13 16:20:33.390203 containerd[1468]: 2025-02-13 16:20:33.210 [INFO][2857] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Feb 13 16:20:33.390203 containerd[1468]: 2025-02-13 16:20:33.210 [INFO][2857] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 16:20:33.390203 containerd[1468]: 2025-02-13 16:20:33.210 [INFO][2857] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '146.190.141.99' Feb 13 16:20:33.390203 containerd[1468]: 2025-02-13 16:20:33.225 [INFO][2857] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.c168b664389bdeb8766d22f0382d059c9615dc948f563d8031a283214d2f897c" host="146.190.141.99" Feb 13 16:20:33.390203 containerd[1468]: 2025-02-13 16:20:33.243 [INFO][2857] ipam/ipam.go 372: Looking up existing affinities for host host="146.190.141.99" Feb 13 16:20:33.390203 containerd[1468]: 2025-02-13 16:20:33.257 [INFO][2857] ipam/ipam.go 489: Trying affinity for 192.168.52.128/26 host="146.190.141.99" Feb 13 16:20:33.390203 containerd[1468]: 2025-02-13 16:20:33.261 [INFO][2857] ipam/ipam.go 155: Attempting to load block cidr=192.168.52.128/26 host="146.190.141.99" Feb 13 16:20:33.390203 containerd[1468]: 2025-02-13 16:20:33.266 [INFO][2857] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.52.128/26 host="146.190.141.99" Feb 13 16:20:33.390203 containerd[1468]: 2025-02-13 16:20:33.266 [INFO][2857] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.52.128/26 handle="k8s-pod-network.c168b664389bdeb8766d22f0382d059c9615dc948f563d8031a283214d2f897c" host="146.190.141.99" Feb 13 16:20:33.390203 containerd[1468]: 2025-02-13 16:20:33.269 [INFO][2857] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.c168b664389bdeb8766d22f0382d059c9615dc948f563d8031a283214d2f897c Feb 13 16:20:33.390203 containerd[1468]: 2025-02-13 16:20:33.296 [INFO][2857] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.52.128/26 handle="k8s-pod-network.c168b664389bdeb8766d22f0382d059c9615dc948f563d8031a283214d2f897c" host="146.190.141.99" Feb 13 16:20:33.390203 containerd[1468]: 
2025-02-13 16:20:33.325 [INFO][2857] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.52.129/26] block=192.168.52.128/26 handle="k8s-pod-network.c168b664389bdeb8766d22f0382d059c9615dc948f563d8031a283214d2f897c" host="146.190.141.99" Feb 13 16:20:33.390203 containerd[1468]: 2025-02-13 16:20:33.325 [INFO][2857] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.52.129/26] handle="k8s-pod-network.c168b664389bdeb8766d22f0382d059c9615dc948f563d8031a283214d2f897c" host="146.190.141.99" Feb 13 16:20:33.390203 containerd[1468]: 2025-02-13 16:20:33.325 [INFO][2857] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 16:20:33.390203 containerd[1468]: 2025-02-13 16:20:33.325 [INFO][2857] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.52.129/26] IPv6=[] ContainerID="c168b664389bdeb8766d22f0382d059c9615dc948f563d8031a283214d2f897c" HandleID="k8s-pod-network.c168b664389bdeb8766d22f0382d059c9615dc948f563d8031a283214d2f897c" Workload="146.190.141.99-k8s-csi--node--driver--2tkx2-eth0" Feb 13 16:20:33.391308 containerd[1468]: 2025-02-13 16:20:33.328 [INFO][2827] cni-plugin/k8s.go 386: Populated endpoint ContainerID="c168b664389bdeb8766d22f0382d059c9615dc948f563d8031a283214d2f897c" Namespace="calico-system" Pod="csi-node-driver-2tkx2" WorkloadEndpoint="146.190.141.99-k8s-csi--node--driver--2tkx2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"146.190.141.99-k8s-csi--node--driver--2tkx2-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"6b5f959f-a7e4-4515-a03d-ec0af7f0538c", ResourceVersion:"944", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 16, 20, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"56747c9949", "k8s-app":"csi-node-driver", 
"name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"146.190.141.99", ContainerID:"", Pod:"csi-node-driver-2tkx2", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.52.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali8b948aa6fb7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 16:20:33.391308 containerd[1468]: 2025-02-13 16:20:33.328 [INFO][2827] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.52.129/32] ContainerID="c168b664389bdeb8766d22f0382d059c9615dc948f563d8031a283214d2f897c" Namespace="calico-system" Pod="csi-node-driver-2tkx2" WorkloadEndpoint="146.190.141.99-k8s-csi--node--driver--2tkx2-eth0" Feb 13 16:20:33.391308 containerd[1468]: 2025-02-13 16:20:33.328 [INFO][2827] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali8b948aa6fb7 ContainerID="c168b664389bdeb8766d22f0382d059c9615dc948f563d8031a283214d2f897c" Namespace="calico-system" Pod="csi-node-driver-2tkx2" WorkloadEndpoint="146.190.141.99-k8s-csi--node--driver--2tkx2-eth0" Feb 13 16:20:33.391308 containerd[1468]: 2025-02-13 16:20:33.347 [INFO][2827] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c168b664389bdeb8766d22f0382d059c9615dc948f563d8031a283214d2f897c" Namespace="calico-system" Pod="csi-node-driver-2tkx2" WorkloadEndpoint="146.190.141.99-k8s-csi--node--driver--2tkx2-eth0" Feb 13 16:20:33.391308 containerd[1468]: 2025-02-13 16:20:33.347 [INFO][2827] cni-plugin/k8s.go 414: Added Mac, interface 
name, and active container ID to endpoint ContainerID="c168b664389bdeb8766d22f0382d059c9615dc948f563d8031a283214d2f897c" Namespace="calico-system" Pod="csi-node-driver-2tkx2" WorkloadEndpoint="146.190.141.99-k8s-csi--node--driver--2tkx2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"146.190.141.99-k8s-csi--node--driver--2tkx2-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"6b5f959f-a7e4-4515-a03d-ec0af7f0538c", ResourceVersion:"944", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 16, 20, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"56747c9949", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"146.190.141.99", ContainerID:"c168b664389bdeb8766d22f0382d059c9615dc948f563d8031a283214d2f897c", Pod:"csi-node-driver-2tkx2", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.52.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali8b948aa6fb7", MAC:"16:73:62:e9:2f:23", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 16:20:33.391308 containerd[1468]: 2025-02-13 16:20:33.388 [INFO][2827] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="c168b664389bdeb8766d22f0382d059c9615dc948f563d8031a283214d2f897c" 
Namespace="calico-system" Pod="csi-node-driver-2tkx2" WorkloadEndpoint="146.190.141.99-k8s-csi--node--driver--2tkx2-eth0" Feb 13 16:20:33.424059 containerd[1468]: time="2025-02-13T16:20:33.423472189Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 16:20:33.424059 containerd[1468]: time="2025-02-13T16:20:33.423587108Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 16:20:33.424059 containerd[1468]: time="2025-02-13T16:20:33.423618838Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 16:20:33.424059 containerd[1468]: time="2025-02-13T16:20:33.423816078Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 16:20:33.446741 systemd[1]: Started cri-containerd-c168b664389bdeb8766d22f0382d059c9615dc948f563d8031a283214d2f897c.scope - libcontainer container c168b664389bdeb8766d22f0382d059c9615dc948f563d8031a283214d2f897c. 
Feb 13 16:20:33.479723 containerd[1468]: time="2025-02-13T16:20:33.479672962Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-2tkx2,Uid:6b5f959f-a7e4-4515-a03d-ec0af7f0538c,Namespace:calico-system,Attempt:8,} returns sandbox id \"c168b664389bdeb8766d22f0382d059c9615dc948f563d8031a283214d2f897c\"" Feb 13 16:20:33.482706 containerd[1468]: time="2025-02-13T16:20:33.482663396Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\"" Feb 13 16:20:33.523642 systemd-networkd[1367]: califb3454bf7a9: Link UP Feb 13 16:20:33.524914 systemd-networkd[1367]: califb3454bf7a9: Gained carrier Feb 13 16:20:33.555812 containerd[1468]: 2025-02-13 16:20:33.026 [INFO][2842] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Feb 13 16:20:33.555812 containerd[1468]: 2025-02-13 16:20:33.091 [INFO][2842] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {146.190.141.99-k8s-nginx--deployment--8587fbcb89--ffj9b-eth0 nginx-deployment-8587fbcb89- default 71a71e38-f3cb-4eae-b44c-ca79644fd5ec 1039 0 2025-02-13 16:20:25 +0000 UTC map[app:nginx pod-template-hash:8587fbcb89 projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s 146.190.141.99 nginx-deployment-8587fbcb89-ffj9b eth0 default [] [] [kns.default ksa.default.default] califb3454bf7a9 [] []}} ContainerID="718a0fd8e769570bb76183ed623aa0956a679baa472d03894435b9788bd575e2" Namespace="default" Pod="nginx-deployment-8587fbcb89-ffj9b" WorkloadEndpoint="146.190.141.99-k8s-nginx--deployment--8587fbcb89--ffj9b-" Feb 13 16:20:33.555812 containerd[1468]: 2025-02-13 16:20:33.091 [INFO][2842] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="718a0fd8e769570bb76183ed623aa0956a679baa472d03894435b9788bd575e2" Namespace="default" Pod="nginx-deployment-8587fbcb89-ffj9b" WorkloadEndpoint="146.190.141.99-k8s-nginx--deployment--8587fbcb89--ffj9b-eth0" Feb 13 
16:20:33.555812 containerd[1468]: 2025-02-13 16:20:33.176 [INFO][2861] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="718a0fd8e769570bb76183ed623aa0956a679baa472d03894435b9788bd575e2" HandleID="k8s-pod-network.718a0fd8e769570bb76183ed623aa0956a679baa472d03894435b9788bd575e2" Workload="146.190.141.99-k8s-nginx--deployment--8587fbcb89--ffj9b-eth0" Feb 13 16:20:33.555812 containerd[1468]: 2025-02-13 16:20:33.227 [INFO][2861] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="718a0fd8e769570bb76183ed623aa0956a679baa472d03894435b9788bd575e2" HandleID="k8s-pod-network.718a0fd8e769570bb76183ed623aa0956a679baa472d03894435b9788bd575e2" Workload="146.190.141.99-k8s-nginx--deployment--8587fbcb89--ffj9b-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003199b0), Attrs:map[string]string{"namespace":"default", "node":"146.190.141.99", "pod":"nginx-deployment-8587fbcb89-ffj9b", "timestamp":"2025-02-13 16:20:33.176029835 +0000 UTC"}, Hostname:"146.190.141.99", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 13 16:20:33.555812 containerd[1468]: 2025-02-13 16:20:33.227 [INFO][2861] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 16:20:33.555812 containerd[1468]: 2025-02-13 16:20:33.325 [INFO][2861] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Feb 13 16:20:33.555812 containerd[1468]: 2025-02-13 16:20:33.325 [INFO][2861] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '146.190.141.99' Feb 13 16:20:33.555812 containerd[1468]: 2025-02-13 16:20:33.333 [INFO][2861] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.718a0fd8e769570bb76183ed623aa0956a679baa472d03894435b9788bd575e2" host="146.190.141.99" Feb 13 16:20:33.555812 containerd[1468]: 2025-02-13 16:20:33.361 [INFO][2861] ipam/ipam.go 372: Looking up existing affinities for host host="146.190.141.99" Feb 13 16:20:33.555812 containerd[1468]: 2025-02-13 16:20:33.409 [INFO][2861] ipam/ipam.go 489: Trying affinity for 192.168.52.128/26 host="146.190.141.99" Feb 13 16:20:33.555812 containerd[1468]: 2025-02-13 16:20:33.431 [INFO][2861] ipam/ipam.go 155: Attempting to load block cidr=192.168.52.128/26 host="146.190.141.99" Feb 13 16:20:33.555812 containerd[1468]: 2025-02-13 16:20:33.439 [INFO][2861] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.52.128/26 host="146.190.141.99" Feb 13 16:20:33.555812 containerd[1468]: 2025-02-13 16:20:33.440 [INFO][2861] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.52.128/26 handle="k8s-pod-network.718a0fd8e769570bb76183ed623aa0956a679baa472d03894435b9788bd575e2" host="146.190.141.99" Feb 13 16:20:33.555812 containerd[1468]: 2025-02-13 16:20:33.447 [INFO][2861] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.718a0fd8e769570bb76183ed623aa0956a679baa472d03894435b9788bd575e2 Feb 13 16:20:33.555812 containerd[1468]: 2025-02-13 16:20:33.481 [INFO][2861] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.52.128/26 handle="k8s-pod-network.718a0fd8e769570bb76183ed623aa0956a679baa472d03894435b9788bd575e2" host="146.190.141.99" Feb 13 16:20:33.555812 containerd[1468]: 2025-02-13 16:20:33.517 [INFO][2861] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.52.130/26] block=192.168.52.128/26 
handle="k8s-pod-network.718a0fd8e769570bb76183ed623aa0956a679baa472d03894435b9788bd575e2" host="146.190.141.99" Feb 13 16:20:33.555812 containerd[1468]: 2025-02-13 16:20:33.517 [INFO][2861] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.52.130/26] handle="k8s-pod-network.718a0fd8e769570bb76183ed623aa0956a679baa472d03894435b9788bd575e2" host="146.190.141.99" Feb 13 16:20:33.555812 containerd[1468]: 2025-02-13 16:20:33.517 [INFO][2861] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 16:20:33.555812 containerd[1468]: 2025-02-13 16:20:33.517 [INFO][2861] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.52.130/26] IPv6=[] ContainerID="718a0fd8e769570bb76183ed623aa0956a679baa472d03894435b9788bd575e2" HandleID="k8s-pod-network.718a0fd8e769570bb76183ed623aa0956a679baa472d03894435b9788bd575e2" Workload="146.190.141.99-k8s-nginx--deployment--8587fbcb89--ffj9b-eth0" Feb 13 16:20:33.556960 containerd[1468]: 2025-02-13 16:20:33.520 [INFO][2842] cni-plugin/k8s.go 386: Populated endpoint ContainerID="718a0fd8e769570bb76183ed623aa0956a679baa472d03894435b9788bd575e2" Namespace="default" Pod="nginx-deployment-8587fbcb89-ffj9b" WorkloadEndpoint="146.190.141.99-k8s-nginx--deployment--8587fbcb89--ffj9b-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"146.190.141.99-k8s-nginx--deployment--8587fbcb89--ffj9b-eth0", GenerateName:"nginx-deployment-8587fbcb89-", Namespace:"default", SelfLink:"", UID:"71a71e38-f3cb-4eae-b44c-ca79644fd5ec", ResourceVersion:"1039", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 16, 20, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"8587fbcb89", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, 
Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"146.190.141.99", ContainerID:"", Pod:"nginx-deployment-8587fbcb89-ffj9b", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.52.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"califb3454bf7a9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 16:20:33.556960 containerd[1468]: 2025-02-13 16:20:33.520 [INFO][2842] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.52.130/32] ContainerID="718a0fd8e769570bb76183ed623aa0956a679baa472d03894435b9788bd575e2" Namespace="default" Pod="nginx-deployment-8587fbcb89-ffj9b" WorkloadEndpoint="146.190.141.99-k8s-nginx--deployment--8587fbcb89--ffj9b-eth0" Feb 13 16:20:33.556960 containerd[1468]: 2025-02-13 16:20:33.520 [INFO][2842] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to califb3454bf7a9 ContainerID="718a0fd8e769570bb76183ed623aa0956a679baa472d03894435b9788bd575e2" Namespace="default" Pod="nginx-deployment-8587fbcb89-ffj9b" WorkloadEndpoint="146.190.141.99-k8s-nginx--deployment--8587fbcb89--ffj9b-eth0" Feb 13 16:20:33.556960 containerd[1468]: 2025-02-13 16:20:33.525 [INFO][2842] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="718a0fd8e769570bb76183ed623aa0956a679baa472d03894435b9788bd575e2" Namespace="default" Pod="nginx-deployment-8587fbcb89-ffj9b" WorkloadEndpoint="146.190.141.99-k8s-nginx--deployment--8587fbcb89--ffj9b-eth0" Feb 13 16:20:33.556960 containerd[1468]: 2025-02-13 16:20:33.526 [INFO][2842] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="718a0fd8e769570bb76183ed623aa0956a679baa472d03894435b9788bd575e2" Namespace="default" 
Pod="nginx-deployment-8587fbcb89-ffj9b" WorkloadEndpoint="146.190.141.99-k8s-nginx--deployment--8587fbcb89--ffj9b-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"146.190.141.99-k8s-nginx--deployment--8587fbcb89--ffj9b-eth0", GenerateName:"nginx-deployment-8587fbcb89-", Namespace:"default", SelfLink:"", UID:"71a71e38-f3cb-4eae-b44c-ca79644fd5ec", ResourceVersion:"1039", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 16, 20, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"8587fbcb89", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"146.190.141.99", ContainerID:"718a0fd8e769570bb76183ed623aa0956a679baa472d03894435b9788bd575e2", Pod:"nginx-deployment-8587fbcb89-ffj9b", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.52.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"califb3454bf7a9", MAC:"3a:fd:9e:15:bd:af", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 16:20:33.556960 containerd[1468]: 2025-02-13 16:20:33.553 [INFO][2842] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="718a0fd8e769570bb76183ed623aa0956a679baa472d03894435b9788bd575e2" Namespace="default" Pod="nginx-deployment-8587fbcb89-ffj9b" WorkloadEndpoint="146.190.141.99-k8s-nginx--deployment--8587fbcb89--ffj9b-eth0" Feb 13 16:20:33.557544 kubelet[1777]: E0213 16:20:33.557490 1777 file_linux.go:61] "Unable to read config path" 
err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 16:20:33.584052 containerd[1468]: time="2025-02-13T16:20:33.583895003Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 16:20:33.584052 containerd[1468]: time="2025-02-13T16:20:33.583958065Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 16:20:33.584052 containerd[1468]: time="2025-02-13T16:20:33.583972672Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 16:20:33.584589 containerd[1468]: time="2025-02-13T16:20:33.584100095Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 16:20:33.603489 systemd[1]: Started cri-containerd-718a0fd8e769570bb76183ed623aa0956a679baa472d03894435b9788bd575e2.scope - libcontainer container 718a0fd8e769570bb76183ed623aa0956a679baa472d03894435b9788bd575e2. Feb 13 16:20:33.656051 containerd[1468]: time="2025-02-13T16:20:33.655992971Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-ffj9b,Uid:71a71e38-f3cb-4eae-b44c-ca79644fd5ec,Namespace:default,Attempt:7,} returns sandbox id \"718a0fd8e769570bb76183ed623aa0956a679baa472d03894435b9788bd575e2\"" Feb 13 16:20:33.897353 kubelet[1777]: E0213 16:20:33.896740 1777 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Feb 13 16:20:33.922452 systemd[1]: run-containerd-runc-k8s.io-e541acb9cc210f5f907eaedec2d4064477db733c56c5418f8cf0abfec7010697-runc.aY3FlT.mount: Deactivated successfully. 
Feb 13 16:20:34.557845 kubelet[1777]: E0213 16:20:34.557679 1777 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 16:20:34.661428 kernel: bpftool[3115]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set
Feb 13 16:20:34.902056 kubelet[1777]: E0213 16:20:34.901014 1777 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Feb 13 16:20:34.949003 systemd[1]: run-containerd-runc-k8s.io-e541acb9cc210f5f907eaedec2d4064477db733c56c5418f8cf0abfec7010697-runc.vUHgVp.mount: Deactivated successfully.
Feb 13 16:20:35.106985 systemd-networkd[1367]: vxlan.calico: Link UP
Feb 13 16:20:35.106995 systemd-networkd[1367]: vxlan.calico: Gained carrier
Feb 13 16:20:35.169723 containerd[1468]: time="2025-02-13T16:20:35.169588130Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 16:20:35.171163 containerd[1468]: time="2025-02-13T16:20:35.171112483Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.1: active requests=0, bytes read=7902632"
Feb 13 16:20:35.171853 containerd[1468]: time="2025-02-13T16:20:35.171816284Z" level=info msg="ImageCreate event name:\"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 16:20:35.174349 containerd[1468]: time="2025-02-13T16:20:35.174306966Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 16:20:35.175029 containerd[1468]: time="2025-02-13T16:20:35.174993369Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.1\" with image id
\"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\", size \"9395716\" in 1.692293965s" Feb 13 16:20:35.175029 containerd[1468]: time="2025-02-13T16:20:35.175026384Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\" returns image reference \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\"" Feb 13 16:20:35.177490 systemd-networkd[1367]: cali8b948aa6fb7: Gained IPv6LL Feb 13 16:20:35.179240 containerd[1468]: time="2025-02-13T16:20:35.177979338Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Feb 13 16:20:35.179361 containerd[1468]: time="2025-02-13T16:20:35.179322795Z" level=info msg="CreateContainer within sandbox \"c168b664389bdeb8766d22f0382d059c9615dc948f563d8031a283214d2f897c\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Feb 13 16:20:35.193486 containerd[1468]: time="2025-02-13T16:20:35.192461940Z" level=info msg="CreateContainer within sandbox \"c168b664389bdeb8766d22f0382d059c9615dc948f563d8031a283214d2f897c\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"250406444238110a1fdc826e112935e6b7889aef9f9da9d07712e4fc291893f6\"" Feb 13 16:20:35.195334 containerd[1468]: time="2025-02-13T16:20:35.194312064Z" level=info msg="StartContainer for \"250406444238110a1fdc826e112935e6b7889aef9f9da9d07712e4fc291893f6\"" Feb 13 16:20:35.253525 systemd[1]: Started cri-containerd-250406444238110a1fdc826e112935e6b7889aef9f9da9d07712e4fc291893f6.scope - libcontainer container 250406444238110a1fdc826e112935e6b7889aef9f9da9d07712e4fc291893f6. 
Feb 13 16:20:35.298986 containerd[1468]: time="2025-02-13T16:20:35.298930892Z" level=info msg="StartContainer for \"250406444238110a1fdc826e112935e6b7889aef9f9da9d07712e4fc291893f6\" returns successfully"
Feb 13 16:20:35.369603 systemd-networkd[1367]: califb3454bf7a9: Gained IPv6LL
Feb 13 16:20:35.558118 kubelet[1777]: E0213 16:20:35.557959 1777 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 16:20:36.560308 kubelet[1777]: E0213 16:20:36.558921 1777 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 16:20:36.843355 systemd-networkd[1367]: vxlan.calico: Gained IPv6LL
Feb 13 16:20:37.559308 kubelet[1777]: E0213 16:20:37.559220 1777 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 16:20:38.031366 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2894803379.mount: Deactivated successfully.
Feb 13 16:20:38.559714 kubelet[1777]: E0213 16:20:38.559618 1777 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 16:20:39.381108 containerd[1468]: time="2025-02-13T16:20:39.381047764Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 16:20:39.382086 containerd[1468]: time="2025-02-13T16:20:39.382025147Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=73054493"
Feb 13 16:20:39.382690 containerd[1468]: time="2025-02-13T16:20:39.382662793Z" level=info msg="ImageCreate event name:\"sha256:fe94eb5f0c9c8d0ca277aa8cd5940f1faf5970175bf373932babc578545deda8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 16:20:39.386943 containerd[1468]: time="2025-02-13T16:20:39.386878284Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx@sha256:d9bc3da999da9f147f1277c7b18292486847e8f39f95fcf81d914d0c22815faf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 16:20:39.388505 containerd[1468]: time="2025-02-13T16:20:39.388350558Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:fe94eb5f0c9c8d0ca277aa8cd5940f1faf5970175bf373932babc578545deda8\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:d9bc3da999da9f147f1277c7b18292486847e8f39f95fcf81d914d0c22815faf\", size \"73054371\" in 4.210331003s"
Feb 13 16:20:39.388505 containerd[1468]: time="2025-02-13T16:20:39.388395477Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:fe94eb5f0c9c8d0ca277aa8cd5940f1faf5970175bf373932babc578545deda8\""
Feb 13 16:20:39.391410 containerd[1468]: time="2025-02-13T16:20:39.390846849Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\""
Feb 13 16:20:39.392737 containerd[1468]:
time="2025-02-13T16:20:39.392693746Z" level=info msg="CreateContainer within sandbox \"718a0fd8e769570bb76183ed623aa0956a679baa472d03894435b9788bd575e2\" for container &ContainerMetadata{Name:nginx,Attempt:0,}" Feb 13 16:20:39.419656 containerd[1468]: time="2025-02-13T16:20:39.419497590Z" level=info msg="CreateContainer within sandbox \"718a0fd8e769570bb76183ed623aa0956a679baa472d03894435b9788bd575e2\" for &ContainerMetadata{Name:nginx,Attempt:0,} returns container id \"2cfb7442cf51b131aaa5a3e7ab815039326ed856c8a29553fb9ddd85ec9cd43b\"" Feb 13 16:20:39.420365 containerd[1468]: time="2025-02-13T16:20:39.420335436Z" level=info msg="StartContainer for \"2cfb7442cf51b131aaa5a3e7ab815039326ed856c8a29553fb9ddd85ec9cd43b\"" Feb 13 16:20:39.461538 systemd[1]: Started cri-containerd-2cfb7442cf51b131aaa5a3e7ab815039326ed856c8a29553fb9ddd85ec9cd43b.scope - libcontainer container 2cfb7442cf51b131aaa5a3e7ab815039326ed856c8a29553fb9ddd85ec9cd43b. Feb 13 16:20:39.498900 containerd[1468]: time="2025-02-13T16:20:39.498850260Z" level=info msg="StartContainer for \"2cfb7442cf51b131aaa5a3e7ab815039326ed856c8a29553fb9ddd85ec9cd43b\" returns successfully" Feb 13 16:20:39.560681 kubelet[1777]: E0213 16:20:39.560622 1777 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 16:20:39.965502 kubelet[1777]: I0213 16:20:39.965369 1777 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nginx-deployment-8587fbcb89-ffj9b" podStartSLOduration=9.232930807 podStartE2EDuration="14.965344395s" podCreationTimestamp="2025-02-13 16:20:25 +0000 UTC" firstStartedPulling="2025-02-13 16:20:33.658193358 +0000 UTC m=+22.769161012" lastFinishedPulling="2025-02-13 16:20:39.390606947 +0000 UTC m=+28.501574600" observedRunningTime="2025-02-13 16:20:39.965312062 +0000 UTC m=+29.076279727" watchObservedRunningTime="2025-02-13 16:20:39.965344395 +0000 UTC m=+29.076312056" Feb 13 16:20:40.561792 kubelet[1777]: E0213 
16:20:40.561662 1777 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 16:20:41.280098 containerd[1468]: time="2025-02-13T16:20:41.280036860Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 16:20:41.281397 containerd[1468]: time="2025-02-13T16:20:41.280956570Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1: active requests=0, bytes read=10501081" Feb 13 16:20:41.282402 containerd[1468]: time="2025-02-13T16:20:41.282366599Z" level=info msg="ImageCreate event name:\"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 16:20:41.284983 containerd[1468]: time="2025-02-13T16:20:41.284941476Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 16:20:41.286160 containerd[1468]: time="2025-02-13T16:20:41.285877330Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" with image id \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\", size \"11994117\" in 1.894992988s" Feb 13 16:20:41.286160 containerd[1468]: time="2025-02-13T16:20:41.285917982Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" returns image reference \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\"" Feb 13 16:20:41.289622 containerd[1468]: time="2025-02-13T16:20:41.289578802Z" level=info msg="CreateContainer 
within sandbox \"c168b664389bdeb8766d22f0382d059c9615dc948f563d8031a283214d2f897c\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Feb 13 16:20:41.312683 containerd[1468]: time="2025-02-13T16:20:41.312591044Z" level=info msg="CreateContainer within sandbox \"c168b664389bdeb8766d22f0382d059c9615dc948f563d8031a283214d2f897c\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"08fbfda94c8676e4c55a7a9ceb0d36e3424935b6c84c7fdf66450cbcad411af9\"" Feb 13 16:20:41.314391 containerd[1468]: time="2025-02-13T16:20:41.313447488Z" level=info msg="StartContainer for \"08fbfda94c8676e4c55a7a9ceb0d36e3424935b6c84c7fdf66450cbcad411af9\"" Feb 13 16:20:41.356615 systemd[1]: Started cri-containerd-08fbfda94c8676e4c55a7a9ceb0d36e3424935b6c84c7fdf66450cbcad411af9.scope - libcontainer container 08fbfda94c8676e4c55a7a9ceb0d36e3424935b6c84c7fdf66450cbcad411af9. Feb 13 16:20:41.419325 containerd[1468]: time="2025-02-13T16:20:41.418319538Z" level=info msg="StartContainer for \"08fbfda94c8676e4c55a7a9ceb0d36e3424935b6c84c7fdf66450cbcad411af9\" returns successfully" Feb 13 16:20:41.562621 kubelet[1777]: E0213 16:20:41.562386 1777 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 16:20:41.680212 kubelet[1777]: I0213 16:20:41.679541 1777 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Feb 13 16:20:41.680212 kubelet[1777]: I0213 16:20:41.679587 1777 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Feb 13 16:20:42.088102 kubelet[1777]: I0213 16:20:42.088024 1777 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-2tkx2" podStartSLOduration=23.282646431 podStartE2EDuration="31.087997374s" 
podCreationTimestamp="2025-02-13 16:20:11 +0000 UTC" firstStartedPulling="2025-02-13 16:20:33.48208689 +0000 UTC m=+22.593054533" lastFinishedPulling="2025-02-13 16:20:41.287437835 +0000 UTC m=+30.398405476" observedRunningTime="2025-02-13 16:20:42.087990848 +0000 UTC m=+31.198958507" watchObservedRunningTime="2025-02-13 16:20:42.087997374 +0000 UTC m=+31.198965037"
Feb 13 16:20:42.564020 kubelet[1777]: E0213 16:20:42.563848 1777 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 16:20:43.565010 kubelet[1777]: E0213 16:20:43.564927 1777 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 16:20:44.276360 kubelet[1777]: E0213 16:20:44.276260 1777 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Feb 13 16:20:44.565961 kubelet[1777]: E0213 16:20:44.565756 1777 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 16:20:45.566500 kubelet[1777]: E0213 16:20:45.566405 1777 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 16:20:46.567359 kubelet[1777]: E0213 16:20:46.567288 1777 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 16:20:47.567560 kubelet[1777]: E0213 16:20:47.567479 1777 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 16:20:48.568191 kubelet[1777]: E0213 16:20:48.568125 1777 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 16:20:49.568925 kubelet[1777]: E0213 16:20:49.568843 1777 file_linux.go:61] "Unable to read config path"
err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 16:20:50.466001 systemd[1]: Created slice kubepods-besteffort-pod962f3ced_5c84_4430_8fb6_3f0aa6cbe054.slice - libcontainer container kubepods-besteffort-pod962f3ced_5c84_4430_8fb6_3f0aa6cbe054.slice. Feb 13 16:20:50.569786 kubelet[1777]: E0213 16:20:50.569718 1777 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 16:20:50.638162 kubelet[1777]: I0213 16:20:50.638018 1777 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/962f3ced-5c84-4430-8fb6-3f0aa6cbe054-data\") pod \"nfs-server-provisioner-0\" (UID: \"962f3ced-5c84-4430-8fb6-3f0aa6cbe054\") " pod="default/nfs-server-provisioner-0" Feb 13 16:20:50.638162 kubelet[1777]: I0213 16:20:50.638084 1777 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6jdbv\" (UniqueName: \"kubernetes.io/projected/962f3ced-5c84-4430-8fb6-3f0aa6cbe054-kube-api-access-6jdbv\") pod \"nfs-server-provisioner-0\" (UID: \"962f3ced-5c84-4430-8fb6-3f0aa6cbe054\") " pod="default/nfs-server-provisioner-0" Feb 13 16:20:50.772389 containerd[1468]: time="2025-02-13T16:20:50.772054932Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:962f3ced-5c84-4430-8fb6-3f0aa6cbe054,Namespace:default,Attempt:0,}" Feb 13 16:20:51.064799 systemd-networkd[1367]: cali60e51b789ff: Link UP Feb 13 16:20:51.067023 systemd-networkd[1367]: cali60e51b789ff: Gained carrier Feb 13 16:20:51.112555 containerd[1468]: 2025-02-13 16:20:50.842 [INFO][3445] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {146.190.141.99-k8s-nfs--server--provisioner--0-eth0 nfs-server-provisioner- default 962f3ced-5c84-4430-8fb6-3f0aa6cbe054 1260 0 2025-02-13 16:20:50 +0000 UTC map[app:nfs-server-provisioner 
apps.kubernetes.io/pod-index:0 chart:nfs-server-provisioner-1.8.0 controller-revision-hash:nfs-server-provisioner-d5cbb7f57 heritage:Helm projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:nfs-server-provisioner release:nfs-server-provisioner statefulset.kubernetes.io/pod-name:nfs-server-provisioner-0] map[] [] [] []} {k8s 146.190.141.99 nfs-server-provisioner-0 eth0 nfs-server-provisioner [] [] [kns.default ksa.default.nfs-server-provisioner] cali60e51b789ff [{nfs TCP 2049 0 } {nfs-udp UDP 2049 0 } {nlockmgr TCP 32803 0 } {nlockmgr-udp UDP 32803 0 } {mountd TCP 20048 0 } {mountd-udp UDP 20048 0 } {rquotad TCP 875 0 } {rquotad-udp UDP 875 0 } {rpcbind TCP 111 0 } {rpcbind-udp UDP 111 0 } {statd TCP 662 0 } {statd-udp UDP 662 0 }] []}} ContainerID="f8a5c4c1f597261e84187ae267cef2e5b41ffb8bff05d5a184e899973134e6aa" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="146.190.141.99-k8s-nfs--server--provisioner--0-" Feb 13 16:20:51.112555 containerd[1468]: 2025-02-13 16:20:50.842 [INFO][3445] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="f8a5c4c1f597261e84187ae267cef2e5b41ffb8bff05d5a184e899973134e6aa" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="146.190.141.99-k8s-nfs--server--provisioner--0-eth0" Feb 13 16:20:51.112555 containerd[1468]: 2025-02-13 16:20:50.899 [INFO][3457] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f8a5c4c1f597261e84187ae267cef2e5b41ffb8bff05d5a184e899973134e6aa" HandleID="k8s-pod-network.f8a5c4c1f597261e84187ae267cef2e5b41ffb8bff05d5a184e899973134e6aa" Workload="146.190.141.99-k8s-nfs--server--provisioner--0-eth0" Feb 13 16:20:51.112555 containerd[1468]: 2025-02-13 16:20:50.926 [INFO][3457] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="f8a5c4c1f597261e84187ae267cef2e5b41ffb8bff05d5a184e899973134e6aa" 
HandleID="k8s-pod-network.f8a5c4c1f597261e84187ae267cef2e5b41ffb8bff05d5a184e899973134e6aa" Workload="146.190.141.99-k8s-nfs--server--provisioner--0-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000291580), Attrs:map[string]string{"namespace":"default", "node":"146.190.141.99", "pod":"nfs-server-provisioner-0", "timestamp":"2025-02-13 16:20:50.89949622 +0000 UTC"}, Hostname:"146.190.141.99", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 13 16:20:51.112555 containerd[1468]: 2025-02-13 16:20:50.926 [INFO][3457] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 16:20:51.112555 containerd[1468]: 2025-02-13 16:20:50.926 [INFO][3457] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 16:20:51.112555 containerd[1468]: 2025-02-13 16:20:50.926 [INFO][3457] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '146.190.141.99' Feb 13 16:20:51.112555 containerd[1468]: 2025-02-13 16:20:50.948 [INFO][3457] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.f8a5c4c1f597261e84187ae267cef2e5b41ffb8bff05d5a184e899973134e6aa" host="146.190.141.99" Feb 13 16:20:51.112555 containerd[1468]: 2025-02-13 16:20:50.981 [INFO][3457] ipam/ipam.go 372: Looking up existing affinities for host host="146.190.141.99" Feb 13 16:20:51.112555 containerd[1468]: 2025-02-13 16:20:51.000 [INFO][3457] ipam/ipam.go 489: Trying affinity for 192.168.52.128/26 host="146.190.141.99" Feb 13 16:20:51.112555 containerd[1468]: 2025-02-13 16:20:51.011 [INFO][3457] ipam/ipam.go 155: Attempting to load block cidr=192.168.52.128/26 host="146.190.141.99" Feb 13 16:20:51.112555 containerd[1468]: 2025-02-13 16:20:51.020 [INFO][3457] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.52.128/26 host="146.190.141.99" Feb 13 16:20:51.112555 
containerd[1468]: 2025-02-13 16:20:51.020 [INFO][3457] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.52.128/26 handle="k8s-pod-network.f8a5c4c1f597261e84187ae267cef2e5b41ffb8bff05d5a184e899973134e6aa" host="146.190.141.99" Feb 13 16:20:51.112555 containerd[1468]: 2025-02-13 16:20:51.024 [INFO][3457] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.f8a5c4c1f597261e84187ae267cef2e5b41ffb8bff05d5a184e899973134e6aa Feb 13 16:20:51.112555 containerd[1468]: 2025-02-13 16:20:51.037 [INFO][3457] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.52.128/26 handle="k8s-pod-network.f8a5c4c1f597261e84187ae267cef2e5b41ffb8bff05d5a184e899973134e6aa" host="146.190.141.99" Feb 13 16:20:51.112555 containerd[1468]: 2025-02-13 16:20:51.057 [INFO][3457] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.52.131/26] block=192.168.52.128/26 handle="k8s-pod-network.f8a5c4c1f597261e84187ae267cef2e5b41ffb8bff05d5a184e899973134e6aa" host="146.190.141.99" Feb 13 16:20:51.112555 containerd[1468]: 2025-02-13 16:20:51.057 [INFO][3457] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.52.131/26] handle="k8s-pod-network.f8a5c4c1f597261e84187ae267cef2e5b41ffb8bff05d5a184e899973134e6aa" host="146.190.141.99" Feb 13 16:20:51.112555 containerd[1468]: 2025-02-13 16:20:51.057 [INFO][3457] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Feb 13 16:20:51.112555 containerd[1468]: 2025-02-13 16:20:51.057 [INFO][3457] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.52.131/26] IPv6=[] ContainerID="f8a5c4c1f597261e84187ae267cef2e5b41ffb8bff05d5a184e899973134e6aa" HandleID="k8s-pod-network.f8a5c4c1f597261e84187ae267cef2e5b41ffb8bff05d5a184e899973134e6aa" Workload="146.190.141.99-k8s-nfs--server--provisioner--0-eth0" Feb 13 16:20:51.113736 containerd[1468]: 2025-02-13 16:20:51.059 [INFO][3445] cni-plugin/k8s.go 386: Populated endpoint ContainerID="f8a5c4c1f597261e84187ae267cef2e5b41ffb8bff05d5a184e899973134e6aa" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="146.190.141.99-k8s-nfs--server--provisioner--0-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"146.190.141.99-k8s-nfs--server--provisioner--0-eth0", GenerateName:"nfs-server-provisioner-", Namespace:"default", SelfLink:"", UID:"962f3ced-5c84-4430-8fb6-3f0aa6cbe054", ResourceVersion:"1260", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 16, 20, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nfs-server-provisioner", "apps.kubernetes.io/pod-index":"0", "chart":"nfs-server-provisioner-1.8.0", "controller-revision-hash":"nfs-server-provisioner-d5cbb7f57", "heritage":"Helm", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"nfs-server-provisioner", "release":"nfs-server-provisioner", "statefulset.kubernetes.io/pod-name":"nfs-server-provisioner-0"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"146.190.141.99", ContainerID:"", Pod:"nfs-server-provisioner-0", Endpoint:"eth0", 
ServiceAccountName:"nfs-server-provisioner", IPNetworks:[]string{"192.168.52.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.nfs-server-provisioner"}, InterfaceName:"cali60e51b789ff", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"nfs", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nfs-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x296, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x296, 
HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 16:20:51.113736 containerd[1468]: 2025-02-13 16:20:51.060 [INFO][3445] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.52.131/32] ContainerID="f8a5c4c1f597261e84187ae267cef2e5b41ffb8bff05d5a184e899973134e6aa" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="146.190.141.99-k8s-nfs--server--provisioner--0-eth0" Feb 13 16:20:51.113736 containerd[1468]: 2025-02-13 16:20:51.060 [INFO][3445] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali60e51b789ff ContainerID="f8a5c4c1f597261e84187ae267cef2e5b41ffb8bff05d5a184e899973134e6aa" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="146.190.141.99-k8s-nfs--server--provisioner--0-eth0" Feb 13 16:20:51.113736 containerd[1468]: 2025-02-13 16:20:51.066 [INFO][3445] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f8a5c4c1f597261e84187ae267cef2e5b41ffb8bff05d5a184e899973134e6aa" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="146.190.141.99-k8s-nfs--server--provisioner--0-eth0" Feb 13 16:20:51.113983 containerd[1468]: 2025-02-13 16:20:51.067 [INFO][3445] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="f8a5c4c1f597261e84187ae267cef2e5b41ffb8bff05d5a184e899973134e6aa" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="146.190.141.99-k8s-nfs--server--provisioner--0-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"146.190.141.99-k8s-nfs--server--provisioner--0-eth0", GenerateName:"nfs-server-provisioner-", Namespace:"default", SelfLink:"", UID:"962f3ced-5c84-4430-8fb6-3f0aa6cbe054", ResourceVersion:"1260", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 16, 20, 50, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), 
Labels:map[string]string{"app":"nfs-server-provisioner", "apps.kubernetes.io/pod-index":"0", "chart":"nfs-server-provisioner-1.8.0", "controller-revision-hash":"nfs-server-provisioner-d5cbb7f57", "heritage":"Helm", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"nfs-server-provisioner", "release":"nfs-server-provisioner", "statefulset.kubernetes.io/pod-name":"nfs-server-provisioner-0"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"146.190.141.99", ContainerID:"f8a5c4c1f597261e84187ae267cef2e5b41ffb8bff05d5a184e899973134e6aa", Pod:"nfs-server-provisioner-0", Endpoint:"eth0", ServiceAccountName:"nfs-server-provisioner", IPNetworks:[]string{"192.168.52.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.nfs-server-provisioner"}, InterfaceName:"cali60e51b789ff", MAC:"ce:df:d7:ce:d8:90", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"nfs", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nfs-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, 
v3.WorkloadEndpointPort{Name:"rquotad", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x296, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x296, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 16:20:51.113983 containerd[1468]: 2025-02-13 16:20:51.110 [INFO][3445] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="f8a5c4c1f597261e84187ae267cef2e5b41ffb8bff05d5a184e899973134e6aa" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="146.190.141.99-k8s-nfs--server--provisioner--0-eth0" Feb 13 16:20:51.149369 containerd[1468]: time="2025-02-13T16:20:51.147600606Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 16:20:51.149369 containerd[1468]: time="2025-02-13T16:20:51.149092634Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 16:20:51.149369 containerd[1468]: time="2025-02-13T16:20:51.149115031Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 16:20:51.149369 containerd[1468]: time="2025-02-13T16:20:51.149248024Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 16:20:51.182632 systemd[1]: Started cri-containerd-f8a5c4c1f597261e84187ae267cef2e5b41ffb8bff05d5a184e899973134e6aa.scope - libcontainer container f8a5c4c1f597261e84187ae267cef2e5b41ffb8bff05d5a184e899973134e6aa.
Feb 13 16:20:51.235926 containerd[1468]: time="2025-02-13T16:20:51.235881051Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:962f3ced-5c84-4430-8fb6-3f0aa6cbe054,Namespace:default,Attempt:0,} returns sandbox id \"f8a5c4c1f597261e84187ae267cef2e5b41ffb8bff05d5a184e899973134e6aa\""
Feb 13 16:20:51.238544 containerd[1468]: time="2025-02-13T16:20:51.238504207Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\""
Feb 13 16:20:51.520721 kubelet[1777]: E0213 16:20:51.520627 1777 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 16:20:51.570505 kubelet[1777]: E0213 16:20:51.570381 1777 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 16:20:51.576978 update_engine[1451]: I20250213 16:20:51.576195 1451 update_attempter.cc:509] Updating boot flags...
Feb 13 16:20:51.614344 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (3523) Feb 13 16:20:51.670042 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (3527) Feb 13 16:20:51.763506 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (3527) Feb 13 16:20:52.571111 kubelet[1777]: E0213 16:20:52.571001 1777 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 16:20:52.972550 systemd-networkd[1367]: cali60e51b789ff: Gained IPv6LL Feb 13 16:20:53.391854 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1022179914.mount: Deactivated successfully. Feb 13 16:20:53.572339 kubelet[1777]: E0213 16:20:53.572293 1777 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 16:20:54.574547 kubelet[1777]: E0213 16:20:54.574494 1777 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 16:20:55.549875 containerd[1468]: time="2025-02-13T16:20:55.549475313Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 16:20:55.550651 containerd[1468]: time="2025-02-13T16:20:55.550596544Z" level=info msg="stop pulling image registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8: active requests=0, bytes read=91039406" Feb 13 16:20:55.550825 containerd[1468]: time="2025-02-13T16:20:55.550797502Z" level=info msg="ImageCreate event name:\"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 16:20:55.557720 containerd[1468]: time="2025-02-13T16:20:55.557666072Z" level=info msg="ImageCreate event 
name:\"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 16:20:55.559169 containerd[1468]: time="2025-02-13T16:20:55.559128204Z" level=info msg="Pulled image \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" with image id \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\", repo tag \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\", repo digest \"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\", size \"91036984\" in 4.320586675s" Feb 13 16:20:55.559169 containerd[1468]: time="2025-02-13T16:20:55.559167582Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" returns image reference \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\"" Feb 13 16:20:55.578591 kubelet[1777]: E0213 16:20:55.578359 1777 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 16:20:55.609834 containerd[1468]: time="2025-02-13T16:20:55.609168796Z" level=info msg="CreateContainer within sandbox \"f8a5c4c1f597261e84187ae267cef2e5b41ffb8bff05d5a184e899973134e6aa\" for container &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,}" Feb 13 16:20:55.670507 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount422602324.mount: Deactivated successfully. 
Feb 13 16:20:55.673935 containerd[1468]: time="2025-02-13T16:20:55.673891463Z" level=info msg="CreateContainer within sandbox \"f8a5c4c1f597261e84187ae267cef2e5b41ffb8bff05d5a184e899973134e6aa\" for &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,} returns container id \"4c94a9a10f2ce1cb509173b529f0d7f100185fada7408b2d991ce81ef9095c50\"" Feb 13 16:20:55.675634 containerd[1468]: time="2025-02-13T16:20:55.675451312Z" level=info msg="StartContainer for \"4c94a9a10f2ce1cb509173b529f0d7f100185fada7408b2d991ce81ef9095c50\"" Feb 13 16:20:55.749696 systemd[1]: Started cri-containerd-4c94a9a10f2ce1cb509173b529f0d7f100185fada7408b2d991ce81ef9095c50.scope - libcontainer container 4c94a9a10f2ce1cb509173b529f0d7f100185fada7408b2d991ce81ef9095c50. Feb 13 16:20:55.795131 containerd[1468]: time="2025-02-13T16:20:55.795072537Z" level=info msg="StartContainer for \"4c94a9a10f2ce1cb509173b529f0d7f100185fada7408b2d991ce81ef9095c50\" returns successfully" Feb 13 16:20:56.138643 kubelet[1777]: I0213 16:20:56.138313 1777 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nfs-server-provisioner-0" podStartSLOduration=1.797907178 podStartE2EDuration="6.138280452s" podCreationTimestamp="2025-02-13 16:20:50 +0000 UTC" firstStartedPulling="2025-02-13 16:20:51.238118369 +0000 UTC m=+40.349086010" lastFinishedPulling="2025-02-13 16:20:55.578491636 +0000 UTC m=+44.689459284" observedRunningTime="2025-02-13 16:20:56.134378086 +0000 UTC m=+45.245345747" watchObservedRunningTime="2025-02-13 16:20:56.138280452 +0000 UTC m=+45.249248108" Feb 13 16:20:56.579487 kubelet[1777]: E0213 16:20:56.578948 1777 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 16:20:57.579530 kubelet[1777]: E0213 16:20:57.579459 1777 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 16:20:58.579662 kubelet[1777]: E0213 16:20:58.579612 1777 
file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 16:20:59.580181 kubelet[1777]: E0213 16:20:59.580116 1777 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 16:21:00.581045 kubelet[1777]: E0213 16:21:00.580956 1777 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 16:21:01.584434 kubelet[1777]: E0213 16:21:01.581125 1777 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 16:21:02.581859 kubelet[1777]: E0213 16:21:02.581791 1777 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 16:21:03.582329 kubelet[1777]: E0213 16:21:03.582238 1777 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 16:21:04.583449 kubelet[1777]: E0213 16:21:04.583386 1777 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 16:21:05.585023 kubelet[1777]: E0213 16:21:05.584933 1777 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 16:21:05.608834 systemd[1]: Created slice kubepods-besteffort-pod999ad88c_aff0_4686_8e4f_718101f10982.slice - libcontainer container kubepods-besteffort-pod999ad88c_aff0_4686_8e4f_718101f10982.slice. 
Feb 13 16:21:05.762870 kubelet[1777]: I0213 16:21:05.762803 1777 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-0122c032-a962-44a8-afe1-7450ca5322d5\" (UniqueName: \"kubernetes.io/nfs/999ad88c-aff0-4686-8e4f-718101f10982-pvc-0122c032-a962-44a8-afe1-7450ca5322d5\") pod \"test-pod-1\" (UID: \"999ad88c-aff0-4686-8e4f-718101f10982\") " pod="default/test-pod-1" Feb 13 16:21:05.763341 kubelet[1777]: I0213 16:21:05.763251 1777 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tkbh4\" (UniqueName: \"kubernetes.io/projected/999ad88c-aff0-4686-8e4f-718101f10982-kube-api-access-tkbh4\") pod \"test-pod-1\" (UID: \"999ad88c-aff0-4686-8e4f-718101f10982\") " pod="default/test-pod-1" Feb 13 16:21:05.911319 kernel: FS-Cache: Loaded Feb 13 16:21:05.998781 kernel: RPC: Registered named UNIX socket transport module. Feb 13 16:21:05.999005 kernel: RPC: Registered udp transport module. Feb 13 16:21:05.999045 kernel: RPC: Registered tcp transport module. Feb 13 16:21:05.999074 kernel: RPC: Registered tcp-with-tls transport module. Feb 13 16:21:05.999710 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module. 
Feb 13 16:21:06.414436 kernel: NFS: Registering the id_resolver key type Feb 13 16:21:06.416347 kernel: Key type id_resolver registered Feb 13 16:21:06.418441 kernel: Key type id_legacy registered Feb 13 16:21:06.466030 nfsidmap[3663]: nss_getpwnam: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain '2.1-4-8892aa3964' Feb 13 16:21:06.477668 nfsidmap[3664]: nss_name_to_gid: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain '2.1-4-8892aa3964' Feb 13 16:21:06.513933 containerd[1468]: time="2025-02-13T16:21:06.513501817Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:999ad88c-aff0-4686-8e4f-718101f10982,Namespace:default,Attempt:0,}" Feb 13 16:21:06.585976 kubelet[1777]: E0213 16:21:06.585867 1777 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 16:21:06.893476 systemd-networkd[1367]: cali5ec59c6bf6e: Link UP Feb 13 16:21:06.893858 systemd-networkd[1367]: cali5ec59c6bf6e: Gained carrier Feb 13 16:21:06.936028 containerd[1468]: 2025-02-13 16:21:06.603 [INFO][3665] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {146.190.141.99-k8s-test--pod--1-eth0 default 999ad88c-aff0-4686-8e4f-718101f10982 1351 0 2025-02-13 16:20:51 +0000 UTC map[projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s 146.190.141.99 test-pod-1 eth0 default [] [] [kns.default ksa.default.default] cali5ec59c6bf6e [] []}} ContainerID="855d54cd375399b73022fe219ffadb100f930cdda64d0d6df5134686212230a6" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="146.190.141.99-k8s-test--pod--1-" Feb 13 16:21:06.936028 containerd[1468]: 2025-02-13 16:21:06.603 [INFO][3665] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="855d54cd375399b73022fe219ffadb100f930cdda64d0d6df5134686212230a6" 
Namespace="default" Pod="test-pod-1" WorkloadEndpoint="146.190.141.99-k8s-test--pod--1-eth0" Feb 13 16:21:06.936028 containerd[1468]: 2025-02-13 16:21:06.649 [INFO][3676] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="855d54cd375399b73022fe219ffadb100f930cdda64d0d6df5134686212230a6" HandleID="k8s-pod-network.855d54cd375399b73022fe219ffadb100f930cdda64d0d6df5134686212230a6" Workload="146.190.141.99-k8s-test--pod--1-eth0" Feb 13 16:21:06.936028 containerd[1468]: 2025-02-13 16:21:06.672 [INFO][3676] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="855d54cd375399b73022fe219ffadb100f930cdda64d0d6df5134686212230a6" HandleID="k8s-pod-network.855d54cd375399b73022fe219ffadb100f930cdda64d0d6df5134686212230a6" Workload="146.190.141.99-k8s-test--pod--1-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003355c0), Attrs:map[string]string{"namespace":"default", "node":"146.190.141.99", "pod":"test-pod-1", "timestamp":"2025-02-13 16:21:06.649350132 +0000 UTC"}, Hostname:"146.190.141.99", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 13 16:21:06.936028 containerd[1468]: 2025-02-13 16:21:06.672 [INFO][3676] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 16:21:06.936028 containerd[1468]: 2025-02-13 16:21:06.672 [INFO][3676] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Feb 13 16:21:06.936028 containerd[1468]: 2025-02-13 16:21:06.672 [INFO][3676] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '146.190.141.99' Feb 13 16:21:06.936028 containerd[1468]: 2025-02-13 16:21:06.693 [INFO][3676] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.855d54cd375399b73022fe219ffadb100f930cdda64d0d6df5134686212230a6" host="146.190.141.99" Feb 13 16:21:06.936028 containerd[1468]: 2025-02-13 16:21:06.771 [INFO][3676] ipam/ipam.go 372: Looking up existing affinities for host host="146.190.141.99" Feb 13 16:21:06.936028 containerd[1468]: 2025-02-13 16:21:06.797 [INFO][3676] ipam/ipam.go 489: Trying affinity for 192.168.52.128/26 host="146.190.141.99" Feb 13 16:21:06.936028 containerd[1468]: 2025-02-13 16:21:06.803 [INFO][3676] ipam/ipam.go 155: Attempting to load block cidr=192.168.52.128/26 host="146.190.141.99" Feb 13 16:21:06.936028 containerd[1468]: 2025-02-13 16:21:06.817 [INFO][3676] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.52.128/26 host="146.190.141.99" Feb 13 16:21:06.936028 containerd[1468]: 2025-02-13 16:21:06.817 [INFO][3676] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.52.128/26 handle="k8s-pod-network.855d54cd375399b73022fe219ffadb100f930cdda64d0d6df5134686212230a6" host="146.190.141.99" Feb 13 16:21:06.936028 containerd[1468]: 2025-02-13 16:21:06.849 [INFO][3676] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.855d54cd375399b73022fe219ffadb100f930cdda64d0d6df5134686212230a6 Feb 13 16:21:06.936028 containerd[1468]: 2025-02-13 16:21:06.863 [INFO][3676] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.52.128/26 handle="k8s-pod-network.855d54cd375399b73022fe219ffadb100f930cdda64d0d6df5134686212230a6" host="146.190.141.99" Feb 13 16:21:06.936028 containerd[1468]: 2025-02-13 16:21:06.886 [INFO][3676] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.52.132/26] block=192.168.52.128/26 
handle="k8s-pod-network.855d54cd375399b73022fe219ffadb100f930cdda64d0d6df5134686212230a6" host="146.190.141.99" Feb 13 16:21:06.936028 containerd[1468]: 2025-02-13 16:21:06.887 [INFO][3676] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.52.132/26] handle="k8s-pod-network.855d54cd375399b73022fe219ffadb100f930cdda64d0d6df5134686212230a6" host="146.190.141.99" Feb 13 16:21:06.936028 containerd[1468]: 2025-02-13 16:21:06.887 [INFO][3676] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 16:21:06.936028 containerd[1468]: 2025-02-13 16:21:06.887 [INFO][3676] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.52.132/26] IPv6=[] ContainerID="855d54cd375399b73022fe219ffadb100f930cdda64d0d6df5134686212230a6" HandleID="k8s-pod-network.855d54cd375399b73022fe219ffadb100f930cdda64d0d6df5134686212230a6" Workload="146.190.141.99-k8s-test--pod--1-eth0" Feb 13 16:21:06.936028 containerd[1468]: 2025-02-13 16:21:06.889 [INFO][3665] cni-plugin/k8s.go 386: Populated endpoint ContainerID="855d54cd375399b73022fe219ffadb100f930cdda64d0d6df5134686212230a6" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="146.190.141.99-k8s-test--pod--1-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"146.190.141.99-k8s-test--pod--1-eth0", GenerateName:"", Namespace:"default", SelfLink:"", UID:"999ad88c-aff0-4686-8e4f-718101f10982", ResourceVersion:"1351", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 16, 20, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", 
Workload:"", Node:"146.190.141.99", ContainerID:"", Pod:"test-pod-1", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.52.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali5ec59c6bf6e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 16:21:06.939224 containerd[1468]: 2025-02-13 16:21:06.889 [INFO][3665] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.52.132/32] ContainerID="855d54cd375399b73022fe219ffadb100f930cdda64d0d6df5134686212230a6" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="146.190.141.99-k8s-test--pod--1-eth0" Feb 13 16:21:06.939224 containerd[1468]: 2025-02-13 16:21:06.889 [INFO][3665] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5ec59c6bf6e ContainerID="855d54cd375399b73022fe219ffadb100f930cdda64d0d6df5134686212230a6" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="146.190.141.99-k8s-test--pod--1-eth0" Feb 13 16:21:06.939224 containerd[1468]: 2025-02-13 16:21:06.893 [INFO][3665] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="855d54cd375399b73022fe219ffadb100f930cdda64d0d6df5134686212230a6" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="146.190.141.99-k8s-test--pod--1-eth0" Feb 13 16:21:06.939224 containerd[1468]: 2025-02-13 16:21:06.895 [INFO][3665] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="855d54cd375399b73022fe219ffadb100f930cdda64d0d6df5134686212230a6" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="146.190.141.99-k8s-test--pod--1-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"146.190.141.99-k8s-test--pod--1-eth0", GenerateName:"", Namespace:"default", SelfLink:"", UID:"999ad88c-aff0-4686-8e4f-718101f10982", ResourceVersion:"1351", 
Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 16, 20, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"146.190.141.99", ContainerID:"855d54cd375399b73022fe219ffadb100f930cdda64d0d6df5134686212230a6", Pod:"test-pod-1", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.52.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali5ec59c6bf6e", MAC:"6e:43:2a:b8:33:3c", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 16:21:06.939224 containerd[1468]: 2025-02-13 16:21:06.930 [INFO][3665] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="855d54cd375399b73022fe219ffadb100f930cdda64d0d6df5134686212230a6" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="146.190.141.99-k8s-test--pod--1-eth0" Feb 13 16:21:06.985397 containerd[1468]: time="2025-02-13T16:21:06.984443591Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 16:21:06.985397 containerd[1468]: time="2025-02-13T16:21:06.984517211Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 16:21:06.985397 containerd[1468]: time="2025-02-13T16:21:06.984529139Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 16:21:06.985397 containerd[1468]: time="2025-02-13T16:21:06.984666697Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 16:21:07.026742 systemd[1]: Started cri-containerd-855d54cd375399b73022fe219ffadb100f930cdda64d0d6df5134686212230a6.scope - libcontainer container 855d54cd375399b73022fe219ffadb100f930cdda64d0d6df5134686212230a6. Feb 13 16:21:07.093841 containerd[1468]: time="2025-02-13T16:21:07.093679907Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:999ad88c-aff0-4686-8e4f-718101f10982,Namespace:default,Attempt:0,} returns sandbox id \"855d54cd375399b73022fe219ffadb100f930cdda64d0d6df5134686212230a6\"" Feb 13 16:21:07.097313 containerd[1468]: time="2025-02-13T16:21:07.097107115Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Feb 13 16:21:07.549380 containerd[1468]: time="2025-02-13T16:21:07.549309175Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 16:21:07.550399 containerd[1468]: time="2025-02-13T16:21:07.550336712Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=61" Feb 13 16:21:07.555112 containerd[1468]: time="2025-02-13T16:21:07.554832740Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:fe94eb5f0c9c8d0ca277aa8cd5940f1faf5970175bf373932babc578545deda8\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:d9bc3da999da9f147f1277c7b18292486847e8f39f95fcf81d914d0c22815faf\", size \"73054371\" in 457.674845ms" Feb 13 16:21:07.555112 containerd[1468]: time="2025-02-13T16:21:07.554889899Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:fe94eb5f0c9c8d0ca277aa8cd5940f1faf5970175bf373932babc578545deda8\"" Feb 13 
16:21:07.559116 containerd[1468]: time="2025-02-13T16:21:07.558888005Z" level=info msg="CreateContainer within sandbox \"855d54cd375399b73022fe219ffadb100f930cdda64d0d6df5134686212230a6\" for container &ContainerMetadata{Name:test,Attempt:0,}" Feb 13 16:21:07.580571 containerd[1468]: time="2025-02-13T16:21:07.580143181Z" level=info msg="CreateContainer within sandbox \"855d54cd375399b73022fe219ffadb100f930cdda64d0d6df5134686212230a6\" for &ContainerMetadata{Name:test,Attempt:0,} returns container id \"8f411a08e9193b2f407a292b6b9e0ef562336bdca8fd1205b49987d6251876ad\"" Feb 13 16:21:07.581491 containerd[1468]: time="2025-02-13T16:21:07.581383867Z" level=info msg="StartContainer for \"8f411a08e9193b2f407a292b6b9e0ef562336bdca8fd1205b49987d6251876ad\"" Feb 13 16:21:07.587199 kubelet[1777]: E0213 16:21:07.586795 1777 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 16:21:07.647803 systemd[1]: Started cri-containerd-8f411a08e9193b2f407a292b6b9e0ef562336bdca8fd1205b49987d6251876ad.scope - libcontainer container 8f411a08e9193b2f407a292b6b9e0ef562336bdca8fd1205b49987d6251876ad. 
Feb 13 16:21:07.695904 containerd[1468]: time="2025-02-13T16:21:07.695744933Z" level=info msg="StartContainer for \"8f411a08e9193b2f407a292b6b9e0ef562336bdca8fd1205b49987d6251876ad\" returns successfully" Feb 13 16:21:08.587652 kubelet[1777]: E0213 16:21:08.587555 1777 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 16:21:08.814359 systemd-networkd[1367]: cali5ec59c6bf6e: Gained IPv6LL Feb 13 16:21:09.588392 kubelet[1777]: E0213 16:21:09.588211 1777 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 16:21:10.589046 kubelet[1777]: E0213 16:21:10.588977 1777 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 16:21:11.520649 kubelet[1777]: E0213 16:21:11.520571 1777 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 16:21:11.578772 containerd[1468]: time="2025-02-13T16:21:11.578733033Z" level=info msg="StopPodSandbox for \"613977172e3b6e5b9170fe9dbb7a3e6916ed20a773f6ffe12c1aa7c3af75c1ff\"" Feb 13 16:21:11.579471 containerd[1468]: time="2025-02-13T16:21:11.578884065Z" level=info msg="TearDown network for sandbox \"613977172e3b6e5b9170fe9dbb7a3e6916ed20a773f6ffe12c1aa7c3af75c1ff\" successfully" Feb 13 16:21:11.579471 containerd[1468]: time="2025-02-13T16:21:11.578896121Z" level=info msg="StopPodSandbox for \"613977172e3b6e5b9170fe9dbb7a3e6916ed20a773f6ffe12c1aa7c3af75c1ff\" returns successfully" Feb 13 16:21:11.584560 containerd[1468]: time="2025-02-13T16:21:11.584497577Z" level=info msg="RemovePodSandbox for \"613977172e3b6e5b9170fe9dbb7a3e6916ed20a773f6ffe12c1aa7c3af75c1ff\"" Feb 13 16:21:11.590764 kubelet[1777]: E0213 16:21:11.590708 1777 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 16:21:11.591817 
containerd[1468]: time="2025-02-13T16:21:11.591716474Z" level=info msg="Forcibly stopping sandbox \"613977172e3b6e5b9170fe9dbb7a3e6916ed20a773f6ffe12c1aa7c3af75c1ff\"" Feb 13 16:21:11.591962 containerd[1468]: time="2025-02-13T16:21:11.591890618Z" level=info msg="TearDown network for sandbox \"613977172e3b6e5b9170fe9dbb7a3e6916ed20a773f6ffe12c1aa7c3af75c1ff\" successfully" Feb 13 16:21:11.609775 containerd[1468]: time="2025-02-13T16:21:11.609699746Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"613977172e3b6e5b9170fe9dbb7a3e6916ed20a773f6ffe12c1aa7c3af75c1ff\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Feb 13 16:21:11.609939 containerd[1468]: time="2025-02-13T16:21:11.609836832Z" level=info msg="RemovePodSandbox \"613977172e3b6e5b9170fe9dbb7a3e6916ed20a773f6ffe12c1aa7c3af75c1ff\" returns successfully" Feb 13 16:21:11.611742 containerd[1468]: time="2025-02-13T16:21:11.610575909Z" level=info msg="StopPodSandbox for \"2ba7ed0ce9016048fae37c48969f0ca85bfd4730ba82e1f3ea9ed27c3550356d\"" Feb 13 16:21:11.611742 containerd[1468]: time="2025-02-13T16:21:11.610741120Z" level=info msg="TearDown network for sandbox \"2ba7ed0ce9016048fae37c48969f0ca85bfd4730ba82e1f3ea9ed27c3550356d\" successfully" Feb 13 16:21:11.611742 containerd[1468]: time="2025-02-13T16:21:11.610759322Z" level=info msg="StopPodSandbox for \"2ba7ed0ce9016048fae37c48969f0ca85bfd4730ba82e1f3ea9ed27c3550356d\" returns successfully" Feb 13 16:21:11.612458 containerd[1468]: time="2025-02-13T16:21:11.612422260Z" level=info msg="RemovePodSandbox for \"2ba7ed0ce9016048fae37c48969f0ca85bfd4730ba82e1f3ea9ed27c3550356d\"" Feb 13 16:21:11.612546 containerd[1468]: time="2025-02-13T16:21:11.612487121Z" level=info msg="Forcibly stopping sandbox \"2ba7ed0ce9016048fae37c48969f0ca85bfd4730ba82e1f3ea9ed27c3550356d\"" Feb 13 16:21:11.612664 containerd[1468]: time="2025-02-13T16:21:11.612588885Z" level=info msg="TearDown network 
for sandbox \"2ba7ed0ce9016048fae37c48969f0ca85bfd4730ba82e1f3ea9ed27c3550356d\" successfully" Feb 13 16:21:11.615163 containerd[1468]: time="2025-02-13T16:21:11.615101880Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"2ba7ed0ce9016048fae37c48969f0ca85bfd4730ba82e1f3ea9ed27c3550356d\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Feb 13 16:21:11.615419 containerd[1468]: time="2025-02-13T16:21:11.615186661Z" level=info msg="RemovePodSandbox \"2ba7ed0ce9016048fae37c48969f0ca85bfd4730ba82e1f3ea9ed27c3550356d\" returns successfully" Feb 13 16:21:11.616072 containerd[1468]: time="2025-02-13T16:21:11.616016063Z" level=info msg="StopPodSandbox for \"c43870bcade70f08d3f38c52fd559a83579f7ef3c7b0d2550a3102bc13421a51\"" Feb 13 16:21:11.616190 containerd[1468]: time="2025-02-13T16:21:11.616176657Z" level=info msg="TearDown network for sandbox \"c43870bcade70f08d3f38c52fd559a83579f7ef3c7b0d2550a3102bc13421a51\" successfully" Feb 13 16:21:11.616239 containerd[1468]: time="2025-02-13T16:21:11.616193731Z" level=info msg="StopPodSandbox for \"c43870bcade70f08d3f38c52fd559a83579f7ef3c7b0d2550a3102bc13421a51\" returns successfully" Feb 13 16:21:11.616769 containerd[1468]: time="2025-02-13T16:21:11.616737822Z" level=info msg="RemovePodSandbox for \"c43870bcade70f08d3f38c52fd559a83579f7ef3c7b0d2550a3102bc13421a51\"" Feb 13 16:21:11.616769 containerd[1468]: time="2025-02-13T16:21:11.616776537Z" level=info msg="Forcibly stopping sandbox \"c43870bcade70f08d3f38c52fd559a83579f7ef3c7b0d2550a3102bc13421a51\"" Feb 13 16:21:11.616950 containerd[1468]: time="2025-02-13T16:21:11.616885521Z" level=info msg="TearDown network for sandbox \"c43870bcade70f08d3f38c52fd559a83579f7ef3c7b0d2550a3102bc13421a51\" successfully" Feb 13 16:21:11.620946 containerd[1468]: time="2025-02-13T16:21:11.620852490Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID 
\"c43870bcade70f08d3f38c52fd559a83579f7ef3c7b0d2550a3102bc13421a51\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Feb 13 16:21:11.620946 containerd[1468]: time="2025-02-13T16:21:11.620928983Z" level=info msg="RemovePodSandbox \"c43870bcade70f08d3f38c52fd559a83579f7ef3c7b0d2550a3102bc13421a51\" returns successfully" Feb 13 16:21:11.621961 containerd[1468]: time="2025-02-13T16:21:11.621878107Z" level=info msg="StopPodSandbox for \"734beb6f31192e9c686f25635dd8565bc5ee8615abe921bcfde90fedcaf1351a\"" Feb 13 16:21:11.622146 containerd[1468]: time="2025-02-13T16:21:11.622110504Z" level=info msg="TearDown network for sandbox \"734beb6f31192e9c686f25635dd8565bc5ee8615abe921bcfde90fedcaf1351a\" successfully" Feb 13 16:21:11.622146 containerd[1468]: time="2025-02-13T16:21:11.622136414Z" level=info msg="StopPodSandbox for \"734beb6f31192e9c686f25635dd8565bc5ee8615abe921bcfde90fedcaf1351a\" returns successfully" Feb 13 16:21:11.622769 containerd[1468]: time="2025-02-13T16:21:11.622699167Z" level=info msg="RemovePodSandbox for \"734beb6f31192e9c686f25635dd8565bc5ee8615abe921bcfde90fedcaf1351a\"" Feb 13 16:21:11.622769 containerd[1468]: time="2025-02-13T16:21:11.622741352Z" level=info msg="Forcibly stopping sandbox \"734beb6f31192e9c686f25635dd8565bc5ee8615abe921bcfde90fedcaf1351a\"" Feb 13 16:21:11.623034 containerd[1468]: time="2025-02-13T16:21:11.622849632Z" level=info msg="TearDown network for sandbox \"734beb6f31192e9c686f25635dd8565bc5ee8615abe921bcfde90fedcaf1351a\" successfully" Feb 13 16:21:11.626050 containerd[1468]: time="2025-02-13T16:21:11.625981848Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"734beb6f31192e9c686f25635dd8565bc5ee8615abe921bcfde90fedcaf1351a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 16:21:11.626253 containerd[1468]: time="2025-02-13T16:21:11.626063808Z" level=info msg="RemovePodSandbox \"734beb6f31192e9c686f25635dd8565bc5ee8615abe921bcfde90fedcaf1351a\" returns successfully" Feb 13 16:21:11.626937 containerd[1468]: time="2025-02-13T16:21:11.626816837Z" level=info msg="StopPodSandbox for \"6802ea10ff859deff17a9f0915adef9b72da232cd12ff3efd891a4a9d12617d0\"" Feb 13 16:21:11.627018 containerd[1468]: time="2025-02-13T16:21:11.626968776Z" level=info msg="TearDown network for sandbox \"6802ea10ff859deff17a9f0915adef9b72da232cd12ff3efd891a4a9d12617d0\" successfully" Feb 13 16:21:11.627018 containerd[1468]: time="2025-02-13T16:21:11.626985969Z" level=info msg="StopPodSandbox for \"6802ea10ff859deff17a9f0915adef9b72da232cd12ff3efd891a4a9d12617d0\" returns successfully" Feb 13 16:21:11.627586 containerd[1468]: time="2025-02-13T16:21:11.627545279Z" level=info msg="RemovePodSandbox for \"6802ea10ff859deff17a9f0915adef9b72da232cd12ff3efd891a4a9d12617d0\"" Feb 13 16:21:11.627586 containerd[1468]: time="2025-02-13T16:21:11.627584793Z" level=info msg="Forcibly stopping sandbox \"6802ea10ff859deff17a9f0915adef9b72da232cd12ff3efd891a4a9d12617d0\"" Feb 13 16:21:11.627736 containerd[1468]: time="2025-02-13T16:21:11.627683902Z" level=info msg="TearDown network for sandbox \"6802ea10ff859deff17a9f0915adef9b72da232cd12ff3efd891a4a9d12617d0\" successfully" Feb 13 16:21:11.630455 containerd[1468]: time="2025-02-13T16:21:11.630383185Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"6802ea10ff859deff17a9f0915adef9b72da232cd12ff3efd891a4a9d12617d0\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 16:21:11.630581 containerd[1468]: time="2025-02-13T16:21:11.630469289Z" level=info msg="RemovePodSandbox \"6802ea10ff859deff17a9f0915adef9b72da232cd12ff3efd891a4a9d12617d0\" returns successfully"
Feb 13 16:21:11.631028 containerd[1468]: time="2025-02-13T16:21:11.630982647Z" level=info msg="StopPodSandbox for \"24aae865edf23d54e958f1a116534a116c01b30f8da942a8bdcd84b554ac02ce\""
Feb 13 16:21:11.631173 containerd[1468]: time="2025-02-13T16:21:11.631122855Z" level=info msg="TearDown network for sandbox \"24aae865edf23d54e958f1a116534a116c01b30f8da942a8bdcd84b554ac02ce\" successfully"
Feb 13 16:21:11.631173 containerd[1468]: time="2025-02-13T16:21:11.631144853Z" level=info msg="StopPodSandbox for \"24aae865edf23d54e958f1a116534a116c01b30f8da942a8bdcd84b554ac02ce\" returns successfully"
Feb 13 16:21:11.633043 containerd[1468]: time="2025-02-13T16:21:11.631716903Z" level=info msg="RemovePodSandbox for \"24aae865edf23d54e958f1a116534a116c01b30f8da942a8bdcd84b554ac02ce\""
Feb 13 16:21:11.633043 containerd[1468]: time="2025-02-13T16:21:11.631750789Z" level=info msg="Forcibly stopping sandbox \"24aae865edf23d54e958f1a116534a116c01b30f8da942a8bdcd84b554ac02ce\""
Feb 13 16:21:11.633043 containerd[1468]: time="2025-02-13T16:21:11.631827348Z" level=info msg="TearDown network for sandbox \"24aae865edf23d54e958f1a116534a116c01b30f8da942a8bdcd84b554ac02ce\" successfully"
Feb 13 16:21:11.634449 containerd[1468]: time="2025-02-13T16:21:11.634399737Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"24aae865edf23d54e958f1a116534a116c01b30f8da942a8bdcd84b554ac02ce\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Feb 13 16:21:11.634649 containerd[1468]: time="2025-02-13T16:21:11.634630506Z" level=info msg="RemovePodSandbox \"24aae865edf23d54e958f1a116534a116c01b30f8da942a8bdcd84b554ac02ce\" returns successfully"
Feb 13 16:21:11.635522 containerd[1468]: time="2025-02-13T16:21:11.635477613Z" level=info msg="StopPodSandbox for \"16104b3f36bd9edcd2b0081ef3846f54188478a6c44945090400f5359bbd9b1e\""
Feb 13 16:21:11.635666 containerd[1468]: time="2025-02-13T16:21:11.635620179Z" level=info msg="TearDown network for sandbox \"16104b3f36bd9edcd2b0081ef3846f54188478a6c44945090400f5359bbd9b1e\" successfully"
Feb 13 16:21:11.635723 containerd[1468]: time="2025-02-13T16:21:11.635661964Z" level=info msg="StopPodSandbox for \"16104b3f36bd9edcd2b0081ef3846f54188478a6c44945090400f5359bbd9b1e\" returns successfully"
Feb 13 16:21:11.637341 containerd[1468]: time="2025-02-13T16:21:11.636373811Z" level=info msg="RemovePodSandbox for \"16104b3f36bd9edcd2b0081ef3846f54188478a6c44945090400f5359bbd9b1e\""
Feb 13 16:21:11.637341 containerd[1468]: time="2025-02-13T16:21:11.636413173Z" level=info msg="Forcibly stopping sandbox \"16104b3f36bd9edcd2b0081ef3846f54188478a6c44945090400f5359bbd9b1e\""
Feb 13 16:21:11.637341 containerd[1468]: time="2025-02-13T16:21:11.636519102Z" level=info msg="TearDown network for sandbox \"16104b3f36bd9edcd2b0081ef3846f54188478a6c44945090400f5359bbd9b1e\" successfully"
Feb 13 16:21:11.638870 containerd[1468]: time="2025-02-13T16:21:11.638809885Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"16104b3f36bd9edcd2b0081ef3846f54188478a6c44945090400f5359bbd9b1e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Feb 13 16:21:11.638870 containerd[1468]: time="2025-02-13T16:21:11.638876649Z" level=info msg="RemovePodSandbox \"16104b3f36bd9edcd2b0081ef3846f54188478a6c44945090400f5359bbd9b1e\" returns successfully"
Feb 13 16:21:11.640343 containerd[1468]: time="2025-02-13T16:21:11.639844052Z" level=info msg="StopPodSandbox for \"5b939ba451f3ddddad16123d74763b4b35c4080e0f768db575d065d5dbed339f\""
Feb 13 16:21:11.640343 containerd[1468]: time="2025-02-13T16:21:11.639984312Z" level=info msg="TearDown network for sandbox \"5b939ba451f3ddddad16123d74763b4b35c4080e0f768db575d065d5dbed339f\" successfully"
Feb 13 16:21:11.640343 containerd[1468]: time="2025-02-13T16:21:11.640003165Z" level=info msg="StopPodSandbox for \"5b939ba451f3ddddad16123d74763b4b35c4080e0f768db575d065d5dbed339f\" returns successfully"
Feb 13 16:21:11.640921 containerd[1468]: time="2025-02-13T16:21:11.640895359Z" level=info msg="RemovePodSandbox for \"5b939ba451f3ddddad16123d74763b4b35c4080e0f768db575d065d5dbed339f\""
Feb 13 16:21:11.641067 containerd[1468]: time="2025-02-13T16:21:11.641043041Z" level=info msg="Forcibly stopping sandbox \"5b939ba451f3ddddad16123d74763b4b35c4080e0f768db575d065d5dbed339f\""
Feb 13 16:21:11.641401 containerd[1468]: time="2025-02-13T16:21:11.641229752Z" level=info msg="TearDown network for sandbox \"5b939ba451f3ddddad16123d74763b4b35c4080e0f768db575d065d5dbed339f\" successfully"
Feb 13 16:21:11.644356 containerd[1468]: time="2025-02-13T16:21:11.644286822Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"5b939ba451f3ddddad16123d74763b4b35c4080e0f768db575d065d5dbed339f\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Feb 13 16:21:11.644481 containerd[1468]: time="2025-02-13T16:21:11.644369036Z" level=info msg="RemovePodSandbox \"5b939ba451f3ddddad16123d74763b4b35c4080e0f768db575d065d5dbed339f\" returns successfully"
Feb 13 16:21:11.645116 containerd[1468]: time="2025-02-13T16:21:11.644915956Z" level=info msg="StopPodSandbox for \"3f3546388bfc3a39b64c3decef81428518c51260b78591eea1db426a73c657bc\""
Feb 13 16:21:11.645116 containerd[1468]: time="2025-02-13T16:21:11.645035937Z" level=info msg="TearDown network for sandbox \"3f3546388bfc3a39b64c3decef81428518c51260b78591eea1db426a73c657bc\" successfully"
Feb 13 16:21:11.645116 containerd[1468]: time="2025-02-13T16:21:11.645049375Z" level=info msg="StopPodSandbox for \"3f3546388bfc3a39b64c3decef81428518c51260b78591eea1db426a73c657bc\" returns successfully"
Feb 13 16:21:11.645911 containerd[1468]: time="2025-02-13T16:21:11.645640635Z" level=info msg="RemovePodSandbox for \"3f3546388bfc3a39b64c3decef81428518c51260b78591eea1db426a73c657bc\""
Feb 13 16:21:11.645911 containerd[1468]: time="2025-02-13T16:21:11.645681353Z" level=info msg="Forcibly stopping sandbox \"3f3546388bfc3a39b64c3decef81428518c51260b78591eea1db426a73c657bc\""
Feb 13 16:21:11.645911 containerd[1468]: time="2025-02-13T16:21:11.645775063Z" level=info msg="TearDown network for sandbox \"3f3546388bfc3a39b64c3decef81428518c51260b78591eea1db426a73c657bc\" successfully"
Feb 13 16:21:11.648627 containerd[1468]: time="2025-02-13T16:21:11.648470906Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"3f3546388bfc3a39b64c3decef81428518c51260b78591eea1db426a73c657bc\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Feb 13 16:21:11.648627 containerd[1468]: time="2025-02-13T16:21:11.648530039Z" level=info msg="RemovePodSandbox \"3f3546388bfc3a39b64c3decef81428518c51260b78591eea1db426a73c657bc\" returns successfully"
Feb 13 16:21:11.649768 containerd[1468]: time="2025-02-13T16:21:11.649352816Z" level=info msg="StopPodSandbox for \"eb321de64d01befa6251e4b2f21589949b7a30e6252f7a73010cc23e201ca1d8\""
Feb 13 16:21:11.649768 containerd[1468]: time="2025-02-13T16:21:11.649485558Z" level=info msg="TearDown network for sandbox \"eb321de64d01befa6251e4b2f21589949b7a30e6252f7a73010cc23e201ca1d8\" successfully"
Feb 13 16:21:11.649768 containerd[1468]: time="2025-02-13T16:21:11.649501058Z" level=info msg="StopPodSandbox for \"eb321de64d01befa6251e4b2f21589949b7a30e6252f7a73010cc23e201ca1d8\" returns successfully"
Feb 13 16:21:11.649919 containerd[1468]: time="2025-02-13T16:21:11.649850424Z" level=info msg="RemovePodSandbox for \"eb321de64d01befa6251e4b2f21589949b7a30e6252f7a73010cc23e201ca1d8\""
Feb 13 16:21:11.649919 containerd[1468]: time="2025-02-13T16:21:11.649879090Z" level=info msg="Forcibly stopping sandbox \"eb321de64d01befa6251e4b2f21589949b7a30e6252f7a73010cc23e201ca1d8\""
Feb 13 16:21:11.650021 containerd[1468]: time="2025-02-13T16:21:11.649967244Z" level=info msg="TearDown network for sandbox \"eb321de64d01befa6251e4b2f21589949b7a30e6252f7a73010cc23e201ca1d8\" successfully"
Feb 13 16:21:11.653128 containerd[1468]: time="2025-02-13T16:21:11.653073329Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"eb321de64d01befa6251e4b2f21589949b7a30e6252f7a73010cc23e201ca1d8\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Feb 13 16:21:11.653422 containerd[1468]: time="2025-02-13T16:21:11.653160875Z" level=info msg="RemovePodSandbox \"eb321de64d01befa6251e4b2f21589949b7a30e6252f7a73010cc23e201ca1d8\" returns successfully"
Feb 13 16:21:11.654315 containerd[1468]: time="2025-02-13T16:21:11.653835208Z" level=info msg="StopPodSandbox for \"be89c3b11c4a10080d447e2be30f39284bf33215f9e1b46202b6f4701cc5e8c4\""
Feb 13 16:21:11.654315 containerd[1468]: time="2025-02-13T16:21:11.653975116Z" level=info msg="TearDown network for sandbox \"be89c3b11c4a10080d447e2be30f39284bf33215f9e1b46202b6f4701cc5e8c4\" successfully"
Feb 13 16:21:11.654315 containerd[1468]: time="2025-02-13T16:21:11.653986873Z" level=info msg="StopPodSandbox for \"be89c3b11c4a10080d447e2be30f39284bf33215f9e1b46202b6f4701cc5e8c4\" returns successfully"
Feb 13 16:21:11.654886 containerd[1468]: time="2025-02-13T16:21:11.654700687Z" level=info msg="RemovePodSandbox for \"be89c3b11c4a10080d447e2be30f39284bf33215f9e1b46202b6f4701cc5e8c4\""
Feb 13 16:21:11.654886 containerd[1468]: time="2025-02-13T16:21:11.654738614Z" level=info msg="Forcibly stopping sandbox \"be89c3b11c4a10080d447e2be30f39284bf33215f9e1b46202b6f4701cc5e8c4\""
Feb 13 16:21:11.654886 containerd[1468]: time="2025-02-13T16:21:11.654807861Z" level=info msg="TearDown network for sandbox \"be89c3b11c4a10080d447e2be30f39284bf33215f9e1b46202b6f4701cc5e8c4\" successfully"
Feb 13 16:21:11.657536 containerd[1468]: time="2025-02-13T16:21:11.657471249Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"be89c3b11c4a10080d447e2be30f39284bf33215f9e1b46202b6f4701cc5e8c4\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Feb 13 16:21:11.657937 containerd[1468]: time="2025-02-13T16:21:11.657716542Z" level=info msg="RemovePodSandbox \"be89c3b11c4a10080d447e2be30f39284bf33215f9e1b46202b6f4701cc5e8c4\" returns successfully"
Feb 13 16:21:11.659094 containerd[1468]: time="2025-02-13T16:21:11.658645878Z" level=info msg="StopPodSandbox for \"dcadda5e14f40a3626ac0a163cb613c5d3ce97a541f42fc486c2c161d79079d2\""
Feb 13 16:21:11.659094 containerd[1468]: time="2025-02-13T16:21:11.658776499Z" level=info msg="TearDown network for sandbox \"dcadda5e14f40a3626ac0a163cb613c5d3ce97a541f42fc486c2c161d79079d2\" successfully"
Feb 13 16:21:11.659094 containerd[1468]: time="2025-02-13T16:21:11.658790970Z" level=info msg="StopPodSandbox for \"dcadda5e14f40a3626ac0a163cb613c5d3ce97a541f42fc486c2c161d79079d2\" returns successfully"
Feb 13 16:21:11.659310 containerd[1468]: time="2025-02-13T16:21:11.659212909Z" level=info msg="RemovePodSandbox for \"dcadda5e14f40a3626ac0a163cb613c5d3ce97a541f42fc486c2c161d79079d2\""
Feb 13 16:21:11.660051 containerd[1468]: time="2025-02-13T16:21:11.659353234Z" level=info msg="Forcibly stopping sandbox \"dcadda5e14f40a3626ac0a163cb613c5d3ce97a541f42fc486c2c161d79079d2\""
Feb 13 16:21:11.660051 containerd[1468]: time="2025-02-13T16:21:11.659434627Z" level=info msg="TearDown network for sandbox \"dcadda5e14f40a3626ac0a163cb613c5d3ce97a541f42fc486c2c161d79079d2\" successfully"
Feb 13 16:21:11.661543 containerd[1468]: time="2025-02-13T16:21:11.661283137Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"dcadda5e14f40a3626ac0a163cb613c5d3ce97a541f42fc486c2c161d79079d2\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Feb 13 16:21:11.661543 containerd[1468]: time="2025-02-13T16:21:11.661331564Z" level=info msg="RemovePodSandbox \"dcadda5e14f40a3626ac0a163cb613c5d3ce97a541f42fc486c2c161d79079d2\" returns successfully"
Feb 13 16:21:11.662319 containerd[1468]: time="2025-02-13T16:21:11.662122153Z" level=info msg="StopPodSandbox for \"88d039cb5501062b0d890c8e041305954bed80c0b95e58f0138d902284375cc5\""
Feb 13 16:21:11.662319 containerd[1468]: time="2025-02-13T16:21:11.662245813Z" level=info msg="TearDown network for sandbox \"88d039cb5501062b0d890c8e041305954bed80c0b95e58f0138d902284375cc5\" successfully"
Feb 13 16:21:11.662319 containerd[1468]: time="2025-02-13T16:21:11.662282261Z" level=info msg="StopPodSandbox for \"88d039cb5501062b0d890c8e041305954bed80c0b95e58f0138d902284375cc5\" returns successfully"
Feb 13 16:21:11.662777 containerd[1468]: time="2025-02-13T16:21:11.662704139Z" level=info msg="RemovePodSandbox for \"88d039cb5501062b0d890c8e041305954bed80c0b95e58f0138d902284375cc5\""
Feb 13 16:21:11.662777 containerd[1468]: time="2025-02-13T16:21:11.662734638Z" level=info msg="Forcibly stopping sandbox \"88d039cb5501062b0d890c8e041305954bed80c0b95e58f0138d902284375cc5\""
Feb 13 16:21:11.662857 containerd[1468]: time="2025-02-13T16:21:11.662819786Z" level=info msg="TearDown network for sandbox \"88d039cb5501062b0d890c8e041305954bed80c0b95e58f0138d902284375cc5\" successfully"
Feb 13 16:21:11.666350 containerd[1468]: time="2025-02-13T16:21:11.666237777Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"88d039cb5501062b0d890c8e041305954bed80c0b95e58f0138d902284375cc5\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Feb 13 16:21:11.666350 containerd[1468]: time="2025-02-13T16:21:11.666320338Z" level=info msg="RemovePodSandbox \"88d039cb5501062b0d890c8e041305954bed80c0b95e58f0138d902284375cc5\" returns successfully"
Feb 13 16:21:11.668299 containerd[1468]: time="2025-02-13T16:21:11.667812931Z" level=info msg="StopPodSandbox for \"fc9f595b221b02190941c881eb305c8d337ab9314b133c0f4e9f55be7aa8a37c\""
Feb 13 16:21:11.668299 containerd[1468]: time="2025-02-13T16:21:11.667959053Z" level=info msg="TearDown network for sandbox \"fc9f595b221b02190941c881eb305c8d337ab9314b133c0f4e9f55be7aa8a37c\" successfully"
Feb 13 16:21:11.668299 containerd[1468]: time="2025-02-13T16:21:11.667980267Z" level=info msg="StopPodSandbox for \"fc9f595b221b02190941c881eb305c8d337ab9314b133c0f4e9f55be7aa8a37c\" returns successfully"
Feb 13 16:21:11.669301 containerd[1468]: time="2025-02-13T16:21:11.668720511Z" level=info msg="RemovePodSandbox for \"fc9f595b221b02190941c881eb305c8d337ab9314b133c0f4e9f55be7aa8a37c\""
Feb 13 16:21:11.669301 containerd[1468]: time="2025-02-13T16:21:11.668763048Z" level=info msg="Forcibly stopping sandbox \"fc9f595b221b02190941c881eb305c8d337ab9314b133c0f4e9f55be7aa8a37c\""
Feb 13 16:21:11.669301 containerd[1468]: time="2025-02-13T16:21:11.668860880Z" level=info msg="TearDown network for sandbox \"fc9f595b221b02190941c881eb305c8d337ab9314b133c0f4e9f55be7aa8a37c\" successfully"
Feb 13 16:21:11.672558 containerd[1468]: time="2025-02-13T16:21:11.672518545Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"fc9f595b221b02190941c881eb305c8d337ab9314b133c0f4e9f55be7aa8a37c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Feb 13 16:21:11.672783 containerd[1468]: time="2025-02-13T16:21:11.672708690Z" level=info msg="RemovePodSandbox \"fc9f595b221b02190941c881eb305c8d337ab9314b133c0f4e9f55be7aa8a37c\" returns successfully"
Feb 13 16:21:11.673073 containerd[1468]: time="2025-02-13T16:21:11.673034853Z" level=info msg="StopPodSandbox for \"861d58bbded04770fa3a08977c34b3e457938bc42c50b17a324c08a6604b644f\""
Feb 13 16:21:11.673440 containerd[1468]: time="2025-02-13T16:21:11.673165429Z" level=info msg="TearDown network for sandbox \"861d58bbded04770fa3a08977c34b3e457938bc42c50b17a324c08a6604b644f\" successfully"
Feb 13 16:21:11.673440 containerd[1468]: time="2025-02-13T16:21:11.673181359Z" level=info msg="StopPodSandbox for \"861d58bbded04770fa3a08977c34b3e457938bc42c50b17a324c08a6604b644f\" returns successfully"
Feb 13 16:21:11.674118 containerd[1468]: time="2025-02-13T16:21:11.674089781Z" level=info msg="RemovePodSandbox for \"861d58bbded04770fa3a08977c34b3e457938bc42c50b17a324c08a6604b644f\""
Feb 13 16:21:11.674118 containerd[1468]: time="2025-02-13T16:21:11.674117264Z" level=info msg="Forcibly stopping sandbox \"861d58bbded04770fa3a08977c34b3e457938bc42c50b17a324c08a6604b644f\""
Feb 13 16:21:11.675531 containerd[1468]: time="2025-02-13T16:21:11.675435970Z" level=info msg="TearDown network for sandbox \"861d58bbded04770fa3a08977c34b3e457938bc42c50b17a324c08a6604b644f\" successfully"
Feb 13 16:21:11.679608 containerd[1468]: time="2025-02-13T16:21:11.679552042Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"861d58bbded04770fa3a08977c34b3e457938bc42c50b17a324c08a6604b644f\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Feb 13 16:21:11.679608 containerd[1468]: time="2025-02-13T16:21:11.679630520Z" level=info msg="RemovePodSandbox \"861d58bbded04770fa3a08977c34b3e457938bc42c50b17a324c08a6604b644f\" returns successfully"