Jan 29 11:15:00.024548 kernel: Linux version 6.6.71-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241116 p3) 14.2.1 20241116, GNU ld (Gentoo 2.42 p6) 2.42.0) #1 SMP PREEMPT_DYNAMIC Mon Jan 13 18:58:40 -00 2025 Jan 29 11:15:00.024588 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=8a11404d893165624d9716a125d997be53e2d6cdb0c50a945acda5b62a14eda5 Jan 29 11:15:00.024601 kernel: BIOS-provided physical RAM map: Jan 29 11:15:00.024608 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable Jan 29 11:15:00.024615 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved Jan 29 11:15:00.024622 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved Jan 29 11:15:00.024630 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffd7fff] usable Jan 29 11:15:00.024637 kernel: BIOS-e820: [mem 0x000000007ffd8000-0x000000007fffffff] reserved Jan 29 11:15:00.024644 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Jan 29 11:15:00.024651 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved Jan 29 11:15:00.024661 kernel: NX (Execute Disable) protection: active Jan 29 11:15:00.024668 kernel: APIC: Static calls initialized Jan 29 11:15:00.024675 kernel: SMBIOS 2.8 present. Jan 29 11:15:00.024682 kernel: DMI: DigitalOcean Droplet/Droplet, BIOS 20171212 12/12/2017 Jan 29 11:15:00.024691 kernel: Hypervisor detected: KVM Jan 29 11:15:00.026683 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Jan 29 11:15:00.026752 kernel: kvm-clock: using sched offset of 3604513904 cycles Jan 29 11:15:00.026769 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Jan 29 11:15:00.026782 kernel: tsc: Detected 2494.138 MHz processor Jan 29 11:15:00.026795 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Jan 29 11:15:00.026808 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Jan 29 11:15:00.026818 kernel: last_pfn = 0x7ffd8 max_arch_pfn = 0x400000000 Jan 29 11:15:00.026830 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs Jan 29 11:15:00.026842 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Jan 29 11:15:00.026858 kernel: ACPI: Early table checksum verification disabled Jan 29 11:15:00.026866 kernel: ACPI: RSDP 0x00000000000F5A50 000014 (v00 BOCHS ) Jan 29 11:15:00.026874 kernel: ACPI: RSDT 0x000000007FFE1986 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 29 11:15:00.026883 kernel: ACPI: FACP 0x000000007FFE176A 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 29 11:15:00.026891 kernel: ACPI: DSDT 0x000000007FFE0040 00172A (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 29 11:15:00.026899 kernel: ACPI: FACS 0x000000007FFE0000 000040 Jan 29 11:15:00.026907 kernel: ACPI: APIC 0x000000007FFE17DE 000080 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 29 11:15:00.026915 kernel: ACPI: HPET 0x000000007FFE185E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 29 11:15:00.026923 kernel: ACPI: SRAT 0x000000007FFE1896 0000C8 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 29 11:15:00.026934 kernel: ACPI: WAET 0x000000007FFE195E 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 29 11:15:00.026942 kernel: ACPI: Reserving FACP 
table memory at [mem 0x7ffe176a-0x7ffe17dd] Jan 29 11:15:00.026950 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffe0040-0x7ffe1769] Jan 29 11:15:00.026957 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffe0000-0x7ffe003f] Jan 29 11:15:00.026965 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe17de-0x7ffe185d] Jan 29 11:15:00.026973 kernel: ACPI: Reserving HPET table memory at [mem 0x7ffe185e-0x7ffe1895] Jan 29 11:15:00.026981 kernel: ACPI: Reserving SRAT table memory at [mem 0x7ffe1896-0x7ffe195d] Jan 29 11:15:00.026996 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe195e-0x7ffe1985] Jan 29 11:15:00.027004 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0 Jan 29 11:15:00.027012 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0 Jan 29 11:15:00.027021 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff] Jan 29 11:15:00.027029 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff] Jan 29 11:15:00.027048 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0x7ffd7fff] -> [mem 0x00000000-0x7ffd7fff] Jan 29 11:15:00.027056 kernel: NODE_DATA(0) allocated [mem 0x7ffd2000-0x7ffd7fff] Jan 29 11:15:00.027069 kernel: Zone ranges: Jan 29 11:15:00.027080 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Jan 29 11:15:00.027088 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffd7fff] Jan 29 11:15:00.027097 kernel: Normal empty Jan 29 11:15:00.027105 kernel: Movable zone start for each node Jan 29 11:15:00.027113 kernel: Early memory node ranges Jan 29 11:15:00.027122 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff] Jan 29 11:15:00.027130 kernel: node 0: [mem 0x0000000000100000-0x000000007ffd7fff] Jan 29 11:15:00.027138 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffd7fff] Jan 29 11:15:00.027151 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Jan 29 11:15:00.027162 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Jan 29 11:15:00.027171 kernel: On node 0, zone DMA32: 40 pages in unavailable ranges Jan 29 11:15:00.027179 kernel: ACPI: PM-Timer IO Port: 0x608 Jan 29 11:15:00.027187 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Jan 29 11:15:00.027196 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Jan 29 11:15:00.027205 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Jan 29 11:15:00.027213 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Jan 29 11:15:00.027221 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Jan 29 11:15:00.027230 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Jan 29 11:15:00.027241 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Jan 29 11:15:00.027249 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Jan 29 11:15:00.027257 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Jan 29 11:15:00.027266 kernel: TSC deadline timer available Jan 29 11:15:00.027274 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs Jan 29 11:15:00.027282 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Jan 29 11:15:00.027291 kernel: [mem 0x80000000-0xfeffbfff] available for PCI devices Jan 29 11:15:00.027303 kernel: Booting paravirtualized kernel on KVM Jan 29 11:15:00.027312 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Jan 29 11:15:00.027324 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 Jan 29 11:15:00.027332 kernel: percpu: Embedded 58 pages/cpu 
s197032 r8192 d32344 u1048576 Jan 29 11:15:00.027341 kernel: pcpu-alloc: s197032 r8192 d32344 u1048576 alloc=1*2097152 Jan 29 11:15:00.027349 kernel: pcpu-alloc: [0] 0 1 Jan 29 11:15:00.027357 kernel: kvm-guest: PV spinlocks disabled, no host support Jan 29 11:15:00.027367 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=8a11404d893165624d9716a125d997be53e2d6cdb0c50a945acda5b62a14eda5 Jan 29 11:15:00.027376 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Jan 29 11:15:00.027384 kernel: random: crng init done Jan 29 11:15:00.027425 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jan 29 11:15:00.027433 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear) Jan 29 11:15:00.027441 kernel: Fallback order for Node 0: 0 Jan 29 11:15:00.027450 kernel: Built 1 zonelists, mobility grouping on. Total pages: 515800 Jan 29 11:15:00.027458 kernel: Policy zone: DMA32 Jan 29 11:15:00.027466 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jan 29 11:15:00.027475 kernel: Memory: 1969144K/2096600K available (14336K kernel code, 2299K rwdata, 22800K rodata, 43320K init, 1756K bss, 127196K reserved, 0K cma-reserved) Jan 29 11:15:00.027484 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Jan 29 11:15:00.027498 kernel: Kernel/User page tables isolation: enabled Jan 29 11:15:00.027506 kernel: ftrace: allocating 37890 entries in 149 pages Jan 29 11:15:00.027515 kernel: ftrace: allocated 149 pages with 4 groups Jan 29 11:15:00.027523 kernel: Dynamic Preempt: voluntary Jan 29 11:15:00.027531 kernel: rcu: Preemptible hierarchical RCU implementation. Jan 29 11:15:00.027542 kernel: rcu: RCU event tracing is enabled. Jan 29 11:15:00.027550 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Jan 29 11:15:00.027559 kernel: Trampoline variant of Tasks RCU enabled. Jan 29 11:15:00.027570 kernel: Rude variant of Tasks RCU enabled. Jan 29 11:15:00.027579 kernel: Tracing variant of Tasks RCU enabled. Jan 29 11:15:00.027590 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Jan 29 11:15:00.027599 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Jan 29 11:15:00.027607 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16 Jan 29 11:15:00.027615 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. 
Jan 29 11:15:00.027624 kernel: Console: colour VGA+ 80x25 Jan 29 11:15:00.027632 kernel: printk: console [tty0] enabled Jan 29 11:15:00.027640 kernel: printk: console [ttyS0] enabled Jan 29 11:15:00.027649 kernel: ACPI: Core revision 20230628 Jan 29 11:15:00.027658 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Jan 29 11:15:00.027673 kernel: APIC: Switch to symmetric I/O mode setup Jan 29 11:15:00.027682 kernel: x2apic enabled Jan 29 11:15:00.027690 kernel: APIC: Switched APIC routing to: physical x2apic Jan 29 11:15:00.027715 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Jan 29 11:15:00.027725 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x23f39838d43, max_idle_ns: 440795267131 ns Jan 29 11:15:00.027733 kernel: Calibrating delay loop (skipped) preset value.. 4988.27 BogoMIPS (lpj=2494138) Jan 29 11:15:00.027741 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0 Jan 29 11:15:00.027750 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0 Jan 29 11:15:00.027772 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Jan 29 11:15:00.027781 kernel: Spectre V2 : Mitigation: Retpolines Jan 29 11:15:00.027790 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Jan 29 11:15:00.027802 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT Jan 29 11:15:00.027811 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls Jan 29 11:15:00.027820 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Jan 29 11:15:00.027829 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Jan 29 11:15:00.027839 kernel: MDS: Mitigation: Clear CPU buffers Jan 29 11:15:00.027848 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Jan 29 11:15:00.027863 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Jan 29 11:15:00.027873 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Jan 29 11:15:00.027881 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Jan 29 11:15:00.027890 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Jan 29 11:15:00.027899 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format. Jan 29 11:15:00.027908 kernel: Freeing SMP alternatives memory: 32K Jan 29 11:15:00.027917 kernel: pid_max: default: 32768 minimum: 301 Jan 29 11:15:00.027926 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity Jan 29 11:15:00.027938 kernel: landlock: Up and running. Jan 29 11:15:00.027947 kernel: SELinux: Initializing. Jan 29 11:15:00.027955 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Jan 29 11:15:00.027964 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear) Jan 29 11:15:00.027976 kernel: smpboot: CPU0: Intel DO-Regular (family: 0x6, model: 0x4f, stepping: 0x1) Jan 29 11:15:00.027985 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 29 11:15:00.027994 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 29 11:15:00.028003 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Jan 29 11:15:00.028012 kernel: Performance Events: unsupported p6 CPU model 79 no PMU driver, software events only. 
Jan 29 11:15:00.028024 kernel: signal: max sigframe size: 1776 Jan 29 11:15:00.028033 kernel: rcu: Hierarchical SRCU implementation. Jan 29 11:15:00.028045 kernel: rcu: Max phase no-delay instances is 400. Jan 29 11:15:00.028054 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Jan 29 11:15:00.028065 kernel: smp: Bringing up secondary CPUs ... Jan 29 11:15:00.028077 kernel: smpboot: x86: Booting SMP configuration: Jan 29 11:15:00.028089 kernel: .... node #0, CPUs: #1 Jan 29 11:15:00.028101 kernel: smp: Brought up 1 node, 2 CPUs Jan 29 11:15:00.028115 kernel: smpboot: Max logical packages: 1 Jan 29 11:15:00.028128 kernel: smpboot: Total of 2 processors activated (9976.55 BogoMIPS) Jan 29 11:15:00.028141 kernel: devtmpfs: initialized Jan 29 11:15:00.028153 kernel: x86/mm: Memory block size: 128MB Jan 29 11:15:00.028162 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jan 29 11:15:00.028171 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Jan 29 11:15:00.028181 kernel: pinctrl core: initialized pinctrl subsystem Jan 29 11:15:00.028189 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jan 29 11:15:00.028198 kernel: audit: initializing netlink subsys (disabled) Jan 29 11:15:00.028207 kernel: audit: type=2000 audit(1738149299.021:1): state=initialized audit_enabled=0 res=1 Jan 29 11:15:00.028220 kernel: thermal_sys: Registered thermal governor 'step_wise' Jan 29 11:15:00.028228 kernel: thermal_sys: Registered thermal governor 'user_space' Jan 29 11:15:00.028237 kernel: cpuidle: using governor menu Jan 29 11:15:00.028246 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jan 29 11:15:00.028255 kernel: dca service started, version 1.12.1 Jan 29 11:15:00.028264 kernel: PCI: Using configuration type 1 for base access Jan 29 11:15:00.028272 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Jan 29 11:15:00.028281 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jan 29 11:15:00.028290 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Jan 29 11:15:00.028302 kernel: ACPI: Added _OSI(Module Device) Jan 29 11:15:00.028311 kernel: ACPI: Added _OSI(Processor Device) Jan 29 11:15:00.028319 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Jan 29 11:15:00.028328 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jan 29 11:15:00.028337 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jan 29 11:15:00.028346 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC Jan 29 11:15:00.028354 kernel: ACPI: Interpreter enabled Jan 29 11:15:00.028363 kernel: ACPI: PM: (supports S0 S5) Jan 29 11:15:00.028372 kernel: ACPI: Using IOAPIC for interrupt routing Jan 29 11:15:00.028384 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Jan 29 11:15:00.028393 kernel: PCI: Using E820 reservations for host bridge windows Jan 29 11:15:00.028401 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F Jan 29 11:15:00.028410 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Jan 29 11:15:00.028645 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3] Jan 29 11:15:00.031239 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI] Jan 29 11:15:00.031370 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge Jan 29 11:15:00.031390 kernel: acpiphp: Slot [3] registered Jan 29 11:15:00.031400 kernel: acpiphp: Slot [4] registered Jan 29 11:15:00.031409 kernel: acpiphp: Slot [5] registered Jan 29 11:15:00.031418 kernel: acpiphp: Slot [6] registered Jan 29 11:15:00.031428 kernel: acpiphp: Slot [7] registered Jan 29 11:15:00.031437 kernel: acpiphp: Slot [8] registered Jan 29 11:15:00.031446 kernel: acpiphp: Slot [9] registered Jan 29 11:15:00.031455 kernel: acpiphp: Slot [10] registered Jan 29 11:15:00.031464 kernel: acpiphp: Slot [11] registered Jan 29 11:15:00.031476 kernel: acpiphp: Slot [12] registered Jan 29 11:15:00.031485 kernel: acpiphp: Slot [13] registered Jan 29 11:15:00.031494 kernel: acpiphp: Slot [14] registered Jan 29 11:15:00.031504 kernel: acpiphp: Slot [15] registered Jan 29 11:15:00.031513 kernel: acpiphp: Slot [16] registered Jan 29 11:15:00.031522 kernel: acpiphp: Slot [17] registered Jan 29 11:15:00.031531 kernel: acpiphp: Slot [18] registered Jan 29 11:15:00.031540 kernel: acpiphp: Slot [19] registered Jan 29 11:15:00.031548 kernel: acpiphp: Slot [20] registered Jan 29 11:15:00.031557 kernel: acpiphp: Slot [21] registered Jan 29 11:15:00.031569 kernel: acpiphp: Slot [22] registered Jan 29 11:15:00.031578 kernel: acpiphp: Slot [23] registered Jan 29 11:15:00.031587 kernel: acpiphp: Slot [24] registered Jan 29 11:15:00.031596 kernel: acpiphp: Slot [25] registered Jan 29 11:15:00.031604 kernel: acpiphp: Slot [26] registered Jan 29 11:15:00.031613 kernel: acpiphp: Slot [27] registered Jan 29 11:15:00.031622 kernel: acpiphp: Slot [28] registered Jan 29 11:15:00.031631 kernel: acpiphp: Slot [29] registered Jan 29 11:15:00.031640 kernel: acpiphp: Slot [30] registered Jan 29 11:15:00.031652 kernel: acpiphp: Slot [31] registered Jan 29 11:15:00.031660 kernel: PCI host bridge to bus 0000:00 Jan 29 11:15:00.031860 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Jan 29 11:15:00.031997 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] 
Jan 29 11:15:00.032122 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Jan 29 11:15:00.032242 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window] Jan 29 11:15:00.032347 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x17fffffff window] Jan 29 11:15:00.032472 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Jan 29 11:15:00.032619 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 Jan 29 11:15:00.034400 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 Jan 29 11:15:00.034568 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180 Jan 29 11:15:00.034743 kernel: pci 0000:00:01.1: reg 0x20: [io 0xc1e0-0xc1ef] Jan 29 11:15:00.034913 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7] Jan 29 11:15:00.035089 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io 0x03f6] Jan 29 11:15:00.035274 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177] Jan 29 11:15:00.035451 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io 0x0376] Jan 29 11:15:00.035599 kernel: pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300 Jan 29 11:15:00.035820 kernel: pci 0000:00:01.2: reg 0x20: [io 0xc180-0xc19f] Jan 29 11:15:00.036018 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 Jan 29 11:15:00.036181 kernel: pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI Jan 29 11:15:00.036357 kernel: pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB Jan 29 11:15:00.036568 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000 Jan 29 11:15:00.039853 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfe000000-0xfe7fffff pref] Jan 29 11:15:00.040104 kernel: pci 0000:00:02.0: reg 0x18: [mem 0xfe800000-0xfe803fff 64bit pref] Jan 29 11:15:00.040342 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfebf0000-0xfebf0fff] Jan 29 11:15:00.040512 kernel: pci 0000:00:02.0: reg 0x30: [mem 0xfebe0000-0xfebeffff pref] Jan 29 11:15:00.040695 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Jan 29 11:15:00.041033 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 Jan 29 11:15:00.041224 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc1a0-0xc1bf] Jan 29 11:15:00.041373 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebf1000-0xfebf1fff] Jan 29 11:15:00.041532 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe804000-0xfe807fff 64bit pref] Jan 29 11:15:00.043804 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 Jan 29 11:15:00.044039 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc1c0-0xc1df] Jan 29 11:15:00.044287 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebf2000-0xfebf2fff] Jan 29 11:15:00.044652 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe808000-0xfe80bfff 64bit pref] Jan 29 11:15:00.047053 kernel: pci 0000:00:05.0: [1af4:1004] type 00 class 0x010000 Jan 29 11:15:00.047283 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc100-0xc13f] Jan 29 11:15:00.047442 kernel: pci 0000:00:05.0: reg 0x14: [mem 0xfebf3000-0xfebf3fff] Jan 29 11:15:00.047585 kernel: pci 0000:00:05.0: reg 0x20: [mem 0xfe80c000-0xfe80ffff 64bit pref] Jan 29 11:15:00.047780 kernel: pci 0000:00:06.0: [1af4:1001] type 00 class 0x010000 Jan 29 11:15:00.047885 kernel: pci 0000:00:06.0: reg 0x10: [io 0xc000-0xc07f] Jan 29 11:15:00.048043 kernel: pci 0000:00:06.0: reg 0x14: [mem 0xfebf4000-0xfebf4fff] Jan 29 11:15:00.048158 kernel: pci 0000:00:06.0: reg 0x20: [mem 0xfe810000-0xfe813fff 64bit pref] Jan 29 11:15:00.048288 kernel: pci 0000:00:07.0: [1af4:1001] type 00 class 
0x010000 Jan 29 11:15:00.048404 kernel: pci 0000:00:07.0: reg 0x10: [io 0xc080-0xc0ff] Jan 29 11:15:00.048502 kernel: pci 0000:00:07.0: reg 0x14: [mem 0xfebf5000-0xfebf5fff] Jan 29 11:15:00.048621 kernel: pci 0000:00:07.0: reg 0x20: [mem 0xfe814000-0xfe817fff 64bit pref] Jan 29 11:15:00.048798 kernel: pci 0000:00:08.0: [1af4:1002] type 00 class 0x00ff00 Jan 29 11:15:00.048940 kernel: pci 0000:00:08.0: reg 0x10: [io 0xc140-0xc17f] Jan 29 11:15:00.049044 kernel: pci 0000:00:08.0: reg 0x20: [mem 0xfe818000-0xfe81bfff 64bit pref] Jan 29 11:15:00.049057 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Jan 29 11:15:00.049067 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Jan 29 11:15:00.049077 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Jan 29 11:15:00.049086 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Jan 29 11:15:00.049117 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9 Jan 29 11:15:00.049132 kernel: iommu: Default domain type: Translated Jan 29 11:15:00.049146 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Jan 29 11:15:00.049161 kernel: PCI: Using ACPI for IRQ routing Jan 29 11:15:00.049175 kernel: PCI: pci_cache_line_size set to 64 bytes Jan 29 11:15:00.049189 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff] Jan 29 11:15:00.049203 kernel: e820: reserve RAM buffer [mem 0x7ffd8000-0x7fffffff] Jan 29 11:15:00.049388 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device Jan 29 11:15:00.049560 kernel: pci 0000:00:02.0: vgaarb: bridge control possible Jan 29 11:15:00.049802 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Jan 29 11:15:00.049822 kernel: vgaarb: loaded Jan 29 11:15:00.049836 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Jan 29 11:15:00.049850 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Jan 29 11:15:00.049880 kernel: clocksource: Switched to clocksource kvm-clock Jan 29 11:15:00.049893 kernel: VFS: Disk quotas dquot_6.6.0 Jan 29 11:15:00.049906 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jan 29 11:15:00.049919 kernel: pnp: PnP ACPI init Jan 29 11:15:00.049932 kernel: pnp: PnP ACPI: found 4 devices Jan 29 11:15:00.049952 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Jan 29 11:15:00.049964 kernel: NET: Registered PF_INET protocol family Jan 29 11:15:00.049978 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear) Jan 29 11:15:00.049997 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear) Jan 29 11:15:00.050011 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jan 29 11:15:00.050035 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear) Jan 29 11:15:00.050048 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear) Jan 29 11:15:00.050061 kernel: TCP: Hash tables configured (established 16384 bind 16384) Jan 29 11:15:00.050075 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear) Jan 29 11:15:00.050092 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear) Jan 29 11:15:00.050105 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jan 29 11:15:00.050119 kernel: NET: Registered PF_XDP protocol family Jan 29 11:15:00.050326 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Jan 29 11:15:00.050483 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Jan 29 
11:15:00.050642 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Jan 29 11:15:00.050792 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window] Jan 29 11:15:00.050943 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x17fffffff window] Jan 29 11:15:00.051129 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release Jan 29 11:15:00.051287 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers Jan 29 11:15:00.051339 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11 Jan 29 11:15:00.051500 kernel: pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x7a0 took 48354 usecs Jan 29 11:15:00.051518 kernel: PCI: CLS 0 bytes, default 64 Jan 29 11:15:00.051532 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Jan 29 11:15:00.051546 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x23f39838d43, max_idle_ns: 440795267131 ns Jan 29 11:15:00.051559 kernel: Initialise system trusted keyrings Jan 29 11:15:00.051579 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0 Jan 29 11:15:00.051592 kernel: Key type asymmetric registered Jan 29 11:15:00.051605 kernel: Asymmetric key parser 'x509' registered Jan 29 11:15:00.051618 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) Jan 29 11:15:00.051630 kernel: io scheduler mq-deadline registered Jan 29 11:15:00.051644 kernel: io scheduler kyber registered Jan 29 11:15:00.051656 kernel: io scheduler bfq registered Jan 29 11:15:00.051669 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Jan 29 11:15:00.051682 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10 Jan 29 11:15:00.051694 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11 Jan 29 11:15:00.051745 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10 Jan 29 11:15:00.051758 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jan 29 11:15:00.051781 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Jan 29 11:15:00.051794 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Jan 29 11:15:00.051807 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Jan 29 11:15:00.051819 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Jan 29 11:15:00.051832 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Jan 29 11:15:00.052034 kernel: rtc_cmos 00:03: RTC can wake from S4 Jan 29 11:15:00.052187 kernel: rtc_cmos 00:03: registered as rtc0 Jan 29 11:15:00.052314 kernel: rtc_cmos 00:03: setting system clock to 2025-01-29T11:14:59 UTC (1738149299) Jan 29 11:15:00.052446 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram Jan 29 11:15:00.052466 kernel: intel_pstate: CPU model not supported Jan 29 11:15:00.052481 kernel: NET: Registered PF_INET6 protocol family Jan 29 11:15:00.052495 kernel: Segment Routing with IPv6 Jan 29 11:15:00.052544 kernel: In-situ OAM (IOAM) with IPv6 Jan 29 11:15:00.052558 kernel: NET: Registered PF_PACKET protocol family Jan 29 11:15:00.052579 kernel: Key type dns_resolver registered Jan 29 11:15:00.052593 kernel: IPI shorthand broadcast: enabled Jan 29 11:15:00.052608 kernel: sched_clock: Marking stable (1046007804, 103513558)->(1177535128, -28013766) Jan 29 11:15:00.052622 kernel: registered taskstats version 1 Jan 29 11:15:00.052635 kernel: Loading compiled-in X.509 certificates Jan 29 11:15:00.052664 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.71-flatcar: ede78b3e719729f95eaaf7cb6a5289b567f6ee3e' Jan 29 11:15:00.052678 kernel: Key type .fscrypt registered 
Jan 29 11:15:00.052692 kernel: Key type fscrypt-provisioning registered Jan 29 11:15:00.052707 kernel: ima: No TPM chip found, activating TPM-bypass! Jan 29 11:15:00.052853 kernel: ima: Allocated hash algorithm: sha1 Jan 29 11:15:00.052867 kernel: ima: No architecture policies found Jan 29 11:15:00.052882 kernel: clk: Disabling unused clocks Jan 29 11:15:00.052897 kernel: Freeing unused kernel image (initmem) memory: 43320K Jan 29 11:15:00.052913 kernel: Write protecting the kernel read-only data: 38912k Jan 29 11:15:00.052958 kernel: Freeing unused kernel image (rodata/data gap) memory: 1776K Jan 29 11:15:00.052978 kernel: Run /init as init process Jan 29 11:15:00.052993 kernel: with arguments: Jan 29 11:15:00.053008 kernel: /init Jan 29 11:15:00.053027 kernel: with environment: Jan 29 11:15:00.053047 kernel: HOME=/ Jan 29 11:15:00.053063 kernel: TERM=linux Jan 29 11:15:00.053079 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jan 29 11:15:00.053117 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 29 11:15:00.053153 systemd[1]: Detected virtualization kvm. Jan 29 11:15:00.053169 systemd[1]: Detected architecture x86-64. Jan 29 11:15:00.053184 systemd[1]: Running in initrd. Jan 29 11:15:00.053205 systemd[1]: No hostname configured, using default hostname. Jan 29 11:15:00.053220 systemd[1]: Hostname set to . Jan 29 11:15:00.053239 systemd[1]: Initializing machine ID from VM UUID. Jan 29 11:15:00.053328 systemd[1]: Queued start job for default target initrd.target. Jan 29 11:15:00.053348 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 29 11:15:00.053365 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 29 11:15:00.053392 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jan 29 11:15:00.053407 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 29 11:15:00.053429 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jan 29 11:15:00.053444 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jan 29 11:15:00.053461 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Jan 29 11:15:00.053479 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Jan 29 11:15:00.053493 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 29 11:15:00.053508 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 29 11:15:00.053531 systemd[1]: Reached target paths.target - Path Units. Jan 29 11:15:00.053547 systemd[1]: Reached target slices.target - Slice Units. Jan 29 11:15:00.053562 systemd[1]: Reached target swap.target - Swaps. Jan 29 11:15:00.053580 systemd[1]: Reached target timers.target - Timer Units. Jan 29 11:15:00.053595 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jan 29 11:15:00.053610 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. 
Jan 29 11:15:00.053630 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 29 11:15:00.053646 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Jan 29 11:15:00.053662 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 29 11:15:00.053679 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 29 11:15:00.053695 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 29 11:15:00.053770 systemd[1]: Reached target sockets.target - Socket Units. Jan 29 11:15:00.053802 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jan 29 11:15:00.053819 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 29 11:15:00.053843 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jan 29 11:15:00.053859 systemd[1]: Starting systemd-fsck-usr.service... Jan 29 11:15:00.053876 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 29 11:15:00.053891 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 29 11:15:00.053907 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 29 11:15:00.053923 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jan 29 11:15:00.053937 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 29 11:15:00.053952 systemd[1]: Finished systemd-fsck-usr.service. Jan 29 11:15:00.054035 systemd-journald[184]: Collecting audit messages is disabled. Jan 29 11:15:00.054079 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 29 11:15:00.054098 systemd-journald[184]: Journal started Jan 29 11:15:00.054135 systemd-journald[184]: Runtime Journal (/run/log/journal/ae9f47417fa44243958999e743c7a65f) is 4.9M, max 39.3M, 34.4M free. Jan 29 11:15:00.028750 systemd-modules-load[185]: Inserted module 'overlay' Jan 29 11:15:00.098179 systemd[1]: Started systemd-journald.service - Journal Service. Jan 29 11:15:00.098239 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jan 29 11:15:00.098276 kernel: Bridge firewalling registered Jan 29 11:15:00.087656 systemd-modules-load[185]: Inserted module 'br_netfilter' Jan 29 11:15:00.100184 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 29 11:15:00.101264 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 29 11:15:00.107371 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 29 11:15:00.117057 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 29 11:15:00.120061 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 29 11:15:00.131014 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 29 11:15:00.136188 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 29 11:15:00.160949 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 29 11:15:00.163326 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 29 11:15:00.167733 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. 
Jan 29 11:15:00.178024 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jan 29 11:15:00.178906 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 29 11:15:00.182862 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 29 11:15:00.203727 dracut-cmdline[215]: dracut-dracut-053 Jan 29 11:15:00.211443 dracut-cmdline[215]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=8a11404d893165624d9716a125d997be53e2d6cdb0c50a945acda5b62a14eda5 Jan 29 11:15:00.240066 systemd-resolved[217]: Positive Trust Anchors: Jan 29 11:15:00.240083 systemd-resolved[217]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 29 11:15:00.240121 systemd-resolved[217]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 29 11:15:00.246976 systemd-resolved[217]: Defaulting to hostname 'linux'. Jan 29 11:15:00.250060 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 29 11:15:00.250818 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 29 11:15:00.342759 kernel: SCSI subsystem initialized Jan 29 11:15:00.355745 kernel: Loading iSCSI transport class v2.0-870. Jan 29 11:15:00.370773 kernel: iscsi: registered transport (tcp) Jan 29 11:15:00.396742 kernel: iscsi: registered transport (qla4xxx) Jan 29 11:15:00.396846 kernel: QLogic iSCSI HBA Driver Jan 29 11:15:00.471140 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jan 29 11:15:00.487326 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jan 29 11:15:00.535056 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jan 29 11:15:00.535198 kernel: device-mapper: uevent: version 1.0.3 Jan 29 11:15:00.537765 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Jan 29 11:15:00.619812 kernel: raid6: avx2x4 gen() 12896 MB/s Jan 29 11:15:00.636805 kernel: raid6: avx2x2 gen() 14069 MB/s Jan 29 11:15:00.653945 kernel: raid6: avx2x1 gen() 12344 MB/s Jan 29 11:15:00.654108 kernel: raid6: using algorithm avx2x2 gen() 14069 MB/s Jan 29 11:15:00.672033 kernel: raid6: .... xor() 11789 MB/s, rmw enabled Jan 29 11:15:00.672179 kernel: raid6: using avx2x2 recovery algorithm Jan 29 11:15:00.705769 kernel: xor: automatically using best checksumming function avx Jan 29 11:15:00.961242 kernel: Btrfs loaded, zoned=no, fsverity=no Jan 29 11:15:00.986977 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jan 29 11:15:00.995346 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... 
Jan 29 11:15:01.039308 systemd-udevd[400]: Using default interface naming scheme 'v255'. Jan 29 11:15:01.051548 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 29 11:15:01.064236 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jan 29 11:15:01.129567 dracut-pre-trigger[406]: rd.md=0: removing MD RAID activation Jan 29 11:15:01.280887 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jan 29 11:15:01.290144 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 29 11:15:01.494629 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 29 11:15:01.519010 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jan 29 11:15:01.566176 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jan 29 11:15:01.570556 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jan 29 11:15:01.574287 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 29 11:15:01.575894 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 29 11:15:01.583541 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jan 29 11:15:01.644510 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jan 29 11:15:01.712578 kernel: scsi host0: Virtio SCSI HBA Jan 29 11:15:01.757436 kernel: virtio_blk virtio4: 1/0/0 default/read/poll queues Jan 29 11:15:01.932112 kernel: ACPI: bus type USB registered Jan 29 11:15:01.932159 kernel: usbcore: registered new interface driver usbfs Jan 29 11:15:01.932181 kernel: usbcore: registered new interface driver hub Jan 29 11:15:01.932201 kernel: cryptd: max_cpu_qlen set to 1000 Jan 29 11:15:01.932222 kernel: usbcore: registered new device driver usb Jan 29 11:15:01.932240 kernel: virtio_blk virtio4: [vda] 125829120 512-byte logical blocks (64.4 GB/60.0 GiB) Jan 29 11:15:01.932487 kernel: AVX2 version of gcm_enc/dec engaged. Jan 29 11:15:01.932527 kernel: AES CTR mode by8 optimization enabled Jan 29 11:15:01.932551 kernel: libata version 3.00 loaded. Jan 29 11:15:01.932574 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jan 29 11:15:01.932598 kernel: GPT:9289727 != 125829119 Jan 29 11:15:01.932619 kernel: GPT:Alternate GPT header not at the end of the disk. Jan 29 11:15:01.932641 kernel: GPT:9289727 != 125829119 Jan 29 11:15:01.932663 kernel: GPT: Use GNU Parted to correct GPT errors. Jan 29 11:15:01.932684 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 29 11:15:01.888081 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 29 11:15:01.939202 kernel: virtio_blk virtio5: 1/0/0 default/read/poll queues Jan 29 11:15:01.941774 kernel: virtio_blk virtio5: [vdb] 932 512-byte logical blocks (477 kB/466 KiB) Jan 29 11:15:01.888291 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 29 11:15:01.889234 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 29 11:15:01.889783 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 29 11:15:01.890040 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 29 11:15:01.890634 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jan 29 11:15:01.909599 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
Jan 29 11:15:02.000308 kernel: ata_piix 0000:00:01.1: version 2.13 Jan 29 11:15:02.067478 kernel: scsi host1: ata_piix Jan 29 11:15:02.067870 kernel: scsi host2: ata_piix Jan 29 11:15:02.068111 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc1e0 irq 14 Jan 29 11:15:02.068134 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc1e8 irq 15 Jan 29 11:15:02.091364 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 29 11:15:02.098746 kernel: uhci_hcd 0000:00:01.2: UHCI Host Controller Jan 29 11:15:02.131988 kernel: uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1 Jan 29 11:15:02.132290 kernel: uhci_hcd 0000:00:01.2: detected 2 ports Jan 29 11:15:02.132581 kernel: uhci_hcd 0000:00:01.2: irq 11, io port 0x0000c180 Jan 29 11:15:02.132812 kernel: hub 1-0:1.0: USB hub found Jan 29 11:15:02.133098 kernel: hub 1-0:1.0: 2 ports detected Jan 29 11:15:02.123513 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 29 11:15:02.155901 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by (udev-worker) (458) Jan 29 11:15:02.207770 kernel: BTRFS: device fsid 7f507843-6957-466b-8fb7-5bee228b170a devid 1 transid 44 /dev/vda3 scanned by (udev-worker) (445) Jan 29 11:15:02.221696 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Jan 29 11:15:02.225068 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 29 11:15:02.248486 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Jan 29 11:15:02.266541 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 29 11:15:02.284772 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Jan 29 11:15:02.288243 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Jan 29 11:15:02.313875 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jan 29 11:15:02.352302 disk-uuid[550]: Primary Header is updated. Jan 29 11:15:02.352302 disk-uuid[550]: Secondary Entries is updated. Jan 29 11:15:02.352302 disk-uuid[550]: Secondary Header is updated. Jan 29 11:15:02.365368 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 29 11:15:02.384844 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 29 11:15:03.391362 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 29 11:15:03.402442 disk-uuid[551]: The operation has completed successfully. Jan 29 11:15:03.556263 systemd[1]: disk-uuid.service: Deactivated successfully. Jan 29 11:15:03.556476 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jan 29 11:15:03.568200 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jan 29 11:15:03.578821 sh[563]: Success Jan 29 11:15:03.606833 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Jan 29 11:15:03.796004 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jan 29 11:15:03.799907 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jan 29 11:15:03.811374 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... 
Jan 29 11:15:03.874837 kernel: BTRFS info (device dm-0): first mount of filesystem 7f507843-6957-466b-8fb7-5bee228b170a Jan 29 11:15:03.874980 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jan 29 11:15:03.877276 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jan 29 11:15:03.877447 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 29 11:15:03.883212 kernel: BTRFS info (device dm-0): using free space tree Jan 29 11:15:03.904064 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jan 29 11:15:03.906385 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jan 29 11:15:03.915339 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jan 29 11:15:03.918018 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jan 29 11:15:03.961324 kernel: BTRFS info (device vda6): first mount of filesystem de2056f8-fbde-4b85-b887-0a28f289d968 Jan 29 11:15:03.961440 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 29 11:15:03.961455 kernel: BTRFS info (device vda6): using free space tree Jan 29 11:15:03.971774 kernel: BTRFS info (device vda6): auto enabling async discard Jan 29 11:15:04.000221 systemd[1]: mnt-oem.mount: Deactivated successfully. Jan 29 11:15:04.003937 kernel: BTRFS info (device vda6): last unmount of filesystem de2056f8-fbde-4b85-b887-0a28f289d968 Jan 29 11:15:04.014888 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jan 29 11:15:04.028289 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jan 29 11:15:04.211393 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 29 11:15:04.224293 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 29 11:15:04.283086 ignition[673]: Ignition 2.20.0 Jan 29 11:15:04.283101 ignition[673]: Stage: fetch-offline Jan 29 11:15:04.283165 ignition[673]: no configs at "/usr/lib/ignition/base.d" Jan 29 11:15:04.287276 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jan 29 11:15:04.283179 ignition[673]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Jan 29 11:15:04.283337 ignition[673]: parsed url from cmdline: "" Jan 29 11:15:04.291972 systemd-networkd[750]: lo: Link UP Jan 29 11:15:04.283343 ignition[673]: no config URL provided Jan 29 11:15:04.291979 systemd-networkd[750]: lo: Gained carrier Jan 29 11:15:04.283351 ignition[673]: reading system config file "/usr/lib/ignition/user.ign" Jan 29 11:15:04.299117 systemd-networkd[750]: Enumeration completed Jan 29 11:15:04.283362 ignition[673]: no config at "/usr/lib/ignition/user.ign" Jan 29 11:15:04.299796 systemd-networkd[750]: eth0: found matching network '/usr/lib/systemd/network/yy-digitalocean.network', based on potentially unpredictable interface name. Jan 29 11:15:04.283372 ignition[673]: failed to fetch config: resource requires networking Jan 29 11:15:04.299802 systemd-networkd[750]: eth0: Configuring with /usr/lib/systemd/network/yy-digitalocean.network. Jan 29 11:15:04.283810 ignition[673]: Ignition finished successfully Jan 29 11:15:04.301955 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 29 11:15:04.302951 systemd[1]: Reached target network.target - Network. 
Jan 29 11:15:04.311658 systemd-networkd[750]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 29 11:15:04.311672 systemd-networkd[750]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 29 11:15:04.312940 systemd-networkd[750]: eth0: Link UP Jan 29 11:15:04.312947 systemd-networkd[750]: eth0: Gained carrier Jan 29 11:15:04.312965 systemd-networkd[750]: eth0: found matching network '/usr/lib/systemd/network/yy-digitalocean.network', based on potentially unpredictable interface name. Jan 29 11:15:04.317137 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Jan 29 11:15:04.320137 systemd-networkd[750]: eth1: Link UP Jan 29 11:15:04.320162 systemd-networkd[750]: eth1: Gained carrier Jan 29 11:15:04.320191 systemd-networkd[750]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 29 11:15:04.353190 systemd-networkd[750]: eth1: DHCPv4 address 10.124.0.7/20 acquired from 169.254.169.253 Jan 29 11:15:04.359958 systemd-networkd[750]: eth0: DHCPv4 address 143.198.151.197/20, gateway 143.198.144.1 acquired from 169.254.169.253 Jan 29 11:15:04.375158 ignition[757]: Ignition 2.20.0 Jan 29 11:15:04.375175 ignition[757]: Stage: fetch Jan 29 11:15:04.375498 ignition[757]: no configs at "/usr/lib/ignition/base.d" Jan 29 11:15:04.375515 ignition[757]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Jan 29 11:15:04.375692 ignition[757]: parsed url from cmdline: "" Jan 29 11:15:04.375699 ignition[757]: no config URL provided Jan 29 11:15:04.375708 ignition[757]: reading system config file "/usr/lib/ignition/user.ign" Jan 29 11:15:04.376817 ignition[757]: no config at "/usr/lib/ignition/user.ign" Jan 29 11:15:04.376877 ignition[757]: GET http://169.254.169.254/metadata/v1/user-data: attempt #1 Jan 29 11:15:04.408780 ignition[757]: GET result: OK Jan 29 11:15:04.409523 ignition[757]: parsing config with SHA512: 9425087bd19c358804617bfc87a68b3ee2cda97219431f309f5d2fb62ec1a50a59cc8dbeee0d910a8fea2cfe137c5efdc1aaff319dae3020b706b0ecb6eb6bd3 Jan 29 11:15:04.416065 unknown[757]: fetched base config from "system" Jan 29 11:15:04.416681 ignition[757]: fetch: fetch complete Jan 29 11:15:04.416084 unknown[757]: fetched base config from "system" Jan 29 11:15:04.416690 ignition[757]: fetch: fetch passed Jan 29 11:15:04.416094 unknown[757]: fetched user config from "digitalocean" Jan 29 11:15:04.416807 ignition[757]: Ignition finished successfully Jan 29 11:15:04.420517 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Jan 29 11:15:04.427053 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jan 29 11:15:04.485847 ignition[764]: Ignition 2.20.0 Jan 29 11:15:04.485882 ignition[764]: Stage: kargs Jan 29 11:15:04.486245 ignition[764]: no configs at "/usr/lib/ignition/base.d" Jan 29 11:15:04.486266 ignition[764]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Jan 29 11:15:04.487623 ignition[764]: kargs: kargs passed Jan 29 11:15:04.487743 ignition[764]: Ignition finished successfully Jan 29 11:15:04.490556 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jan 29 11:15:04.499077 systemd[1]: Starting ignition-disks.service - Ignition (disks)... 
Jan 29 11:15:04.540402 ignition[770]: Ignition 2.20.0 Jan 29 11:15:04.540421 ignition[770]: Stage: disks Jan 29 11:15:04.541207 ignition[770]: no configs at "/usr/lib/ignition/base.d" Jan 29 11:15:04.541229 ignition[770]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Jan 29 11:15:04.542734 ignition[770]: disks: disks passed Jan 29 11:15:04.544568 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jan 29 11:15:04.542829 ignition[770]: Ignition finished successfully Jan 29 11:15:04.550030 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jan 29 11:15:04.551162 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 29 11:15:04.552562 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 29 11:15:04.553908 systemd[1]: Reached target sysinit.target - System Initialization. Jan 29 11:15:04.554988 systemd[1]: Reached target basic.target - Basic System. Jan 29 11:15:04.567127 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jan 29 11:15:04.593149 systemd-fsck[778]: ROOT: clean, 14/553520 files, 52654/553472 blocks Jan 29 11:15:04.596681 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jan 29 11:15:04.606097 systemd[1]: Mounting sysroot.mount - /sysroot... Jan 29 11:15:04.754203 kernel: EXT4-fs (vda9): mounted filesystem 59ba8ffc-e6b0-4bb4-a36e-13a47bd6ad99 r/w with ordered data mode. Quota mode: none. Jan 29 11:15:04.755567 systemd[1]: Mounted sysroot.mount - /sysroot. Jan 29 11:15:04.757263 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jan 29 11:15:04.772029 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 29 11:15:04.777258 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jan 29 11:15:04.781261 systemd[1]: Starting flatcar-afterburn-network.service - Flatcar Afterburn network service... Jan 29 11:15:04.792558 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Jan 29 11:15:04.793202 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jan 29 11:15:04.793252 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jan 29 11:15:04.800881 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 scanned by mount (786) Jan 29 11:15:04.803521 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jan 29 11:15:04.820542 kernel: BTRFS info (device vda6): first mount of filesystem de2056f8-fbde-4b85-b887-0a28f289d968 Jan 29 11:15:04.820581 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 29 11:15:04.820621 kernel: BTRFS info (device vda6): using free space tree Jan 29 11:15:04.820654 kernel: BTRFS info (device vda6): auto enabling async discard Jan 29 11:15:04.827205 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 29 11:15:04.837183 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... 
Jan 29 11:15:04.923586 coreos-metadata[789]: Jan 29 11:15:04.923 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Jan 29 11:15:04.935144 coreos-metadata[788]: Jan 29 11:15:04.934 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Jan 29 11:15:04.937561 initrd-setup-root[816]: cut: /sysroot/etc/passwd: No such file or directory Jan 29 11:15:04.940398 coreos-metadata[789]: Jan 29 11:15:04.939 INFO Fetch successful Jan 29 11:15:04.944460 coreos-metadata[789]: Jan 29 11:15:04.944 INFO wrote hostname ci-4186.1.0-b-c9bf0051f1 to /sysroot/etc/hostname Jan 29 11:15:04.945514 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jan 29 11:15:04.950019 coreos-metadata[788]: Jan 29 11:15:04.949 INFO Fetch successful Jan 29 11:15:04.953737 initrd-setup-root[824]: cut: /sysroot/etc/group: No such file or directory Jan 29 11:15:04.958932 systemd[1]: flatcar-afterburn-network.service: Deactivated successfully. Jan 29 11:15:04.959667 systemd[1]: Finished flatcar-afterburn-network.service - Flatcar Afterburn network service. Jan 29 11:15:04.963422 initrd-setup-root[832]: cut: /sysroot/etc/shadow: No such file or directory Jan 29 11:15:04.972541 initrd-setup-root[839]: cut: /sysroot/etc/gshadow: No such file or directory Jan 29 11:15:05.127698 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 29 11:15:05.132044 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 29 11:15:05.136016 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jan 29 11:15:05.150339 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jan 29 11:15:05.151093 kernel: BTRFS info (device vda6): last unmount of filesystem de2056f8-fbde-4b85-b887-0a28f289d968 Jan 29 11:15:05.176276 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jan 29 11:15:05.188329 ignition[906]: INFO : Ignition 2.20.0 Jan 29 11:15:05.190611 ignition[906]: INFO : Stage: mount Jan 29 11:15:05.190611 ignition[906]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 29 11:15:05.190611 ignition[906]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Jan 29 11:15:05.190611 ignition[906]: INFO : mount: mount passed Jan 29 11:15:05.190611 ignition[906]: INFO : Ignition finished successfully Jan 29 11:15:05.193273 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 29 11:15:05.199052 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 29 11:15:05.221205 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 29 11:15:05.247769 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/vda6 scanned by mount (918) Jan 29 11:15:05.251187 kernel: BTRFS info (device vda6): first mount of filesystem de2056f8-fbde-4b85-b887-0a28f289d968 Jan 29 11:15:05.251293 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Jan 29 11:15:05.251318 kernel: BTRFS info (device vda6): using free space tree Jan 29 11:15:05.257777 kernel: BTRFS info (device vda6): auto enabling async discard Jan 29 11:15:05.259825 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
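The flatcar-metadata-hostname entries above fetch the droplet metadata JSON and write the hostname into the target root. A rough sketch of that step follows; the URL and destination path come from the log, while the "hostname" field name is assumed from the DigitalOcean metadata format.

    # Sketch, not the real agent: pull metadata/v1.json and write /sysroot/etc/hostname.
    import json
    import urllib.request
    from pathlib import Path

    METADATA_URL = "http://169.254.169.254/metadata/v1.json"

    def write_hostname(sysroot: str = "/sysroot") -> str:
        with urllib.request.urlopen(METADATA_URL, timeout=5.0) as resp:
            metadata = json.load(resp)
        hostname = metadata["hostname"]          # assumed metadata field name
        Path(sysroot, "etc/hostname").write_text(hostname + "\n")
        return hostname

    if __name__ == "__main__":
        print("wrote hostname", write_hostname(), "to /sysroot/etc/hostname")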
Jan 29 11:15:05.303779 ignition[935]: INFO : Ignition 2.20.0 Jan 29 11:15:05.303779 ignition[935]: INFO : Stage: files Jan 29 11:15:05.305510 ignition[935]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 29 11:15:05.305510 ignition[935]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Jan 29 11:15:05.305510 ignition[935]: DEBUG : files: compiled without relabeling support, skipping Jan 29 11:15:05.307756 ignition[935]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 29 11:15:05.307756 ignition[935]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 29 11:15:05.311834 ignition[935]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 29 11:15:05.312644 ignition[935]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 29 11:15:05.312644 ignition[935]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 29 11:15:05.312589 unknown[935]: wrote ssh authorized keys file for user: core Jan 29 11:15:05.315638 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/home/core/install.sh" Jan 29 11:15:05.316365 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/home/core/install.sh" Jan 29 11:15:05.316365 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 29 11:15:05.319685 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 29 11:15:05.319685 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" Jan 29 11:15:05.319685 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" Jan 29 11:15:05.319685 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" Jan 29 11:15:05.319685 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.31.0-x86-64.raw: attempt #1 Jan 29 11:15:05.603350 systemd-networkd[750]: eth0: Gained IPv6LL Jan 29 11:15:05.981254 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET result: OK Jan 29 11:15:06.051354 systemd-networkd[750]: eth1: Gained IPv6LL Jan 29 11:15:06.527129 ignition[935]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw" Jan 29 11:15:06.529869 ignition[935]: INFO : files: createResultFile: createFiles: op(7): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 29 11:15:06.529869 ignition[935]: INFO : files: createResultFile: createFiles: op(7): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 29 11:15:06.529869 ignition[935]: INFO : files: files passed Jan 29 11:15:06.529869 ignition[935]: INFO : Ignition finished successfully Jan 29 11:15:06.530799 systemd[1]: Finished ignition-files.service - Ignition (files). 
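The files stage above creates the core user with SSH keys, writes update.conf, downloads the kubernetes sysext image, and links it under /etc/extensions. Those operations are the kind an Ignition v3 config describes; the sketch below reconstructs a plausible config of that shape, where only the paths and the sysext URL are taken from the log, and the spec version, SSH key, and update.conf contents are placeholders.

    # Plausible (not actual) Ignition config matching the logged file operations.
    import json

    SYSEXT_URL = ("https://github.com/flatcar/sysext-bakery/releases/download/"
                  "latest/kubernetes-v1.31.0-x86-64.raw")

    config = {
        "ignition": {"version": "3.4.0"},                      # assumed spec version
        "passwd": {"users": [{
            "name": "core",
            "sshAuthorizedKeys": ["ssh-ed25519 AAAA... placeholder"],
        }]},
        "storage": {
            "files": [
                {"path": "/etc/flatcar/update.conf",
                 "contents": {"source": "data:,GROUP=stable%0A"},  # placeholder contents
                 "mode": 420},
                {"path": "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw",
                 "contents": {"source": SYSEXT_URL},
                 "mode": 420},
            ],
            "links": [
                {"path": "/etc/extensions/kubernetes.raw",
                 "target": "/opt/extensions/kubernetes/kubernetes-v1.31.0-x86-64.raw"},
            ],
        },
    }

    print(json.dumps(config, indent=2))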
Jan 29 11:15:06.537078 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 29 11:15:06.543973 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 29 11:15:06.548292 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 29 11:15:06.548444 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jan 29 11:15:06.564314 initrd-setup-root-after-ignition[963]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 29 11:15:06.564314 initrd-setup-root-after-ignition[963]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 29 11:15:06.567098 initrd-setup-root-after-ignition[967]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 29 11:15:06.570717 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 29 11:15:06.571440 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 29 11:15:06.583106 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 29 11:15:06.621866 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 29 11:15:06.622081 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 29 11:15:06.623647 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 29 11:15:06.624121 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 29 11:15:06.625370 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 29 11:15:06.638555 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 29 11:15:06.655862 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 29 11:15:06.662037 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 29 11:15:06.689859 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 29 11:15:06.690646 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 29 11:15:06.691537 systemd[1]: Stopped target timers.target - Timer Units. Jan 29 11:15:06.692296 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 29 11:15:06.692491 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 29 11:15:06.694032 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 29 11:15:06.694808 systemd[1]: Stopped target basic.target - Basic System. Jan 29 11:15:06.695693 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 29 11:15:06.696522 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 29 11:15:06.697606 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 29 11:15:06.698583 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 29 11:15:06.699495 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 29 11:15:06.700758 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 29 11:15:06.702133 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 29 11:15:06.703143 systemd[1]: Stopped target swap.target - Swaps. Jan 29 11:15:06.704056 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 29 11:15:06.704310 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. 
Jan 29 11:15:06.705624 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 29 11:15:06.706414 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 29 11:15:06.707436 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 29 11:15:06.707597 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 29 11:15:06.708346 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 29 11:15:06.708639 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 29 11:15:06.710524 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 29 11:15:06.710768 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 29 11:15:06.712074 systemd[1]: ignition-files.service: Deactivated successfully. Jan 29 11:15:06.712309 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 29 11:15:06.713832 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Jan 29 11:15:06.714016 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jan 29 11:15:06.732042 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 29 11:15:06.735141 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 29 11:15:06.735744 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 29 11:15:06.736969 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 29 11:15:06.739056 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 29 11:15:06.739819 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 29 11:15:06.749797 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 29 11:15:06.749964 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 29 11:15:06.759199 ignition[987]: INFO : Ignition 2.20.0 Jan 29 11:15:06.759199 ignition[987]: INFO : Stage: umount Jan 29 11:15:06.760793 ignition[987]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 29 11:15:06.760793 ignition[987]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Jan 29 11:15:06.763110 ignition[987]: INFO : umount: umount passed Jan 29 11:15:06.763110 ignition[987]: INFO : Ignition finished successfully Jan 29 11:15:06.765005 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 29 11:15:06.765127 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 29 11:15:06.766355 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 29 11:15:06.766488 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 29 11:15:06.768452 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 29 11:15:06.768527 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 29 11:15:06.769107 systemd[1]: ignition-fetch.service: Deactivated successfully. Jan 29 11:15:06.769160 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jan 29 11:15:06.772032 systemd[1]: Stopped target network.target - Network. Jan 29 11:15:06.772508 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 29 11:15:06.772587 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 29 11:15:06.773208 systemd[1]: Stopped target paths.target - Path Units. Jan 29 11:15:06.773584 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. 
Jan 29 11:15:06.773915 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 29 11:15:06.776575 systemd[1]: Stopped target slices.target - Slice Units. Jan 29 11:15:06.777101 systemd[1]: Stopped target sockets.target - Socket Units. Jan 29 11:15:06.777454 systemd[1]: iscsid.socket: Deactivated successfully. Jan 29 11:15:06.777507 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 29 11:15:06.786325 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 29 11:15:06.786386 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 29 11:15:06.787025 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 29 11:15:06.787096 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 29 11:15:06.787580 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 29 11:15:06.787649 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 29 11:15:06.788424 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 29 11:15:06.789224 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 29 11:15:06.791016 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 29 11:15:06.797803 systemd-networkd[750]: eth1: DHCPv6 lease lost Jan 29 11:15:06.806909 systemd-networkd[750]: eth0: DHCPv6 lease lost Jan 29 11:15:06.811124 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 29 11:15:06.811264 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 29 11:15:06.818156 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 29 11:15:06.818312 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 29 11:15:06.827484 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 29 11:15:06.827552 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 29 11:15:06.835129 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 29 11:15:06.841315 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 29 11:15:06.841411 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 29 11:15:06.841990 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 29 11:15:06.842053 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 29 11:15:06.865630 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 29 11:15:06.865760 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 29 11:15:06.866341 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 29 11:15:06.866404 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 29 11:15:06.867232 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 29 11:15:06.879172 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 29 11:15:06.879298 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 29 11:15:06.880267 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 29 11:15:06.880399 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 29 11:15:06.884177 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 29 11:15:06.884392 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 29 11:15:06.888138 systemd[1]: network-cleanup.service: Deactivated successfully. 
Jan 29 11:15:06.888750 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 29 11:15:06.890634 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 29 11:15:06.890816 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 29 11:15:06.891328 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 29 11:15:06.891370 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 29 11:15:06.892140 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 29 11:15:06.892199 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 29 11:15:06.893555 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 29 11:15:06.893627 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 29 11:15:06.894466 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 29 11:15:06.894537 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 29 11:15:06.901114 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 29 11:15:06.901730 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 29 11:15:06.901820 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 29 11:15:06.902619 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Jan 29 11:15:06.902687 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 29 11:15:06.903902 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 29 11:15:06.903967 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 29 11:15:06.907360 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 29 11:15:06.907454 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 29 11:15:06.913787 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 29 11:15:06.913935 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 29 11:15:06.915603 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 29 11:15:06.922967 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 29 11:15:06.933866 systemd[1]: Switching root. Jan 29 11:15:06.997929 systemd-journald[184]: Received SIGTERM from PID 1 (systemd). Jan 29 11:15:06.998011 systemd-journald[184]: Journal stopped Jan 29 11:15:08.430379 kernel: SELinux: policy capability network_peer_controls=1 Jan 29 11:15:08.430457 kernel: SELinux: policy capability open_perms=1 Jan 29 11:15:08.430471 kernel: SELinux: policy capability extended_socket_class=1 Jan 29 11:15:08.430489 kernel: SELinux: policy capability always_check_network=0 Jan 29 11:15:08.430504 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 29 11:15:08.430520 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 29 11:15:08.430532 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 29 11:15:08.430547 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 29 11:15:08.430559 kernel: audit: type=1403 audit(1738149307.211:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jan 29 11:15:08.430579 systemd[1]: Successfully loaded SELinux policy in 51.995ms. Jan 29 11:15:08.430613 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 14.594ms. 
Jan 29 11:15:08.430628 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 29 11:15:08.430641 systemd[1]: Detected virtualization kvm. Jan 29 11:15:08.430653 systemd[1]: Detected architecture x86-64. Jan 29 11:15:08.430665 systemd[1]: Detected first boot. Jan 29 11:15:08.430682 systemd[1]: Hostname set to <ci-4186.1.0-b-c9bf0051f1>. Jan 29 11:15:08.430712 systemd[1]: Initializing machine ID from VM UUID. Jan 29 11:15:08.433510 zram_generator::config[1029]: No configuration found. Jan 29 11:15:08.433549 systemd[1]: Populated /etc with preset unit settings. Jan 29 11:15:08.433566 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jan 29 11:15:08.433580 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jan 29 11:15:08.433595 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jan 29 11:15:08.433614 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 29 11:15:08.433642 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 29 11:15:08.433663 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 29 11:15:08.433683 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 29 11:15:08.433696 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 29 11:15:08.434228 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 29 11:15:08.434257 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 29 11:15:08.434284 systemd[1]: Created slice user.slice - User and Session Slice. Jan 29 11:15:08.434305 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 29 11:15:08.434324 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 29 11:15:08.434349 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 29 11:15:08.434368 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 29 11:15:08.434387 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jan 29 11:15:08.434406 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 29 11:15:08.434424 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jan 29 11:15:08.434446 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 29 11:15:08.434465 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jan 29 11:15:08.434483 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jan 29 11:15:08.434507 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jan 29 11:15:08.434526 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 29 11:15:08.434547 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 29 11:15:08.434573 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 29 11:15:08.434594 systemd[1]: Reached target slices.target - Slice Units.
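"Initializing machine ID from VM UUID" above refers to systemd deriving the machine ID from the hypervisor-provided DMI product UUID on first boot. A toy approximation follows, assuming the usual sysfs path; the real systemd logic covers many more cases.

    # Toy approximation of deriving a machine-id-style string from the VM UUID.
    from pathlib import Path

    def machine_id_from_vm_uuid() -> str:
        uuid = Path("/sys/class/dmi/id/product_uuid").read_text().strip()
        return uuid.replace("-", "").lower()    # machine IDs are 32 lowercase hex chars

    if __name__ == "__main__":
        print(machine_id_from_vm_uuid())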
Jan 29 11:15:08.434617 systemd[1]: Reached target swap.target - Swaps. Jan 29 11:15:08.434662 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 29 11:15:08.434685 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 29 11:15:08.441794 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 29 11:15:08.441838 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 29 11:15:08.441853 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 29 11:15:08.441867 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 29 11:15:08.441880 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 29 11:15:08.441893 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 29 11:15:08.441906 systemd[1]: Mounting media.mount - External Media Directory... Jan 29 11:15:08.441919 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 29 11:15:08.441941 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 29 11:15:08.441954 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 29 11:15:08.441966 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 29 11:15:08.441986 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 29 11:15:08.442006 systemd[1]: Reached target machines.target - Containers. Jan 29 11:15:08.442026 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 29 11:15:08.442047 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 29 11:15:08.442061 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 29 11:15:08.442079 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 29 11:15:08.442092 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 29 11:15:08.442106 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 29 11:15:08.442118 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 29 11:15:08.442131 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 29 11:15:08.442151 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 29 11:15:08.442172 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 29 11:15:08.442187 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jan 29 11:15:08.442201 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jan 29 11:15:08.442221 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jan 29 11:15:08.442244 systemd[1]: Stopped systemd-fsck-usr.service. Jan 29 11:15:08.442263 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 29 11:15:08.442285 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 29 11:15:08.442304 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... 
Jan 29 11:15:08.442324 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 29 11:15:08.442342 kernel: loop: module loaded Jan 29 11:15:08.442362 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 29 11:15:08.442381 systemd[1]: verity-setup.service: Deactivated successfully. Jan 29 11:15:08.442403 systemd[1]: Stopped verity-setup.service. Jan 29 11:15:08.442421 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 29 11:15:08.442456 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 29 11:15:08.442472 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 29 11:15:08.442485 systemd[1]: Mounted media.mount - External Media Directory. Jan 29 11:15:08.442498 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 29 11:15:08.456997 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 29 11:15:08.457094 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 29 11:15:08.457118 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 29 11:15:08.457168 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 29 11:15:08.457193 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 29 11:15:08.457216 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 29 11:15:08.457238 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 29 11:15:08.457261 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 29 11:15:08.457283 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 29 11:15:08.457305 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 29 11:15:08.457328 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 29 11:15:08.457413 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 29 11:15:08.457448 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 29 11:15:08.457475 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 29 11:15:08.457497 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 29 11:15:08.457565 systemd-journald[1102]: Collecting audit messages is disabled. Jan 29 11:15:08.457609 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 29 11:15:08.457631 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 29 11:15:08.457653 kernel: fuse: init (API version 7.39) Jan 29 11:15:08.457941 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 29 11:15:08.458057 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 29 11:15:08.458083 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 29 11:15:08.458105 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 29 11:15:08.458126 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 29 11:15:08.458146 systemd-journald[1102]: Journal started Jan 29 11:15:08.458186 systemd-journald[1102]: Runtime Journal (/run/log/journal/ae9f47417fa44243958999e743c7a65f) is 4.9M, max 39.3M, 34.4M free. 
Jan 29 11:15:08.467282 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 29 11:15:08.467378 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 29 11:15:08.467407 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 29 11:15:08.014753 systemd[1]: Queued start job for default target multi-user.target. Jan 29 11:15:08.036763 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Jan 29 11:15:08.037395 systemd[1]: systemd-journald.service: Deactivated successfully. Jan 29 11:15:08.484784 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Jan 29 11:15:08.484874 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 29 11:15:08.495345 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 29 11:15:08.495433 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 29 11:15:08.511182 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 29 11:15:08.511301 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 29 11:15:08.517176 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 29 11:15:08.531730 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 29 11:15:08.538736 systemd[1]: Started systemd-journald.service - Journal Service. Jan 29 11:15:08.539521 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 29 11:15:08.552381 kernel: ACPI: bus type drm_connector registered Jan 29 11:15:08.556911 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 29 11:15:08.566536 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 29 11:15:08.568285 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 29 11:15:08.572805 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 29 11:15:08.615907 kernel: loop0: detected capacity change from 0 to 8 Jan 29 11:15:08.612068 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 29 11:15:08.618226 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 29 11:15:08.626527 systemd-journald[1102]: Time spent on flushing to /var/log/journal/ae9f47417fa44243958999e743c7a65f is 119.265ms for 974 entries. Jan 29 11:15:08.626527 systemd-journald[1102]: System Journal (/var/log/journal/ae9f47417fa44243958999e743c7a65f) is 8.0M, max 195.6M, 187.6M free. Jan 29 11:15:08.781553 systemd-journald[1102]: Received client request to flush runtime journal. Jan 29 11:15:08.781606 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 29 11:15:08.781623 kernel: loop1: detected capacity change from 0 to 141000 Jan 29 11:15:08.644089 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 29 11:15:08.647293 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 29 11:15:08.657112 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Jan 29 11:15:08.672526 systemd-tmpfiles[1119]: ACLs are not supported, ignoring. 
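The journal flush statistics above (119.265 ms for 974 entries) work out to roughly 0.12 ms per entry, or about 8,200 entries per second:

    ELAPSED_MS, ENTRIES = 119.265, 974
    print(f"{ELAPSED_MS / ENTRIES:.3f} ms per entry")          # ~0.122
    print(f"{ENTRIES / (ELAPSED_MS / 1000):.0f} entries/sec")  # ~8167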
Jan 29 11:15:08.672548 systemd-tmpfiles[1119]: ACLs are not supported, ignoring. Jan 29 11:15:08.703570 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 29 11:15:08.714156 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 29 11:15:08.752333 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 29 11:15:08.760958 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jan 29 11:15:08.762461 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 29 11:15:08.764334 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Jan 29 11:15:08.778056 udevadm[1166]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Jan 29 11:15:08.795417 kernel: loop2: detected capacity change from 0 to 138184 Jan 29 11:15:08.795492 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 29 11:15:08.853841 kernel: loop3: detected capacity change from 0 to 205544 Jan 29 11:15:08.860193 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 29 11:15:08.872840 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 29 11:15:08.931732 kernel: loop4: detected capacity change from 0 to 8 Jan 29 11:15:08.935618 systemd-tmpfiles[1173]: ACLs are not supported, ignoring. Jan 29 11:15:08.936034 systemd-tmpfiles[1173]: ACLs are not supported, ignoring. Jan 29 11:15:08.941957 kernel: loop5: detected capacity change from 0 to 141000 Jan 29 11:15:08.942984 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 29 11:15:08.981050 kernel: loop6: detected capacity change from 0 to 138184 Jan 29 11:15:09.005734 kernel: loop7: detected capacity change from 0 to 205544 Jan 29 11:15:09.033982 (sd-merge)[1177]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-digitalocean'. Jan 29 11:15:09.037016 (sd-merge)[1177]: Merged extensions into '/usr'. Jan 29 11:15:09.047307 systemd[1]: Reloading requested from client PID 1130 ('systemd-sysext') (unit systemd-sysext.service)... Jan 29 11:15:09.047331 systemd[1]: Reloading... Jan 29 11:15:09.233738 zram_generator::config[1207]: No configuration found. Jan 29 11:15:09.430027 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 29 11:15:09.459740 ldconfig[1126]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 29 11:15:09.497961 systemd[1]: Reloading finished in 449 ms. Jan 29 11:15:09.539898 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 29 11:15:09.543664 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 29 11:15:09.563114 systemd[1]: Starting ensure-sysext.service... Jan 29 11:15:09.566932 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 29 11:15:09.591184 systemd[1]: Reloading requested from client PID 1247 ('systemctl') (unit ensure-sysext.service)... Jan 29 11:15:09.591211 systemd[1]: Reloading... 
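The (sd-merge) lines above show systemd-sysext overlaying the containerd-flatcar, docker-flatcar, kubernetes, and oem-digitalocean images onto /usr. The sketch below only enumerates candidate *.raw images; the search directories are recalled from the systemd-sysext documentation and may be incomplete.

    # Enumerate sysext images that could be merged; directory list is approximate.
    from pathlib import Path

    SEARCH_DIRS = ["/etc/extensions", "/run/extensions", "/var/lib/extensions"]

    def candidate_extensions() -> list[str]:
        names = []
        for directory in map(Path, SEARCH_DIRS):
            if not directory.is_dir():
                continue
            names.extend(entry.stem for entry in sorted(directory.glob("*.raw")))
        return names

    if __name__ == "__main__":
        print("Using extensions", ", ".join(repr(name) for name in candidate_extensions()))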
Jan 29 11:15:09.636527 systemd-tmpfiles[1248]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 29 11:15:09.637025 systemd-tmpfiles[1248]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jan 29 11:15:09.639325 systemd-tmpfiles[1248]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jan 29 11:15:09.639652 systemd-tmpfiles[1248]: ACLs are not supported, ignoring. Jan 29 11:15:09.639729 systemd-tmpfiles[1248]: ACLs are not supported, ignoring. Jan 29 11:15:09.650659 systemd-tmpfiles[1248]: Detected autofs mount point /boot during canonicalization of boot. Jan 29 11:15:09.650674 systemd-tmpfiles[1248]: Skipping /boot Jan 29 11:15:09.681821 systemd-tmpfiles[1248]: Detected autofs mount point /boot during canonicalization of boot. Jan 29 11:15:09.681835 systemd-tmpfiles[1248]: Skipping /boot Jan 29 11:15:09.765086 zram_generator::config[1275]: No configuration found. Jan 29 11:15:09.948178 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 29 11:15:10.017369 systemd[1]: Reloading finished in 425 ms. Jan 29 11:15:10.039872 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 29 11:15:10.055533 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 29 11:15:10.069120 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 29 11:15:10.078443 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 29 11:15:10.086072 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 29 11:15:10.107927 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 29 11:15:10.113013 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 29 11:15:10.118987 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 29 11:15:10.130590 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 29 11:15:10.131019 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 29 11:15:10.138232 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 29 11:15:10.147128 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 29 11:15:10.153895 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 29 11:15:10.154639 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 29 11:15:10.154862 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 29 11:15:10.173132 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 29 11:15:10.176989 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 29 11:15:10.177286 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. 
Jan 29 11:15:10.177544 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 29 11:15:10.177695 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 29 11:15:10.186516 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 29 11:15:10.186938 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 29 11:15:10.194726 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 29 11:15:10.196041 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 29 11:15:10.196293 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 29 11:15:10.201814 systemd[1]: Finished ensure-sysext.service. Jan 29 11:15:10.216027 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jan 29 11:15:10.217153 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 29 11:15:10.221415 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 29 11:15:10.226327 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 29 11:15:10.226603 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 29 11:15:10.239292 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 29 11:15:10.239586 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 29 11:15:10.243529 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 29 11:15:10.244983 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 29 11:15:10.249697 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 29 11:15:10.251644 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 29 11:15:10.257246 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 29 11:15:10.269770 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 29 11:15:10.277905 systemd-udevd[1330]: Using default interface naming scheme 'v255'. Jan 29 11:15:10.283091 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 29 11:15:10.284290 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 29 11:15:10.285803 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 29 11:15:10.303600 augenrules[1360]: No rules Jan 29 11:15:10.307324 systemd[1]: audit-rules.service: Deactivated successfully. Jan 29 11:15:10.307612 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 29 11:15:10.313672 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 29 11:15:10.325129 systemd[1]: Finished systemd-update-done.service - Update is Completed. 
Jan 29 11:15:10.336019 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 29 11:15:10.351928 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 29 11:15:10.460869 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jan 29 11:15:10.541244 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jan 29 11:15:10.542496 systemd[1]: Reached target time-set.target - System Time Set. Jan 29 11:15:10.556854 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 44 scanned by (udev-worker) (1379) Jan 29 11:15:10.570664 systemd-networkd[1376]: lo: Link UP Jan 29 11:15:10.570673 systemd-networkd[1376]: lo: Gained carrier Jan 29 11:15:10.579951 systemd-networkd[1376]: Enumeration completed Jan 29 11:15:10.580246 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 29 11:15:10.580641 systemd-networkd[1376]: eth0: Configuring with /run/systemd/network/10-e6:2f:cb:a8:2a:d6.network. Jan 29 11:15:10.583577 systemd-resolved[1329]: Positive Trust Anchors: Jan 29 11:15:10.583593 systemd-resolved[1329]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 29 11:15:10.583630 systemd-resolved[1329]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 29 11:15:10.586658 systemd-networkd[1376]: eth1: Configuring with /run/systemd/network/10-36:d8:40:5b:48:8e.network. Jan 29 11:15:10.587380 systemd-networkd[1376]: eth0: Link UP Jan 29 11:15:10.587390 systemd-networkd[1376]: eth0: Gained carrier Jan 29 11:15:10.588019 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 29 11:15:10.593996 systemd-networkd[1376]: eth1: Link UP Jan 29 11:15:10.594008 systemd-networkd[1376]: eth1: Gained carrier Jan 29 11:15:10.600730 systemd-resolved[1329]: Using system hostname 'ci-4186.1.0-b-c9bf0051f1'. Jan 29 11:15:10.604137 systemd-timesyncd[1342]: Network configuration changed, trying to establish connection. Jan 29 11:15:10.606996 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 29 11:15:10.607940 systemd[1]: Reached target network.target - Network. Jan 29 11:15:10.608624 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 29 11:15:10.635915 systemd[1]: Mounting media-configdrive.mount - /media/configdrive... Jan 29 11:15:10.636558 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 29 11:15:10.637157 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 29 11:15:10.645226 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 29 11:15:10.647956 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 29 11:15:10.653031 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... 
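networkd above matches interfaces against unit files named for their MAC addresses (e.g. /run/systemd/network/10-e6:2f:cb:a8:2a:d6.network). The sketch below writes a minimal unit of that shape; the [Network] options are placeholders, since the log does not show the generated file contents.

    # Write a minimal MAC-matched .network unit (illustrative contents only).
    from pathlib import Path

    def write_network_unit(mac: str, dhcp: str = "ipv4",
                           directory: str = "/run/systemd/network") -> Path:
        unit = (
            "[Match]\n"
            f"MACAddress={mac}\n"
            "\n"
            "[Network]\n"
            f"DHCP={dhcp}\n"
        )
        path = Path(directory, f"10-{mac}.network")
        path.write_text(unit)
        return path

    if __name__ == "__main__":
        print("wrote", write_network_unit("e6:2f:cb:a8:2a:d6"))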
Jan 29 11:15:10.653618 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 29 11:15:10.653665 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 29 11:15:10.653681 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jan 29 11:15:10.677457 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 29 11:15:10.677883 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 29 11:15:10.681743 kernel: ISO 9660 Extensions: RRIP_1991A Jan 29 11:15:10.686153 systemd[1]: Mounted media-configdrive.mount - /media/configdrive. Jan 29 11:15:10.697264 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 29 11:15:10.697464 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 29 11:15:10.698976 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Jan 29 11:15:10.698924 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 29 11:15:10.705509 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 29 11:15:10.705795 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 29 11:15:10.707045 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 29 11:15:10.712772 kernel: ACPI: button: Power Button [PWRF] Jan 29 11:15:10.733785 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0 Jan 29 11:15:10.780742 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Jan 29 11:15:10.811781 kernel: mousedev: PS/2 mouse device common for all mice Jan 29 11:15:10.846676 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 29 11:15:10.858167 kernel: [drm] pci: virtio-vga detected at 0000:00:02.0 Jan 29 11:15:10.858315 kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console Jan 29 11:15:10.865786 kernel: Console: switching to colour dummy device 80x25 Jan 29 11:15:10.866791 kernel: [drm] features: -virgl +edid -resource_blob -host_visible Jan 29 11:15:10.866866 kernel: [drm] features: -context_init Jan 29 11:15:10.868105 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 29 11:15:10.870836 kernel: [drm] number of scanouts: 1 Jan 29 11:15:10.870949 kernel: [drm] number of cap sets: 0 Jan 29 11:15:10.880843 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:02.0 on minor 0 Jan 29 11:15:10.882920 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 29 11:15:10.913924 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device Jan 29 11:15:10.914032 kernel: Console: switching to colour frame buffer device 128x48 Jan 29 11:15:10.923759 kernel: virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device Jan 29 11:15:10.946655 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 29 11:15:10.952014 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. 
Jan 29 11:15:10.953190 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 29 11:15:10.975144 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 29 11:15:10.982468 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 29 11:15:10.982810 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 29 11:15:10.993056 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 29 11:15:11.028029 kernel: EDAC MC: Ver: 3.0.0 Jan 29 11:15:11.074161 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jan 29 11:15:11.083130 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jan 29 11:15:11.104363 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 29 11:15:11.111676 lvm[1429]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 29 11:15:11.145529 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jan 29 11:15:11.146682 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 29 11:15:11.147628 systemd[1]: Reached target sysinit.target - System Initialization. Jan 29 11:15:11.148065 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 29 11:15:11.148268 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 29 11:15:11.148846 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 29 11:15:11.149133 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 29 11:15:11.149478 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 29 11:15:11.149696 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 29 11:15:11.149845 systemd[1]: Reached target paths.target - Path Units. Jan 29 11:15:11.149974 systemd[1]: Reached target timers.target - Timer Units. Jan 29 11:15:11.153630 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 29 11:15:11.157044 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 29 11:15:11.172372 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 29 11:15:11.184031 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jan 29 11:15:11.187218 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 29 11:15:11.188646 systemd[1]: Reached target sockets.target - Socket Units. Jan 29 11:15:11.191371 systemd[1]: Reached target basic.target - Basic System. Jan 29 11:15:11.192397 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 29 11:15:11.192450 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 29 11:15:11.193111 lvm[1435]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 29 11:15:11.201015 systemd[1]: Starting containerd.service - containerd container runtime... Jan 29 11:15:11.214046 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jan 29 11:15:11.221041 systemd[1]: Starting dbus.service - D-Bus System Message Bus... 
Jan 29 11:15:11.228849 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 29 11:15:11.234030 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 29 11:15:11.236336 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 29 11:15:11.241319 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 29 11:15:11.253120 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 29 11:15:11.260039 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 29 11:15:11.275038 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 29 11:15:11.276394 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 29 11:15:11.279216 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 29 11:15:11.293325 systemd[1]: Starting update-engine.service - Update Engine... Jan 29 11:15:11.300039 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 29 11:15:11.303668 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jan 29 11:15:11.312454 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 29 11:15:11.313026 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 29 11:15:11.319974 systemd[1]: motdgen.service: Deactivated successfully. Jan 29 11:15:11.327945 jq[1441]: false Jan 29 11:15:11.320814 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 29 11:15:11.341231 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 29 11:15:11.341561 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 29 11:15:11.372414 dbus-daemon[1438]: [system] SELinux support is enabled Jan 29 11:15:11.372922 systemd[1]: Started dbus.service - D-Bus System Message Bus. 
Jan 29 11:15:11.373771 extend-filesystems[1442]: Found loop4 Jan 29 11:15:11.382200 extend-filesystems[1442]: Found loop5 Jan 29 11:15:11.382200 extend-filesystems[1442]: Found loop6 Jan 29 11:15:11.382200 extend-filesystems[1442]: Found loop7 Jan 29 11:15:11.382200 extend-filesystems[1442]: Found vda Jan 29 11:15:11.382200 extend-filesystems[1442]: Found vda1 Jan 29 11:15:11.382200 extend-filesystems[1442]: Found vda2 Jan 29 11:15:11.382200 extend-filesystems[1442]: Found vda3 Jan 29 11:15:11.382200 extend-filesystems[1442]: Found usr Jan 29 11:15:11.382200 extend-filesystems[1442]: Found vda4 Jan 29 11:15:11.382200 extend-filesystems[1442]: Found vda6 Jan 29 11:15:11.382200 extend-filesystems[1442]: Found vda7 Jan 29 11:15:11.382200 extend-filesystems[1442]: Found vda9 Jan 29 11:15:11.382200 extend-filesystems[1442]: Checking size of /dev/vda9 Jan 29 11:15:11.428562 jq[1455]: true Jan 29 11:15:11.433920 coreos-metadata[1437]: Jan 29 11:15:11.415 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Jan 29 11:15:11.434292 update_engine[1450]: I20250129 11:15:11.385578 1450 main.cc:92] Flatcar Update Engine starting Jan 29 11:15:11.434292 update_engine[1450]: I20250129 11:15:11.409039 1450 update_check_scheduler.cc:74] Next update check in 6m39s Jan 29 11:15:11.399876 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 29 11:15:11.399919 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 29 11:15:11.437893 jq[1467]: true Jan 29 11:15:11.405926 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 29 11:15:11.406017 systemd[1]: user-configdrive.service - Load cloud-config from /media/configdrive was skipped because of an unmet condition check (ConditionKernelCommandLine=!flatcar.oem.id=digitalocean). Jan 29 11:15:11.406040 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 29 11:15:11.408463 (ntainerd)[1464]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 29 11:15:11.408771 systemd[1]: Started update-engine.service - Update Engine. Jan 29 11:15:11.430501 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 29 11:15:11.446160 coreos-metadata[1437]: Jan 29 11:15:11.445 INFO Fetch successful Jan 29 11:15:11.459903 extend-filesystems[1442]: Resized partition /dev/vda9 Jan 29 11:15:11.475832 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 15121403 blocks Jan 29 11:15:11.475959 extend-filesystems[1477]: resize2fs 1.47.1 (20-May-2024) Jan 29 11:15:11.541430 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 44 scanned by (udev-worker) (1378) Jan 29 11:15:11.568956 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jan 29 11:15:11.569389 systemd-logind[1447]: New seat seat0. Jan 29 11:15:11.583862 systemd-logind[1447]: Watching system buttons on /dev/input/event1 (Power Button) Jan 29 11:15:11.583890 systemd-logind[1447]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jan 29 11:15:11.588279 systemd[1]: Started systemd-logind.service - User Login Management. 
Jan 29 11:15:11.602170 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 29 11:15:11.729275 kernel: EXT4-fs (vda9): resized filesystem to 15121403 Jan 29 11:15:11.735694 bash[1496]: Updated "/home/core/.ssh/authorized_keys" Jan 29 11:15:11.740430 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 29 11:15:11.754302 systemd[1]: Starting sshkeys.service... Jan 29 11:15:11.800508 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Jan 29 11:15:11.808167 locksmithd[1470]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 29 11:15:11.812370 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Jan 29 11:15:11.819501 extend-filesystems[1477]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jan 29 11:15:11.819501 extend-filesystems[1477]: old_desc_blocks = 1, new_desc_blocks = 8 Jan 29 11:15:11.819501 extend-filesystems[1477]: The filesystem on /dev/vda9 is now 15121403 (4k) blocks long. Jan 29 11:15:11.828295 extend-filesystems[1442]: Resized filesystem in /dev/vda9 Jan 29 11:15:11.828295 extend-filesystems[1442]: Found vdb Jan 29 11:15:11.820347 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 29 11:15:11.820875 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 29 11:15:11.875001 coreos-metadata[1508]: Jan 29 11:15:11.874 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Jan 29 11:15:11.895733 coreos-metadata[1508]: Jan 29 11:15:11.892 INFO Fetch successful Jan 29 11:15:11.909761 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 29 11:15:11.912726 unknown[1508]: wrote ssh authorized keys file for user: core Jan 29 11:15:11.945559 update-ssh-keys[1513]: Updated "/home/core/.ssh/authorized_keys" Jan 29 11:15:11.946307 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Jan 29 11:15:11.952529 systemd[1]: Finished sshkeys.service. Jan 29 11:15:12.002999 systemd-networkd[1376]: eth0: Gained IPv6LL Jan 29 11:15:12.003531 systemd-timesyncd[1342]: Network configuration changed, trying to establish connection. Jan 29 11:15:12.009157 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 29 11:15:12.013592 systemd[1]: Reached target network-online.target - Network is Online. Jan 29 11:15:12.019276 sshd_keygen[1471]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 29 11:15:12.023010 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 11:15:12.033317 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 29 11:15:12.057003 containerd[1464]: time="2025-01-29T11:15:12.056870420Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23 Jan 29 11:15:12.079844 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 29 11:15:12.090263 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 29 11:15:12.099241 systemd[1]: Started sshd@0-143.198.151.197:22-139.178.89.65:34994.service - OpenSSH per-connection server daemon (139.178.89.65:34994). Jan 29 11:15:12.118125 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 29 11:15:12.130265 systemd[1]: issuegen.service: Deactivated successfully. 
Jan 29 11:15:12.130560 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 29 11:15:12.145339 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 29 11:15:12.159035 containerd[1464]: time="2025-01-29T11:15:12.156978527Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jan 29 11:15:12.159035 containerd[1464]: time="2025-01-29T11:15:12.158960404Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.71-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jan 29 11:15:12.159035 containerd[1464]: time="2025-01-29T11:15:12.158994963Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jan 29 11:15:12.159035 containerd[1464]: time="2025-01-29T11:15:12.159012768Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jan 29 11:15:12.159271 containerd[1464]: time="2025-01-29T11:15:12.159188903Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jan 29 11:15:12.159271 containerd[1464]: time="2025-01-29T11:15:12.159203864Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jan 29 11:15:12.159271 containerd[1464]: time="2025-01-29T11:15:12.159255371Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jan 29 11:15:12.159271 containerd[1464]: time="2025-01-29T11:15:12.159270268Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jan 29 11:15:12.159791 containerd[1464]: time="2025-01-29T11:15:12.159442609Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 29 11:15:12.159791 containerd[1464]: time="2025-01-29T11:15:12.159465148Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jan 29 11:15:12.159791 containerd[1464]: time="2025-01-29T11:15:12.159477602Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jan 29 11:15:12.159791 containerd[1464]: time="2025-01-29T11:15:12.159486623Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jan 29 11:15:12.159791 containerd[1464]: time="2025-01-29T11:15:12.159554523Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jan 29 11:15:12.159791 containerd[1464]: time="2025-01-29T11:15:12.159781618Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jan 29 11:15:12.160013 containerd[1464]: time="2025-01-29T11:15:12.159981694Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 29 11:15:12.160013 containerd[1464]: time="2025-01-29T11:15:12.160008646Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jan 29 11:15:12.160165 containerd[1464]: time="2025-01-29T11:15:12.160131876Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jan 29 11:15:12.160194 containerd[1464]: time="2025-01-29T11:15:12.160186719Z" level=info msg="metadata content store policy set" policy=shared Jan 29 11:15:12.172024 containerd[1464]: time="2025-01-29T11:15:12.171956204Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jan 29 11:15:12.173396 containerd[1464]: time="2025-01-29T11:15:12.172068211Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jan 29 11:15:12.173396 containerd[1464]: time="2025-01-29T11:15:12.172096093Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jan 29 11:15:12.173396 containerd[1464]: time="2025-01-29T11:15:12.172119585Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jan 29 11:15:12.173396 containerd[1464]: time="2025-01-29T11:15:12.172140050Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jan 29 11:15:12.173396 containerd[1464]: time="2025-01-29T11:15:12.172397557Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jan 29 11:15:12.173396 containerd[1464]: time="2025-01-29T11:15:12.172639035Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jan 29 11:15:12.174999 containerd[1464]: time="2025-01-29T11:15:12.173410865Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jan 29 11:15:12.174999 containerd[1464]: time="2025-01-29T11:15:12.173434346Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jan 29 11:15:12.174999 containerd[1464]: time="2025-01-29T11:15:12.173463024Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jan 29 11:15:12.174999 containerd[1464]: time="2025-01-29T11:15:12.173477899Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jan 29 11:15:12.174999 containerd[1464]: time="2025-01-29T11:15:12.173490772Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jan 29 11:15:12.174999 containerd[1464]: time="2025-01-29T11:15:12.173504183Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jan 29 11:15:12.174999 containerd[1464]: time="2025-01-29T11:15:12.173532343Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jan 29 11:15:12.174999 containerd[1464]: time="2025-01-29T11:15:12.173546375Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." 
type=io.containerd.service.v1 Jan 29 11:15:12.174999 containerd[1464]: time="2025-01-29T11:15:12.173561659Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jan 29 11:15:12.174999 containerd[1464]: time="2025-01-29T11:15:12.173573148Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jan 29 11:15:12.174999 containerd[1464]: time="2025-01-29T11:15:12.173585861Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jan 29 11:15:12.174999 containerd[1464]: time="2025-01-29T11:15:12.173605244Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jan 29 11:15:12.174999 containerd[1464]: time="2025-01-29T11:15:12.173618013Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jan 29 11:15:12.174999 containerd[1464]: time="2025-01-29T11:15:12.173635346Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jan 29 11:15:12.175382 containerd[1464]: time="2025-01-29T11:15:12.173647154Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jan 29 11:15:12.175382 containerd[1464]: time="2025-01-29T11:15:12.173659078Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jan 29 11:15:12.175382 containerd[1464]: time="2025-01-29T11:15:12.173671588Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jan 29 11:15:12.175382 containerd[1464]: time="2025-01-29T11:15:12.173683302Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jan 29 11:15:12.175382 containerd[1464]: time="2025-01-29T11:15:12.173695084Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jan 29 11:15:12.175382 containerd[1464]: time="2025-01-29T11:15:12.173719323Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jan 29 11:15:12.175382 containerd[1464]: time="2025-01-29T11:15:12.173733725Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jan 29 11:15:12.175382 containerd[1464]: time="2025-01-29T11:15:12.173746471Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jan 29 11:15:12.175382 containerd[1464]: time="2025-01-29T11:15:12.173759588Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jan 29 11:15:12.175382 containerd[1464]: time="2025-01-29T11:15:12.173770494Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jan 29 11:15:12.175382 containerd[1464]: time="2025-01-29T11:15:12.173783906Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jan 29 11:15:12.175382 containerd[1464]: time="2025-01-29T11:15:12.173839514Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jan 29 11:15:12.175382 containerd[1464]: time="2025-01-29T11:15:12.173856650Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." 
type=io.containerd.grpc.v1 Jan 29 11:15:12.175382 containerd[1464]: time="2025-01-29T11:15:12.173866940Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jan 29 11:15:12.175856 containerd[1464]: time="2025-01-29T11:15:12.174755789Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jan 29 11:15:12.175856 containerd[1464]: time="2025-01-29T11:15:12.174788437Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jan 29 11:15:12.175856 containerd[1464]: time="2025-01-29T11:15:12.174869253Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jan 29 11:15:12.175856 containerd[1464]: time="2025-01-29T11:15:12.174881181Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jan 29 11:15:12.175856 containerd[1464]: time="2025-01-29T11:15:12.174889795Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jan 29 11:15:12.175856 containerd[1464]: time="2025-01-29T11:15:12.174904191Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jan 29 11:15:12.175856 containerd[1464]: time="2025-01-29T11:15:12.174914510Z" level=info msg="NRI interface is disabled by configuration." Jan 29 11:15:12.175856 containerd[1464]: time="2025-01-29T11:15:12.174926623Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Jan 29 11:15:12.176230 containerd[1464]: time="2025-01-29T11:15:12.175230587Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false 
X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jan 29 11:15:12.176230 containerd[1464]: time="2025-01-29T11:15:12.175281772Z" level=info msg="Connect containerd service" Jan 29 11:15:12.176230 containerd[1464]: time="2025-01-29T11:15:12.175327481Z" level=info msg="using legacy CRI server" Jan 29 11:15:12.176230 containerd[1464]: time="2025-01-29T11:15:12.175336276Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 29 11:15:12.176230 containerd[1464]: time="2025-01-29T11:15:12.175453634Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jan 29 11:15:12.180736 containerd[1464]: time="2025-01-29T11:15:12.179254964Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 29 11:15:12.180736 containerd[1464]: time="2025-01-29T11:15:12.179425809Z" level=info msg="Start subscribing containerd event" Jan 29 11:15:12.180736 containerd[1464]: time="2025-01-29T11:15:12.179489538Z" level=info msg="Start recovering state" Jan 29 11:15:12.180736 containerd[1464]: time="2025-01-29T11:15:12.179587403Z" level=info msg="Start event monitor" Jan 29 11:15:12.180736 containerd[1464]: time="2025-01-29T11:15:12.179617400Z" level=info msg="Start snapshots syncer" Jan 29 11:15:12.180736 containerd[1464]: time="2025-01-29T11:15:12.179630375Z" level=info msg="Start cni network conf syncer for default" Jan 29 11:15:12.180736 containerd[1464]: time="2025-01-29T11:15:12.179644669Z" level=info msg="Start streaming server" Jan 29 11:15:12.180736 containerd[1464]: time="2025-01-29T11:15:12.180335536Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 29 11:15:12.180736 containerd[1464]: time="2025-01-29T11:15:12.180392914Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 29 11:15:12.180630 systemd[1]: Started containerd.service - containerd container runtime. Jan 29 11:15:12.186067 containerd[1464]: time="2025-01-29T11:15:12.186004430Z" level=info msg="containerd successfully booted in 0.130516s" Jan 29 11:15:12.188326 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 29 11:15:12.205225 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 29 11:15:12.218337 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 29 11:15:12.220364 systemd[1]: Reached target getty.target - Login Prompts. 
Jan 29 11:15:12.284235 sshd[1535]: Accepted publickey for core from 139.178.89.65 port 34994 ssh2: RSA SHA256:4pIor37l14fDv6JEMH4o8Oh9qNh/kC4nEi4yJuk4AeI Jan 29 11:15:12.287089 sshd-session[1535]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:15:12.307953 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 29 11:15:12.308243 systemd-logind[1447]: New session 1 of user core. Jan 29 11:15:12.319234 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 29 11:15:12.358897 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 29 11:15:12.375307 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 29 11:15:12.383766 (systemd)[1553]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 29 11:15:12.552480 systemd[1553]: Queued start job for default target default.target. Jan 29 11:15:12.563335 systemd[1553]: Created slice app.slice - User Application Slice. Jan 29 11:15:12.563375 systemd[1553]: Reached target paths.target - Paths. Jan 29 11:15:12.563391 systemd[1553]: Reached target timers.target - Timers. Jan 29 11:15:12.566965 systemd[1553]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 29 11:15:12.579269 systemd-networkd[1376]: eth1: Gained IPv6LL Jan 29 11:15:12.579763 systemd-timesyncd[1342]: Network configuration changed, trying to establish connection. Jan 29 11:15:12.586370 systemd[1553]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 29 11:15:12.586558 systemd[1553]: Reached target sockets.target - Sockets. Jan 29 11:15:12.586584 systemd[1553]: Reached target basic.target - Basic System. Jan 29 11:15:12.586651 systemd[1553]: Reached target default.target - Main User Target. Jan 29 11:15:12.586696 systemd[1553]: Startup finished in 191ms. Jan 29 11:15:12.587537 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 29 11:15:12.600122 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 29 11:15:12.678250 systemd[1]: Started sshd@1-143.198.151.197:22-139.178.89.65:34998.service - OpenSSH per-connection server daemon (139.178.89.65:34998). Jan 29 11:15:12.752361 sshd[1564]: Accepted publickey for core from 139.178.89.65 port 34998 ssh2: RSA SHA256:4pIor37l14fDv6JEMH4o8Oh9qNh/kC4nEi4yJuk4AeI Jan 29 11:15:12.754588 sshd-session[1564]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:15:12.762565 systemd-logind[1447]: New session 2 of user core. Jan 29 11:15:12.772061 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 29 11:15:12.845868 sshd[1566]: Connection closed by 139.178.89.65 port 34998 Jan 29 11:15:12.848025 sshd-session[1564]: pam_unix(sshd:session): session closed for user core Jan 29 11:15:12.859526 systemd[1]: sshd@1-143.198.151.197:22-139.178.89.65:34998.service: Deactivated successfully. Jan 29 11:15:12.862759 systemd[1]: session-2.scope: Deactivated successfully. Jan 29 11:15:12.865127 systemd-logind[1447]: Session 2 logged out. Waiting for processes to exit. Jan 29 11:15:12.876043 systemd[1]: Started sshd@2-143.198.151.197:22-139.178.89.65:35010.service - OpenSSH per-connection server daemon (139.178.89.65:35010). Jan 29 11:15:12.882863 systemd-logind[1447]: Removed session 2. 
Jan 29 11:15:12.928303 sshd[1571]: Accepted publickey for core from 139.178.89.65 port 35010 ssh2: RSA SHA256:4pIor37l14fDv6JEMH4o8Oh9qNh/kC4nEi4yJuk4AeI Jan 29 11:15:12.930290 sshd-session[1571]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:15:12.938151 systemd-logind[1447]: New session 3 of user core. Jan 29 11:15:12.944076 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 29 11:15:13.021277 sshd[1573]: Connection closed by 139.178.89.65 port 35010 Jan 29 11:15:13.022940 sshd-session[1571]: pam_unix(sshd:session): session closed for user core Jan 29 11:15:13.027467 systemd[1]: sshd@2-143.198.151.197:22-139.178.89.65:35010.service: Deactivated successfully. Jan 29 11:15:13.027692 systemd-logind[1447]: Session 3 logged out. Waiting for processes to exit. Jan 29 11:15:13.031380 systemd[1]: session-3.scope: Deactivated successfully. Jan 29 11:15:13.035428 systemd-logind[1447]: Removed session 3. Jan 29 11:15:13.439220 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 11:15:13.443405 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 29 11:15:13.448682 systemd[1]: Startup finished in 1.216s (kernel) + 7.462s (initrd) + 6.287s (userspace) = 14.966s. Jan 29 11:15:13.452383 (kubelet)[1582]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 29 11:15:13.473660 agetty[1549]: failed to open credentials directory Jan 29 11:15:13.473926 agetty[1550]: failed to open credentials directory Jan 29 11:15:14.251041 kubelet[1582]: E0129 11:15:14.250926 1582 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 29 11:15:14.254273 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 29 11:15:14.254497 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 29 11:15:14.255104 systemd[1]: kubelet.service: Consumed 1.387s CPU time. Jan 29 11:15:23.042011 systemd[1]: Started sshd@3-143.198.151.197:22-139.178.89.65:53846.service - OpenSSH per-connection server daemon (139.178.89.65:53846). Jan 29 11:15:23.113137 sshd[1594]: Accepted publickey for core from 139.178.89.65 port 53846 ssh2: RSA SHA256:4pIor37l14fDv6JEMH4o8Oh9qNh/kC4nEi4yJuk4AeI Jan 29 11:15:23.115031 sshd-session[1594]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:15:23.121208 systemd-logind[1447]: New session 4 of user core. Jan 29 11:15:23.129065 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 29 11:15:23.196143 sshd[1596]: Connection closed by 139.178.89.65 port 53846 Jan 29 11:15:23.196998 sshd-session[1594]: pam_unix(sshd:session): session closed for user core Jan 29 11:15:23.207733 systemd[1]: sshd@3-143.198.151.197:22-139.178.89.65:53846.service: Deactivated successfully. Jan 29 11:15:23.210459 systemd[1]: session-4.scope: Deactivated successfully. Jan 29 11:15:23.213026 systemd-logind[1447]: Session 4 logged out. Waiting for processes to exit. Jan 29 11:15:23.220477 systemd[1]: Started sshd@4-143.198.151.197:22-139.178.89.65:53856.service - OpenSSH per-connection server daemon (139.178.89.65:53856). Jan 29 11:15:23.223097 systemd-logind[1447]: Removed session 4. 
Jan 29 11:15:23.285745 sshd[1601]: Accepted publickey for core from 139.178.89.65 port 53856 ssh2: RSA SHA256:4pIor37l14fDv6JEMH4o8Oh9qNh/kC4nEi4yJuk4AeI Jan 29 11:15:23.288910 sshd-session[1601]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:15:23.297058 systemd-logind[1447]: New session 5 of user core. Jan 29 11:15:23.304009 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 29 11:15:23.364805 sshd[1603]: Connection closed by 139.178.89.65 port 53856 Jan 29 11:15:23.365656 sshd-session[1601]: pam_unix(sshd:session): session closed for user core Jan 29 11:15:23.388989 systemd[1]: sshd@4-143.198.151.197:22-139.178.89.65:53856.service: Deactivated successfully. Jan 29 11:15:23.392669 systemd[1]: session-5.scope: Deactivated successfully. Jan 29 11:15:23.394238 systemd-logind[1447]: Session 5 logged out. Waiting for processes to exit. Jan 29 11:15:23.407322 systemd[1]: Started sshd@5-143.198.151.197:22-139.178.89.65:53870.service - OpenSSH per-connection server daemon (139.178.89.65:53870). Jan 29 11:15:23.408967 systemd-logind[1447]: Removed session 5. Jan 29 11:15:23.468464 sshd[1608]: Accepted publickey for core from 139.178.89.65 port 53870 ssh2: RSA SHA256:4pIor37l14fDv6JEMH4o8Oh9qNh/kC4nEi4yJuk4AeI Jan 29 11:15:23.470595 sshd-session[1608]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:15:23.478456 systemd-logind[1447]: New session 6 of user core. Jan 29 11:15:23.485004 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 29 11:15:23.553230 sshd[1610]: Connection closed by 139.178.89.65 port 53870 Jan 29 11:15:23.552989 sshd-session[1608]: pam_unix(sshd:session): session closed for user core Jan 29 11:15:23.569693 systemd[1]: sshd@5-143.198.151.197:22-139.178.89.65:53870.service: Deactivated successfully. Jan 29 11:15:23.573489 systemd[1]: session-6.scope: Deactivated successfully. Jan 29 11:15:23.575590 systemd-logind[1447]: Session 6 logged out. Waiting for processes to exit. Jan 29 11:15:23.592397 systemd[1]: Started sshd@6-143.198.151.197:22-139.178.89.65:53874.service - OpenSSH per-connection server daemon (139.178.89.65:53874). Jan 29 11:15:23.594316 systemd-logind[1447]: Removed session 6. Jan 29 11:15:23.647975 sshd[1615]: Accepted publickey for core from 139.178.89.65 port 53874 ssh2: RSA SHA256:4pIor37l14fDv6JEMH4o8Oh9qNh/kC4nEi4yJuk4AeI Jan 29 11:15:23.650962 sshd-session[1615]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:15:23.659004 systemd-logind[1447]: New session 7 of user core. Jan 29 11:15:23.666038 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 29 11:15:23.738091 sudo[1618]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 29 11:15:23.738421 sudo[1618]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 29 11:15:23.755156 sudo[1618]: pam_unix(sudo:session): session closed for user root Jan 29 11:15:23.759552 sshd[1617]: Connection closed by 139.178.89.65 port 53874 Jan 29 11:15:23.760756 sshd-session[1615]: pam_unix(sshd:session): session closed for user core Jan 29 11:15:23.777744 systemd[1]: sshd@6-143.198.151.197:22-139.178.89.65:53874.service: Deactivated successfully. Jan 29 11:15:23.779919 systemd[1]: session-7.scope: Deactivated successfully. Jan 29 11:15:23.783005 systemd-logind[1447]: Session 7 logged out. Waiting for processes to exit. 
Jan 29 11:15:23.789259 systemd[1]: Started sshd@7-143.198.151.197:22-139.178.89.65:53890.service - OpenSSH per-connection server daemon (139.178.89.65:53890). Jan 29 11:15:23.792466 systemd-logind[1447]: Removed session 7. Jan 29 11:15:23.839546 sshd[1623]: Accepted publickey for core from 139.178.89.65 port 53890 ssh2: RSA SHA256:4pIor37l14fDv6JEMH4o8Oh9qNh/kC4nEi4yJuk4AeI Jan 29 11:15:23.842615 sshd-session[1623]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:15:23.849973 systemd-logind[1447]: New session 8 of user core. Jan 29 11:15:23.858184 systemd[1]: Started session-8.scope - Session 8 of User core. Jan 29 11:15:23.922669 sudo[1627]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 29 11:15:23.923045 sudo[1627]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 29 11:15:23.928738 sudo[1627]: pam_unix(sudo:session): session closed for user root Jan 29 11:15:23.937161 sudo[1626]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Jan 29 11:15:23.937655 sudo[1626]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 29 11:15:23.962370 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 29 11:15:24.006043 augenrules[1649]: No rules Jan 29 11:15:24.006890 systemd[1]: audit-rules.service: Deactivated successfully. Jan 29 11:15:24.007113 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 29 11:15:24.008780 sudo[1626]: pam_unix(sudo:session): session closed for user root Jan 29 11:15:24.012612 sshd[1625]: Connection closed by 139.178.89.65 port 53890 Jan 29 11:15:24.013542 sshd-session[1623]: pam_unix(sshd:session): session closed for user core Jan 29 11:15:24.024523 systemd[1]: sshd@7-143.198.151.197:22-139.178.89.65:53890.service: Deactivated successfully. Jan 29 11:15:24.027381 systemd[1]: session-8.scope: Deactivated successfully. Jan 29 11:15:24.030132 systemd-logind[1447]: Session 8 logged out. Waiting for processes to exit. Jan 29 11:15:24.035285 systemd[1]: Started sshd@8-143.198.151.197:22-139.178.89.65:53894.service - OpenSSH per-connection server daemon (139.178.89.65:53894). Jan 29 11:15:24.037621 systemd-logind[1447]: Removed session 8. Jan 29 11:15:24.090403 sshd[1657]: Accepted publickey for core from 139.178.89.65 port 53894 ssh2: RSA SHA256:4pIor37l14fDv6JEMH4o8Oh9qNh/kC4nEi4yJuk4AeI Jan 29 11:15:24.091665 sshd-session[1657]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 29 11:15:24.097808 systemd-logind[1447]: New session 9 of user core. Jan 29 11:15:24.109010 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 29 11:15:24.171612 sudo[1660]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 29 11:15:24.172019 sudo[1660]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 29 11:15:24.504890 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 29 11:15:24.512444 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 11:15:24.672822 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 29 11:15:24.676554 (kubelet)[1682]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 29 11:15:24.743608 kubelet[1682]: E0129 11:15:24.743522 1682 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 29 11:15:24.751360 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 29 11:15:24.751681 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 29 11:15:25.044760 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 11:15:25.052273 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 11:15:25.105682 systemd[1]: Reloading requested from client PID 1709 ('systemctl') (unit session-9.scope)... Jan 29 11:15:25.105720 systemd[1]: Reloading... Jan 29 11:15:25.314777 zram_generator::config[1750]: No configuration found. Jan 29 11:15:25.520103 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 29 11:15:25.667375 systemd[1]: Reloading finished in 560 ms. Jan 29 11:15:25.754085 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 29 11:15:25.754215 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 29 11:15:25.755025 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 11:15:25.763314 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 29 11:15:25.957075 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 29 11:15:25.959225 (kubelet)[1802]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 29 11:15:26.019764 kubelet[1802]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 29 11:15:26.019764 kubelet[1802]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 29 11:15:26.019764 kubelet[1802]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jan 29 11:15:26.021468 kubelet[1802]: I0129 11:15:26.021218 1802 server.go:206] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 29 11:15:26.443786 kubelet[1802]: I0129 11:15:26.443054 1802 server.go:486] "Kubelet version" kubeletVersion="v1.31.0" Jan 29 11:15:26.444158 kubelet[1802]: I0129 11:15:26.444107 1802 server.go:488] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 29 11:15:26.446263 kubelet[1802]: I0129 11:15:26.446208 1802 server.go:929] "Client rotation is on, will bootstrap in background" Jan 29 11:15:26.479332 kubelet[1802]: I0129 11:15:26.478740 1802 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 29 11:15:26.490413 kubelet[1802]: E0129 11:15:26.490369 1802 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jan 29 11:15:26.490681 kubelet[1802]: I0129 11:15:26.490664 1802 server.go:1403] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jan 29 11:15:26.496719 kubelet[1802]: I0129 11:15:26.496664 1802 server.go:744] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jan 29 11:15:26.498440 kubelet[1802]: I0129 11:15:26.498320 1802 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jan 29 11:15:26.499299 kubelet[1802]: I0129 11:15:26.498724 1802 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 29 11:15:26.499299 kubelet[1802]: I0129 11:15:26.498767 1802 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"143.198.151.197","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 29 11:15:26.499299 kubelet[1802]: I0129 11:15:26.498961 1802 topology_manager.go:138] 
"Creating topology manager with none policy" Jan 29 11:15:26.499299 kubelet[1802]: I0129 11:15:26.498972 1802 container_manager_linux.go:300] "Creating device plugin manager" Jan 29 11:15:26.499532 kubelet[1802]: I0129 11:15:26.499117 1802 state_mem.go:36] "Initialized new in-memory state store" Jan 29 11:15:26.501291 kubelet[1802]: I0129 11:15:26.501260 1802 kubelet.go:408] "Attempting to sync node with API server" Jan 29 11:15:26.501782 kubelet[1802]: I0129 11:15:26.501415 1802 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 29 11:15:26.501782 kubelet[1802]: I0129 11:15:26.501459 1802 kubelet.go:314] "Adding apiserver pod source" Jan 29 11:15:26.501782 kubelet[1802]: I0129 11:15:26.501475 1802 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 29 11:15:26.508347 kubelet[1802]: E0129 11:15:26.508278 1802 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:15:26.509998 kubelet[1802]: E0129 11:15:26.508602 1802 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:15:26.511434 kubelet[1802]: I0129 11:15:26.511145 1802 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Jan 29 11:15:26.516077 kubelet[1802]: I0129 11:15:26.515783 1802 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 29 11:15:26.517765 kubelet[1802]: W0129 11:15:26.516730 1802 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jan 29 11:15:26.517765 kubelet[1802]: I0129 11:15:26.517543 1802 server.go:1269] "Started kubelet" Jan 29 11:15:26.520212 kubelet[1802]: W0129 11:15:26.519374 1802 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes "143.198.151.197" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Jan 29 11:15:26.520212 kubelet[1802]: E0129 11:15:26.519432 1802 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes \"143.198.151.197\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" Jan 29 11:15:26.520212 kubelet[1802]: W0129 11:15:26.519588 1802 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Jan 29 11:15:26.520212 kubelet[1802]: E0129 11:15:26.519611 1802 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" Jan 29 11:15:26.520212 kubelet[1802]: I0129 11:15:26.519971 1802 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 29 11:15:26.524188 kubelet[1802]: I0129 11:15:26.523326 1802 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 29 11:15:26.525987 kubelet[1802]: I0129 11:15:26.525891 1802 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 29 11:15:26.526482 kubelet[1802]: I0129 11:15:26.526368 1802 
server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 29 11:15:26.530324 kubelet[1802]: I0129 11:15:26.529682 1802 server.go:460] "Adding debug handlers to kubelet server" Jan 29 11:15:26.531193 kubelet[1802]: I0129 11:15:26.531161 1802 volume_manager.go:289] "Starting Kubelet Volume Manager" Jan 29 11:15:26.532321 kubelet[1802]: E0129 11:15:26.532272 1802 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"143.198.151.197\" not found" Jan 29 11:15:26.533263 kubelet[1802]: I0129 11:15:26.533220 1802 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 29 11:15:26.534147 kubelet[1802]: I0129 11:15:26.534118 1802 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Jan 29 11:15:26.534937 kubelet[1802]: I0129 11:15:26.534914 1802 reconciler.go:26] "Reconciler: start to sync state" Jan 29 11:15:26.549403 kubelet[1802]: E0129 11:15:26.542048 1802 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{143.198.151.197.181f259a75a93502 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:143.198.151.197,UID:143.198.151.197,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:143.198.151.197,},FirstTimestamp:2025-01-29 11:15:26.517515522 +0000 UTC m=+0.549856483,LastTimestamp:2025-01-29 11:15:26.517515522 +0000 UTC m=+0.549856483,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:143.198.151.197,}" Jan 29 11:15:26.550578 kubelet[1802]: I0129 11:15:26.550460 1802 factory.go:221] Registration of the systemd container factory successfully Jan 29 11:15:26.550760 kubelet[1802]: I0129 11:15:26.550672 1802 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 29 11:15:26.557535 kubelet[1802]: I0129 11:15:26.557442 1802 factory.go:221] Registration of the containerd container factory successfully Jan 29 11:15:26.574483 kubelet[1802]: E0129 11:15:26.573995 1802 kubelet.go:1478] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 29 11:15:26.587445 kubelet[1802]: E0129 11:15:26.587399 1802 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"143.198.151.197\" not found" node="143.198.151.197" Jan 29 11:15:26.591593 kubelet[1802]: I0129 11:15:26.591520 1802 cpu_manager.go:214] "Starting CPU manager" policy="none" Jan 29 11:15:26.591870 kubelet[1802]: I0129 11:15:26.591850 1802 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jan 29 11:15:26.592016 kubelet[1802]: I0129 11:15:26.592003 1802 state_mem.go:36] "Initialized new in-memory state store" Jan 29 11:15:26.603750 kubelet[1802]: I0129 11:15:26.603713 1802 policy_none.go:49] "None policy: Start" Jan 29 11:15:26.605503 kubelet[1802]: I0129 11:15:26.605435 1802 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 29 11:15:26.605692 kubelet[1802]: I0129 11:15:26.605679 1802 state_mem.go:35] "Initializing new in-memory state store" Jan 29 11:15:26.626022 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 29 11:15:26.633553 kubelet[1802]: E0129 11:15:26.633469 1802 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"143.198.151.197\" not found" Jan 29 11:15:26.641608 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 29 11:15:26.647152 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jan 29 11:15:26.654451 kubelet[1802]: I0129 11:15:26.654302 1802 manager.go:510] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 29 11:15:26.658778 kubelet[1802]: I0129 11:15:26.658418 1802 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 29 11:15:26.658778 kubelet[1802]: I0129 11:15:26.658460 1802 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 29 11:15:26.659691 kubelet[1802]: I0129 11:15:26.659538 1802 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 29 11:15:26.667165 kubelet[1802]: E0129 11:15:26.667053 1802 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"143.198.151.197\" not found" Jan 29 11:15:26.676947 kubelet[1802]: I0129 11:15:26.676855 1802 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 29 11:15:26.679518 kubelet[1802]: I0129 11:15:26.679453 1802 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jan 29 11:15:26.679679 kubelet[1802]: I0129 11:15:26.679668 1802 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 29 11:15:26.679838 kubelet[1802]: I0129 11:15:26.679823 1802 kubelet.go:2321] "Starting kubelet main sync loop" Jan 29 11:15:26.680580 kubelet[1802]: E0129 11:15:26.680185 1802 kubelet.go:2345] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful" Jan 29 11:15:26.762083 kubelet[1802]: I0129 11:15:26.760827 1802 kubelet_node_status.go:72] "Attempting to register node" node="143.198.151.197" Jan 29 11:15:26.773745 kubelet[1802]: I0129 11:15:26.773614 1802 kubelet_node_status.go:75] "Successfully registered node" node="143.198.151.197" Jan 29 11:15:26.773745 kubelet[1802]: E0129 11:15:26.773655 1802 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"143.198.151.197\": node \"143.198.151.197\" not found" Jan 29 11:15:26.799590 kubelet[1802]: E0129 11:15:26.799471 1802 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"143.198.151.197\" not found" Jan 29 11:15:26.900233 kubelet[1802]: E0129 11:15:26.900179 1802 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"143.198.151.197\" not found" Jan 29 11:15:27.000406 kubelet[1802]: E0129 11:15:27.000318 1802 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"143.198.151.197\" not found" Jan 29 11:15:27.012959 sudo[1660]: pam_unix(sudo:session): session closed for user root Jan 29 11:15:27.017831 sshd[1659]: Connection closed by 139.178.89.65 port 53894 Jan 29 11:15:27.018080 sshd-session[1657]: pam_unix(sshd:session): session closed for user core Jan 29 11:15:27.022443 systemd[1]: sshd@8-143.198.151.197:22-139.178.89.65:53894.service: Deactivated successfully. Jan 29 11:15:27.026063 systemd[1]: session-9.scope: Deactivated successfully. Jan 29 11:15:27.029458 systemd-logind[1447]: Session 9 logged out. Waiting for processes to exit. Jan 29 11:15:27.031351 systemd-logind[1447]: Removed session 9. 
Jan 29 11:15:27.100896 kubelet[1802]: E0129 11:15:27.100831 1802 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"143.198.151.197\" not found" Jan 29 11:15:27.201686 kubelet[1802]: E0129 11:15:27.201599 1802 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"143.198.151.197\" not found" Jan 29 11:15:27.302655 kubelet[1802]: E0129 11:15:27.302425 1802 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"143.198.151.197\" not found" Jan 29 11:15:27.403354 kubelet[1802]: E0129 11:15:27.403285 1802 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"143.198.151.197\" not found" Jan 29 11:15:27.449343 kubelet[1802]: I0129 11:15:27.449229 1802 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Jan 29 11:15:27.449757 kubelet[1802]: W0129 11:15:27.449553 1802 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.CSIDriver ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Jan 29 11:15:27.449757 kubelet[1802]: W0129 11:15:27.449654 1802 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.RuntimeClass ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Jan 29 11:15:27.503893 kubelet[1802]: E0129 11:15:27.503830 1802 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"143.198.151.197\" not found" Jan 29 11:15:27.509534 kubelet[1802]: E0129 11:15:27.509416 1802 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:15:27.604970 kubelet[1802]: E0129 11:15:27.604583 1802 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"143.198.151.197\" not found" Jan 29 11:15:27.705190 kubelet[1802]: E0129 11:15:27.705118 1802 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"143.198.151.197\" not found" Jan 29 11:15:27.806085 kubelet[1802]: E0129 11:15:27.806017 1802 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"143.198.151.197\" not found" Jan 29 11:15:27.908414 kubelet[1802]: I0129 11:15:27.908226 1802 kuberuntime_manager.go:1633] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24" Jan 29 11:15:27.909339 containerd[1464]: time="2025-01-29T11:15:27.909228825Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Jan 29 11:15:27.910005 kubelet[1802]: I0129 11:15:27.909931 1802 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24" Jan 29 11:15:28.509838 kubelet[1802]: E0129 11:15:28.509767 1802 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:15:28.509838 kubelet[1802]: I0129 11:15:28.509860 1802 apiserver.go:52] "Watching apiserver" Jan 29 11:15:28.522596 kubelet[1802]: E0129 11:15:28.522530 1802 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-dx4jg" podUID="17de0467-9566-43ce-a406-ef6b976cb6c5" Jan 29 11:15:28.531605 systemd[1]: Created slice kubepods-besteffort-pod1272de0e_49ef_46c9_bb4e_442a28ed9ed3.slice - libcontainer container kubepods-besteffort-pod1272de0e_49ef_46c9_bb4e_442a28ed9ed3.slice. Jan 29 11:15:28.538817 kubelet[1802]: I0129 11:15:28.538777 1802 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Jan 29 11:15:28.550050 systemd[1]: Created slice kubepods-besteffort-pod906fa76e_f7b7_43cd_a33c_1b0e711ee459.slice - libcontainer container kubepods-besteffort-pod906fa76e_f7b7_43cd_a33c_1b0e711ee459.slice. Jan 29 11:15:28.552242 kubelet[1802]: I0129 11:15:28.551151 1802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/906fa76e-f7b7-43cd-a33c-1b0e711ee459-policysync\") pod \"calico-node-zppq6\" (UID: \"906fa76e-f7b7-43cd-a33c-1b0e711ee459\") " pod="calico-system/calico-node-zppq6" Jan 29 11:15:28.552242 kubelet[1802]: I0129 11:15:28.551206 1802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/906fa76e-f7b7-43cd-a33c-1b0e711ee459-tigera-ca-bundle\") pod \"calico-node-zppq6\" (UID: \"906fa76e-f7b7-43cd-a33c-1b0e711ee459\") " pod="calico-system/calico-node-zppq6" Jan 29 11:15:28.552242 kubelet[1802]: I0129 11:15:28.551230 1802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/906fa76e-f7b7-43cd-a33c-1b0e711ee459-flexvol-driver-host\") pod \"calico-node-zppq6\" (UID: \"906fa76e-f7b7-43cd-a33c-1b0e711ee459\") " pod="calico-system/calico-node-zppq6" Jan 29 11:15:28.552242 kubelet[1802]: I0129 11:15:28.551251 1802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vclk2\" (UniqueName: \"kubernetes.io/projected/17de0467-9566-43ce-a406-ef6b976cb6c5-kube-api-access-vclk2\") pod \"csi-node-driver-dx4jg\" (UID: \"17de0467-9566-43ce-a406-ef6b976cb6c5\") " pod="calico-system/csi-node-driver-dx4jg" Jan 29 11:15:28.552242 kubelet[1802]: I0129 11:15:28.551268 1802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/17de0467-9566-43ce-a406-ef6b976cb6c5-socket-dir\") pod \"csi-node-driver-dx4jg\" (UID: \"17de0467-9566-43ce-a406-ef6b976cb6c5\") " pod="calico-system/csi-node-driver-dx4jg" Jan 29 11:15:28.552565 kubelet[1802]: I0129 11:15:28.551290 1802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: 
\"kubernetes.io/host-path/906fa76e-f7b7-43cd-a33c-1b0e711ee459-xtables-lock\") pod \"calico-node-zppq6\" (UID: \"906fa76e-f7b7-43cd-a33c-1b0e711ee459\") " pod="calico-system/calico-node-zppq6" Jan 29 11:15:28.552565 kubelet[1802]: I0129 11:15:28.551312 1802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/906fa76e-f7b7-43cd-a33c-1b0e711ee459-node-certs\") pod \"calico-node-zppq6\" (UID: \"906fa76e-f7b7-43cd-a33c-1b0e711ee459\") " pod="calico-system/calico-node-zppq6" Jan 29 11:15:28.552565 kubelet[1802]: I0129 11:15:28.551349 1802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/906fa76e-f7b7-43cd-a33c-1b0e711ee459-var-lib-calico\") pod \"calico-node-zppq6\" (UID: \"906fa76e-f7b7-43cd-a33c-1b0e711ee459\") " pod="calico-system/calico-node-zppq6" Jan 29 11:15:28.552565 kubelet[1802]: I0129 11:15:28.551379 1802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/906fa76e-f7b7-43cd-a33c-1b0e711ee459-cni-log-dir\") pod \"calico-node-zppq6\" (UID: \"906fa76e-f7b7-43cd-a33c-1b0e711ee459\") " pod="calico-system/calico-node-zppq6" Jan 29 11:15:28.552565 kubelet[1802]: I0129 11:15:28.551400 1802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j9vfh\" (UniqueName: \"kubernetes.io/projected/906fa76e-f7b7-43cd-a33c-1b0e711ee459-kube-api-access-j9vfh\") pod \"calico-node-zppq6\" (UID: \"906fa76e-f7b7-43cd-a33c-1b0e711ee459\") " pod="calico-system/calico-node-zppq6" Jan 29 11:15:28.552749 kubelet[1802]: I0129 11:15:28.551419 1802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/17de0467-9566-43ce-a406-ef6b976cb6c5-varrun\") pod \"csi-node-driver-dx4jg\" (UID: \"17de0467-9566-43ce-a406-ef6b976cb6c5\") " pod="calico-system/csi-node-driver-dx4jg" Jan 29 11:15:28.552749 kubelet[1802]: I0129 11:15:28.551439 1802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/17de0467-9566-43ce-a406-ef6b976cb6c5-kubelet-dir\") pod \"csi-node-driver-dx4jg\" (UID: \"17de0467-9566-43ce-a406-ef6b976cb6c5\") " pod="calico-system/csi-node-driver-dx4jg" Jan 29 11:15:28.552749 kubelet[1802]: I0129 11:15:28.551462 1802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1272de0e-49ef-46c9-bb4e-442a28ed9ed3-xtables-lock\") pod \"kube-proxy-x66fz\" (UID: \"1272de0e-49ef-46c9-bb4e-442a28ed9ed3\") " pod="kube-system/kube-proxy-x66fz" Jan 29 11:15:28.552749 kubelet[1802]: I0129 11:15:28.551480 1802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5kmqh\" (UniqueName: \"kubernetes.io/projected/1272de0e-49ef-46c9-bb4e-442a28ed9ed3-kube-api-access-5kmqh\") pod \"kube-proxy-x66fz\" (UID: \"1272de0e-49ef-46c9-bb4e-442a28ed9ed3\") " pod="kube-system/kube-proxy-x66fz" Jan 29 11:15:28.552749 kubelet[1802]: I0129 11:15:28.551499 1802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/906fa76e-f7b7-43cd-a33c-1b0e711ee459-lib-modules\") pod 
\"calico-node-zppq6\" (UID: \"906fa76e-f7b7-43cd-a33c-1b0e711ee459\") " pod="calico-system/calico-node-zppq6" Jan 29 11:15:28.552869 kubelet[1802]: I0129 11:15:28.551520 1802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/906fa76e-f7b7-43cd-a33c-1b0e711ee459-var-run-calico\") pod \"calico-node-zppq6\" (UID: \"906fa76e-f7b7-43cd-a33c-1b0e711ee459\") " pod="calico-system/calico-node-zppq6" Jan 29 11:15:28.552869 kubelet[1802]: I0129 11:15:28.551591 1802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/906fa76e-f7b7-43cd-a33c-1b0e711ee459-cni-bin-dir\") pod \"calico-node-zppq6\" (UID: \"906fa76e-f7b7-43cd-a33c-1b0e711ee459\") " pod="calico-system/calico-node-zppq6" Jan 29 11:15:28.552869 kubelet[1802]: I0129 11:15:28.551620 1802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/906fa76e-f7b7-43cd-a33c-1b0e711ee459-cni-net-dir\") pod \"calico-node-zppq6\" (UID: \"906fa76e-f7b7-43cd-a33c-1b0e711ee459\") " pod="calico-system/calico-node-zppq6" Jan 29 11:15:28.552869 kubelet[1802]: I0129 11:15:28.551690 1802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/17de0467-9566-43ce-a406-ef6b976cb6c5-registration-dir\") pod \"csi-node-driver-dx4jg\" (UID: \"17de0467-9566-43ce-a406-ef6b976cb6c5\") " pod="calico-system/csi-node-driver-dx4jg" Jan 29 11:15:28.552869 kubelet[1802]: I0129 11:15:28.551730 1802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/1272de0e-49ef-46c9-bb4e-442a28ed9ed3-kube-proxy\") pod \"kube-proxy-x66fz\" (UID: \"1272de0e-49ef-46c9-bb4e-442a28ed9ed3\") " pod="kube-system/kube-proxy-x66fz" Jan 29 11:15:28.553009 kubelet[1802]: I0129 11:15:28.551798 1802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1272de0e-49ef-46c9-bb4e-442a28ed9ed3-lib-modules\") pod \"kube-proxy-x66fz\" (UID: \"1272de0e-49ef-46c9-bb4e-442a28ed9ed3\") " pod="kube-system/kube-proxy-x66fz" Jan 29 11:15:28.664514 kubelet[1802]: E0129 11:15:28.664269 1802 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:15:28.664514 kubelet[1802]: W0129 11:15:28.664310 1802 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:15:28.664514 kubelet[1802]: E0129 11:15:28.664359 1802 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 11:15:28.684567 kubelet[1802]: E0129 11:15:28.682834 1802 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:15:28.684567 kubelet[1802]: W0129 11:15:28.682876 1802 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:15:28.684567 kubelet[1802]: E0129 11:15:28.683018 1802 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:15:28.702117 kubelet[1802]: E0129 11:15:28.702079 1802 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:15:28.702316 kubelet[1802]: W0129 11:15:28.702295 1802 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:15:28.702422 kubelet[1802]: E0129 11:15:28.702407 1802 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:15:28.702854 kubelet[1802]: E0129 11:15:28.702836 1802 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:15:28.702964 kubelet[1802]: W0129 11:15:28.702949 1802 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:15:28.703040 kubelet[1802]: E0129 11:15:28.703027 1802 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 11:15:28.845734 kubelet[1802]: E0129 11:15:28.845515 1802 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 29 11:15:28.847496 containerd[1464]: time="2025-01-29T11:15:28.847316439Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-x66fz,Uid:1272de0e-49ef-46c9-bb4e-442a28ed9ed3,Namespace:kube-system,Attempt:0,}" Jan 29 11:15:28.857595 kubelet[1802]: E0129 11:15:28.857234 1802 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 29 11:15:28.858393 containerd[1464]: time="2025-01-29T11:15:28.858024969Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-zppq6,Uid:906fa76e-f7b7-43cd-a33c-1b0e711ee459,Namespace:calico-system,Attempt:0,}" Jan 29 11:15:29.510311 kubelet[1802]: E0129 11:15:29.510226 1802 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:15:29.516743 containerd[1464]: time="2025-01-29T11:15:29.516508460Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 11:15:29.520918 containerd[1464]: time="2025-01-29T11:15:29.520824356Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Jan 29 11:15:29.527744 containerd[1464]: time="2025-01-29T11:15:29.527539134Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 11:15:29.530073 containerd[1464]: time="2025-01-29T11:15:29.529991607Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 11:15:29.533526 containerd[1464]: time="2025-01-29T11:15:29.533434323Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 29 11:15:29.535262 containerd[1464]: time="2025-01-29T11:15:29.535149672Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 29 11:15:29.537133 containerd[1464]: time="2025-01-29T11:15:29.536809645Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 689.242184ms" Jan 29 11:15:29.545784 containerd[1464]: time="2025-01-29T11:15:29.545693475Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 687.556837ms" Jan 29 
11:15:29.678652 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount183123551.mount: Deactivated successfully. Jan 29 11:15:29.681765 kubelet[1802]: E0129 11:15:29.681518 1802 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-dx4jg" podUID="17de0467-9566-43ce-a406-ef6b976cb6c5" Jan 29 11:15:29.809310 containerd[1464]: time="2025-01-29T11:15:29.808195387Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 11:15:29.809310 containerd[1464]: time="2025-01-29T11:15:29.808280464Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 11:15:29.809310 containerd[1464]: time="2025-01-29T11:15:29.808298388Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:15:29.809310 containerd[1464]: time="2025-01-29T11:15:29.808398123Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:15:29.812714 containerd[1464]: time="2025-01-29T11:15:29.812240437Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 11:15:29.812714 containerd[1464]: time="2025-01-29T11:15:29.812325782Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 11:15:29.812714 containerd[1464]: time="2025-01-29T11:15:29.812348409Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:15:29.812714 containerd[1464]: time="2025-01-29T11:15:29.812479194Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:15:29.942090 systemd[1]: Started cri-containerd-4cc322cd7270c80953a1acf59e6052831f87eaffea7477eb67cea5bc2b8c9a71.scope - libcontainer container 4cc322cd7270c80953a1acf59e6052831f87eaffea7477eb67cea5bc2b8c9a71. Jan 29 11:15:29.947191 systemd[1]: Started cri-containerd-e284ae9c966ddfb67c481e8b16909328a4d9690e0607df0873b88a3e15e0fae8.scope - libcontainer container e284ae9c966ddfb67c481e8b16909328a4d9690e0607df0873b88a3e15e0fae8. 
Jan 29 11:15:30.011005 containerd[1464]: time="2025-01-29T11:15:30.010815377Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-x66fz,Uid:1272de0e-49ef-46c9-bb4e-442a28ed9ed3,Namespace:kube-system,Attempt:0,} returns sandbox id \"e284ae9c966ddfb67c481e8b16909328a4d9690e0607df0873b88a3e15e0fae8\"" Jan 29 11:15:30.011462 containerd[1464]: time="2025-01-29T11:15:30.011416606Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-zppq6,Uid:906fa76e-f7b7-43cd-a33c-1b0e711ee459,Namespace:calico-system,Attempt:0,} returns sandbox id \"4cc322cd7270c80953a1acf59e6052831f87eaffea7477eb67cea5bc2b8c9a71\"" Jan 29 11:15:30.014353 kubelet[1802]: E0129 11:15:30.014250 1802 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 29 11:15:30.014990 kubelet[1802]: E0129 11:15:30.014689 1802 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 29 11:15:30.017355 containerd[1464]: time="2025-01-29T11:15:30.017125517Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.5\"" Jan 29 11:15:30.511302 kubelet[1802]: E0129 11:15:30.511225 1802 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:15:31.308746 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount767520381.mount: Deactivated successfully. Jan 29 11:15:31.512118 kubelet[1802]: E0129 11:15:31.511980 1802 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:15:31.680927 kubelet[1802]: E0129 11:15:31.680319 1802 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-dx4jg" podUID="17de0467-9566-43ce-a406-ef6b976cb6c5" Jan 29 11:15:31.984087 containerd[1464]: time="2025-01-29T11:15:31.983185867Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:15:31.984882 containerd[1464]: time="2025-01-29T11:15:31.984810152Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.5: active requests=0, bytes read=30231128" Jan 29 11:15:31.985406 containerd[1464]: time="2025-01-29T11:15:31.985371446Z" level=info msg="ImageCreate event name:\"sha256:34018aef09a62f8b40bdd1d2e1bf6c48f359cab492d51059a09e20745ab02ce2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:15:31.989286 containerd[1464]: time="2025-01-29T11:15:31.989208567Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:c00685cc45c1fb539c5bbd8d24d2577f96e9399efac1670f688f654b30f8c64c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:15:31.990343 containerd[1464]: time="2025-01-29T11:15:31.990305461Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.5\" with image id \"sha256:34018aef09a62f8b40bdd1d2e1bf6c48f359cab492d51059a09e20745ab02ce2\", repo tag \"registry.k8s.io/kube-proxy:v1.31.5\", repo digest \"registry.k8s.io/kube-proxy@sha256:c00685cc45c1fb539c5bbd8d24d2577f96e9399efac1670f688f654b30f8c64c\", size \"30230147\" in 1.97311494s" Jan 29 
11:15:31.990343 containerd[1464]: time="2025-01-29T11:15:31.990377615Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.5\" returns image reference \"sha256:34018aef09a62f8b40bdd1d2e1bf6c48f359cab492d51059a09e20745ab02ce2\"" Jan 29 11:15:31.993398 containerd[1464]: time="2025-01-29T11:15:31.992957498Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\"" Jan 29 11:15:31.994999 containerd[1464]: time="2025-01-29T11:15:31.994963809Z" level=info msg="CreateContainer within sandbox \"e284ae9c966ddfb67c481e8b16909328a4d9690e0607df0873b88a3e15e0fae8\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 29 11:15:32.026995 containerd[1464]: time="2025-01-29T11:15:32.026881763Z" level=info msg="CreateContainer within sandbox \"e284ae9c966ddfb67c481e8b16909328a4d9690e0607df0873b88a3e15e0fae8\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"eda13cfadc75d977128ed951994ad1ff5377ebaf2a6ce7cb16cdc9d2d350279e\"" Jan 29 11:15:32.028156 containerd[1464]: time="2025-01-29T11:15:32.028069523Z" level=info msg="StartContainer for \"eda13cfadc75d977128ed951994ad1ff5377ebaf2a6ce7cb16cdc9d2d350279e\"" Jan 29 11:15:32.082028 systemd[1]: Started cri-containerd-eda13cfadc75d977128ed951994ad1ff5377ebaf2a6ce7cb16cdc9d2d350279e.scope - libcontainer container eda13cfadc75d977128ed951994ad1ff5377ebaf2a6ce7cb16cdc9d2d350279e. Jan 29 11:15:32.138656 containerd[1464]: time="2025-01-29T11:15:32.138381960Z" level=info msg="StartContainer for \"eda13cfadc75d977128ed951994ad1ff5377ebaf2a6ce7cb16cdc9d2d350279e\" returns successfully" Jan 29 11:15:32.512913 kubelet[1802]: E0129 11:15:32.512847 1802 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:15:32.732892 kubelet[1802]: E0129 11:15:32.732816 1802 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 29 11:15:32.749190 kubelet[1802]: I0129 11:15:32.749097 1802 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-x66fz" podStartSLOduration=3.773593157 podStartE2EDuration="5.749071684s" podCreationTimestamp="2025-01-29 11:15:27 +0000 UTC" firstStartedPulling="2025-01-29 11:15:30.016356385 +0000 UTC m=+4.048697323" lastFinishedPulling="2025-01-29 11:15:31.991834888 +0000 UTC m=+6.024175850" observedRunningTime="2025-01-29 11:15:32.748571175 +0000 UTC m=+6.780912137" watchObservedRunningTime="2025-01-29 11:15:32.749071684 +0000 UTC m=+6.781412644" Jan 29 11:15:32.780394 kubelet[1802]: E0129 11:15:32.780252 1802 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:15:32.780394 kubelet[1802]: W0129 11:15:32.780286 1802 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:15:32.780394 kubelet[1802]: E0129 11:15:32.780315 1802 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 11:15:32.780924 kubelet[1802]: E0129 11:15:32.780892 1802 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:15:32.780924 kubelet[1802]: W0129 11:15:32.780914 1802 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:15:32.781096 kubelet[1802]: E0129 11:15:32.780951 1802 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:15:32.781542 kubelet[1802]: E0129 11:15:32.781520 1802 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:15:32.781542 kubelet[1802]: W0129 11:15:32.781538 1802 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:15:32.781674 kubelet[1802]: E0129 11:15:32.781556 1802 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:15:32.781904 kubelet[1802]: E0129 11:15:32.781880 1802 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:15:32.782036 kubelet[1802]: W0129 11:15:32.781911 1802 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:15:32.782036 kubelet[1802]: E0129 11:15:32.781928 1802 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:15:32.782269 kubelet[1802]: E0129 11:15:32.782252 1802 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:15:32.782337 kubelet[1802]: W0129 11:15:32.782268 1802 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:15:32.782337 kubelet[1802]: E0129 11:15:32.782299 1802 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:15:32.782673 kubelet[1802]: E0129 11:15:32.782642 1802 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:15:32.782673 kubelet[1802]: W0129 11:15:32.782671 1802 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:15:32.782871 kubelet[1802]: E0129 11:15:32.782686 1802 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 11:15:32.783014 kubelet[1802]: E0129 11:15:32.782998 1802 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:15:32.783062 kubelet[1802]: W0129 11:15:32.783013 1802 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:15:32.783104 kubelet[1802]: E0129 11:15:32.783067 1802 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:15:32.783346 kubelet[1802]: E0129 11:15:32.783328 1802 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:15:32.783346 kubelet[1802]: W0129 11:15:32.783343 1802 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:15:32.783441 kubelet[1802]: E0129 11:15:32.783356 1802 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:15:32.783657 kubelet[1802]: E0129 11:15:32.783640 1802 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:15:32.783657 kubelet[1802]: W0129 11:15:32.783655 1802 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:15:32.783777 kubelet[1802]: E0129 11:15:32.783694 1802 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:15:32.784129 kubelet[1802]: E0129 11:15:32.784107 1802 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:15:32.784218 kubelet[1802]: W0129 11:15:32.784142 1802 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:15:32.784218 kubelet[1802]: E0129 11:15:32.784158 1802 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:15:32.784481 kubelet[1802]: E0129 11:15:32.784466 1802 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:15:32.784523 kubelet[1802]: W0129 11:15:32.784482 1802 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:15:32.784523 kubelet[1802]: E0129 11:15:32.784510 1802 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 11:15:32.784836 kubelet[1802]: E0129 11:15:32.784818 1802 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:15:32.784893 kubelet[1802]: W0129 11:15:32.784834 1802 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:15:32.784893 kubelet[1802]: E0129 11:15:32.784857 1802 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:15:32.785158 kubelet[1802]: E0129 11:15:32.785142 1802 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:15:32.785210 kubelet[1802]: W0129 11:15:32.785159 1802 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:15:32.785210 kubelet[1802]: E0129 11:15:32.785175 1802 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:15:32.785439 kubelet[1802]: E0129 11:15:32.785422 1802 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:15:32.785439 kubelet[1802]: W0129 11:15:32.785436 1802 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:15:32.785556 kubelet[1802]: E0129 11:15:32.785450 1802 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:15:32.785730 kubelet[1802]: E0129 11:15:32.785665 1802 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:15:32.785820 kubelet[1802]: W0129 11:15:32.785775 1802 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:15:32.785820 kubelet[1802]: E0129 11:15:32.785793 1802 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:15:32.786045 kubelet[1802]: E0129 11:15:32.786030 1802 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:15:32.786045 kubelet[1802]: W0129 11:15:32.786044 1802 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:15:32.786124 kubelet[1802]: E0129 11:15:32.786057 1802 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 11:15:32.786287 kubelet[1802]: E0129 11:15:32.786274 1802 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:15:32.786318 kubelet[1802]: W0129 11:15:32.786288 1802 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:15:32.786318 kubelet[1802]: E0129 11:15:32.786301 1802 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:15:32.786545 kubelet[1802]: E0129 11:15:32.786530 1802 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:15:32.786545 kubelet[1802]: W0129 11:15:32.786542 1802 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:15:32.786618 kubelet[1802]: E0129 11:15:32.786556 1802 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:15:32.786867 kubelet[1802]: E0129 11:15:32.786834 1802 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:15:32.786867 kubelet[1802]: W0129 11:15:32.786847 1802 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:15:32.786867 kubelet[1802]: E0129 11:15:32.786860 1802 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:15:32.787100 kubelet[1802]: E0129 11:15:32.787086 1802 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:15:32.787152 kubelet[1802]: W0129 11:15:32.787102 1802 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:15:32.787152 kubelet[1802]: E0129 11:15:32.787115 1802 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:15:32.881341 kubelet[1802]: E0129 11:15:32.881103 1802 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:15:32.881341 kubelet[1802]: W0129 11:15:32.881139 1802 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:15:32.881341 kubelet[1802]: E0129 11:15:32.881163 1802 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 11:15:32.882161 kubelet[1802]: E0129 11:15:32.881864 1802 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:15:32.882161 kubelet[1802]: W0129 11:15:32.881885 1802 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:15:32.882161 kubelet[1802]: E0129 11:15:32.881906 1802 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:15:32.882532 kubelet[1802]: E0129 11:15:32.882418 1802 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:15:32.882532 kubelet[1802]: W0129 11:15:32.882432 1802 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:15:32.882532 kubelet[1802]: E0129 11:15:32.882445 1802 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:15:32.882737 kubelet[1802]: E0129 11:15:32.882722 1802 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:15:32.882828 kubelet[1802]: W0129 11:15:32.882813 1802 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:15:32.883024 kubelet[1802]: E0129 11:15:32.882914 1802 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:15:32.883315 kubelet[1802]: E0129 11:15:32.883213 1802 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:15:32.883315 kubelet[1802]: W0129 11:15:32.883225 1802 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:15:32.883315 kubelet[1802]: E0129 11:15:32.883262 1802 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:15:32.883660 kubelet[1802]: E0129 11:15:32.883534 1802 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:15:32.883660 kubelet[1802]: W0129 11:15:32.883550 1802 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:15:32.883660 kubelet[1802]: E0129 11:15:32.883574 1802 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 11:15:32.884247 kubelet[1802]: E0129 11:15:32.884085 1802 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:15:32.884247 kubelet[1802]: W0129 11:15:32.884115 1802 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:15:32.884247 kubelet[1802]: E0129 11:15:32.884134 1802 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:15:32.884412 kubelet[1802]: E0129 11:15:32.884401 1802 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:15:32.884465 kubelet[1802]: W0129 11:15:32.884456 1802 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:15:32.884518 kubelet[1802]: E0129 11:15:32.884509 1802 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:15:32.884975 kubelet[1802]: E0129 11:15:32.884959 1802 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:15:32.885187 kubelet[1802]: W0129 11:15:32.885070 1802 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:15:32.885187 kubelet[1802]: E0129 11:15:32.885110 1802 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:15:32.885613 kubelet[1802]: E0129 11:15:32.885366 1802 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:15:32.885613 kubelet[1802]: W0129 11:15:32.885377 1802 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:15:32.885613 kubelet[1802]: E0129 11:15:32.885388 1802 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:15:32.885749 kubelet[1802]: E0129 11:15:32.885731 1802 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:15:32.885831 kubelet[1802]: W0129 11:15:32.885751 1802 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:15:32.885866 kubelet[1802]: E0129 11:15:32.885848 1802 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 11:15:32.886160 kubelet[1802]: E0129 11:15:32.886143 1802 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:15:32.886160 kubelet[1802]: W0129 11:15:32.886156 1802 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:15:32.886238 kubelet[1802]: E0129 11:15:32.886168 1802 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:15:33.513874 kubelet[1802]: E0129 11:15:33.513822 1802 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:15:33.681176 kubelet[1802]: E0129 11:15:33.681055 1802 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-dx4jg" podUID="17de0467-9566-43ce-a406-ef6b976cb6c5" Jan 29 11:15:33.734438 kubelet[1802]: E0129 11:15:33.734376 1802 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 29 11:15:33.796735 kubelet[1802]: E0129 11:15:33.796514 1802 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:15:33.796735 kubelet[1802]: W0129 11:15:33.796591 1802 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:15:33.796735 kubelet[1802]: E0129 11:15:33.796626 1802 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:15:33.796996 kubelet[1802]: E0129 11:15:33.796971 1802 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:15:33.796996 kubelet[1802]: W0129 11:15:33.796985 1802 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:15:33.797051 kubelet[1802]: E0129 11:15:33.797007 1802 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:15:33.797979 kubelet[1802]: E0129 11:15:33.797235 1802 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:15:33.797979 kubelet[1802]: W0129 11:15:33.797252 1802 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:15:33.797979 kubelet[1802]: E0129 11:15:33.797263 1802 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 11:15:33.797979 kubelet[1802]: E0129 11:15:33.797477 1802 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:15:33.797979 kubelet[1802]: W0129 11:15:33.797484 1802 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:15:33.797979 kubelet[1802]: E0129 11:15:33.797493 1802 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:15:33.797979 kubelet[1802]: E0129 11:15:33.797755 1802 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:15:33.797979 kubelet[1802]: W0129 11:15:33.797765 1802 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:15:33.797979 kubelet[1802]: E0129 11:15:33.797784 1802 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:15:33.798332 kubelet[1802]: E0129 11:15:33.798004 1802 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:15:33.798332 kubelet[1802]: W0129 11:15:33.798013 1802 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:15:33.798332 kubelet[1802]: E0129 11:15:33.798022 1802 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:15:33.798332 kubelet[1802]: E0129 11:15:33.798212 1802 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:15:33.798332 kubelet[1802]: W0129 11:15:33.798220 1802 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:15:33.798332 kubelet[1802]: E0129 11:15:33.798228 1802 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:15:33.798514 kubelet[1802]: E0129 11:15:33.798461 1802 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:15:33.798514 kubelet[1802]: W0129 11:15:33.798469 1802 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:15:33.798514 kubelet[1802]: E0129 11:15:33.798479 1802 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 11:15:33.798687 kubelet[1802]: E0129 11:15:33.798664 1802 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:15:33.798687 kubelet[1802]: W0129 11:15:33.798680 1802 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:15:33.798687 kubelet[1802]: E0129 11:15:33.798688 1802 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:15:33.798889 kubelet[1802]: E0129 11:15:33.798870 1802 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:15:33.798889 kubelet[1802]: W0129 11:15:33.798882 1802 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:15:33.798889 kubelet[1802]: E0129 11:15:33.798890 1802 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:15:33.799048 kubelet[1802]: E0129 11:15:33.799036 1802 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:15:33.799048 kubelet[1802]: W0129 11:15:33.799046 1802 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:15:33.799133 kubelet[1802]: E0129 11:15:33.799053 1802 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:15:33.799260 kubelet[1802]: E0129 11:15:33.799243 1802 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:15:33.799260 kubelet[1802]: W0129 11:15:33.799256 1802 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:15:33.799332 kubelet[1802]: E0129 11:15:33.799267 1802 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:15:33.799468 kubelet[1802]: E0129 11:15:33.799452 1802 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:15:33.799468 kubelet[1802]: W0129 11:15:33.799465 1802 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:15:33.799556 kubelet[1802]: E0129 11:15:33.799473 1802 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 11:15:33.799688 kubelet[1802]: E0129 11:15:33.799646 1802 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:15:33.799688 kubelet[1802]: W0129 11:15:33.799662 1802 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:15:33.799688 kubelet[1802]: E0129 11:15:33.799671 1802 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:15:33.800001 kubelet[1802]: E0129 11:15:33.799975 1802 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:15:33.800001 kubelet[1802]: W0129 11:15:33.799986 1802 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:15:33.800001 kubelet[1802]: E0129 11:15:33.799996 1802 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:15:33.800210 kubelet[1802]: E0129 11:15:33.800184 1802 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:15:33.800210 kubelet[1802]: W0129 11:15:33.800198 1802 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:15:33.800210 kubelet[1802]: E0129 11:15:33.800208 1802 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:15:33.800459 kubelet[1802]: E0129 11:15:33.800443 1802 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:15:33.800459 kubelet[1802]: W0129 11:15:33.800455 1802 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:15:33.800511 kubelet[1802]: E0129 11:15:33.800464 1802 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:15:33.800644 kubelet[1802]: E0129 11:15:33.800627 1802 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:15:33.800644 kubelet[1802]: W0129 11:15:33.800641 1802 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:15:33.801906 kubelet[1802]: E0129 11:15:33.800651 1802 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 11:15:33.801906 kubelet[1802]: E0129 11:15:33.800827 1802 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:15:33.801906 kubelet[1802]: W0129 11:15:33.800835 1802 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:15:33.801906 kubelet[1802]: E0129 11:15:33.800843 1802 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:15:33.801906 kubelet[1802]: E0129 11:15:33.801040 1802 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:15:33.801906 kubelet[1802]: W0129 11:15:33.801048 1802 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:15:33.801906 kubelet[1802]: E0129 11:15:33.801056 1802 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:15:33.890166 kubelet[1802]: E0129 11:15:33.889939 1802 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:15:33.890166 kubelet[1802]: W0129 11:15:33.889971 1802 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:15:33.890166 kubelet[1802]: E0129 11:15:33.889997 1802 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:15:33.890768 kubelet[1802]: E0129 11:15:33.890395 1802 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:15:33.890768 kubelet[1802]: W0129 11:15:33.890411 1802 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:15:33.890768 kubelet[1802]: E0129 11:15:33.890453 1802 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:15:33.890954 kubelet[1802]: E0129 11:15:33.890941 1802 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:15:33.891049 kubelet[1802]: W0129 11:15:33.891015 1802 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:15:33.891169 kubelet[1802]: E0129 11:15:33.891108 1802 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 11:15:33.891532 kubelet[1802]: E0129 11:15:33.891434 1802 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:15:33.891532 kubelet[1802]: W0129 11:15:33.891448 1802 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:15:33.891661 kubelet[1802]: E0129 11:15:33.891618 1802 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:15:33.891992 kubelet[1802]: E0129 11:15:33.891978 1802 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:15:33.892110 kubelet[1802]: W0129 11:15:33.892060 1802 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:15:33.892110 kubelet[1802]: E0129 11:15:33.892085 1802 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:15:33.892435 kubelet[1802]: E0129 11:15:33.892397 1802 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:15:33.892435 kubelet[1802]: W0129 11:15:33.892419 1802 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:15:33.892435 kubelet[1802]: E0129 11:15:33.892444 1802 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:15:33.892778 kubelet[1802]: E0129 11:15:33.892751 1802 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:15:33.892778 kubelet[1802]: W0129 11:15:33.892768 1802 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:15:33.892856 kubelet[1802]: E0129 11:15:33.892790 1802 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:15:33.893839 kubelet[1802]: E0129 11:15:33.893697 1802 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:15:33.893839 kubelet[1802]: W0129 11:15:33.893791 1802 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:15:33.893839 kubelet[1802]: E0129 11:15:33.893813 1802 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 29 11:15:33.894091 kubelet[1802]: E0129 11:15:33.894074 1802 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:15:33.894136 kubelet[1802]: W0129 11:15:33.894093 1802 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:15:33.894136 kubelet[1802]: E0129 11:15:33.894116 1802 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:15:33.894576 kubelet[1802]: E0129 11:15:33.894558 1802 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:15:33.894576 kubelet[1802]: W0129 11:15:33.894573 1802 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:15:33.894757 kubelet[1802]: E0129 11:15:33.894589 1802 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:15:33.894884 kubelet[1802]: E0129 11:15:33.894870 1802 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:15:33.894884 kubelet[1802]: W0129 11:15:33.894883 1802 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:15:33.895357 kubelet[1802]: E0129 11:15:33.894899 1802 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:15:33.895357 kubelet[1802]: E0129 11:15:33.895305 1802 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 29 11:15:33.895357 kubelet[1802]: W0129 11:15:33.895316 1802 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 29 11:15:33.895357 kubelet[1802]: E0129 11:15:33.895326 1802 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 29 11:15:34.374406 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount515360543.mount: Deactivated successfully. 
Jan 29 11:15:34.515394 kubelet[1802]: E0129 11:15:34.514390 1802 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:15:34.631996 containerd[1464]: time="2025-01-29T11:15:34.631040868Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:15:34.632877 containerd[1464]: time="2025-01-29T11:15:34.632782099Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1: active requests=0, bytes read=6855343" Jan 29 11:15:34.634685 containerd[1464]: time="2025-01-29T11:15:34.634617548Z" level=info msg="ImageCreate event name:\"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:15:34.637869 containerd[1464]: time="2025-01-29T11:15:34.637805518Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:15:34.639691 containerd[1464]: time="2025-01-29T11:15:34.639566502Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" with image id \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\", size \"6855165\" in 2.646566772s" Jan 29 11:15:34.639691 containerd[1464]: time="2025-01-29T11:15:34.639624897Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" returns image reference \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\"" Jan 29 11:15:34.643146 containerd[1464]: time="2025-01-29T11:15:34.642883900Z" level=info msg="CreateContainer within sandbox \"4cc322cd7270c80953a1acf59e6052831f87eaffea7477eb67cea5bc2b8c9a71\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jan 29 11:15:34.665974 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3885711526.mount: Deactivated successfully. Jan 29 11:15:34.673438 containerd[1464]: time="2025-01-29T11:15:34.673224950Z" level=info msg="CreateContainer within sandbox \"4cc322cd7270c80953a1acf59e6052831f87eaffea7477eb67cea5bc2b8c9a71\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"87bbf7460fd0e33a158abfa139e56cfda31487af1728b5ddde635ec9334f7f0b\"" Jan 29 11:15:34.674281 containerd[1464]: time="2025-01-29T11:15:34.674045555Z" level=info msg="StartContainer for \"87bbf7460fd0e33a158abfa139e56cfda31487af1728b5ddde635ec9334f7f0b\"" Jan 29 11:15:34.722974 systemd[1]: Started cri-containerd-87bbf7460fd0e33a158abfa139e56cfda31487af1728b5ddde635ec9334f7f0b.scope - libcontainer container 87bbf7460fd0e33a158abfa139e56cfda31487af1728b5ddde635ec9334f7f0b. Jan 29 11:15:34.770075 containerd[1464]: time="2025-01-29T11:15:34.769323697Z" level=info msg="StartContainer for \"87bbf7460fd0e33a158abfa139e56cfda31487af1728b5ddde635ec9334f7f0b\" returns successfully" Jan 29 11:15:34.784667 systemd[1]: cri-containerd-87bbf7460fd0e33a158abfa139e56cfda31487af1728b5ddde635ec9334f7f0b.scope: Deactivated successfully. 
Jan 29 11:15:34.925643 containerd[1464]: time="2025-01-29T11:15:34.925284844Z" level=info msg="shim disconnected" id=87bbf7460fd0e33a158abfa139e56cfda31487af1728b5ddde635ec9334f7f0b namespace=k8s.io Jan 29 11:15:34.925643 containerd[1464]: time="2025-01-29T11:15:34.925351116Z" level=warning msg="cleaning up after shim disconnected" id=87bbf7460fd0e33a158abfa139e56cfda31487af1728b5ddde635ec9334f7f0b namespace=k8s.io Jan 29 11:15:34.925643 containerd[1464]: time="2025-01-29T11:15:34.925361220Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 11:15:35.044194 systemd-resolved[1329]: Using degraded feature set UDP instead of UDP+EDNS0 for DNS server 67.207.67.2. Jan 29 11:15:35.310364 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-87bbf7460fd0e33a158abfa139e56cfda31487af1728b5ddde635ec9334f7f0b-rootfs.mount: Deactivated successfully. Jan 29 11:15:35.515029 kubelet[1802]: E0129 11:15:35.514963 1802 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:15:35.680462 kubelet[1802]: E0129 11:15:35.680292 1802 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-dx4jg" podUID="17de0467-9566-43ce-a406-ef6b976cb6c5" Jan 29 11:15:35.741921 kubelet[1802]: E0129 11:15:35.741758 1802 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 29 11:15:35.742671 containerd[1464]: time="2025-01-29T11:15:35.742627957Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\"" Jan 29 11:15:36.515919 kubelet[1802]: E0129 11:15:36.515851 1802 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:15:37.516859 kubelet[1802]: E0129 11:15:37.516792 1802 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:15:37.681479 kubelet[1802]: E0129 11:15:37.681261 1802 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-dx4jg" podUID="17de0467-9566-43ce-a406-ef6b976cb6c5" Jan 29 11:15:38.517516 kubelet[1802]: E0129 11:15:38.517444 1802 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:15:39.517843 kubelet[1802]: E0129 11:15:39.517794 1802 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:15:39.680615 kubelet[1802]: E0129 11:15:39.680548 1802 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-dx4jg" podUID="17de0467-9566-43ce-a406-ef6b976cb6c5" Jan 29 11:15:40.518297 kubelet[1802]: E0129 11:15:40.518247 1802 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:15:41.295963 containerd[1464]: 
time="2025-01-29T11:15:41.295813548Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:15:41.297656 containerd[1464]: time="2025-01-29T11:15:41.297334519Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.1: active requests=0, bytes read=96154154" Jan 29 11:15:41.299670 containerd[1464]: time="2025-01-29T11:15:41.298446801Z" level=info msg="ImageCreate event name:\"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:15:41.303379 containerd[1464]: time="2025-01-29T11:15:41.302908952Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:15:41.304115 containerd[1464]: time="2025-01-29T11:15:41.304048457Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.1\" with image id \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\", size \"97647238\" in 5.561377191s" Jan 29 11:15:41.304284 containerd[1464]: time="2025-01-29T11:15:41.304122261Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\" returns image reference \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\"" Jan 29 11:15:41.309160 containerd[1464]: time="2025-01-29T11:15:41.308804055Z" level=info msg="CreateContainer within sandbox \"4cc322cd7270c80953a1acf59e6052831f87eaffea7477eb67cea5bc2b8c9a71\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jan 29 11:15:41.331104 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount55246361.mount: Deactivated successfully. Jan 29 11:15:41.337186 containerd[1464]: time="2025-01-29T11:15:41.337109508Z" level=info msg="CreateContainer within sandbox \"4cc322cd7270c80953a1acf59e6052831f87eaffea7477eb67cea5bc2b8c9a71\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"a274e65437928a68fbe1a2f79c74312df27af4c97b0bcedca169b20826e3ccb4\"" Jan 29 11:15:41.338170 containerd[1464]: time="2025-01-29T11:15:41.338134766Z" level=info msg="StartContainer for \"a274e65437928a68fbe1a2f79c74312df27af4c97b0bcedca169b20826e3ccb4\"" Jan 29 11:15:41.389353 systemd[1]: run-containerd-runc-k8s.io-a274e65437928a68fbe1a2f79c74312df27af4c97b0bcedca169b20826e3ccb4-runc.AHQsZv.mount: Deactivated successfully. Jan 29 11:15:41.400989 systemd[1]: Started cri-containerd-a274e65437928a68fbe1a2f79c74312df27af4c97b0bcedca169b20826e3ccb4.scope - libcontainer container a274e65437928a68fbe1a2f79c74312df27af4c97b0bcedca169b20826e3ccb4. 
Jan 29 11:15:41.456869 containerd[1464]: time="2025-01-29T11:15:41.456813942Z" level=info msg="StartContainer for \"a274e65437928a68fbe1a2f79c74312df27af4c97b0bcedca169b20826e3ccb4\" returns successfully" Jan 29 11:15:41.519217 kubelet[1802]: E0129 11:15:41.519127 1802 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:15:41.681622 kubelet[1802]: E0129 11:15:41.680904 1802 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-dx4jg" podUID="17de0467-9566-43ce-a406-ef6b976cb6c5" Jan 29 11:15:41.764458 kubelet[1802]: E0129 11:15:41.763998 1802 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 29 11:15:42.224472 containerd[1464]: time="2025-01-29T11:15:42.224405858Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 29 11:15:42.228934 systemd[1]: cri-containerd-a274e65437928a68fbe1a2f79c74312df27af4c97b0bcedca169b20826e3ccb4.scope: Deactivated successfully. Jan 29 11:15:42.260540 kubelet[1802]: I0129 11:15:42.260503 1802 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Jan 29 11:15:42.325411 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a274e65437928a68fbe1a2f79c74312df27af4c97b0bcedca169b20826e3ccb4-rootfs.mount: Deactivated successfully. Jan 29 11:15:42.419295 containerd[1464]: time="2025-01-29T11:15:42.419230653Z" level=info msg="shim disconnected" id=a274e65437928a68fbe1a2f79c74312df27af4c97b0bcedca169b20826e3ccb4 namespace=k8s.io Jan 29 11:15:42.419295 containerd[1464]: time="2025-01-29T11:15:42.419287485Z" level=warning msg="cleaning up after shim disconnected" id=a274e65437928a68fbe1a2f79c74312df27af4c97b0bcedca169b20826e3ccb4 namespace=k8s.io Jan 29 11:15:42.419295 containerd[1464]: time="2025-01-29T11:15:42.419299510Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 11:15:42.519738 kubelet[1802]: E0129 11:15:42.519642 1802 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:15:42.734206 systemd-resolved[1329]: Using degraded feature set TCP instead of UDP for DNS server 67.207.67.2. Jan 29 11:15:42.768632 kubelet[1802]: E0129 11:15:42.768576 1802 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 29 11:15:42.769999 containerd[1464]: time="2025-01-29T11:15:42.769869468Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\"" Jan 29 11:15:43.509602 systemd-timesyncd[1342]: Contacted time server 12.71.198.242:123 (2.flatcar.pool.ntp.org). Jan 29 11:15:43.509761 systemd-timesyncd[1342]: Initial clock synchronization to Wed 2025-01-29 11:15:43.509126 UTC. Jan 29 11:15:43.510247 systemd-resolved[1329]: Clock change detected. Flushing caches. 
Jan 29 11:15:44.094876 kubelet[1802]: E0129 11:15:44.094812 1802 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:15:44.261648 systemd[1]: Created slice kubepods-besteffort-pod17de0467_9566_43ce_a406_ef6b976cb6c5.slice - libcontainer container kubepods-besteffort-pod17de0467_9566_43ce_a406_ef6b976cb6c5.slice. Jan 29 11:15:44.266031 containerd[1464]: time="2025-01-29T11:15:44.265530090Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-dx4jg,Uid:17de0467-9566-43ce-a406-ef6b976cb6c5,Namespace:calico-system,Attempt:0,}" Jan 29 11:15:44.366509 containerd[1464]: time="2025-01-29T11:15:44.364373480Z" level=error msg="Failed to destroy network for sandbox \"7a7869065ef017da3ba67d92cbf3b6687aba6cd84f0e74ead54d96570f46440c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:15:44.366689 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-7a7869065ef017da3ba67d92cbf3b6687aba6cd84f0e74ead54d96570f46440c-shm.mount: Deactivated successfully. Jan 29 11:15:44.367402 containerd[1464]: time="2025-01-29T11:15:44.367021043Z" level=error msg="encountered an error cleaning up failed sandbox \"7a7869065ef017da3ba67d92cbf3b6687aba6cd84f0e74ead54d96570f46440c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:15:44.367402 containerd[1464]: time="2025-01-29T11:15:44.367134212Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-dx4jg,Uid:17de0467-9566-43ce-a406-ef6b976cb6c5,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"7a7869065ef017da3ba67d92cbf3b6687aba6cd84f0e74ead54d96570f46440c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:15:44.367489 kubelet[1802]: E0129 11:15:44.367391 1802 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7a7869065ef017da3ba67d92cbf3b6687aba6cd84f0e74ead54d96570f46440c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:15:44.367489 kubelet[1802]: E0129 11:15:44.367459 1802 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7a7869065ef017da3ba67d92cbf3b6687aba6cd84f0e74ead54d96570f46440c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-dx4jg" Jan 29 11:15:44.367489 kubelet[1802]: E0129 11:15:44.367479 1802 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7a7869065ef017da3ba67d92cbf3b6687aba6cd84f0e74ead54d96570f46440c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" pod="calico-system/csi-node-driver-dx4jg" Jan 29 11:15:44.367595 kubelet[1802]: E0129 11:15:44.367537 1802 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-dx4jg_calico-system(17de0467-9566-43ce-a406-ef6b976cb6c5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-dx4jg_calico-system(17de0467-9566-43ce-a406-ef6b976cb6c5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7a7869065ef017da3ba67d92cbf3b6687aba6cd84f0e74ead54d96570f46440c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-dx4jg" podUID="17de0467-9566-43ce-a406-ef6b976cb6c5" Jan 29 11:15:45.095984 kubelet[1802]: E0129 11:15:45.095902 1802 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:15:45.350418 kubelet[1802]: I0129 11:15:45.349617 1802 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7a7869065ef017da3ba67d92cbf3b6687aba6cd84f0e74ead54d96570f46440c" Jan 29 11:15:45.353920 containerd[1464]: time="2025-01-29T11:15:45.350925821Z" level=info msg="StopPodSandbox for \"7a7869065ef017da3ba67d92cbf3b6687aba6cd84f0e74ead54d96570f46440c\"" Jan 29 11:15:45.353920 containerd[1464]: time="2025-01-29T11:15:45.351231648Z" level=info msg="Ensure that sandbox 7a7869065ef017da3ba67d92cbf3b6687aba6cd84f0e74ead54d96570f46440c in task-service has been cleanup successfully" Jan 29 11:15:45.354560 containerd[1464]: time="2025-01-29T11:15:45.354516097Z" level=info msg="TearDown network for sandbox \"7a7869065ef017da3ba67d92cbf3b6687aba6cd84f0e74ead54d96570f46440c\" successfully" Jan 29 11:15:45.354655 containerd[1464]: time="2025-01-29T11:15:45.354641424Z" level=info msg="StopPodSandbox for \"7a7869065ef017da3ba67d92cbf3b6687aba6cd84f0e74ead54d96570f46440c\" returns successfully" Jan 29 11:15:45.355783 systemd[1]: run-netns-cni\x2d79016338\x2d2f2f\x2da953\x2d5169\x2d435e1d9628c1.mount: Deactivated successfully. Jan 29 11:15:45.358503 containerd[1464]: time="2025-01-29T11:15:45.356763143Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-dx4jg,Uid:17de0467-9566-43ce-a406-ef6b976cb6c5,Namespace:calico-system,Attempt:1,}" Jan 29 11:15:45.431313 systemd[1]: Created slice kubepods-besteffort-pod32d19bf8_e39c_40fe_82db_979ee14e9bd5.slice - libcontainer container kubepods-besteffort-pod32d19bf8_e39c_40fe_82db_979ee14e9bd5.slice. 
Jan 29 11:15:45.506289 containerd[1464]: time="2025-01-29T11:15:45.506219121Z" level=error msg="Failed to destroy network for sandbox \"021b32372830a8acef9731288e439f32f3a2446b86ea886ee3197c29b4574b6b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:15:45.506982 containerd[1464]: time="2025-01-29T11:15:45.506687496Z" level=error msg="encountered an error cleaning up failed sandbox \"021b32372830a8acef9731288e439f32f3a2446b86ea886ee3197c29b4574b6b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:15:45.506982 containerd[1464]: time="2025-01-29T11:15:45.506797386Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-dx4jg,Uid:17de0467-9566-43ce-a406-ef6b976cb6c5,Namespace:calico-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"021b32372830a8acef9731288e439f32f3a2446b86ea886ee3197c29b4574b6b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:15:45.507946 kubelet[1802]: E0129 11:15:45.507083 1802 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"021b32372830a8acef9731288e439f32f3a2446b86ea886ee3197c29b4574b6b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:15:45.507946 kubelet[1802]: E0129 11:15:45.507154 1802 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"021b32372830a8acef9731288e439f32f3a2446b86ea886ee3197c29b4574b6b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-dx4jg" Jan 29 11:15:45.507946 kubelet[1802]: E0129 11:15:45.507186 1802 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"021b32372830a8acef9731288e439f32f3a2446b86ea886ee3197c29b4574b6b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-dx4jg" Jan 29 11:15:45.508134 kubelet[1802]: E0129 11:15:45.507241 1802 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-dx4jg_calico-system(17de0467-9566-43ce-a406-ef6b976cb6c5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-dx4jg_calico-system(17de0467-9566-43ce-a406-ef6b976cb6c5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"021b32372830a8acef9731288e439f32f3a2446b86ea886ee3197c29b4574b6b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-dx4jg" 
podUID="17de0467-9566-43ce-a406-ef6b976cb6c5" Jan 29 11:15:45.511101 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-021b32372830a8acef9731288e439f32f3a2446b86ea886ee3197c29b4574b6b-shm.mount: Deactivated successfully. Jan 29 11:15:45.536661 kubelet[1802]: I0129 11:15:45.536546 1802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5kxmx\" (UniqueName: \"kubernetes.io/projected/32d19bf8-e39c-40fe-82db-979ee14e9bd5-kube-api-access-5kxmx\") pod \"nginx-deployment-8587fbcb89-hgrqg\" (UID: \"32d19bf8-e39c-40fe-82db-979ee14e9bd5\") " pod="default/nginx-deployment-8587fbcb89-hgrqg" Jan 29 11:15:45.738834 containerd[1464]: time="2025-01-29T11:15:45.737131089Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-hgrqg,Uid:32d19bf8-e39c-40fe-82db-979ee14e9bd5,Namespace:default,Attempt:0,}" Jan 29 11:15:45.853113 containerd[1464]: time="2025-01-29T11:15:45.853038934Z" level=error msg="Failed to destroy network for sandbox \"1bd38506e406807638a6e4800c9f1c66a764bba3c9ae2fd51e672243ad5f74dd\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:15:45.853963 containerd[1464]: time="2025-01-29T11:15:45.853923612Z" level=error msg="encountered an error cleaning up failed sandbox \"1bd38506e406807638a6e4800c9f1c66a764bba3c9ae2fd51e672243ad5f74dd\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:15:45.854172 containerd[1464]: time="2025-01-29T11:15:45.854150740Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-hgrqg,Uid:32d19bf8-e39c-40fe-82db-979ee14e9bd5,Namespace:default,Attempt:0,} failed, error" error="failed to setup network for sandbox \"1bd38506e406807638a6e4800c9f1c66a764bba3c9ae2fd51e672243ad5f74dd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:15:45.854798 kubelet[1802]: E0129 11:15:45.854610 1802 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1bd38506e406807638a6e4800c9f1c66a764bba3c9ae2fd51e672243ad5f74dd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:15:45.855500 kubelet[1802]: E0129 11:15:45.855267 1802 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1bd38506e406807638a6e4800c9f1c66a764bba3c9ae2fd51e672243ad5f74dd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-8587fbcb89-hgrqg" Jan 29 11:15:45.855908 kubelet[1802]: E0129 11:15:45.855638 1802 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1bd38506e406807638a6e4800c9f1c66a764bba3c9ae2fd51e672243ad5f74dd\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-8587fbcb89-hgrqg" Jan 29 11:15:45.856296 kubelet[1802]: E0129 11:15:45.856198 1802 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-8587fbcb89-hgrqg_default(32d19bf8-e39c-40fe-82db-979ee14e9bd5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-8587fbcb89-hgrqg_default(32d19bf8-e39c-40fe-82db-979ee14e9bd5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1bd38506e406807638a6e4800c9f1c66a764bba3c9ae2fd51e672243ad5f74dd\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-8587fbcb89-hgrqg" podUID="32d19bf8-e39c-40fe-82db-979ee14e9bd5" Jan 29 11:15:46.096578 kubelet[1802]: E0129 11:15:46.096512 1802 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:15:46.355037 kubelet[1802]: I0129 11:15:46.354887 1802 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1bd38506e406807638a6e4800c9f1c66a764bba3c9ae2fd51e672243ad5f74dd" Jan 29 11:15:46.356344 containerd[1464]: time="2025-01-29T11:15:46.355744120Z" level=info msg="StopPodSandbox for \"1bd38506e406807638a6e4800c9f1c66a764bba3c9ae2fd51e672243ad5f74dd\"" Jan 29 11:15:46.356344 containerd[1464]: time="2025-01-29T11:15:46.356088272Z" level=info msg="Ensure that sandbox 1bd38506e406807638a6e4800c9f1c66a764bba3c9ae2fd51e672243ad5f74dd in task-service has been cleanup successfully" Jan 29 11:15:46.361180 containerd[1464]: time="2025-01-29T11:15:46.356426969Z" level=info msg="TearDown network for sandbox \"1bd38506e406807638a6e4800c9f1c66a764bba3c9ae2fd51e672243ad5f74dd\" successfully" Jan 29 11:15:46.361180 containerd[1464]: time="2025-01-29T11:15:46.356446409Z" level=info msg="StopPodSandbox for \"1bd38506e406807638a6e4800c9f1c66a764bba3c9ae2fd51e672243ad5f74dd\" returns successfully" Jan 29 11:15:46.361180 containerd[1464]: time="2025-01-29T11:15:46.357663141Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-hgrqg,Uid:32d19bf8-e39c-40fe-82db-979ee14e9bd5,Namespace:default,Attempt:1,}" Jan 29 11:15:46.361333 kubelet[1802]: I0129 11:15:46.360730 1802 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="021b32372830a8acef9731288e439f32f3a2446b86ea886ee3197c29b4574b6b" Jan 29 11:15:46.356698 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-1bd38506e406807638a6e4800c9f1c66a764bba3c9ae2fd51e672243ad5f74dd-shm.mount: Deactivated successfully. 
Jan 29 11:15:46.361689 containerd[1464]: time="2025-01-29T11:15:46.361424002Z" level=info msg="StopPodSandbox for \"021b32372830a8acef9731288e439f32f3a2446b86ea886ee3197c29b4574b6b\"" Jan 29 11:15:46.364665 containerd[1464]: time="2025-01-29T11:15:46.361783628Z" level=info msg="Ensure that sandbox 021b32372830a8acef9731288e439f32f3a2446b86ea886ee3197c29b4574b6b in task-service has been cleanup successfully" Jan 29 11:15:46.364665 containerd[1464]: time="2025-01-29T11:15:46.363579157Z" level=info msg="TearDown network for sandbox \"021b32372830a8acef9731288e439f32f3a2446b86ea886ee3197c29b4574b6b\" successfully" Jan 29 11:15:46.364665 containerd[1464]: time="2025-01-29T11:15:46.363614522Z" level=info msg="StopPodSandbox for \"021b32372830a8acef9731288e439f32f3a2446b86ea886ee3197c29b4574b6b\" returns successfully" Jan 29 11:15:46.364665 containerd[1464]: time="2025-01-29T11:15:46.364278871Z" level=info msg="StopPodSandbox for \"7a7869065ef017da3ba67d92cbf3b6687aba6cd84f0e74ead54d96570f46440c\"" Jan 29 11:15:46.364665 containerd[1464]: time="2025-01-29T11:15:46.364388227Z" level=info msg="TearDown network for sandbox \"7a7869065ef017da3ba67d92cbf3b6687aba6cd84f0e74ead54d96570f46440c\" successfully" Jan 29 11:15:46.364665 containerd[1464]: time="2025-01-29T11:15:46.364399582Z" level=info msg="StopPodSandbox for \"7a7869065ef017da3ba67d92cbf3b6687aba6cd84f0e74ead54d96570f46440c\" returns successfully" Jan 29 11:15:46.363116 systemd[1]: run-netns-cni\x2d13f98a9d\x2da356\x2d1656\x2d7096\x2d23071d3d6bb1.mount: Deactivated successfully. Jan 29 11:15:46.367409 containerd[1464]: time="2025-01-29T11:15:46.366893588Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-dx4jg,Uid:17de0467-9566-43ce-a406-ef6b976cb6c5,Namespace:calico-system,Attempt:2,}" Jan 29 11:15:46.369752 systemd[1]: run-netns-cni\x2db8371185\x2dd5cd\x2d5cd8\x2d84c1\x2db4f752864883.mount: Deactivated successfully. 
Jan 29 11:15:46.672180 containerd[1464]: time="2025-01-29T11:15:46.672015757Z" level=error msg="Failed to destroy network for sandbox \"2b7347c504ff34aee22bb696c3e04408c3950a75ae649d5c113683ae0058296a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:15:46.674495 containerd[1464]: time="2025-01-29T11:15:46.673490214Z" level=error msg="encountered an error cleaning up failed sandbox \"2b7347c504ff34aee22bb696c3e04408c3950a75ae649d5c113683ae0058296a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:15:46.674495 containerd[1464]: time="2025-01-29T11:15:46.673654953Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-hgrqg,Uid:32d19bf8-e39c-40fe-82db-979ee14e9bd5,Namespace:default,Attempt:1,} failed, error" error="failed to setup network for sandbox \"2b7347c504ff34aee22bb696c3e04408c3950a75ae649d5c113683ae0058296a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:15:46.674794 kubelet[1802]: E0129 11:15:46.673963 1802 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2b7347c504ff34aee22bb696c3e04408c3950a75ae649d5c113683ae0058296a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:15:46.674794 kubelet[1802]: E0129 11:15:46.674033 1802 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2b7347c504ff34aee22bb696c3e04408c3950a75ae649d5c113683ae0058296a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-8587fbcb89-hgrqg" Jan 29 11:15:46.674794 kubelet[1802]: E0129 11:15:46.674069 1802 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2b7347c504ff34aee22bb696c3e04408c3950a75ae649d5c113683ae0058296a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-8587fbcb89-hgrqg" Jan 29 11:15:46.674968 kubelet[1802]: E0129 11:15:46.674126 1802 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-8587fbcb89-hgrqg_default(32d19bf8-e39c-40fe-82db-979ee14e9bd5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-8587fbcb89-hgrqg_default(32d19bf8-e39c-40fe-82db-979ee14e9bd5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2b7347c504ff34aee22bb696c3e04408c3950a75ae649d5c113683ae0058296a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="default/nginx-deployment-8587fbcb89-hgrqg" podUID="32d19bf8-e39c-40fe-82db-979ee14e9bd5" Jan 29 11:15:46.681502 containerd[1464]: time="2025-01-29T11:15:46.681233155Z" level=error msg="Failed to destroy network for sandbox \"c75ace98faa2013de60346c00859f04d782d70d98d51e0702c5b6c8189e6402a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:15:46.682101 containerd[1464]: time="2025-01-29T11:15:46.682029808Z" level=error msg="encountered an error cleaning up failed sandbox \"c75ace98faa2013de60346c00859f04d782d70d98d51e0702c5b6c8189e6402a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:15:46.682277 containerd[1464]: time="2025-01-29T11:15:46.682155913Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-dx4jg,Uid:17de0467-9566-43ce-a406-ef6b976cb6c5,Namespace:calico-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"c75ace98faa2013de60346c00859f04d782d70d98d51e0702c5b6c8189e6402a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:15:46.682748 kubelet[1802]: E0129 11:15:46.682506 1802 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c75ace98faa2013de60346c00859f04d782d70d98d51e0702c5b6c8189e6402a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:15:46.682748 kubelet[1802]: E0129 11:15:46.682640 1802 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c75ace98faa2013de60346c00859f04d782d70d98d51e0702c5b6c8189e6402a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-dx4jg" Jan 29 11:15:46.682748 kubelet[1802]: E0129 11:15:46.682670 1802 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c75ace98faa2013de60346c00859f04d782d70d98d51e0702c5b6c8189e6402a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-dx4jg" Jan 29 11:15:46.682961 kubelet[1802]: E0129 11:15:46.682754 1802 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-dx4jg_calico-system(17de0467-9566-43ce-a406-ef6b976cb6c5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-dx4jg_calico-system(17de0467-9566-43ce-a406-ef6b976cb6c5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c75ace98faa2013de60346c00859f04d782d70d98d51e0702c5b6c8189e6402a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is 
running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-dx4jg" podUID="17de0467-9566-43ce-a406-ef6b976cb6c5" Jan 29 11:15:47.075839 kubelet[1802]: E0129 11:15:47.075761 1802 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:15:47.097846 kubelet[1802]: E0129 11:15:47.097607 1802 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:15:47.355583 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-2b7347c504ff34aee22bb696c3e04408c3950a75ae649d5c113683ae0058296a-shm.mount: Deactivated successfully. Jan 29 11:15:47.366390 kubelet[1802]: I0129 11:15:47.366167 1802 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c75ace98faa2013de60346c00859f04d782d70d98d51e0702c5b6c8189e6402a" Jan 29 11:15:47.367683 containerd[1464]: time="2025-01-29T11:15:47.367343150Z" level=info msg="StopPodSandbox for \"c75ace98faa2013de60346c00859f04d782d70d98d51e0702c5b6c8189e6402a\"" Jan 29 11:15:47.368977 containerd[1464]: time="2025-01-29T11:15:47.368357193Z" level=info msg="Ensure that sandbox c75ace98faa2013de60346c00859f04d782d70d98d51e0702c5b6c8189e6402a in task-service has been cleanup successfully" Jan 29 11:15:47.370040 containerd[1464]: time="2025-01-29T11:15:47.370015436Z" level=info msg="TearDown network for sandbox \"c75ace98faa2013de60346c00859f04d782d70d98d51e0702c5b6c8189e6402a\" successfully" Jan 29 11:15:47.370143 containerd[1464]: time="2025-01-29T11:15:47.370131341Z" level=info msg="StopPodSandbox for \"c75ace98faa2013de60346c00859f04d782d70d98d51e0702c5b6c8189e6402a\" returns successfully" Jan 29 11:15:47.370580 containerd[1464]: time="2025-01-29T11:15:47.370560763Z" level=info msg="StopPodSandbox for \"021b32372830a8acef9731288e439f32f3a2446b86ea886ee3197c29b4574b6b\"" Jan 29 11:15:47.372483 systemd[1]: run-netns-cni\x2dfe15ea69\x2d13b3\x2d2a58\x2d6ae9\x2d97bd1ee575fa.mount: Deactivated successfully. 
Jan 29 11:15:47.373928 containerd[1464]: time="2025-01-29T11:15:47.373371539Z" level=info msg="TearDown network for sandbox \"021b32372830a8acef9731288e439f32f3a2446b86ea886ee3197c29b4574b6b\" successfully" Jan 29 11:15:47.373928 containerd[1464]: time="2025-01-29T11:15:47.373402402Z" level=info msg="StopPodSandbox for \"021b32372830a8acef9731288e439f32f3a2446b86ea886ee3197c29b4574b6b\" returns successfully" Jan 29 11:15:47.376298 kubelet[1802]: I0129 11:15:47.376190 1802 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2b7347c504ff34aee22bb696c3e04408c3950a75ae649d5c113683ae0058296a" Jan 29 11:15:47.376679 containerd[1464]: time="2025-01-29T11:15:47.376540234Z" level=info msg="StopPodSandbox for \"7a7869065ef017da3ba67d92cbf3b6687aba6cd84f0e74ead54d96570f46440c\"" Jan 29 11:15:47.376940 containerd[1464]: time="2025-01-29T11:15:47.376867946Z" level=info msg="TearDown network for sandbox \"7a7869065ef017da3ba67d92cbf3b6687aba6cd84f0e74ead54d96570f46440c\" successfully" Jan 29 11:15:47.376940 containerd[1464]: time="2025-01-29T11:15:47.376886129Z" level=info msg="StopPodSandbox for \"7a7869065ef017da3ba67d92cbf3b6687aba6cd84f0e74ead54d96570f46440c\" returns successfully" Jan 29 11:15:47.377905 containerd[1464]: time="2025-01-29T11:15:47.377491351Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-dx4jg,Uid:17de0467-9566-43ce-a406-ef6b976cb6c5,Namespace:calico-system,Attempt:3,}" Jan 29 11:15:47.381736 containerd[1464]: time="2025-01-29T11:15:47.378968545Z" level=info msg="StopPodSandbox for \"2b7347c504ff34aee22bb696c3e04408c3950a75ae649d5c113683ae0058296a\"" Jan 29 11:15:47.381736 containerd[1464]: time="2025-01-29T11:15:47.379210701Z" level=info msg="Ensure that sandbox 2b7347c504ff34aee22bb696c3e04408c3950a75ae649d5c113683ae0058296a in task-service has been cleanup successfully" Jan 29 11:15:47.381486 systemd[1]: run-netns-cni\x2d27614783\x2d106c\x2d2b52\x2d0b28\x2db72d950a35df.mount: Deactivated successfully. 
Jan 29 11:15:47.383226 containerd[1464]: time="2025-01-29T11:15:47.383074243Z" level=info msg="TearDown network for sandbox \"2b7347c504ff34aee22bb696c3e04408c3950a75ae649d5c113683ae0058296a\" successfully" Jan 29 11:15:47.383226 containerd[1464]: time="2025-01-29T11:15:47.383115372Z" level=info msg="StopPodSandbox for \"2b7347c504ff34aee22bb696c3e04408c3950a75ae649d5c113683ae0058296a\" returns successfully" Jan 29 11:15:47.385183 containerd[1464]: time="2025-01-29T11:15:47.385144168Z" level=info msg="StopPodSandbox for \"1bd38506e406807638a6e4800c9f1c66a764bba3c9ae2fd51e672243ad5f74dd\"" Jan 29 11:15:47.385565 containerd[1464]: time="2025-01-29T11:15:47.385393576Z" level=info msg="TearDown network for sandbox \"1bd38506e406807638a6e4800c9f1c66a764bba3c9ae2fd51e672243ad5f74dd\" successfully" Jan 29 11:15:47.385565 containerd[1464]: time="2025-01-29T11:15:47.385421347Z" level=info msg="StopPodSandbox for \"1bd38506e406807638a6e4800c9f1c66a764bba3c9ae2fd51e672243ad5f74dd\" returns successfully" Jan 29 11:15:47.386634 containerd[1464]: time="2025-01-29T11:15:47.386592455Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-hgrqg,Uid:32d19bf8-e39c-40fe-82db-979ee14e9bd5,Namespace:default,Attempt:2,}" Jan 29 11:15:47.571818 containerd[1464]: time="2025-01-29T11:15:47.570981385Z" level=error msg="Failed to destroy network for sandbox \"31a62b309d975cd861719c6af5efffd74a754679330349f57644795939a0f1be\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:15:47.572566 containerd[1464]: time="2025-01-29T11:15:47.572191085Z" level=error msg="encountered an error cleaning up failed sandbox \"31a62b309d975cd861719c6af5efffd74a754679330349f57644795939a0f1be\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:15:47.572566 containerd[1464]: time="2025-01-29T11:15:47.572286593Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-hgrqg,Uid:32d19bf8-e39c-40fe-82db-979ee14e9bd5,Namespace:default,Attempt:2,} failed, error" error="failed to setup network for sandbox \"31a62b309d975cd861719c6af5efffd74a754679330349f57644795939a0f1be\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:15:47.572766 kubelet[1802]: E0129 11:15:47.572609 1802 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"31a62b309d975cd861719c6af5efffd74a754679330349f57644795939a0f1be\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:15:47.572766 kubelet[1802]: E0129 11:15:47.572686 1802 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"31a62b309d975cd861719c6af5efffd74a754679330349f57644795939a0f1be\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="default/nginx-deployment-8587fbcb89-hgrqg" Jan 29 11:15:47.572904 kubelet[1802]: E0129 11:15:47.572864 1802 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"31a62b309d975cd861719c6af5efffd74a754679330349f57644795939a0f1be\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-8587fbcb89-hgrqg" Jan 29 11:15:47.573216 kubelet[1802]: E0129 11:15:47.572970 1802 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-8587fbcb89-hgrqg_default(32d19bf8-e39c-40fe-82db-979ee14e9bd5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-8587fbcb89-hgrqg_default(32d19bf8-e39c-40fe-82db-979ee14e9bd5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"31a62b309d975cd861719c6af5efffd74a754679330349f57644795939a0f1be\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-8587fbcb89-hgrqg" podUID="32d19bf8-e39c-40fe-82db-979ee14e9bd5" Jan 29 11:15:47.591071 containerd[1464]: time="2025-01-29T11:15:47.590823636Z" level=error msg="Failed to destroy network for sandbox \"2c1334128a0d27d07bf18056a7b43695cc28de03a2ad9d841b11b977ef628510\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:15:47.592258 containerd[1464]: time="2025-01-29T11:15:47.591998408Z" level=error msg="encountered an error cleaning up failed sandbox \"2c1334128a0d27d07bf18056a7b43695cc28de03a2ad9d841b11b977ef628510\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:15:47.592258 containerd[1464]: time="2025-01-29T11:15:47.592106345Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-dx4jg,Uid:17de0467-9566-43ce-a406-ef6b976cb6c5,Namespace:calico-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"2c1334128a0d27d07bf18056a7b43695cc28de03a2ad9d841b11b977ef628510\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:15:47.592444 kubelet[1802]: E0129 11:15:47.592382 1802 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2c1334128a0d27d07bf18056a7b43695cc28de03a2ad9d841b11b977ef628510\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:15:47.592532 kubelet[1802]: E0129 11:15:47.592468 1802 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2c1334128a0d27d07bf18056a7b43695cc28de03a2ad9d841b11b977ef628510\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that 
the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-dx4jg" Jan 29 11:15:47.592532 kubelet[1802]: E0129 11:15:47.592500 1802 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2c1334128a0d27d07bf18056a7b43695cc28de03a2ad9d841b11b977ef628510\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-dx4jg" Jan 29 11:15:47.592627 kubelet[1802]: E0129 11:15:47.592556 1802 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-dx4jg_calico-system(17de0467-9566-43ce-a406-ef6b976cb6c5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-dx4jg_calico-system(17de0467-9566-43ce-a406-ef6b976cb6c5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2c1334128a0d27d07bf18056a7b43695cc28de03a2ad9d841b11b977ef628510\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-dx4jg" podUID="17de0467-9566-43ce-a406-ef6b976cb6c5" Jan 29 11:15:47.797903 systemd[1]: Created slice kubepods-besteffort-pod040eca73_48ed_48eb_91a1_9e9bd980c0dd.slice - libcontainer container kubepods-besteffort-pod040eca73_48ed_48eb_91a1_9e9bd980c0dd.slice. Jan 29 11:15:47.952576 kubelet[1802]: I0129 11:15:47.952459 1802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/040eca73-48ed-48eb-91a1-9e9bd980c0dd-tigera-ca-bundle\") pod \"calico-typha-f8b88c8c-lcllq\" (UID: \"040eca73-48ed-48eb-91a1-9e9bd980c0dd\") " pod="calico-system/calico-typha-f8b88c8c-lcllq" Jan 29 11:15:47.952576 kubelet[1802]: I0129 11:15:47.952505 1802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/040eca73-48ed-48eb-91a1-9e9bd980c0dd-typha-certs\") pod \"calico-typha-f8b88c8c-lcllq\" (UID: \"040eca73-48ed-48eb-91a1-9e9bd980c0dd\") " pod="calico-system/calico-typha-f8b88c8c-lcllq" Jan 29 11:15:47.952576 kubelet[1802]: I0129 11:15:47.952524 1802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5ztcn\" (UniqueName: \"kubernetes.io/projected/040eca73-48ed-48eb-91a1-9e9bd980c0dd-kube-api-access-5ztcn\") pod \"calico-typha-f8b88c8c-lcllq\" (UID: \"040eca73-48ed-48eb-91a1-9e9bd980c0dd\") " pod="calico-system/calico-typha-f8b88c8c-lcllq" Jan 29 11:15:48.098878 kubelet[1802]: E0129 11:15:48.098645 1802 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:15:48.104741 kubelet[1802]: E0129 11:15:48.104564 1802 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 29 11:15:48.105993 containerd[1464]: time="2025-01-29T11:15:48.105938782Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-f8b88c8c-lcllq,Uid:040eca73-48ed-48eb-91a1-9e9bd980c0dd,Namespace:calico-system,Attempt:0,}" Jan 29 11:15:48.156400 containerd[1464]: 
time="2025-01-29T11:15:48.156255290Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 11:15:48.156400 containerd[1464]: time="2025-01-29T11:15:48.156333195Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 11:15:48.156400 containerd[1464]: time="2025-01-29T11:15:48.156347383Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:15:48.157729 containerd[1464]: time="2025-01-29T11:15:48.156443587Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:15:48.186986 systemd[1]: Started cri-containerd-7167b6d934dcbb0292d3a171bb26ee8340faa932e87cb1d960a5019e8ed1f52e.scope - libcontainer container 7167b6d934dcbb0292d3a171bb26ee8340faa932e87cb1d960a5019e8ed1f52e. Jan 29 11:15:48.282041 containerd[1464]: time="2025-01-29T11:15:48.281996670Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-f8b88c8c-lcllq,Uid:040eca73-48ed-48eb-91a1-9e9bd980c0dd,Namespace:calico-system,Attempt:0,} returns sandbox id \"7167b6d934dcbb0292d3a171bb26ee8340faa932e87cb1d960a5019e8ed1f52e\"" Jan 29 11:15:48.283150 kubelet[1802]: E0129 11:15:48.283118 1802 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 29 11:15:48.361780 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-31a62b309d975cd861719c6af5efffd74a754679330349f57644795939a0f1be-shm.mount: Deactivated successfully. Jan 29 11:15:48.362397 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-2c1334128a0d27d07bf18056a7b43695cc28de03a2ad9d841b11b977ef628510-shm.mount: Deactivated successfully. Jan 29 11:15:48.383321 kubelet[1802]: I0129 11:15:48.382619 1802 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2c1334128a0d27d07bf18056a7b43695cc28de03a2ad9d841b11b977ef628510" Jan 29 11:15:48.383573 containerd[1464]: time="2025-01-29T11:15:48.383381485Z" level=info msg="StopPodSandbox for \"2c1334128a0d27d07bf18056a7b43695cc28de03a2ad9d841b11b977ef628510\"" Jan 29 11:15:48.384036 containerd[1464]: time="2025-01-29T11:15:48.383873143Z" level=info msg="Ensure that sandbox 2c1334128a0d27d07bf18056a7b43695cc28de03a2ad9d841b11b977ef628510 in task-service has been cleanup successfully" Jan 29 11:15:48.387234 systemd[1]: run-netns-cni\x2d79bf6d1b\x2d18a1\x2df4da\x2df2b9\x2d746cbd225842.mount: Deactivated successfully. 
Jan 29 11:15:48.387899 containerd[1464]: time="2025-01-29T11:15:48.387398932Z" level=info msg="TearDown network for sandbox \"2c1334128a0d27d07bf18056a7b43695cc28de03a2ad9d841b11b977ef628510\" successfully" Jan 29 11:15:48.387899 containerd[1464]: time="2025-01-29T11:15:48.387426868Z" level=info msg="StopPodSandbox for \"2c1334128a0d27d07bf18056a7b43695cc28de03a2ad9d841b11b977ef628510\" returns successfully" Jan 29 11:15:48.389644 containerd[1464]: time="2025-01-29T11:15:48.389499995Z" level=info msg="StopPodSandbox for \"c75ace98faa2013de60346c00859f04d782d70d98d51e0702c5b6c8189e6402a\"" Jan 29 11:15:48.389644 containerd[1464]: time="2025-01-29T11:15:48.389615918Z" level=info msg="TearDown network for sandbox \"c75ace98faa2013de60346c00859f04d782d70d98d51e0702c5b6c8189e6402a\" successfully" Jan 29 11:15:48.389644 containerd[1464]: time="2025-01-29T11:15:48.389628075Z" level=info msg="StopPodSandbox for \"c75ace98faa2013de60346c00859f04d782d70d98d51e0702c5b6c8189e6402a\" returns successfully" Jan 29 11:15:48.391429 containerd[1464]: time="2025-01-29T11:15:48.391057178Z" level=info msg="StopPodSandbox for \"021b32372830a8acef9731288e439f32f3a2446b86ea886ee3197c29b4574b6b\"" Jan 29 11:15:48.391956 containerd[1464]: time="2025-01-29T11:15:48.391932543Z" level=info msg="TearDown network for sandbox \"021b32372830a8acef9731288e439f32f3a2446b86ea886ee3197c29b4574b6b\" successfully" Jan 29 11:15:48.392100 containerd[1464]: time="2025-01-29T11:15:48.392084447Z" level=info msg="StopPodSandbox for \"021b32372830a8acef9731288e439f32f3a2446b86ea886ee3197c29b4574b6b\" returns successfully" Jan 29 11:15:48.392797 kubelet[1802]: I0129 11:15:48.392772 1802 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="31a62b309d975cd861719c6af5efffd74a754679330349f57644795939a0f1be" Jan 29 11:15:48.393702 containerd[1464]: time="2025-01-29T11:15:48.393366054Z" level=info msg="StopPodSandbox for \"7a7869065ef017da3ba67d92cbf3b6687aba6cd84f0e74ead54d96570f46440c\"" Jan 29 11:15:48.393702 containerd[1464]: time="2025-01-29T11:15:48.393467802Z" level=info msg="TearDown network for sandbox \"7a7869065ef017da3ba67d92cbf3b6687aba6cd84f0e74ead54d96570f46440c\" successfully" Jan 29 11:15:48.393702 containerd[1464]: time="2025-01-29T11:15:48.393479361Z" level=info msg="StopPodSandbox for \"7a7869065ef017da3ba67d92cbf3b6687aba6cd84f0e74ead54d96570f46440c\" returns successfully" Jan 29 11:15:48.395098 containerd[1464]: time="2025-01-29T11:15:48.394690323Z" level=info msg="StopPodSandbox for \"31a62b309d975cd861719c6af5efffd74a754679330349f57644795939a0f1be\"" Jan 29 11:15:48.395098 containerd[1464]: time="2025-01-29T11:15:48.394924288Z" level=info msg="Ensure that sandbox 31a62b309d975cd861719c6af5efffd74a754679330349f57644795939a0f1be in task-service has been cleanup successfully" Jan 29 11:15:48.395098 containerd[1464]: time="2025-01-29T11:15:48.395006587Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-dx4jg,Uid:17de0467-9566-43ce-a406-ef6b976cb6c5,Namespace:calico-system,Attempt:4,}" Jan 29 11:15:48.398382 containerd[1464]: time="2025-01-29T11:15:48.398337601Z" level=info msg="TearDown network for sandbox \"31a62b309d975cd861719c6af5efffd74a754679330349f57644795939a0f1be\" successfully" Jan 29 11:15:48.398382 containerd[1464]: time="2025-01-29T11:15:48.398377256Z" level=info msg="StopPodSandbox for \"31a62b309d975cd861719c6af5efffd74a754679330349f57644795939a0f1be\" returns successfully" Jan 29 11:15:48.398994 systemd[1]: 
run-netns-cni\x2deda63b37\x2d2703\x2de3c9\x2d430f\x2d3addd3041883.mount: Deactivated successfully. Jan 29 11:15:48.401630 containerd[1464]: time="2025-01-29T11:15:48.401584253Z" level=info msg="StopPodSandbox for \"2b7347c504ff34aee22bb696c3e04408c3950a75ae649d5c113683ae0058296a\"" Jan 29 11:15:48.402087 containerd[1464]: time="2025-01-29T11:15:48.401700515Z" level=info msg="TearDown network for sandbox \"2b7347c504ff34aee22bb696c3e04408c3950a75ae649d5c113683ae0058296a\" successfully" Jan 29 11:15:48.402087 containerd[1464]: time="2025-01-29T11:15:48.402084968Z" level=info msg="StopPodSandbox for \"2b7347c504ff34aee22bb696c3e04408c3950a75ae649d5c113683ae0058296a\" returns successfully" Jan 29 11:15:48.403443 containerd[1464]: time="2025-01-29T11:15:48.403305156Z" level=info msg="StopPodSandbox for \"1bd38506e406807638a6e4800c9f1c66a764bba3c9ae2fd51e672243ad5f74dd\"" Jan 29 11:15:48.403443 containerd[1464]: time="2025-01-29T11:15:48.403438993Z" level=info msg="TearDown network for sandbox \"1bd38506e406807638a6e4800c9f1c66a764bba3c9ae2fd51e672243ad5f74dd\" successfully" Jan 29 11:15:48.403590 containerd[1464]: time="2025-01-29T11:15:48.403451682Z" level=info msg="StopPodSandbox for \"1bd38506e406807638a6e4800c9f1c66a764bba3c9ae2fd51e672243ad5f74dd\" returns successfully" Jan 29 11:15:48.405375 containerd[1464]: time="2025-01-29T11:15:48.405187991Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-hgrqg,Uid:32d19bf8-e39c-40fe-82db-979ee14e9bd5,Namespace:default,Attempt:3,}" Jan 29 11:15:48.574166 containerd[1464]: time="2025-01-29T11:15:48.574023076Z" level=error msg="Failed to destroy network for sandbox \"2f6cbe49924bea5d1c7307438c648140c766d9f75d565f4c776302d4612397e4\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:15:48.574927 containerd[1464]: time="2025-01-29T11:15:48.574661160Z" level=error msg="encountered an error cleaning up failed sandbox \"2f6cbe49924bea5d1c7307438c648140c766d9f75d565f4c776302d4612397e4\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:15:48.574927 containerd[1464]: time="2025-01-29T11:15:48.574871242Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-dx4jg,Uid:17de0467-9566-43ce-a406-ef6b976cb6c5,Namespace:calico-system,Attempt:4,} failed, error" error="failed to setup network for sandbox \"2f6cbe49924bea5d1c7307438c648140c766d9f75d565f4c776302d4612397e4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:15:48.575875 kubelet[1802]: E0129 11:15:48.575456 1802 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2f6cbe49924bea5d1c7307438c648140c766d9f75d565f4c776302d4612397e4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:15:48.575875 kubelet[1802]: E0129 11:15:48.575526 1802 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"2f6cbe49924bea5d1c7307438c648140c766d9f75d565f4c776302d4612397e4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-dx4jg" Jan 29 11:15:48.575875 kubelet[1802]: E0129 11:15:48.575548 1802 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2f6cbe49924bea5d1c7307438c648140c766d9f75d565f4c776302d4612397e4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-dx4jg" Jan 29 11:15:48.576088 kubelet[1802]: E0129 11:15:48.575596 1802 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-dx4jg_calico-system(17de0467-9566-43ce-a406-ef6b976cb6c5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-dx4jg_calico-system(17de0467-9566-43ce-a406-ef6b976cb6c5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2f6cbe49924bea5d1c7307438c648140c766d9f75d565f4c776302d4612397e4\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-dx4jg" podUID="17de0467-9566-43ce-a406-ef6b976cb6c5" Jan 29 11:15:48.598264 containerd[1464]: time="2025-01-29T11:15:48.597883253Z" level=error msg="Failed to destroy network for sandbox \"0709f80bc5966761aedb0dc61b586715c95725377df7dc711e012896d144d93b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:15:48.598735 containerd[1464]: time="2025-01-29T11:15:48.598647033Z" level=error msg="encountered an error cleaning up failed sandbox \"0709f80bc5966761aedb0dc61b586715c95725377df7dc711e012896d144d93b\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:15:48.598844 containerd[1464]: time="2025-01-29T11:15:48.598758711Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-hgrqg,Uid:32d19bf8-e39c-40fe-82db-979ee14e9bd5,Namespace:default,Attempt:3,} failed, error" error="failed to setup network for sandbox \"0709f80bc5966761aedb0dc61b586715c95725377df7dc711e012896d144d93b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:15:48.599242 kubelet[1802]: E0129 11:15:48.598992 1802 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0709f80bc5966761aedb0dc61b586715c95725377df7dc711e012896d144d93b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:15:48.599242 kubelet[1802]: E0129 11:15:48.599050 1802 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = 
Unknown desc = failed to setup network for sandbox \"0709f80bc5966761aedb0dc61b586715c95725377df7dc711e012896d144d93b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-8587fbcb89-hgrqg" Jan 29 11:15:48.599242 kubelet[1802]: E0129 11:15:48.599074 1802 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0709f80bc5966761aedb0dc61b586715c95725377df7dc711e012896d144d93b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-8587fbcb89-hgrqg" Jan 29 11:15:48.599355 kubelet[1802]: E0129 11:15:48.599118 1802 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-8587fbcb89-hgrqg_default(32d19bf8-e39c-40fe-82db-979ee14e9bd5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-8587fbcb89-hgrqg_default(32d19bf8-e39c-40fe-82db-979ee14e9bd5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0709f80bc5966761aedb0dc61b586715c95725377df7dc711e012896d144d93b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-8587fbcb89-hgrqg" podUID="32d19bf8-e39c-40fe-82db-979ee14e9bd5" Jan 29 11:15:49.099236 kubelet[1802]: E0129 11:15:49.099146 1802 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:15:49.359819 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-2f6cbe49924bea5d1c7307438c648140c766d9f75d565f4c776302d4612397e4-shm.mount: Deactivated successfully. Jan 29 11:15:49.404805 kubelet[1802]: I0129 11:15:49.404225 1802 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2f6cbe49924bea5d1c7307438c648140c766d9f75d565f4c776302d4612397e4" Jan 29 11:15:49.406311 containerd[1464]: time="2025-01-29T11:15:49.405169729Z" level=info msg="StopPodSandbox for \"2f6cbe49924bea5d1c7307438c648140c766d9f75d565f4c776302d4612397e4\"" Jan 29 11:15:49.406311 containerd[1464]: time="2025-01-29T11:15:49.406089219Z" level=info msg="Ensure that sandbox 2f6cbe49924bea5d1c7307438c648140c766d9f75d565f4c776302d4612397e4 in task-service has been cleanup successfully" Jan 29 11:15:49.409819 containerd[1464]: time="2025-01-29T11:15:49.406826425Z" level=info msg="TearDown network for sandbox \"2f6cbe49924bea5d1c7307438c648140c766d9f75d565f4c776302d4612397e4\" successfully" Jan 29 11:15:49.409819 containerd[1464]: time="2025-01-29T11:15:49.406855044Z" level=info msg="StopPodSandbox for \"2f6cbe49924bea5d1c7307438c648140c766d9f75d565f4c776302d4612397e4\" returns successfully" Jan 29 11:15:49.410487 systemd[1]: run-netns-cni\x2d20017308\x2db324\x2d9b54\x2d5ad4\x2d9c3293f930cf.mount: Deactivated successfully. 
Jan 29 11:15:49.411046 containerd[1464]: time="2025-01-29T11:15:49.410950145Z" level=info msg="StopPodSandbox for \"2c1334128a0d27d07bf18056a7b43695cc28de03a2ad9d841b11b977ef628510\"" Jan 29 11:15:49.411135 containerd[1464]: time="2025-01-29T11:15:49.411081639Z" level=info msg="TearDown network for sandbox \"2c1334128a0d27d07bf18056a7b43695cc28de03a2ad9d841b11b977ef628510\" successfully" Jan 29 11:15:49.411135 containerd[1464]: time="2025-01-29T11:15:49.411099412Z" level=info msg="StopPodSandbox for \"2c1334128a0d27d07bf18056a7b43695cc28de03a2ad9d841b11b977ef628510\" returns successfully" Jan 29 11:15:49.418241 containerd[1464]: time="2025-01-29T11:15:49.418173357Z" level=info msg="StopPodSandbox for \"c75ace98faa2013de60346c00859f04d782d70d98d51e0702c5b6c8189e6402a\"" Jan 29 11:15:49.418435 containerd[1464]: time="2025-01-29T11:15:49.418321499Z" level=info msg="TearDown network for sandbox \"c75ace98faa2013de60346c00859f04d782d70d98d51e0702c5b6c8189e6402a\" successfully" Jan 29 11:15:49.418435 containerd[1464]: time="2025-01-29T11:15:49.418339294Z" level=info msg="StopPodSandbox for \"c75ace98faa2013de60346c00859f04d782d70d98d51e0702c5b6c8189e6402a\" returns successfully" Jan 29 11:15:49.420929 containerd[1464]: time="2025-01-29T11:15:49.420851711Z" level=info msg="StopPodSandbox for \"021b32372830a8acef9731288e439f32f3a2446b86ea886ee3197c29b4574b6b\"" Jan 29 11:15:49.421064 containerd[1464]: time="2025-01-29T11:15:49.420972270Z" level=info msg="TearDown network for sandbox \"021b32372830a8acef9731288e439f32f3a2446b86ea886ee3197c29b4574b6b\" successfully" Jan 29 11:15:49.421064 containerd[1464]: time="2025-01-29T11:15:49.420986540Z" level=info msg="StopPodSandbox for \"021b32372830a8acef9731288e439f32f3a2446b86ea886ee3197c29b4574b6b\" returns successfully" Jan 29 11:15:49.422412 containerd[1464]: time="2025-01-29T11:15:49.422358834Z" level=info msg="StopPodSandbox for \"7a7869065ef017da3ba67d92cbf3b6687aba6cd84f0e74ead54d96570f46440c\"" Jan 29 11:15:49.422532 containerd[1464]: time="2025-01-29T11:15:49.422485047Z" level=info msg="TearDown network for sandbox \"7a7869065ef017da3ba67d92cbf3b6687aba6cd84f0e74ead54d96570f46440c\" successfully" Jan 29 11:15:49.422532 containerd[1464]: time="2025-01-29T11:15:49.422499578Z" level=info msg="StopPodSandbox for \"7a7869065ef017da3ba67d92cbf3b6687aba6cd84f0e74ead54d96570f46440c\" returns successfully" Jan 29 11:15:49.423552 kubelet[1802]: I0129 11:15:49.422773 1802 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0709f80bc5966761aedb0dc61b586715c95725377df7dc711e012896d144d93b" Jan 29 11:15:49.423647 containerd[1464]: time="2025-01-29T11:15:49.423180825Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-dx4jg,Uid:17de0467-9566-43ce-a406-ef6b976cb6c5,Namespace:calico-system,Attempt:5,}" Jan 29 11:15:49.430033 containerd[1464]: time="2025-01-29T11:15:49.429849052Z" level=info msg="StopPodSandbox for \"0709f80bc5966761aedb0dc61b586715c95725377df7dc711e012896d144d93b\"" Jan 29 11:15:49.430609 containerd[1464]: time="2025-01-29T11:15:49.430489952Z" level=info msg="Ensure that sandbox 0709f80bc5966761aedb0dc61b586715c95725377df7dc711e012896d144d93b in task-service has been cleanup successfully" Jan 29 11:15:49.433324 containerd[1464]: time="2025-01-29T11:15:49.432099807Z" level=info msg="TearDown network for sandbox \"0709f80bc5966761aedb0dc61b586715c95725377df7dc711e012896d144d93b\" successfully" Jan 29 11:15:49.433324 containerd[1464]: time="2025-01-29T11:15:49.433201355Z" level=info 
msg="StopPodSandbox for \"0709f80bc5966761aedb0dc61b586715c95725377df7dc711e012896d144d93b\" returns successfully" Jan 29 11:15:49.435907 systemd[1]: run-netns-cni\x2db4e4ad45\x2d76a0\x2dc187\x2dabe6\x2d7bbf532fb0e6.mount: Deactivated successfully. Jan 29 11:15:49.440176 containerd[1464]: time="2025-01-29T11:15:49.440100896Z" level=info msg="StopPodSandbox for \"31a62b309d975cd861719c6af5efffd74a754679330349f57644795939a0f1be\"" Jan 29 11:15:49.444777 systemd[1]: Created slice kubepods-besteffort-pod0c2b41a2_08c4_4b60_9a67_d75bd55f7076.slice - libcontainer container kubepods-besteffort-pod0c2b41a2_08c4_4b60_9a67_d75bd55f7076.slice. Jan 29 11:15:49.445103 containerd[1464]: time="2025-01-29T11:15:49.440224716Z" level=info msg="TearDown network for sandbox \"31a62b309d975cd861719c6af5efffd74a754679330349f57644795939a0f1be\" successfully" Jan 29 11:15:49.445175 containerd[1464]: time="2025-01-29T11:15:49.445092953Z" level=info msg="StopPodSandbox for \"31a62b309d975cd861719c6af5efffd74a754679330349f57644795939a0f1be\" returns successfully" Jan 29 11:15:49.449442 containerd[1464]: time="2025-01-29T11:15:49.448774491Z" level=info msg="StopPodSandbox for \"2b7347c504ff34aee22bb696c3e04408c3950a75ae649d5c113683ae0058296a\"" Jan 29 11:15:49.450352 containerd[1464]: time="2025-01-29T11:15:49.449934958Z" level=info msg="TearDown network for sandbox \"2b7347c504ff34aee22bb696c3e04408c3950a75ae649d5c113683ae0058296a\" successfully" Jan 29 11:15:49.450352 containerd[1464]: time="2025-01-29T11:15:49.450206118Z" level=info msg="StopPodSandbox for \"2b7347c504ff34aee22bb696c3e04408c3950a75ae649d5c113683ae0058296a\" returns successfully" Jan 29 11:15:49.452206 containerd[1464]: time="2025-01-29T11:15:49.451844128Z" level=info msg="StopPodSandbox for \"1bd38506e406807638a6e4800c9f1c66a764bba3c9ae2fd51e672243ad5f74dd\"" Jan 29 11:15:49.452206 containerd[1464]: time="2025-01-29T11:15:49.452166229Z" level=info msg="TearDown network for sandbox \"1bd38506e406807638a6e4800c9f1c66a764bba3c9ae2fd51e672243ad5f74dd\" successfully" Jan 29 11:15:49.452206 containerd[1464]: time="2025-01-29T11:15:49.452182985Z" level=info msg="StopPodSandbox for \"1bd38506e406807638a6e4800c9f1c66a764bba3c9ae2fd51e672243ad5f74dd\" returns successfully" Jan 29 11:15:49.454128 containerd[1464]: time="2025-01-29T11:15:49.453694021Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-hgrqg,Uid:32d19bf8-e39c-40fe-82db-979ee14e9bd5,Namespace:default,Attempt:4,}" Jan 29 11:15:49.564052 kubelet[1802]: I0129 11:15:49.563966 1802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4fh9n\" (UniqueName: \"kubernetes.io/projected/0c2b41a2-08c4-4b60-9a67-d75bd55f7076-kube-api-access-4fh9n\") pod \"calico-kube-controllers-65c99dc6fb-xsr5m\" (UID: \"0c2b41a2-08c4-4b60-9a67-d75bd55f7076\") " pod="calico-system/calico-kube-controllers-65c99dc6fb-xsr5m" Jan 29 11:15:49.564250 kubelet[1802]: I0129 11:15:49.564097 1802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0c2b41a2-08c4-4b60-9a67-d75bd55f7076-tigera-ca-bundle\") pod \"calico-kube-controllers-65c99dc6fb-xsr5m\" (UID: \"0c2b41a2-08c4-4b60-9a67-d75bd55f7076\") " pod="calico-system/calico-kube-controllers-65c99dc6fb-xsr5m" Jan 29 11:15:49.684152 containerd[1464]: time="2025-01-29T11:15:49.683025352Z" level=error msg="Failed to destroy network for sandbox 
\"9ac6324a6d926a0b4b2520382f0bd13e24771692ee25cc86f1447b278916565e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:15:49.684794 containerd[1464]: time="2025-01-29T11:15:49.684630732Z" level=error msg="encountered an error cleaning up failed sandbox \"9ac6324a6d926a0b4b2520382f0bd13e24771692ee25cc86f1447b278916565e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:15:49.684794 containerd[1464]: time="2025-01-29T11:15:49.684732867Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-dx4jg,Uid:17de0467-9566-43ce-a406-ef6b976cb6c5,Namespace:calico-system,Attempt:5,} failed, error" error="failed to setup network for sandbox \"9ac6324a6d926a0b4b2520382f0bd13e24771692ee25cc86f1447b278916565e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:15:49.685570 kubelet[1802]: E0129 11:15:49.685519 1802 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9ac6324a6d926a0b4b2520382f0bd13e24771692ee25cc86f1447b278916565e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:15:49.685735 kubelet[1802]: E0129 11:15:49.685602 1802 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9ac6324a6d926a0b4b2520382f0bd13e24771692ee25cc86f1447b278916565e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-dx4jg" Jan 29 11:15:49.685735 kubelet[1802]: E0129 11:15:49.685633 1802 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9ac6324a6d926a0b4b2520382f0bd13e24771692ee25cc86f1447b278916565e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-dx4jg" Jan 29 11:15:49.685735 kubelet[1802]: E0129 11:15:49.685691 1802 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-dx4jg_calico-system(17de0467-9566-43ce-a406-ef6b976cb6c5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-dx4jg_calico-system(17de0467-9566-43ce-a406-ef6b976cb6c5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9ac6324a6d926a0b4b2520382f0bd13e24771692ee25cc86f1447b278916565e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-dx4jg" podUID="17de0467-9566-43ce-a406-ef6b976cb6c5" Jan 29 11:15:49.715795 containerd[1464]: time="2025-01-29T11:15:49.715701037Z" level=error 
msg="Failed to destroy network for sandbox \"8f1b0b3a61da9da16eef82e4162e4a150070e879ab8337db7e4885de85ef7c25\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:15:49.716766 containerd[1464]: time="2025-01-29T11:15:49.716658128Z" level=error msg="encountered an error cleaning up failed sandbox \"8f1b0b3a61da9da16eef82e4162e4a150070e879ab8337db7e4885de85ef7c25\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:15:49.717454 containerd[1464]: time="2025-01-29T11:15:49.717409063Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-hgrqg,Uid:32d19bf8-e39c-40fe-82db-979ee14e9bd5,Namespace:default,Attempt:4,} failed, error" error="failed to setup network for sandbox \"8f1b0b3a61da9da16eef82e4162e4a150070e879ab8337db7e4885de85ef7c25\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:15:49.722300 kubelet[1802]: E0129 11:15:49.722036 1802 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8f1b0b3a61da9da16eef82e4162e4a150070e879ab8337db7e4885de85ef7c25\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:15:49.722300 kubelet[1802]: E0129 11:15:49.722134 1802 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8f1b0b3a61da9da16eef82e4162e4a150070e879ab8337db7e4885de85ef7c25\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-8587fbcb89-hgrqg" Jan 29 11:15:49.722300 kubelet[1802]: E0129 11:15:49.722169 1802 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8f1b0b3a61da9da16eef82e4162e4a150070e879ab8337db7e4885de85ef7c25\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-8587fbcb89-hgrqg" Jan 29 11:15:49.722532 kubelet[1802]: E0129 11:15:49.722214 1802 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-8587fbcb89-hgrqg_default(32d19bf8-e39c-40fe-82db-979ee14e9bd5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-8587fbcb89-hgrqg_default(32d19bf8-e39c-40fe-82db-979ee14e9bd5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8f1b0b3a61da9da16eef82e4162e4a150070e879ab8337db7e4885de85ef7c25\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-8587fbcb89-hgrqg" podUID="32d19bf8-e39c-40fe-82db-979ee14e9bd5" Jan 29 
11:15:49.750423 containerd[1464]: time="2025-01-29T11:15:49.749921497Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-65c99dc6fb-xsr5m,Uid:0c2b41a2-08c4-4b60-9a67-d75bd55f7076,Namespace:calico-system,Attempt:0,}" Jan 29 11:15:50.034532 containerd[1464]: time="2025-01-29T11:15:50.034390862Z" level=error msg="Failed to destroy network for sandbox \"3708a282a0232a58354b9287a4b6bfb4e7aa79b277857ad6b90c8bea3f8114a7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:15:50.035754 containerd[1464]: time="2025-01-29T11:15:50.034928385Z" level=error msg="encountered an error cleaning up failed sandbox \"3708a282a0232a58354b9287a4b6bfb4e7aa79b277857ad6b90c8bea3f8114a7\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:15:50.035754 containerd[1464]: time="2025-01-29T11:15:50.035340196Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-65c99dc6fb-xsr5m,Uid:0c2b41a2-08c4-4b60-9a67-d75bd55f7076,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"3708a282a0232a58354b9287a4b6bfb4e7aa79b277857ad6b90c8bea3f8114a7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:15:50.035993 kubelet[1802]: E0129 11:15:50.035598 1802 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3708a282a0232a58354b9287a4b6bfb4e7aa79b277857ad6b90c8bea3f8114a7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:15:50.035993 kubelet[1802]: E0129 11:15:50.035663 1802 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3708a282a0232a58354b9287a4b6bfb4e7aa79b277857ad6b90c8bea3f8114a7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-65c99dc6fb-xsr5m" Jan 29 11:15:50.035993 kubelet[1802]: E0129 11:15:50.035684 1802 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3708a282a0232a58354b9287a4b6bfb4e7aa79b277857ad6b90c8bea3f8114a7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-65c99dc6fb-xsr5m" Jan 29 11:15:50.036170 kubelet[1802]: E0129 11:15:50.036060 1802 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-65c99dc6fb-xsr5m_calico-system(0c2b41a2-08c4-4b60-9a67-d75bd55f7076)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-65c99dc6fb-xsr5m_calico-system(0c2b41a2-08c4-4b60-9a67-d75bd55f7076)\\\": rpc error: code = Unknown desc 
= failed to setup network for sandbox \\\"3708a282a0232a58354b9287a4b6bfb4e7aa79b277857ad6b90c8bea3f8114a7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-65c99dc6fb-xsr5m" podUID="0c2b41a2-08c4-4b60-9a67-d75bd55f7076" Jan 29 11:15:50.100420 kubelet[1802]: E0129 11:15:50.099791 1802 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:15:50.363973 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-9ac6324a6d926a0b4b2520382f0bd13e24771692ee25cc86f1447b278916565e-shm.mount: Deactivated successfully. Jan 29 11:15:50.430393 kubelet[1802]: I0129 11:15:50.429976 1802 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3708a282a0232a58354b9287a4b6bfb4e7aa79b277857ad6b90c8bea3f8114a7" Jan 29 11:15:50.435050 containerd[1464]: time="2025-01-29T11:15:50.434194139Z" level=info msg="StopPodSandbox for \"3708a282a0232a58354b9287a4b6bfb4e7aa79b277857ad6b90c8bea3f8114a7\"" Jan 29 11:15:50.435678 containerd[1464]: time="2025-01-29T11:15:50.435152332Z" level=info msg="Ensure that sandbox 3708a282a0232a58354b9287a4b6bfb4e7aa79b277857ad6b90c8bea3f8114a7 in task-service has been cleanup successfully" Jan 29 11:15:50.440768 containerd[1464]: time="2025-01-29T11:15:50.436121473Z" level=info msg="TearDown network for sandbox \"3708a282a0232a58354b9287a4b6bfb4e7aa79b277857ad6b90c8bea3f8114a7\" successfully" Jan 29 11:15:50.440768 containerd[1464]: time="2025-01-29T11:15:50.436161769Z" level=info msg="StopPodSandbox for \"3708a282a0232a58354b9287a4b6bfb4e7aa79b277857ad6b90c8bea3f8114a7\" returns successfully" Jan 29 11:15:50.440768 containerd[1464]: time="2025-01-29T11:15:50.439106987Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-65c99dc6fb-xsr5m,Uid:0c2b41a2-08c4-4b60-9a67-d75bd55f7076,Namespace:calico-system,Attempt:1,}" Jan 29 11:15:50.442080 systemd[1]: run-netns-cni\x2d214ca8bb\x2d9ff9\x2d085d\x2d2af0\x2deb4cce7e7313.mount: Deactivated successfully. 
Jan 29 11:15:50.456785 kubelet[1802]: I0129 11:15:50.456742 1802 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9ac6324a6d926a0b4b2520382f0bd13e24771692ee25cc86f1447b278916565e" Jan 29 11:15:50.459091 containerd[1464]: time="2025-01-29T11:15:50.459038473Z" level=info msg="StopPodSandbox for \"9ac6324a6d926a0b4b2520382f0bd13e24771692ee25cc86f1447b278916565e\"" Jan 29 11:15:50.459355 containerd[1464]: time="2025-01-29T11:15:50.459313830Z" level=info msg="Ensure that sandbox 9ac6324a6d926a0b4b2520382f0bd13e24771692ee25cc86f1447b278916565e in task-service has been cleanup successfully" Jan 29 11:15:50.462281 containerd[1464]: time="2025-01-29T11:15:50.461805149Z" level=info msg="TearDown network for sandbox \"9ac6324a6d926a0b4b2520382f0bd13e24771692ee25cc86f1447b278916565e\" successfully" Jan 29 11:15:50.462281 containerd[1464]: time="2025-01-29T11:15:50.461854399Z" level=info msg="StopPodSandbox for \"9ac6324a6d926a0b4b2520382f0bd13e24771692ee25cc86f1447b278916565e\" returns successfully" Jan 29 11:15:50.464520 containerd[1464]: time="2025-01-29T11:15:50.464160547Z" level=info msg="StopPodSandbox for \"2f6cbe49924bea5d1c7307438c648140c766d9f75d565f4c776302d4612397e4\"" Jan 29 11:15:50.464520 containerd[1464]: time="2025-01-29T11:15:50.464354590Z" level=info msg="TearDown network for sandbox \"2f6cbe49924bea5d1c7307438c648140c766d9f75d565f4c776302d4612397e4\" successfully" Jan 29 11:15:50.464520 containerd[1464]: time="2025-01-29T11:15:50.464373633Z" level=info msg="StopPodSandbox for \"2f6cbe49924bea5d1c7307438c648140c766d9f75d565f4c776302d4612397e4\" returns successfully" Jan 29 11:15:50.465161 systemd[1]: run-netns-cni\x2daecdbc53\x2d5575\x2d5ef9\x2d1f5c\x2d5e83c425a325.mount: Deactivated successfully. Jan 29 11:15:50.473610 containerd[1464]: time="2025-01-29T11:15:50.473003360Z" level=info msg="StopPodSandbox for \"2c1334128a0d27d07bf18056a7b43695cc28de03a2ad9d841b11b977ef628510\"" Jan 29 11:15:50.473610 containerd[1464]: time="2025-01-29T11:15:50.473315073Z" level=info msg="TearDown network for sandbox \"2c1334128a0d27d07bf18056a7b43695cc28de03a2ad9d841b11b977ef628510\" successfully" Jan 29 11:15:50.473610 containerd[1464]: time="2025-01-29T11:15:50.473340788Z" level=info msg="StopPodSandbox for \"2c1334128a0d27d07bf18056a7b43695cc28de03a2ad9d841b11b977ef628510\" returns successfully" Jan 29 11:15:50.475219 containerd[1464]: time="2025-01-29T11:15:50.475073676Z" level=info msg="StopPodSandbox for \"c75ace98faa2013de60346c00859f04d782d70d98d51e0702c5b6c8189e6402a\"" Jan 29 11:15:50.476304 containerd[1464]: time="2025-01-29T11:15:50.476168257Z" level=info msg="TearDown network for sandbox \"c75ace98faa2013de60346c00859f04d782d70d98d51e0702c5b6c8189e6402a\" successfully" Jan 29 11:15:50.476304 containerd[1464]: time="2025-01-29T11:15:50.476198697Z" level=info msg="StopPodSandbox for \"c75ace98faa2013de60346c00859f04d782d70d98d51e0702c5b6c8189e6402a\" returns successfully" Jan 29 11:15:50.477643 containerd[1464]: time="2025-01-29T11:15:50.476893227Z" level=info msg="StopPodSandbox for \"021b32372830a8acef9731288e439f32f3a2446b86ea886ee3197c29b4574b6b\"" Jan 29 11:15:50.477643 containerd[1464]: time="2025-01-29T11:15:50.477036927Z" level=info msg="TearDown network for sandbox \"021b32372830a8acef9731288e439f32f3a2446b86ea886ee3197c29b4574b6b\" successfully" Jan 29 11:15:50.477643 containerd[1464]: time="2025-01-29T11:15:50.477057728Z" level=info msg="StopPodSandbox for \"021b32372830a8acef9731288e439f32f3a2446b86ea886ee3197c29b4574b6b\" returns 
successfully" Jan 29 11:15:50.478479 containerd[1464]: time="2025-01-29T11:15:50.478392664Z" level=info msg="StopPodSandbox for \"7a7869065ef017da3ba67d92cbf3b6687aba6cd84f0e74ead54d96570f46440c\"" Jan 29 11:15:50.478594 containerd[1464]: time="2025-01-29T11:15:50.478522542Z" level=info msg="TearDown network for sandbox \"7a7869065ef017da3ba67d92cbf3b6687aba6cd84f0e74ead54d96570f46440c\" successfully" Jan 29 11:15:50.478594 containerd[1464]: time="2025-01-29T11:15:50.478537833Z" level=info msg="StopPodSandbox for \"7a7869065ef017da3ba67d92cbf3b6687aba6cd84f0e74ead54d96570f46440c\" returns successfully" Jan 29 11:15:50.480248 kubelet[1802]: I0129 11:15:50.480201 1802 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8f1b0b3a61da9da16eef82e4162e4a150070e879ab8337db7e4885de85ef7c25" Jan 29 11:15:50.480836 containerd[1464]: time="2025-01-29T11:15:50.480679232Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-dx4jg,Uid:17de0467-9566-43ce-a406-ef6b976cb6c5,Namespace:calico-system,Attempt:6,}" Jan 29 11:15:50.481655 containerd[1464]: time="2025-01-29T11:15:50.481624403Z" level=info msg="StopPodSandbox for \"8f1b0b3a61da9da16eef82e4162e4a150070e879ab8337db7e4885de85ef7c25\"" Jan 29 11:15:50.482661 containerd[1464]: time="2025-01-29T11:15:50.482621543Z" level=info msg="Ensure that sandbox 8f1b0b3a61da9da16eef82e4162e4a150070e879ab8337db7e4885de85ef7c25 in task-service has been cleanup successfully" Jan 29 11:15:50.486985 systemd[1]: run-netns-cni\x2dc02bd737\x2d3e97\x2dfaea\x2da3dc\x2d490b0b033ff3.mount: Deactivated successfully. Jan 29 11:15:50.488679 containerd[1464]: time="2025-01-29T11:15:50.488171477Z" level=info msg="TearDown network for sandbox \"8f1b0b3a61da9da16eef82e4162e4a150070e879ab8337db7e4885de85ef7c25\" successfully" Jan 29 11:15:50.488679 containerd[1464]: time="2025-01-29T11:15:50.488219595Z" level=info msg="StopPodSandbox for \"8f1b0b3a61da9da16eef82e4162e4a150070e879ab8337db7e4885de85ef7c25\" returns successfully" Jan 29 11:15:50.489944 containerd[1464]: time="2025-01-29T11:15:50.489644924Z" level=info msg="StopPodSandbox for \"0709f80bc5966761aedb0dc61b586715c95725377df7dc711e012896d144d93b\"" Jan 29 11:15:50.489944 containerd[1464]: time="2025-01-29T11:15:50.489883637Z" level=info msg="TearDown network for sandbox \"0709f80bc5966761aedb0dc61b586715c95725377df7dc711e012896d144d93b\" successfully" Jan 29 11:15:50.493549 containerd[1464]: time="2025-01-29T11:15:50.489906695Z" level=info msg="StopPodSandbox for \"0709f80bc5966761aedb0dc61b586715c95725377df7dc711e012896d144d93b\" returns successfully" Jan 29 11:15:50.493549 containerd[1464]: time="2025-01-29T11:15:50.491202475Z" level=info msg="StopPodSandbox for \"31a62b309d975cd861719c6af5efffd74a754679330349f57644795939a0f1be\"" Jan 29 11:15:50.493549 containerd[1464]: time="2025-01-29T11:15:50.491643090Z" level=info msg="TearDown network for sandbox \"31a62b309d975cd861719c6af5efffd74a754679330349f57644795939a0f1be\" successfully" Jan 29 11:15:50.493549 containerd[1464]: time="2025-01-29T11:15:50.491663548Z" level=info msg="StopPodSandbox for \"31a62b309d975cd861719c6af5efffd74a754679330349f57644795939a0f1be\" returns successfully" Jan 29 11:15:50.493549 containerd[1464]: time="2025-01-29T11:15:50.492475888Z" level=info msg="StopPodSandbox for \"2b7347c504ff34aee22bb696c3e04408c3950a75ae649d5c113683ae0058296a\"" Jan 29 11:15:50.493549 containerd[1464]: time="2025-01-29T11:15:50.492594629Z" level=info msg="TearDown network for sandbox 
\"2b7347c504ff34aee22bb696c3e04408c3950a75ae649d5c113683ae0058296a\" successfully" Jan 29 11:15:50.493549 containerd[1464]: time="2025-01-29T11:15:50.492610312Z" level=info msg="StopPodSandbox for \"2b7347c504ff34aee22bb696c3e04408c3950a75ae649d5c113683ae0058296a\" returns successfully" Jan 29 11:15:50.494147 containerd[1464]: time="2025-01-29T11:15:50.494039122Z" level=info msg="StopPodSandbox for \"1bd38506e406807638a6e4800c9f1c66a764bba3c9ae2fd51e672243ad5f74dd\"" Jan 29 11:15:50.494322 containerd[1464]: time="2025-01-29T11:15:50.494276956Z" level=info msg="TearDown network for sandbox \"1bd38506e406807638a6e4800c9f1c66a764bba3c9ae2fd51e672243ad5f74dd\" successfully" Jan 29 11:15:50.494397 containerd[1464]: time="2025-01-29T11:15:50.494384249Z" level=info msg="StopPodSandbox for \"1bd38506e406807638a6e4800c9f1c66a764bba3c9ae2fd51e672243ad5f74dd\" returns successfully" Jan 29 11:15:50.496275 containerd[1464]: time="2025-01-29T11:15:50.496195408Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-hgrqg,Uid:32d19bf8-e39c-40fe-82db-979ee14e9bd5,Namespace:default,Attempt:5,}" Jan 29 11:15:50.751478 containerd[1464]: time="2025-01-29T11:15:50.751184084Z" level=error msg="Failed to destroy network for sandbox \"8dcd1f169270dbe7baa4197057251cacc78bbcaccb42a480aae8ccadd268b7ef\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:15:50.754226 containerd[1464]: time="2025-01-29T11:15:50.753958851Z" level=error msg="encountered an error cleaning up failed sandbox \"8dcd1f169270dbe7baa4197057251cacc78bbcaccb42a480aae8ccadd268b7ef\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:15:50.754876 containerd[1464]: time="2025-01-29T11:15:50.754500521Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-dx4jg,Uid:17de0467-9566-43ce-a406-ef6b976cb6c5,Namespace:calico-system,Attempt:6,} failed, error" error="failed to setup network for sandbox \"8dcd1f169270dbe7baa4197057251cacc78bbcaccb42a480aae8ccadd268b7ef\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:15:50.759760 kubelet[1802]: E0129 11:15:50.758769 1802 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8dcd1f169270dbe7baa4197057251cacc78bbcaccb42a480aae8ccadd268b7ef\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:15:50.759760 kubelet[1802]: E0129 11:15:50.758857 1802 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8dcd1f169270dbe7baa4197057251cacc78bbcaccb42a480aae8ccadd268b7ef\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-dx4jg" Jan 29 11:15:50.759760 kubelet[1802]: E0129 11:15:50.758885 1802 kuberuntime_manager.go:1168] "CreatePodSandbox for 
pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8dcd1f169270dbe7baa4197057251cacc78bbcaccb42a480aae8ccadd268b7ef\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-dx4jg" Jan 29 11:15:50.760044 kubelet[1802]: E0129 11:15:50.758938 1802 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-dx4jg_calico-system(17de0467-9566-43ce-a406-ef6b976cb6c5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-dx4jg_calico-system(17de0467-9566-43ce-a406-ef6b976cb6c5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8dcd1f169270dbe7baa4197057251cacc78bbcaccb42a480aae8ccadd268b7ef\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-dx4jg" podUID="17de0467-9566-43ce-a406-ef6b976cb6c5" Jan 29 11:15:50.801388 containerd[1464]: time="2025-01-29T11:15:50.801302954Z" level=error msg="Failed to destroy network for sandbox \"6430a46e1a2783b0e78023458134250679bf710c8048344508227c8042d860fe\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:15:50.802660 containerd[1464]: time="2025-01-29T11:15:50.802504998Z" level=error msg="encountered an error cleaning up failed sandbox \"6430a46e1a2783b0e78023458134250679bf710c8048344508227c8042d860fe\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:15:50.802993 containerd[1464]: time="2025-01-29T11:15:50.802951365Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-65c99dc6fb-xsr5m,Uid:0c2b41a2-08c4-4b60-9a67-d75bd55f7076,Namespace:calico-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"6430a46e1a2783b0e78023458134250679bf710c8048344508227c8042d860fe\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:15:50.803542 kubelet[1802]: E0129 11:15:50.803484 1802 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6430a46e1a2783b0e78023458134250679bf710c8048344508227c8042d860fe\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:15:50.803670 kubelet[1802]: E0129 11:15:50.803569 1802 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6430a46e1a2783b0e78023458134250679bf710c8048344508227c8042d860fe\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-65c99dc6fb-xsr5m" Jan 29 11:15:50.803670 kubelet[1802]: 
E0129 11:15:50.803602 1802 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6430a46e1a2783b0e78023458134250679bf710c8048344508227c8042d860fe\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-65c99dc6fb-xsr5m" Jan 29 11:15:50.803881 kubelet[1802]: E0129 11:15:50.803653 1802 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-65c99dc6fb-xsr5m_calico-system(0c2b41a2-08c4-4b60-9a67-d75bd55f7076)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-65c99dc6fb-xsr5m_calico-system(0c2b41a2-08c4-4b60-9a67-d75bd55f7076)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6430a46e1a2783b0e78023458134250679bf710c8048344508227c8042d860fe\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-65c99dc6fb-xsr5m" podUID="0c2b41a2-08c4-4b60-9a67-d75bd55f7076" Jan 29 11:15:50.845319 containerd[1464]: time="2025-01-29T11:15:50.844865358Z" level=error msg="Failed to destroy network for sandbox \"71f64eb10a7b257869d96577c945d3493c8d2f5f223e7f1adc9e9dda65f3d004\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:15:50.851902 containerd[1464]: time="2025-01-29T11:15:50.851636271Z" level=error msg="encountered an error cleaning up failed sandbox \"71f64eb10a7b257869d96577c945d3493c8d2f5f223e7f1adc9e9dda65f3d004\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:15:50.851902 containerd[1464]: time="2025-01-29T11:15:50.851842871Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-hgrqg,Uid:32d19bf8-e39c-40fe-82db-979ee14e9bd5,Namespace:default,Attempt:5,} failed, error" error="failed to setup network for sandbox \"71f64eb10a7b257869d96577c945d3493c8d2f5f223e7f1adc9e9dda65f3d004\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:15:50.852396 kubelet[1802]: E0129 11:15:50.852207 1802 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"71f64eb10a7b257869d96577c945d3493c8d2f5f223e7f1adc9e9dda65f3d004\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:15:50.852396 kubelet[1802]: E0129 11:15:50.852297 1802 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"71f64eb10a7b257869d96577c945d3493c8d2f5f223e7f1adc9e9dda65f3d004\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is 
running and has mounted /var/lib/calico/" pod="default/nginx-deployment-8587fbcb89-hgrqg" Jan 29 11:15:50.852396 kubelet[1802]: E0129 11:15:50.852328 1802 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"71f64eb10a7b257869d96577c945d3493c8d2f5f223e7f1adc9e9dda65f3d004\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-8587fbcb89-hgrqg" Jan 29 11:15:50.852783 kubelet[1802]: E0129 11:15:50.852401 1802 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-8587fbcb89-hgrqg_default(32d19bf8-e39c-40fe-82db-979ee14e9bd5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-8587fbcb89-hgrqg_default(32d19bf8-e39c-40fe-82db-979ee14e9bd5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"71f64eb10a7b257869d96577c945d3493c8d2f5f223e7f1adc9e9dda65f3d004\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-8587fbcb89-hgrqg" podUID="32d19bf8-e39c-40fe-82db-979ee14e9bd5" Jan 29 11:15:51.100947 kubelet[1802]: E0129 11:15:51.100883 1802 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:15:51.361974 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-6430a46e1a2783b0e78023458134250679bf710c8048344508227c8042d860fe-shm.mount: Deactivated successfully. Jan 29 11:15:51.486761 kubelet[1802]: I0129 11:15:51.485728 1802 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8dcd1f169270dbe7baa4197057251cacc78bbcaccb42a480aae8ccadd268b7ef" Jan 29 11:15:51.487515 containerd[1464]: time="2025-01-29T11:15:51.487465914Z" level=info msg="StopPodSandbox for \"8dcd1f169270dbe7baa4197057251cacc78bbcaccb42a480aae8ccadd268b7ef\"" Jan 29 11:15:51.487915 containerd[1464]: time="2025-01-29T11:15:51.487764379Z" level=info msg="Ensure that sandbox 8dcd1f169270dbe7baa4197057251cacc78bbcaccb42a480aae8ccadd268b7ef in task-service has been cleanup successfully" Jan 29 11:15:51.490012 containerd[1464]: time="2025-01-29T11:15:51.489966262Z" level=info msg="TearDown network for sandbox \"8dcd1f169270dbe7baa4197057251cacc78bbcaccb42a480aae8ccadd268b7ef\" successfully" Jan 29 11:15:51.490012 containerd[1464]: time="2025-01-29T11:15:51.490007468Z" level=info msg="StopPodSandbox for \"8dcd1f169270dbe7baa4197057251cacc78bbcaccb42a480aae8ccadd268b7ef\" returns successfully" Jan 29 11:15:51.491990 containerd[1464]: time="2025-01-29T11:15:51.490815776Z" level=info msg="StopPodSandbox for \"9ac6324a6d926a0b4b2520382f0bd13e24771692ee25cc86f1447b278916565e\"" Jan 29 11:15:51.491990 containerd[1464]: time="2025-01-29T11:15:51.490918698Z" level=info msg="TearDown network for sandbox \"9ac6324a6d926a0b4b2520382f0bd13e24771692ee25cc86f1447b278916565e\" successfully" Jan 29 11:15:51.491990 containerd[1464]: time="2025-01-29T11:15:51.490934807Z" level=info msg="StopPodSandbox for \"9ac6324a6d926a0b4b2520382f0bd13e24771692ee25cc86f1447b278916565e\" returns successfully" Jan 29 11:15:51.492695 systemd[1]: run-netns-cni\x2d7dd29e3f\x2da030\x2d0481\x2d3a4c\x2d66e612ac50f9.mount: Deactivated successfully. 
Jan 29 11:15:51.495582 containerd[1464]: time="2025-01-29T11:15:51.494141352Z" level=info msg="StopPodSandbox for \"2f6cbe49924bea5d1c7307438c648140c766d9f75d565f4c776302d4612397e4\"" Jan 29 11:15:51.496156 containerd[1464]: time="2025-01-29T11:15:51.495944830Z" level=info msg="TearDown network for sandbox \"2f6cbe49924bea5d1c7307438c648140c766d9f75d565f4c776302d4612397e4\" successfully" Jan 29 11:15:51.496156 containerd[1464]: time="2025-01-29T11:15:51.495985981Z" level=info msg="StopPodSandbox for \"2f6cbe49924bea5d1c7307438c648140c766d9f75d565f4c776302d4612397e4\" returns successfully" Jan 29 11:15:51.498499 containerd[1464]: time="2025-01-29T11:15:51.498105457Z" level=info msg="StopPodSandbox for \"2c1334128a0d27d07bf18056a7b43695cc28de03a2ad9d841b11b977ef628510\"" Jan 29 11:15:51.498499 containerd[1464]: time="2025-01-29T11:15:51.498282531Z" level=info msg="TearDown network for sandbox \"2c1334128a0d27d07bf18056a7b43695cc28de03a2ad9d841b11b977ef628510\" successfully" Jan 29 11:15:51.498499 containerd[1464]: time="2025-01-29T11:15:51.498309605Z" level=info msg="StopPodSandbox for \"2c1334128a0d27d07bf18056a7b43695cc28de03a2ad9d841b11b977ef628510\" returns successfully" Jan 29 11:15:51.499465 containerd[1464]: time="2025-01-29T11:15:51.499270402Z" level=info msg="StopPodSandbox for \"c75ace98faa2013de60346c00859f04d782d70d98d51e0702c5b6c8189e6402a\"" Jan 29 11:15:51.499573 containerd[1464]: time="2025-01-29T11:15:51.499401993Z" level=info msg="TearDown network for sandbox \"c75ace98faa2013de60346c00859f04d782d70d98d51e0702c5b6c8189e6402a\" successfully" Jan 29 11:15:51.499573 containerd[1464]: time="2025-01-29T11:15:51.499509460Z" level=info msg="StopPodSandbox for \"c75ace98faa2013de60346c00859f04d782d70d98d51e0702c5b6c8189e6402a\" returns successfully" Jan 29 11:15:51.501306 containerd[1464]: time="2025-01-29T11:15:51.500905826Z" level=info msg="StopPodSandbox for \"021b32372830a8acef9731288e439f32f3a2446b86ea886ee3197c29b4574b6b\"" Jan 29 11:15:51.501306 containerd[1464]: time="2025-01-29T11:15:51.501034641Z" level=info msg="TearDown network for sandbox \"021b32372830a8acef9731288e439f32f3a2446b86ea886ee3197c29b4574b6b\" successfully" Jan 29 11:15:51.501306 containerd[1464]: time="2025-01-29T11:15:51.501047573Z" level=info msg="StopPodSandbox for \"021b32372830a8acef9731288e439f32f3a2446b86ea886ee3197c29b4574b6b\" returns successfully" Jan 29 11:15:51.501791 kubelet[1802]: I0129 11:15:51.501755 1802 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="71f64eb10a7b257869d96577c945d3493c8d2f5f223e7f1adc9e9dda65f3d004" Jan 29 11:15:51.502506 containerd[1464]: time="2025-01-29T11:15:51.502301826Z" level=info msg="StopPodSandbox for \"7a7869065ef017da3ba67d92cbf3b6687aba6cd84f0e74ead54d96570f46440c\"" Jan 29 11:15:51.502506 containerd[1464]: time="2025-01-29T11:15:51.502391901Z" level=info msg="TearDown network for sandbox \"7a7869065ef017da3ba67d92cbf3b6687aba6cd84f0e74ead54d96570f46440c\" successfully" Jan 29 11:15:51.502506 containerd[1464]: time="2025-01-29T11:15:51.502402235Z" level=info msg="StopPodSandbox for \"7a7869065ef017da3ba67d92cbf3b6687aba6cd84f0e74ead54d96570f46440c\" returns successfully" Jan 29 11:15:51.502961 containerd[1464]: time="2025-01-29T11:15:51.502929675Z" level=info msg="StopPodSandbox for \"71f64eb10a7b257869d96577c945d3493c8d2f5f223e7f1adc9e9dda65f3d004\"" Jan 29 11:15:51.503357 containerd[1464]: time="2025-01-29T11:15:51.503330131Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:csi-node-driver-dx4jg,Uid:17de0467-9566-43ce-a406-ef6b976cb6c5,Namespace:calico-system,Attempt:7,}" Jan 29 11:15:51.503528 containerd[1464]: time="2025-01-29T11:15:51.503335215Z" level=info msg="Ensure that sandbox 71f64eb10a7b257869d96577c945d3493c8d2f5f223e7f1adc9e9dda65f3d004 in task-service has been cleanup successfully" Jan 29 11:15:51.505739 containerd[1464]: time="2025-01-29T11:15:51.503906696Z" level=info msg="TearDown network for sandbox \"71f64eb10a7b257869d96577c945d3493c8d2f5f223e7f1adc9e9dda65f3d004\" successfully" Jan 29 11:15:51.507594 systemd[1]: run-netns-cni\x2db48d778e\x2d5687\x2d936b\x2dccc3\x2debff9ff2ba1c.mount: Deactivated successfully. Jan 29 11:15:51.508567 containerd[1464]: time="2025-01-29T11:15:51.508336177Z" level=info msg="StopPodSandbox for \"71f64eb10a7b257869d96577c945d3493c8d2f5f223e7f1adc9e9dda65f3d004\" returns successfully" Jan 29 11:15:51.509672 containerd[1464]: time="2025-01-29T11:15:51.509525971Z" level=info msg="StopPodSandbox for \"8f1b0b3a61da9da16eef82e4162e4a150070e879ab8337db7e4885de85ef7c25\"" Jan 29 11:15:51.509871 containerd[1464]: time="2025-01-29T11:15:51.509807938Z" level=info msg="TearDown network for sandbox \"8f1b0b3a61da9da16eef82e4162e4a150070e879ab8337db7e4885de85ef7c25\" successfully" Jan 29 11:15:51.509871 containerd[1464]: time="2025-01-29T11:15:51.509827908Z" level=info msg="StopPodSandbox for \"8f1b0b3a61da9da16eef82e4162e4a150070e879ab8337db7e4885de85ef7c25\" returns successfully" Jan 29 11:15:51.512291 containerd[1464]: time="2025-01-29T11:15:51.512246040Z" level=info msg="StopPodSandbox for \"0709f80bc5966761aedb0dc61b586715c95725377df7dc711e012896d144d93b\"" Jan 29 11:15:51.512521 containerd[1464]: time="2025-01-29T11:15:51.512369183Z" level=info msg="TearDown network for sandbox \"0709f80bc5966761aedb0dc61b586715c95725377df7dc711e012896d144d93b\" successfully" Jan 29 11:15:51.512521 containerd[1464]: time="2025-01-29T11:15:51.512380800Z" level=info msg="StopPodSandbox for \"0709f80bc5966761aedb0dc61b586715c95725377df7dc711e012896d144d93b\" returns successfully" Jan 29 11:15:51.513327 kubelet[1802]: I0129 11:15:51.512912 1802 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6430a46e1a2783b0e78023458134250679bf710c8048344508227c8042d860fe" Jan 29 11:15:51.513885 containerd[1464]: time="2025-01-29T11:15:51.513841260Z" level=info msg="StopPodSandbox for \"31a62b309d975cd861719c6af5efffd74a754679330349f57644795939a0f1be\"" Jan 29 11:15:51.514474 containerd[1464]: time="2025-01-29T11:15:51.514317392Z" level=info msg="TearDown network for sandbox \"31a62b309d975cd861719c6af5efffd74a754679330349f57644795939a0f1be\" successfully" Jan 29 11:15:51.514474 containerd[1464]: time="2025-01-29T11:15:51.514345345Z" level=info msg="StopPodSandbox for \"31a62b309d975cd861719c6af5efffd74a754679330349f57644795939a0f1be\" returns successfully" Jan 29 11:15:51.514474 containerd[1464]: time="2025-01-29T11:15:51.514231518Z" level=info msg="StopPodSandbox for \"6430a46e1a2783b0e78023458134250679bf710c8048344508227c8042d860fe\"" Jan 29 11:15:51.514641 containerd[1464]: time="2025-01-29T11:15:51.514618616Z" level=info msg="Ensure that sandbox 6430a46e1a2783b0e78023458134250679bf710c8048344508227c8042d860fe in task-service has been cleanup successfully" Jan 29 11:15:51.517796 systemd[1]: run-netns-cni\x2d5fd996c2\x2d94a2\x2d27ff\x2d9f74\x2d837ea45bd9cd.mount: Deactivated successfully. 
Jan 29 11:15:51.519465 containerd[1464]: time="2025-01-29T11:15:51.519302712Z" level=info msg="TearDown network for sandbox \"6430a46e1a2783b0e78023458134250679bf710c8048344508227c8042d860fe\" successfully" Jan 29 11:15:51.519465 containerd[1464]: time="2025-01-29T11:15:51.519339966Z" level=info msg="StopPodSandbox for \"6430a46e1a2783b0e78023458134250679bf710c8048344508227c8042d860fe\" returns successfully" Jan 29 11:15:51.522484 containerd[1464]: time="2025-01-29T11:15:51.519962759Z" level=info msg="StopPodSandbox for \"2b7347c504ff34aee22bb696c3e04408c3950a75ae649d5c113683ae0058296a\"" Jan 29 11:15:51.522484 containerd[1464]: time="2025-01-29T11:15:51.520074234Z" level=info msg="TearDown network for sandbox \"2b7347c504ff34aee22bb696c3e04408c3950a75ae649d5c113683ae0058296a\" successfully" Jan 29 11:15:51.522484 containerd[1464]: time="2025-01-29T11:15:51.520087806Z" level=info msg="StopPodSandbox for \"2b7347c504ff34aee22bb696c3e04408c3950a75ae649d5c113683ae0058296a\" returns successfully" Jan 29 11:15:51.522484 containerd[1464]: time="2025-01-29T11:15:51.521854343Z" level=info msg="StopPodSandbox for \"3708a282a0232a58354b9287a4b6bfb4e7aa79b277857ad6b90c8bea3f8114a7\"" Jan 29 11:15:51.522484 containerd[1464]: time="2025-01-29T11:15:51.521959122Z" level=info msg="TearDown network for sandbox \"3708a282a0232a58354b9287a4b6bfb4e7aa79b277857ad6b90c8bea3f8114a7\" successfully" Jan 29 11:15:51.522484 containerd[1464]: time="2025-01-29T11:15:51.521970719Z" level=info msg="StopPodSandbox for \"3708a282a0232a58354b9287a4b6bfb4e7aa79b277857ad6b90c8bea3f8114a7\" returns successfully" Jan 29 11:15:51.522484 containerd[1464]: time="2025-01-29T11:15:51.522042083Z" level=info msg="StopPodSandbox for \"1bd38506e406807638a6e4800c9f1c66a764bba3c9ae2fd51e672243ad5f74dd\"" Jan 29 11:15:51.522484 containerd[1464]: time="2025-01-29T11:15:51.522119580Z" level=info msg="TearDown network for sandbox \"1bd38506e406807638a6e4800c9f1c66a764bba3c9ae2fd51e672243ad5f74dd\" successfully" Jan 29 11:15:51.522484 containerd[1464]: time="2025-01-29T11:15:51.522132186Z" level=info msg="StopPodSandbox for \"1bd38506e406807638a6e4800c9f1c66a764bba3c9ae2fd51e672243ad5f74dd\" returns successfully" Jan 29 11:15:51.524475 containerd[1464]: time="2025-01-29T11:15:51.524193809Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-65c99dc6fb-xsr5m,Uid:0c2b41a2-08c4-4b60-9a67-d75bd55f7076,Namespace:calico-system,Attempt:2,}" Jan 29 11:15:51.526306 containerd[1464]: time="2025-01-29T11:15:51.526133135Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-hgrqg,Uid:32d19bf8-e39c-40fe-82db-979ee14e9bd5,Namespace:default,Attempt:6,}" Jan 29 11:15:51.789667 containerd[1464]: time="2025-01-29T11:15:51.789405217Z" level=error msg="Failed to destroy network for sandbox \"d0e32342c90f7ff6ebb24da9d9acb23fb07faf1cac09ad99713dfceff4239283\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:15:51.792446 containerd[1464]: time="2025-01-29T11:15:51.792395291Z" level=error msg="encountered an error cleaning up failed sandbox \"d0e32342c90f7ff6ebb24da9d9acb23fb07faf1cac09ad99713dfceff4239283\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 
11:15:51.795919 containerd[1464]: time="2025-01-29T11:15:51.795842215Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-65c99dc6fb-xsr5m,Uid:0c2b41a2-08c4-4b60-9a67-d75bd55f7076,Namespace:calico-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"d0e32342c90f7ff6ebb24da9d9acb23fb07faf1cac09ad99713dfceff4239283\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:15:51.796146 containerd[1464]: time="2025-01-29T11:15:51.793933912Z" level=error msg="Failed to destroy network for sandbox \"4c36c44beb11662fda744c07a063bfa58d15a876a89663cab410190e8e7f300a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:15:51.796426 containerd[1464]: time="2025-01-29T11:15:51.796366444Z" level=error msg="encountered an error cleaning up failed sandbox \"4c36c44beb11662fda744c07a063bfa58d15a876a89663cab410190e8e7f300a\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:15:51.797665 containerd[1464]: time="2025-01-29T11:15:51.796446233Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-dx4jg,Uid:17de0467-9566-43ce-a406-ef6b976cb6c5,Namespace:calico-system,Attempt:7,} failed, error" error="failed to setup network for sandbox \"4c36c44beb11662fda744c07a063bfa58d15a876a89663cab410190e8e7f300a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:15:51.797880 kubelet[1802]: E0129 11:15:51.796728 1802 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d0e32342c90f7ff6ebb24da9d9acb23fb07faf1cac09ad99713dfceff4239283\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:15:51.797880 kubelet[1802]: E0129 11:15:51.796823 1802 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d0e32342c90f7ff6ebb24da9d9acb23fb07faf1cac09ad99713dfceff4239283\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-65c99dc6fb-xsr5m" Jan 29 11:15:51.797880 kubelet[1802]: E0129 11:15:51.796919 1802 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4c36c44beb11662fda744c07a063bfa58d15a876a89663cab410190e8e7f300a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:15:51.797880 kubelet[1802]: E0129 11:15:51.796978 1802 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"4c36c44beb11662fda744c07a063bfa58d15a876a89663cab410190e8e7f300a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-dx4jg" Jan 29 11:15:51.798236 kubelet[1802]: E0129 11:15:51.797005 1802 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4c36c44beb11662fda744c07a063bfa58d15a876a89663cab410190e8e7f300a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-dx4jg" Jan 29 11:15:51.798236 kubelet[1802]: E0129 11:15:51.797059 1802 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-dx4jg_calico-system(17de0467-9566-43ce-a406-ef6b976cb6c5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-dx4jg_calico-system(17de0467-9566-43ce-a406-ef6b976cb6c5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4c36c44beb11662fda744c07a063bfa58d15a876a89663cab410190e8e7f300a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-dx4jg" podUID="17de0467-9566-43ce-a406-ef6b976cb6c5" Jan 29 11:15:51.798236 kubelet[1802]: E0129 11:15:51.797766 1802 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d0e32342c90f7ff6ebb24da9d9acb23fb07faf1cac09ad99713dfceff4239283\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-65c99dc6fb-xsr5m" Jan 29 11:15:51.798455 kubelet[1802]: E0129 11:15:51.797847 1802 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-65c99dc6fb-xsr5m_calico-system(0c2b41a2-08c4-4b60-9a67-d75bd55f7076)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-65c99dc6fb-xsr5m_calico-system(0c2b41a2-08c4-4b60-9a67-d75bd55f7076)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d0e32342c90f7ff6ebb24da9d9acb23fb07faf1cac09ad99713dfceff4239283\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-65c99dc6fb-xsr5m" podUID="0c2b41a2-08c4-4b60-9a67-d75bd55f7076" Jan 29 11:15:51.822385 containerd[1464]: time="2025-01-29T11:15:51.822132719Z" level=error msg="Failed to destroy network for sandbox \"3cc97b2aa257a29f8f434726f4d077e1d32bb1bb205e7ba773b3ba7bd1f3c9f1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:15:51.823352 containerd[1464]: time="2025-01-29T11:15:51.823010359Z" level=error msg="encountered an error cleaning up failed sandbox \"3cc97b2aa257a29f8f434726f4d077e1d32bb1bb205e7ba773b3ba7bd1f3c9f1\", marking 
sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:15:51.823352 containerd[1464]: time="2025-01-29T11:15:51.823107069Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-hgrqg,Uid:32d19bf8-e39c-40fe-82db-979ee14e9bd5,Namespace:default,Attempt:6,} failed, error" error="failed to setup network for sandbox \"3cc97b2aa257a29f8f434726f4d077e1d32bb1bb205e7ba773b3ba7bd1f3c9f1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:15:51.823585 kubelet[1802]: E0129 11:15:51.823388 1802 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3cc97b2aa257a29f8f434726f4d077e1d32bb1bb205e7ba773b3ba7bd1f3c9f1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 29 11:15:51.823960 kubelet[1802]: E0129 11:15:51.823661 1802 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3cc97b2aa257a29f8f434726f4d077e1d32bb1bb205e7ba773b3ba7bd1f3c9f1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-8587fbcb89-hgrqg" Jan 29 11:15:51.824065 kubelet[1802]: E0129 11:15:51.823993 1802 kuberuntime_manager.go:1168] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3cc97b2aa257a29f8f434726f4d077e1d32bb1bb205e7ba773b3ba7bd1f3c9f1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-8587fbcb89-hgrqg" Jan 29 11:15:51.824769 kubelet[1802]: E0129 11:15:51.824164 1802 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-8587fbcb89-hgrqg_default(32d19bf8-e39c-40fe-82db-979ee14e9bd5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-8587fbcb89-hgrqg_default(32d19bf8-e39c-40fe-82db-979ee14e9bd5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3cc97b2aa257a29f8f434726f4d077e1d32bb1bb205e7ba773b3ba7bd1f3c9f1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-8587fbcb89-hgrqg" podUID="32d19bf8-e39c-40fe-82db-979ee14e9bd5" Jan 29 11:15:52.011765 containerd[1464]: time="2025-01-29T11:15:52.011457380Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:15:52.013985 containerd[1464]: time="2025-01-29T11:15:52.013648035Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.1: active requests=0, bytes read=142742010" Jan 29 11:15:52.015741 containerd[1464]: time="2025-01-29T11:15:52.015522087Z" level=info msg="ImageCreate event 
name:\"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:15:52.020259 containerd[1464]: time="2025-01-29T11:15:52.020192944Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:15:52.021507 containerd[1464]: time="2025-01-29T11:15:52.021132589Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.1\" with image id \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\", size \"142741872\" in 8.677211396s" Jan 29 11:15:52.021507 containerd[1464]: time="2025-01-29T11:15:52.021186333Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\" returns image reference \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\"" Jan 29 11:15:52.024002 containerd[1464]: time="2025-01-29T11:15:52.023751755Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\"" Jan 29 11:15:52.033929 containerd[1464]: time="2025-01-29T11:15:52.033878190Z" level=info msg="CreateContainer within sandbox \"4cc322cd7270c80953a1acf59e6052831f87eaffea7477eb67cea5bc2b8c9a71\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jan 29 11:15:52.064825 containerd[1464]: time="2025-01-29T11:15:52.064746463Z" level=info msg="CreateContainer within sandbox \"4cc322cd7270c80953a1acf59e6052831f87eaffea7477eb67cea5bc2b8c9a71\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"f605a409c3baf2339d89156c4fc8e1f1ba179aab267686b7998abdb10d5cde79\"" Jan 29 11:15:52.066327 containerd[1464]: time="2025-01-29T11:15:52.066090732Z" level=info msg="StartContainer for \"f605a409c3baf2339d89156c4fc8e1f1ba179aab267686b7998abdb10d5cde79\"" Jan 29 11:15:52.102353 kubelet[1802]: E0129 11:15:52.102259 1802 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:15:52.192167 systemd[1]: Started cri-containerd-f605a409c3baf2339d89156c4fc8e1f1ba179aab267686b7998abdb10d5cde79.scope - libcontainer container f605a409c3baf2339d89156c4fc8e1f1ba179aab267686b7998abdb10d5cde79. Jan 29 11:15:52.254269 containerd[1464]: time="2025-01-29T11:15:52.253469771Z" level=info msg="StartContainer for \"f605a409c3baf2339d89156c4fc8e1f1ba179aab267686b7998abdb10d5cde79\" returns successfully" Jan 29 11:15:52.351758 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jan 29 11:15:52.352044 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Jan 29 11:15:52.365858 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-4c36c44beb11662fda744c07a063bfa58d15a876a89663cab410190e8e7f300a-shm.mount: Deactivated successfully. Jan 29 11:15:52.366281 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3565577424.mount: Deactivated successfully. 
Jan 29 11:15:52.523789 kubelet[1802]: I0129 11:15:52.523677 1802 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4c36c44beb11662fda744c07a063bfa58d15a876a89663cab410190e8e7f300a" Jan 29 11:15:52.525681 containerd[1464]: time="2025-01-29T11:15:52.525421571Z" level=info msg="StopPodSandbox for \"4c36c44beb11662fda744c07a063bfa58d15a876a89663cab410190e8e7f300a\"" Jan 29 11:15:52.526402 containerd[1464]: time="2025-01-29T11:15:52.525703553Z" level=info msg="Ensure that sandbox 4c36c44beb11662fda744c07a063bfa58d15a876a89663cab410190e8e7f300a in task-service has been cleanup successfully" Jan 29 11:15:52.528135 containerd[1464]: time="2025-01-29T11:15:52.528006531Z" level=info msg="TearDown network for sandbox \"4c36c44beb11662fda744c07a063bfa58d15a876a89663cab410190e8e7f300a\" successfully" Jan 29 11:15:52.528135 containerd[1464]: time="2025-01-29T11:15:52.528047937Z" level=info msg="StopPodSandbox for \"4c36c44beb11662fda744c07a063bfa58d15a876a89663cab410190e8e7f300a\" returns successfully" Jan 29 11:15:52.529430 containerd[1464]: time="2025-01-29T11:15:52.528691754Z" level=info msg="StopPodSandbox for \"8dcd1f169270dbe7baa4197057251cacc78bbcaccb42a480aae8ccadd268b7ef\"" Jan 29 11:15:52.529430 containerd[1464]: time="2025-01-29T11:15:52.528855283Z" level=info msg="TearDown network for sandbox \"8dcd1f169270dbe7baa4197057251cacc78bbcaccb42a480aae8ccadd268b7ef\" successfully" Jan 29 11:15:52.529430 containerd[1464]: time="2025-01-29T11:15:52.528877236Z" level=info msg="StopPodSandbox for \"8dcd1f169270dbe7baa4197057251cacc78bbcaccb42a480aae8ccadd268b7ef\" returns successfully" Jan 29 11:15:52.532389 containerd[1464]: time="2025-01-29T11:15:52.531858823Z" level=info msg="StopPodSandbox for \"9ac6324a6d926a0b4b2520382f0bd13e24771692ee25cc86f1447b278916565e\"" Jan 29 11:15:52.533136 containerd[1464]: time="2025-01-29T11:15:52.532848746Z" level=info msg="TearDown network for sandbox \"9ac6324a6d926a0b4b2520382f0bd13e24771692ee25cc86f1447b278916565e\" successfully" Jan 29 11:15:52.533136 containerd[1464]: time="2025-01-29T11:15:52.532998280Z" level=info msg="StopPodSandbox for \"9ac6324a6d926a0b4b2520382f0bd13e24771692ee25cc86f1447b278916565e\" returns successfully" Jan 29 11:15:52.533554 systemd[1]: run-netns-cni\x2de2093e01\x2d0bee\x2d3331\x2dcb14\x2db6a80dce9cc8.mount: Deactivated successfully. 
Jan 29 11:15:52.537036 containerd[1464]: time="2025-01-29T11:15:52.536533766Z" level=info msg="StopPodSandbox for \"2f6cbe49924bea5d1c7307438c648140c766d9f75d565f4c776302d4612397e4\"" Jan 29 11:15:52.537036 containerd[1464]: time="2025-01-29T11:15:52.536883134Z" level=info msg="TearDown network for sandbox \"2f6cbe49924bea5d1c7307438c648140c766d9f75d565f4c776302d4612397e4\" successfully" Jan 29 11:15:52.537036 containerd[1464]: time="2025-01-29T11:15:52.536911977Z" level=info msg="StopPodSandbox for \"2f6cbe49924bea5d1c7307438c648140c766d9f75d565f4c776302d4612397e4\" returns successfully" Jan 29 11:15:52.538109 containerd[1464]: time="2025-01-29T11:15:52.538077146Z" level=info msg="StopPodSandbox for \"2c1334128a0d27d07bf18056a7b43695cc28de03a2ad9d841b11b977ef628510\"" Jan 29 11:15:52.538297 containerd[1464]: time="2025-01-29T11:15:52.538252170Z" level=info msg="TearDown network for sandbox \"2c1334128a0d27d07bf18056a7b43695cc28de03a2ad9d841b11b977ef628510\" successfully" Jan 29 11:15:52.538371 containerd[1464]: time="2025-01-29T11:15:52.538279348Z" level=info msg="StopPodSandbox for \"2c1334128a0d27d07bf18056a7b43695cc28de03a2ad9d841b11b977ef628510\" returns successfully" Jan 29 11:15:52.539821 containerd[1464]: time="2025-01-29T11:15:52.539248682Z" level=info msg="StopPodSandbox for \"c75ace98faa2013de60346c00859f04d782d70d98d51e0702c5b6c8189e6402a\"" Jan 29 11:15:52.539821 containerd[1464]: time="2025-01-29T11:15:52.539371477Z" level=info msg="TearDown network for sandbox \"c75ace98faa2013de60346c00859f04d782d70d98d51e0702c5b6c8189e6402a\" successfully" Jan 29 11:15:52.539821 containerd[1464]: time="2025-01-29T11:15:52.539388634Z" level=info msg="StopPodSandbox for \"c75ace98faa2013de60346c00859f04d782d70d98d51e0702c5b6c8189e6402a\" returns successfully" Jan 29 11:15:52.540043 kubelet[1802]: I0129 11:15:52.539556 1802 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3cc97b2aa257a29f8f434726f4d077e1d32bb1bb205e7ba773b3ba7bd1f3c9f1" Jan 29 11:15:52.542741 containerd[1464]: time="2025-01-29T11:15:52.541149340Z" level=info msg="StopPodSandbox for \"3cc97b2aa257a29f8f434726f4d077e1d32bb1bb205e7ba773b3ba7bd1f3c9f1\"" Jan 29 11:15:52.542741 containerd[1464]: time="2025-01-29T11:15:52.541191073Z" level=info msg="StopPodSandbox for \"021b32372830a8acef9731288e439f32f3a2446b86ea886ee3197c29b4574b6b\"" Jan 29 11:15:52.542741 containerd[1464]: time="2025-01-29T11:15:52.541342451Z" level=info msg="TearDown network for sandbox \"021b32372830a8acef9731288e439f32f3a2446b86ea886ee3197c29b4574b6b\" successfully" Jan 29 11:15:52.542741 containerd[1464]: time="2025-01-29T11:15:52.541359127Z" level=info msg="StopPodSandbox for \"021b32372830a8acef9731288e439f32f3a2446b86ea886ee3197c29b4574b6b\" returns successfully" Jan 29 11:15:52.542741 containerd[1464]: time="2025-01-29T11:15:52.541522489Z" level=info msg="Ensure that sandbox 3cc97b2aa257a29f8f434726f4d077e1d32bb1bb205e7ba773b3ba7bd1f3c9f1 in task-service has been cleanup successfully" Jan 29 11:15:52.542741 containerd[1464]: time="2025-01-29T11:15:52.542012769Z" level=info msg="StopPodSandbox for \"7a7869065ef017da3ba67d92cbf3b6687aba6cd84f0e74ead54d96570f46440c\"" Jan 29 11:15:52.542741 containerd[1464]: time="2025-01-29T11:15:52.542084986Z" level=info msg="TearDown network for sandbox \"7a7869065ef017da3ba67d92cbf3b6687aba6cd84f0e74ead54d96570f46440c\" successfully" Jan 29 11:15:52.542741 containerd[1464]: time="2025-01-29T11:15:52.542095036Z" level=info msg="StopPodSandbox for 
\"7a7869065ef017da3ba67d92cbf3b6687aba6cd84f0e74ead54d96570f46440c\" returns successfully" Jan 29 11:15:52.543173 containerd[1464]: time="2025-01-29T11:15:52.542780704Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-dx4jg,Uid:17de0467-9566-43ce-a406-ef6b976cb6c5,Namespace:calico-system,Attempt:8,}" Jan 29 11:15:52.545753 containerd[1464]: time="2025-01-29T11:15:52.543388580Z" level=info msg="TearDown network for sandbox \"3cc97b2aa257a29f8f434726f4d077e1d32bb1bb205e7ba773b3ba7bd1f3c9f1\" successfully" Jan 29 11:15:52.545753 containerd[1464]: time="2025-01-29T11:15:52.543423033Z" level=info msg="StopPodSandbox for \"3cc97b2aa257a29f8f434726f4d077e1d32bb1bb205e7ba773b3ba7bd1f3c9f1\" returns successfully" Jan 29 11:15:52.547741 containerd[1464]: time="2025-01-29T11:15:52.546045317Z" level=info msg="StopPodSandbox for \"71f64eb10a7b257869d96577c945d3493c8d2f5f223e7f1adc9e9dda65f3d004\"" Jan 29 11:15:52.547741 containerd[1464]: time="2025-01-29T11:15:52.546164101Z" level=info msg="TearDown network for sandbox \"71f64eb10a7b257869d96577c945d3493c8d2f5f223e7f1adc9e9dda65f3d004\" successfully" Jan 29 11:15:52.547741 containerd[1464]: time="2025-01-29T11:15:52.546179336Z" level=info msg="StopPodSandbox for \"71f64eb10a7b257869d96577c945d3493c8d2f5f223e7f1adc9e9dda65f3d004\" returns successfully" Jan 29 11:15:52.548441 containerd[1464]: time="2025-01-29T11:15:52.548393756Z" level=info msg="StopPodSandbox for \"8f1b0b3a61da9da16eef82e4162e4a150070e879ab8337db7e4885de85ef7c25\"" Jan 29 11:15:52.548546 containerd[1464]: time="2025-01-29T11:15:52.548522279Z" level=info msg="TearDown network for sandbox \"8f1b0b3a61da9da16eef82e4162e4a150070e879ab8337db7e4885de85ef7c25\" successfully" Jan 29 11:15:52.548614 containerd[1464]: time="2025-01-29T11:15:52.548543938Z" level=info msg="StopPodSandbox for \"8f1b0b3a61da9da16eef82e4162e4a150070e879ab8337db7e4885de85ef7c25\" returns successfully" Jan 29 11:15:52.549028 kubelet[1802]: I0129 11:15:52.548974 1802 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d0e32342c90f7ff6ebb24da9d9acb23fb07faf1cac09ad99713dfceff4239283" Jan 29 11:15:52.550697 systemd[1]: run-netns-cni\x2d0bdb939a\x2dc771\x2dfaa5\x2dbf60\x2d92a1b44f71a7.mount: Deactivated successfully. 
Jan 29 11:15:52.554103 containerd[1464]: time="2025-01-29T11:15:52.551101673Z" level=info msg="StopPodSandbox for \"d0e32342c90f7ff6ebb24da9d9acb23fb07faf1cac09ad99713dfceff4239283\"" Jan 29 11:15:52.554103 containerd[1464]: time="2025-01-29T11:15:52.551401773Z" level=info msg="Ensure that sandbox d0e32342c90f7ff6ebb24da9d9acb23fb07faf1cac09ad99713dfceff4239283 in task-service has been cleanup successfully" Jan 29 11:15:52.556815 containerd[1464]: time="2025-01-29T11:15:52.556451967Z" level=info msg="StopPodSandbox for \"0709f80bc5966761aedb0dc61b586715c95725377df7dc711e012896d144d93b\"" Jan 29 11:15:52.556815 containerd[1464]: time="2025-01-29T11:15:52.556624016Z" level=info msg="TearDown network for sandbox \"0709f80bc5966761aedb0dc61b586715c95725377df7dc711e012896d144d93b\" successfully" Jan 29 11:15:52.556815 containerd[1464]: time="2025-01-29T11:15:52.556641667Z" level=info msg="StopPodSandbox for \"0709f80bc5966761aedb0dc61b586715c95725377df7dc711e012896d144d93b\" returns successfully" Jan 29 11:15:52.559816 containerd[1464]: time="2025-01-29T11:15:52.558332292Z" level=info msg="StopPodSandbox for \"31a62b309d975cd861719c6af5efffd74a754679330349f57644795939a0f1be\"" Jan 29 11:15:52.559816 containerd[1464]: time="2025-01-29T11:15:52.558489167Z" level=info msg="TearDown network for sandbox \"31a62b309d975cd861719c6af5efffd74a754679330349f57644795939a0f1be\" successfully" Jan 29 11:15:52.559816 containerd[1464]: time="2025-01-29T11:15:52.558503467Z" level=info msg="StopPodSandbox for \"31a62b309d975cd861719c6af5efffd74a754679330349f57644795939a0f1be\" returns successfully" Jan 29 11:15:52.559816 containerd[1464]: time="2025-01-29T11:15:52.559632490Z" level=info msg="StopPodSandbox for \"2b7347c504ff34aee22bb696c3e04408c3950a75ae649d5c113683ae0058296a\"" Jan 29 11:15:52.559816 containerd[1464]: time="2025-01-29T11:15:52.559742901Z" level=info msg="TearDown network for sandbox \"2b7347c504ff34aee22bb696c3e04408c3950a75ae649d5c113683ae0058296a\" successfully" Jan 29 11:15:52.559816 containerd[1464]: time="2025-01-29T11:15:52.559754162Z" level=info msg="StopPodSandbox for \"2b7347c504ff34aee22bb696c3e04408c3950a75ae649d5c113683ae0058296a\" returns successfully" Jan 29 11:15:52.561295 containerd[1464]: time="2025-01-29T11:15:52.560435112Z" level=info msg="StopPodSandbox for \"1bd38506e406807638a6e4800c9f1c66a764bba3c9ae2fd51e672243ad5f74dd\"" Jan 29 11:15:52.561295 containerd[1464]: time="2025-01-29T11:15:52.560779387Z" level=info msg="TearDown network for sandbox \"d0e32342c90f7ff6ebb24da9d9acb23fb07faf1cac09ad99713dfceff4239283\" successfully" Jan 29 11:15:52.561295 containerd[1464]: time="2025-01-29T11:15:52.560803939Z" level=info msg="StopPodSandbox for \"d0e32342c90f7ff6ebb24da9d9acb23fb07faf1cac09ad99713dfceff4239283\" returns successfully" Jan 29 11:15:52.561295 containerd[1464]: time="2025-01-29T11:15:52.560845114Z" level=info msg="TearDown network for sandbox \"1bd38506e406807638a6e4800c9f1c66a764bba3c9ae2fd51e672243ad5f74dd\" successfully" Jan 29 11:15:52.561295 containerd[1464]: time="2025-01-29T11:15:52.560864997Z" level=info msg="StopPodSandbox for \"1bd38506e406807638a6e4800c9f1c66a764bba3c9ae2fd51e672243ad5f74dd\" returns successfully" Jan 29 11:15:52.564561 systemd[1]: run-netns-cni\x2defce75eb\x2d2d0f\x2d4ef1\x2dafed\x2dc1acd8543bf5.mount: Deactivated successfully. 
Jan 29 11:15:52.567973 containerd[1464]: time="2025-01-29T11:15:52.565145810Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-hgrqg,Uid:32d19bf8-e39c-40fe-82db-979ee14e9bd5,Namespace:default,Attempt:7,}" Jan 29 11:15:52.567973 containerd[1464]: time="2025-01-29T11:15:52.565593786Z" level=info msg="StopPodSandbox for \"6430a46e1a2783b0e78023458134250679bf710c8048344508227c8042d860fe\"" Jan 29 11:15:52.567973 containerd[1464]: time="2025-01-29T11:15:52.565728360Z" level=info msg="TearDown network for sandbox \"6430a46e1a2783b0e78023458134250679bf710c8048344508227c8042d860fe\" successfully" Jan 29 11:15:52.567973 containerd[1464]: time="2025-01-29T11:15:52.565747548Z" level=info msg="StopPodSandbox for \"6430a46e1a2783b0e78023458134250679bf710c8048344508227c8042d860fe\" returns successfully" Jan 29 11:15:52.577745 containerd[1464]: time="2025-01-29T11:15:52.576590041Z" level=info msg="StopPodSandbox for \"3708a282a0232a58354b9287a4b6bfb4e7aa79b277857ad6b90c8bea3f8114a7\"" Jan 29 11:15:52.578193 containerd[1464]: time="2025-01-29T11:15:52.578133332Z" level=info msg="TearDown network for sandbox \"3708a282a0232a58354b9287a4b6bfb4e7aa79b277857ad6b90c8bea3f8114a7\" successfully" Jan 29 11:15:52.578456 containerd[1464]: time="2025-01-29T11:15:52.578428607Z" level=info msg="StopPodSandbox for \"3708a282a0232a58354b9287a4b6bfb4e7aa79b277857ad6b90c8bea3f8114a7\" returns successfully" Jan 29 11:15:52.580767 containerd[1464]: time="2025-01-29T11:15:52.579432541Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-65c99dc6fb-xsr5m,Uid:0c2b41a2-08c4-4b60-9a67-d75bd55f7076,Namespace:calico-system,Attempt:3,}" Jan 29 11:15:52.635362 kubelet[1802]: I0129 11:15:52.635154 1802 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-zppq6" podStartSLOduration=4.20280784 podStartE2EDuration="25.635136074s" podCreationTimestamp="2025-01-29 11:15:27 +0000 UTC" firstStartedPulling="2025-01-29 11:15:30.016473591 +0000 UTC m=+4.048814544" lastFinishedPulling="2025-01-29 11:15:52.02280649 +0000 UTC m=+25.481142778" observedRunningTime="2025-01-29 11:15:52.631631751 +0000 UTC m=+26.089968060" watchObservedRunningTime="2025-01-29 11:15:52.635136074 +0000 UTC m=+26.093472383" Jan 29 11:15:52.879051 containerd[1464]: time="2025-01-29T11:15:52.878996308Z" level=info msg="StopContainer for \"f605a409c3baf2339d89156c4fc8e1f1ba179aab267686b7998abdb10d5cde79\" with timeout 5 (s)" Jan 29 11:15:52.881145 containerd[1464]: time="2025-01-29T11:15:52.880875343Z" level=info msg="Stop container \"f605a409c3baf2339d89156c4fc8e1f1ba179aab267686b7998abdb10d5cde79\" with signal terminated" Jan 29 11:15:53.102530 kubelet[1802]: E0129 11:15:53.102456 1802 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:15:53.132270 systemd-networkd[1376]: cali18889e5a753: Link UP Jan 29 11:15:53.132638 systemd-networkd[1376]: cali18889e5a753: Gained carrier Jan 29 11:15:53.164629 containerd[1464]: 2025-01-29 11:15:52.684 [INFO][2978] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 29 11:15:53.164629 containerd[1464]: 2025-01-29 11:15:52.809 [INFO][2978] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {143.198.151.197-k8s-csi--node--driver--dx4jg-eth0 csi-node-driver- calico-system 17de0467-9566-43ce-a406-ef6b976cb6c5 1045 0 2025-01-29 11:15:27 +0000 UTC 
map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:56747c9949 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s 143.198.151.197 csi-node-driver-dx4jg eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali18889e5a753 [] []}} ContainerID="afdb88dc71461429ad73dadb70900f7fe6bf1d11e300aae85fa9f91cf6db29ff" Namespace="calico-system" Pod="csi-node-driver-dx4jg" WorkloadEndpoint="143.198.151.197-k8s-csi--node--driver--dx4jg-" Jan 29 11:15:53.164629 containerd[1464]: 2025-01-29 11:15:52.809 [INFO][2978] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="afdb88dc71461429ad73dadb70900f7fe6bf1d11e300aae85fa9f91cf6db29ff" Namespace="calico-system" Pod="csi-node-driver-dx4jg" WorkloadEndpoint="143.198.151.197-k8s-csi--node--driver--dx4jg-eth0" Jan 29 11:15:53.164629 containerd[1464]: 2025-01-29 11:15:52.951 [INFO][3060] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="afdb88dc71461429ad73dadb70900f7fe6bf1d11e300aae85fa9f91cf6db29ff" HandleID="k8s-pod-network.afdb88dc71461429ad73dadb70900f7fe6bf1d11e300aae85fa9f91cf6db29ff" Workload="143.198.151.197-k8s-csi--node--driver--dx4jg-eth0" Jan 29 11:15:53.164629 containerd[1464]: 2025-01-29 11:15:52.979 [INFO][3060] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="afdb88dc71461429ad73dadb70900f7fe6bf1d11e300aae85fa9f91cf6db29ff" HandleID="k8s-pod-network.afdb88dc71461429ad73dadb70900f7fe6bf1d11e300aae85fa9f91cf6db29ff" Workload="143.198.151.197-k8s-csi--node--driver--dx4jg-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000250ac0), Attrs:map[string]string{"namespace":"calico-system", "node":"143.198.151.197", "pod":"csi-node-driver-dx4jg", "timestamp":"2025-01-29 11:15:52.951423 +0000 UTC"}, Hostname:"143.198.151.197", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 29 11:15:53.164629 containerd[1464]: 2025-01-29 11:15:52.980 [INFO][3060] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 11:15:53.164629 containerd[1464]: 2025-01-29 11:15:52.980 [INFO][3060] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 29 11:15:53.164629 containerd[1464]: 2025-01-29 11:15:52.980 [INFO][3060] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '143.198.151.197' Jan 29 11:15:53.164629 containerd[1464]: 2025-01-29 11:15:52.998 [INFO][3060] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.afdb88dc71461429ad73dadb70900f7fe6bf1d11e300aae85fa9f91cf6db29ff" host="143.198.151.197" Jan 29 11:15:53.164629 containerd[1464]: 2025-01-29 11:15:53.070 [INFO][3060] ipam/ipam.go 372: Looking up existing affinities for host host="143.198.151.197" Jan 29 11:15:53.164629 containerd[1464]: 2025-01-29 11:15:53.081 [INFO][3060] ipam/ipam.go 489: Trying affinity for 192.168.42.64/26 host="143.198.151.197" Jan 29 11:15:53.164629 containerd[1464]: 2025-01-29 11:15:53.085 [INFO][3060] ipam/ipam.go 155: Attempting to load block cidr=192.168.42.64/26 host="143.198.151.197" Jan 29 11:15:53.164629 containerd[1464]: 2025-01-29 11:15:53.090 [INFO][3060] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.42.64/26 host="143.198.151.197" Jan 29 11:15:53.164629 containerd[1464]: 2025-01-29 11:15:53.091 [INFO][3060] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.42.64/26 handle="k8s-pod-network.afdb88dc71461429ad73dadb70900f7fe6bf1d11e300aae85fa9f91cf6db29ff" host="143.198.151.197" Jan 29 11:15:53.164629 containerd[1464]: 2025-01-29 11:15:53.094 [INFO][3060] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.afdb88dc71461429ad73dadb70900f7fe6bf1d11e300aae85fa9f91cf6db29ff Jan 29 11:15:53.164629 containerd[1464]: 2025-01-29 11:15:53.103 [INFO][3060] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.42.64/26 handle="k8s-pod-network.afdb88dc71461429ad73dadb70900f7fe6bf1d11e300aae85fa9f91cf6db29ff" host="143.198.151.197" Jan 29 11:15:53.164629 containerd[1464]: 2025-01-29 11:15:53.114 [INFO][3060] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.42.65/26] block=192.168.42.64/26 handle="k8s-pod-network.afdb88dc71461429ad73dadb70900f7fe6bf1d11e300aae85fa9f91cf6db29ff" host="143.198.151.197" Jan 29 11:15:53.164629 containerd[1464]: 2025-01-29 11:15:53.114 [INFO][3060] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.42.65/26] handle="k8s-pod-network.afdb88dc71461429ad73dadb70900f7fe6bf1d11e300aae85fa9f91cf6db29ff" host="143.198.151.197" Jan 29 11:15:53.164629 containerd[1464]: 2025-01-29 11:15:53.114 [INFO][3060] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jan 29 11:15:53.164629 containerd[1464]: 2025-01-29 11:15:53.114 [INFO][3060] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.42.65/26] IPv6=[] ContainerID="afdb88dc71461429ad73dadb70900f7fe6bf1d11e300aae85fa9f91cf6db29ff" HandleID="k8s-pod-network.afdb88dc71461429ad73dadb70900f7fe6bf1d11e300aae85fa9f91cf6db29ff" Workload="143.198.151.197-k8s-csi--node--driver--dx4jg-eth0" Jan 29 11:15:53.169083 containerd[1464]: 2025-01-29 11:15:53.119 [INFO][2978] cni-plugin/k8s.go 386: Populated endpoint ContainerID="afdb88dc71461429ad73dadb70900f7fe6bf1d11e300aae85fa9f91cf6db29ff" Namespace="calico-system" Pod="csi-node-driver-dx4jg" WorkloadEndpoint="143.198.151.197-k8s-csi--node--driver--dx4jg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"143.198.151.197-k8s-csi--node--driver--dx4jg-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"17de0467-9566-43ce-a406-ef6b976cb6c5", ResourceVersion:"1045", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 11, 15, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"56747c9949", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"143.198.151.197", ContainerID:"", Pod:"csi-node-driver-dx4jg", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.42.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali18889e5a753", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 11:15:53.169083 containerd[1464]: 2025-01-29 11:15:53.120 [INFO][2978] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.42.65/32] ContainerID="afdb88dc71461429ad73dadb70900f7fe6bf1d11e300aae85fa9f91cf6db29ff" Namespace="calico-system" Pod="csi-node-driver-dx4jg" WorkloadEndpoint="143.198.151.197-k8s-csi--node--driver--dx4jg-eth0" Jan 29 11:15:53.169083 containerd[1464]: 2025-01-29 11:15:53.120 [INFO][2978] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali18889e5a753 ContainerID="afdb88dc71461429ad73dadb70900f7fe6bf1d11e300aae85fa9f91cf6db29ff" Namespace="calico-system" Pod="csi-node-driver-dx4jg" WorkloadEndpoint="143.198.151.197-k8s-csi--node--driver--dx4jg-eth0" Jan 29 11:15:53.169083 containerd[1464]: 2025-01-29 11:15:53.132 [INFO][2978] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="afdb88dc71461429ad73dadb70900f7fe6bf1d11e300aae85fa9f91cf6db29ff" Namespace="calico-system" Pod="csi-node-driver-dx4jg" WorkloadEndpoint="143.198.151.197-k8s-csi--node--driver--dx4jg-eth0" Jan 29 11:15:53.169083 containerd[1464]: 2025-01-29 11:15:53.133 [INFO][2978] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="afdb88dc71461429ad73dadb70900f7fe6bf1d11e300aae85fa9f91cf6db29ff" Namespace="calico-system" Pod="csi-node-driver-dx4jg" 
WorkloadEndpoint="143.198.151.197-k8s-csi--node--driver--dx4jg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"143.198.151.197-k8s-csi--node--driver--dx4jg-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"17de0467-9566-43ce-a406-ef6b976cb6c5", ResourceVersion:"1045", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 11, 15, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"56747c9949", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"143.198.151.197", ContainerID:"afdb88dc71461429ad73dadb70900f7fe6bf1d11e300aae85fa9f91cf6db29ff", Pod:"csi-node-driver-dx4jg", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.42.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali18889e5a753", MAC:"5a:bd:5f:43:1d:97", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 11:15:53.169083 containerd[1464]: 2025-01-29 11:15:53.162 [INFO][2978] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="afdb88dc71461429ad73dadb70900f7fe6bf1d11e300aae85fa9f91cf6db29ff" Namespace="calico-system" Pod="csi-node-driver-dx4jg" WorkloadEndpoint="143.198.151.197-k8s-csi--node--driver--dx4jg-eth0" Jan 29 11:15:53.212214 containerd[1464]: time="2025-01-29T11:15:53.212070317Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 11:15:53.212454 containerd[1464]: time="2025-01-29T11:15:53.212160988Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 11:15:53.212454 containerd[1464]: time="2025-01-29T11:15:53.212192887Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:15:53.212454 containerd[1464]: time="2025-01-29T11:15:53.212309060Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:15:53.231483 systemd-networkd[1376]: cali807fb810345: Link UP Jan 29 11:15:53.235246 systemd-networkd[1376]: cali807fb810345: Gained carrier Jan 29 11:15:53.248970 systemd[1]: Started cri-containerd-afdb88dc71461429ad73dadb70900f7fe6bf1d11e300aae85fa9f91cf6db29ff.scope - libcontainer container afdb88dc71461429ad73dadb70900f7fe6bf1d11e300aae85fa9f91cf6db29ff. 
Jan 29 11:15:53.260995 containerd[1464]: 2025-01-29 11:15:52.803 [INFO][3015] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 29 11:15:53.260995 containerd[1464]: 2025-01-29 11:15:52.869 [INFO][3015] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {143.198.151.197-k8s-calico--kube--controllers--65c99dc6fb--xsr5m-eth0 calico-kube-controllers-65c99dc6fb- calico-system 0c2b41a2-08c4-4b60-9a67-d75bd55f7076 1272 0 2025-01-29 11:15:49 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:65c99dc6fb projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s 143.198.151.197 calico-kube-controllers-65c99dc6fb-xsr5m eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali807fb810345 [] []}} ContainerID="e76013cf360502e7dc3276451e014adac4ae4ee34e3ed04b3c203ba85b683d6f" Namespace="calico-system" Pod="calico-kube-controllers-65c99dc6fb-xsr5m" WorkloadEndpoint="143.198.151.197-k8s-calico--kube--controllers--65c99dc6fb--xsr5m-" Jan 29 11:15:53.260995 containerd[1464]: 2025-01-29 11:15:52.869 [INFO][3015] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="e76013cf360502e7dc3276451e014adac4ae4ee34e3ed04b3c203ba85b683d6f" Namespace="calico-system" Pod="calico-kube-controllers-65c99dc6fb-xsr5m" WorkloadEndpoint="143.198.151.197-k8s-calico--kube--controllers--65c99dc6fb--xsr5m-eth0" Jan 29 11:15:53.260995 containerd[1464]: 2025-01-29 11:15:53.020 [INFO][3076] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e76013cf360502e7dc3276451e014adac4ae4ee34e3ed04b3c203ba85b683d6f" HandleID="k8s-pod-network.e76013cf360502e7dc3276451e014adac4ae4ee34e3ed04b3c203ba85b683d6f" Workload="143.198.151.197-k8s-calico--kube--controllers--65c99dc6fb--xsr5m-eth0" Jan 29 11:15:53.260995 containerd[1464]: 2025-01-29 11:15:53.079 [INFO][3076] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="e76013cf360502e7dc3276451e014adac4ae4ee34e3ed04b3c203ba85b683d6f" HandleID="k8s-pod-network.e76013cf360502e7dc3276451e014adac4ae4ee34e3ed04b3c203ba85b683d6f" Workload="143.198.151.197-k8s-calico--kube--controllers--65c99dc6fb--xsr5m-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00025d200), Attrs:map[string]string{"namespace":"calico-system", "node":"143.198.151.197", "pod":"calico-kube-controllers-65c99dc6fb-xsr5m", "timestamp":"2025-01-29 11:15:53.020566429 +0000 UTC"}, Hostname:"143.198.151.197", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 29 11:15:53.260995 containerd[1464]: 2025-01-29 11:15:53.079 [INFO][3076] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 11:15:53.260995 containerd[1464]: 2025-01-29 11:15:53.115 [INFO][3076] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 29 11:15:53.260995 containerd[1464]: 2025-01-29 11:15:53.115 [INFO][3076] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '143.198.151.197' Jan 29 11:15:53.260995 containerd[1464]: 2025-01-29 11:15:53.119 [INFO][3076] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.e76013cf360502e7dc3276451e014adac4ae4ee34e3ed04b3c203ba85b683d6f" host="143.198.151.197" Jan 29 11:15:53.260995 containerd[1464]: 2025-01-29 11:15:53.171 [INFO][3076] ipam/ipam.go 372: Looking up existing affinities for host host="143.198.151.197" Jan 29 11:15:53.260995 containerd[1464]: 2025-01-29 11:15:53.182 [INFO][3076] ipam/ipam.go 489: Trying affinity for 192.168.42.64/26 host="143.198.151.197" Jan 29 11:15:53.260995 containerd[1464]: 2025-01-29 11:15:53.188 [INFO][3076] ipam/ipam.go 155: Attempting to load block cidr=192.168.42.64/26 host="143.198.151.197" Jan 29 11:15:53.260995 containerd[1464]: 2025-01-29 11:15:53.194 [INFO][3076] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.42.64/26 host="143.198.151.197" Jan 29 11:15:53.260995 containerd[1464]: 2025-01-29 11:15:53.194 [INFO][3076] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.42.64/26 handle="k8s-pod-network.e76013cf360502e7dc3276451e014adac4ae4ee34e3ed04b3c203ba85b683d6f" host="143.198.151.197" Jan 29 11:15:53.260995 containerd[1464]: 2025-01-29 11:15:53.200 [INFO][3076] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.e76013cf360502e7dc3276451e014adac4ae4ee34e3ed04b3c203ba85b683d6f Jan 29 11:15:53.260995 containerd[1464]: 2025-01-29 11:15:53.210 [INFO][3076] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.42.64/26 handle="k8s-pod-network.e76013cf360502e7dc3276451e014adac4ae4ee34e3ed04b3c203ba85b683d6f" host="143.198.151.197" Jan 29 11:15:53.260995 containerd[1464]: 2025-01-29 11:15:53.220 [INFO][3076] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.42.66/26] block=192.168.42.64/26 handle="k8s-pod-network.e76013cf360502e7dc3276451e014adac4ae4ee34e3ed04b3c203ba85b683d6f" host="143.198.151.197" Jan 29 11:15:53.260995 containerd[1464]: 2025-01-29 11:15:53.221 [INFO][3076] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.42.66/26] handle="k8s-pod-network.e76013cf360502e7dc3276451e014adac4ae4ee34e3ed04b3c203ba85b683d6f" host="143.198.151.197" Jan 29 11:15:53.260995 containerd[1464]: 2025-01-29 11:15:53.221 [INFO][3076] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jan 29 11:15:53.260995 containerd[1464]: 2025-01-29 11:15:53.221 [INFO][3076] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.42.66/26] IPv6=[] ContainerID="e76013cf360502e7dc3276451e014adac4ae4ee34e3ed04b3c203ba85b683d6f" HandleID="k8s-pod-network.e76013cf360502e7dc3276451e014adac4ae4ee34e3ed04b3c203ba85b683d6f" Workload="143.198.151.197-k8s-calico--kube--controllers--65c99dc6fb--xsr5m-eth0" Jan 29 11:15:53.263794 containerd[1464]: 2025-01-29 11:15:53.224 [INFO][3015] cni-plugin/k8s.go 386: Populated endpoint ContainerID="e76013cf360502e7dc3276451e014adac4ae4ee34e3ed04b3c203ba85b683d6f" Namespace="calico-system" Pod="calico-kube-controllers-65c99dc6fb-xsr5m" WorkloadEndpoint="143.198.151.197-k8s-calico--kube--controllers--65c99dc6fb--xsr5m-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"143.198.151.197-k8s-calico--kube--controllers--65c99dc6fb--xsr5m-eth0", GenerateName:"calico-kube-controllers-65c99dc6fb-", Namespace:"calico-system", SelfLink:"", UID:"0c2b41a2-08c4-4b60-9a67-d75bd55f7076", ResourceVersion:"1272", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 11, 15, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"65c99dc6fb", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"143.198.151.197", ContainerID:"", Pod:"calico-kube-controllers-65c99dc6fb-xsr5m", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.42.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali807fb810345", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 11:15:53.263794 containerd[1464]: 2025-01-29 11:15:53.224 [INFO][3015] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.42.66/32] ContainerID="e76013cf360502e7dc3276451e014adac4ae4ee34e3ed04b3c203ba85b683d6f" Namespace="calico-system" Pod="calico-kube-controllers-65c99dc6fb-xsr5m" WorkloadEndpoint="143.198.151.197-k8s-calico--kube--controllers--65c99dc6fb--xsr5m-eth0" Jan 29 11:15:53.263794 containerd[1464]: 2025-01-29 11:15:53.224 [INFO][3015] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali807fb810345 ContainerID="e76013cf360502e7dc3276451e014adac4ae4ee34e3ed04b3c203ba85b683d6f" Namespace="calico-system" Pod="calico-kube-controllers-65c99dc6fb-xsr5m" WorkloadEndpoint="143.198.151.197-k8s-calico--kube--controllers--65c99dc6fb--xsr5m-eth0" Jan 29 11:15:53.263794 containerd[1464]: 2025-01-29 11:15:53.236 [INFO][3015] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e76013cf360502e7dc3276451e014adac4ae4ee34e3ed04b3c203ba85b683d6f" Namespace="calico-system" Pod="calico-kube-controllers-65c99dc6fb-xsr5m" WorkloadEndpoint="143.198.151.197-k8s-calico--kube--controllers--65c99dc6fb--xsr5m-eth0" Jan 29 11:15:53.263794 containerd[1464]: 2025-01-29 11:15:53.237 [INFO][3015] cni-plugin/k8s.go 414: Added 
Mac, interface name, and active container ID to endpoint ContainerID="e76013cf360502e7dc3276451e014adac4ae4ee34e3ed04b3c203ba85b683d6f" Namespace="calico-system" Pod="calico-kube-controllers-65c99dc6fb-xsr5m" WorkloadEndpoint="143.198.151.197-k8s-calico--kube--controllers--65c99dc6fb--xsr5m-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"143.198.151.197-k8s-calico--kube--controllers--65c99dc6fb--xsr5m-eth0", GenerateName:"calico-kube-controllers-65c99dc6fb-", Namespace:"calico-system", SelfLink:"", UID:"0c2b41a2-08c4-4b60-9a67-d75bd55f7076", ResourceVersion:"1272", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 11, 15, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"65c99dc6fb", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"143.198.151.197", ContainerID:"e76013cf360502e7dc3276451e014adac4ae4ee34e3ed04b3c203ba85b683d6f", Pod:"calico-kube-controllers-65c99dc6fb-xsr5m", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.42.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali807fb810345", MAC:"4e:28:a9:f7:5d:a8", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 11:15:53.263794 containerd[1464]: 2025-01-29 11:15:53.254 [INFO][3015] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="e76013cf360502e7dc3276451e014adac4ae4ee34e3ed04b3c203ba85b683d6f" Namespace="calico-system" Pod="calico-kube-controllers-65c99dc6fb-xsr5m" WorkloadEndpoint="143.198.151.197-k8s-calico--kube--controllers--65c99dc6fb--xsr5m-eth0" Jan 29 11:15:53.312333 containerd[1464]: time="2025-01-29T11:15:53.312185225Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-dx4jg,Uid:17de0467-9566-43ce-a406-ef6b976cb6c5,Namespace:calico-system,Attempt:8,} returns sandbox id \"afdb88dc71461429ad73dadb70900f7fe6bf1d11e300aae85fa9f91cf6db29ff\"" Jan 29 11:15:53.323068 containerd[1464]: time="2025-01-29T11:15:53.322533372Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 11:15:53.323068 containerd[1464]: time="2025-01-29T11:15:53.322791073Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 11:15:53.323068 containerd[1464]: time="2025-01-29T11:15:53.322831741Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:15:53.324214 containerd[1464]: time="2025-01-29T11:15:53.324147262Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:15:53.345877 systemd-networkd[1376]: calic74e4046e20: Link UP Jan 29 11:15:53.347018 systemd-networkd[1376]: calic74e4046e20: Gained carrier Jan 29 11:15:53.362097 systemd[1]: Started cri-containerd-e76013cf360502e7dc3276451e014adac4ae4ee34e3ed04b3c203ba85b683d6f.scope - libcontainer container e76013cf360502e7dc3276451e014adac4ae4ee34e3ed04b3c203ba85b683d6f. Jan 29 11:15:53.384474 containerd[1464]: 2025-01-29 11:15:52.833 [INFO][3021] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 29 11:15:53.384474 containerd[1464]: 2025-01-29 11:15:52.892 [INFO][3021] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {143.198.151.197-k8s-nginx--deployment--8587fbcb89--hgrqg-eth0 nginx-deployment-8587fbcb89- default 32d19bf8-e39c-40fe-82db-979ee14e9bd5 1174 0 2025-01-29 11:15:45 +0000 UTC map[app:nginx pod-template-hash:8587fbcb89 projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s 143.198.151.197 nginx-deployment-8587fbcb89-hgrqg eth0 default [] [] [kns.default ksa.default.default] calic74e4046e20 [] []}} ContainerID="c11ffecc0edc3995bcb47ee9badcd3a6c431cd384b37a48f7e1e483d30a2b087" Namespace="default" Pod="nginx-deployment-8587fbcb89-hgrqg" WorkloadEndpoint="143.198.151.197-k8s-nginx--deployment--8587fbcb89--hgrqg-" Jan 29 11:15:53.384474 containerd[1464]: 2025-01-29 11:15:52.893 [INFO][3021] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="c11ffecc0edc3995bcb47ee9badcd3a6c431cd384b37a48f7e1e483d30a2b087" Namespace="default" Pod="nginx-deployment-8587fbcb89-hgrqg" WorkloadEndpoint="143.198.151.197-k8s-nginx--deployment--8587fbcb89--hgrqg-eth0" Jan 29 11:15:53.384474 containerd[1464]: 2025-01-29 11:15:52.987 [INFO][3079] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c11ffecc0edc3995bcb47ee9badcd3a6c431cd384b37a48f7e1e483d30a2b087" HandleID="k8s-pod-network.c11ffecc0edc3995bcb47ee9badcd3a6c431cd384b37a48f7e1e483d30a2b087" Workload="143.198.151.197-k8s-nginx--deployment--8587fbcb89--hgrqg-eth0" Jan 29 11:15:53.384474 containerd[1464]: 2025-01-29 11:15:53.080 [INFO][3079] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="c11ffecc0edc3995bcb47ee9badcd3a6c431cd384b37a48f7e1e483d30a2b087" HandleID="k8s-pod-network.c11ffecc0edc3995bcb47ee9badcd3a6c431cd384b37a48f7e1e483d30a2b087" Workload="143.198.151.197-k8s-nginx--deployment--8587fbcb89--hgrqg-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000319960), Attrs:map[string]string{"namespace":"default", "node":"143.198.151.197", "pod":"nginx-deployment-8587fbcb89-hgrqg", "timestamp":"2025-01-29 11:15:52.987043355 +0000 UTC"}, Hostname:"143.198.151.197", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 29 11:15:53.384474 containerd[1464]: 2025-01-29 11:15:53.080 [INFO][3079] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 11:15:53.384474 containerd[1464]: 2025-01-29 11:15:53.222 [INFO][3079] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Jan 29 11:15:53.384474 containerd[1464]: 2025-01-29 11:15:53.222 [INFO][3079] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '143.198.151.197' Jan 29 11:15:53.384474 containerd[1464]: 2025-01-29 11:15:53.227 [INFO][3079] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.c11ffecc0edc3995bcb47ee9badcd3a6c431cd384b37a48f7e1e483d30a2b087" host="143.198.151.197" Jan 29 11:15:53.384474 containerd[1464]: 2025-01-29 11:15:53.270 [INFO][3079] ipam/ipam.go 372: Looking up existing affinities for host host="143.198.151.197" Jan 29 11:15:53.384474 containerd[1464]: 2025-01-29 11:15:53.283 [INFO][3079] ipam/ipam.go 489: Trying affinity for 192.168.42.64/26 host="143.198.151.197" Jan 29 11:15:53.384474 containerd[1464]: 2025-01-29 11:15:53.290 [INFO][3079] ipam/ipam.go 155: Attempting to load block cidr=192.168.42.64/26 host="143.198.151.197" Jan 29 11:15:53.384474 containerd[1464]: 2025-01-29 11:15:53.297 [INFO][3079] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.42.64/26 host="143.198.151.197" Jan 29 11:15:53.384474 containerd[1464]: 2025-01-29 11:15:53.298 [INFO][3079] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.42.64/26 handle="k8s-pod-network.c11ffecc0edc3995bcb47ee9badcd3a6c431cd384b37a48f7e1e483d30a2b087" host="143.198.151.197" Jan 29 11:15:53.384474 containerd[1464]: 2025-01-29 11:15:53.304 [INFO][3079] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.c11ffecc0edc3995bcb47ee9badcd3a6c431cd384b37a48f7e1e483d30a2b087 Jan 29 11:15:53.384474 containerd[1464]: 2025-01-29 11:15:53.318 [INFO][3079] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.42.64/26 handle="k8s-pod-network.c11ffecc0edc3995bcb47ee9badcd3a6c431cd384b37a48f7e1e483d30a2b087" host="143.198.151.197" Jan 29 11:15:53.384474 containerd[1464]: 2025-01-29 11:15:53.334 [INFO][3079] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.42.67/26] block=192.168.42.64/26 handle="k8s-pod-network.c11ffecc0edc3995bcb47ee9badcd3a6c431cd384b37a48f7e1e483d30a2b087" host="143.198.151.197" Jan 29 11:15:53.384474 containerd[1464]: 2025-01-29 11:15:53.334 [INFO][3079] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.42.67/26] handle="k8s-pod-network.c11ffecc0edc3995bcb47ee9badcd3a6c431cd384b37a48f7e1e483d30a2b087" host="143.198.151.197" Jan 29 11:15:53.384474 containerd[1464]: 2025-01-29 11:15:53.334 [INFO][3079] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Jan 29 11:15:53.384474 containerd[1464]: 2025-01-29 11:15:53.334 [INFO][3079] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.42.67/26] IPv6=[] ContainerID="c11ffecc0edc3995bcb47ee9badcd3a6c431cd384b37a48f7e1e483d30a2b087" HandleID="k8s-pod-network.c11ffecc0edc3995bcb47ee9badcd3a6c431cd384b37a48f7e1e483d30a2b087" Workload="143.198.151.197-k8s-nginx--deployment--8587fbcb89--hgrqg-eth0" Jan 29 11:15:53.386822 containerd[1464]: 2025-01-29 11:15:53.337 [INFO][3021] cni-plugin/k8s.go 386: Populated endpoint ContainerID="c11ffecc0edc3995bcb47ee9badcd3a6c431cd384b37a48f7e1e483d30a2b087" Namespace="default" Pod="nginx-deployment-8587fbcb89-hgrqg" WorkloadEndpoint="143.198.151.197-k8s-nginx--deployment--8587fbcb89--hgrqg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"143.198.151.197-k8s-nginx--deployment--8587fbcb89--hgrqg-eth0", GenerateName:"nginx-deployment-8587fbcb89-", Namespace:"default", SelfLink:"", UID:"32d19bf8-e39c-40fe-82db-979ee14e9bd5", ResourceVersion:"1174", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 11, 15, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"8587fbcb89", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"143.198.151.197", ContainerID:"", Pod:"nginx-deployment-8587fbcb89-hgrqg", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.42.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"calic74e4046e20", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 11:15:53.386822 containerd[1464]: 2025-01-29 11:15:53.338 [INFO][3021] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.42.67/32] ContainerID="c11ffecc0edc3995bcb47ee9badcd3a6c431cd384b37a48f7e1e483d30a2b087" Namespace="default" Pod="nginx-deployment-8587fbcb89-hgrqg" WorkloadEndpoint="143.198.151.197-k8s-nginx--deployment--8587fbcb89--hgrqg-eth0" Jan 29 11:15:53.386822 containerd[1464]: 2025-01-29 11:15:53.338 [INFO][3021] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic74e4046e20 ContainerID="c11ffecc0edc3995bcb47ee9badcd3a6c431cd384b37a48f7e1e483d30a2b087" Namespace="default" Pod="nginx-deployment-8587fbcb89-hgrqg" WorkloadEndpoint="143.198.151.197-k8s-nginx--deployment--8587fbcb89--hgrqg-eth0" Jan 29 11:15:53.386822 containerd[1464]: 2025-01-29 11:15:53.346 [INFO][3021] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c11ffecc0edc3995bcb47ee9badcd3a6c431cd384b37a48f7e1e483d30a2b087" Namespace="default" Pod="nginx-deployment-8587fbcb89-hgrqg" WorkloadEndpoint="143.198.151.197-k8s-nginx--deployment--8587fbcb89--hgrqg-eth0" Jan 29 11:15:53.386822 containerd[1464]: 2025-01-29 11:15:53.349 [INFO][3021] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="c11ffecc0edc3995bcb47ee9badcd3a6c431cd384b37a48f7e1e483d30a2b087" Namespace="default" Pod="nginx-deployment-8587fbcb89-hgrqg" 
WorkloadEndpoint="143.198.151.197-k8s-nginx--deployment--8587fbcb89--hgrqg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"143.198.151.197-k8s-nginx--deployment--8587fbcb89--hgrqg-eth0", GenerateName:"nginx-deployment-8587fbcb89-", Namespace:"default", SelfLink:"", UID:"32d19bf8-e39c-40fe-82db-979ee14e9bd5", ResourceVersion:"1174", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 11, 15, 45, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"8587fbcb89", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"143.198.151.197", ContainerID:"c11ffecc0edc3995bcb47ee9badcd3a6c431cd384b37a48f7e1e483d30a2b087", Pod:"nginx-deployment-8587fbcb89-hgrqg", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.42.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"calic74e4046e20", MAC:"16:6f:aa:17:4b:1a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 11:15:53.386822 containerd[1464]: 2025-01-29 11:15:53.371 [INFO][3021] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="c11ffecc0edc3995bcb47ee9badcd3a6c431cd384b37a48f7e1e483d30a2b087" Namespace="default" Pod="nginx-deployment-8587fbcb89-hgrqg" WorkloadEndpoint="143.198.151.197-k8s-nginx--deployment--8587fbcb89--hgrqg-eth0" Jan 29 11:15:53.431836 containerd[1464]: time="2025-01-29T11:15:53.431602803Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 11:15:53.431836 containerd[1464]: time="2025-01-29T11:15:53.431687805Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 11:15:53.432229 containerd[1464]: time="2025-01-29T11:15:53.432024328Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:15:53.433790 containerd[1464]: time="2025-01-29T11:15:53.433375174Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:15:53.477217 systemd[1]: Started cri-containerd-c11ffecc0edc3995bcb47ee9badcd3a6c431cd384b37a48f7e1e483d30a2b087.scope - libcontainer container c11ffecc0edc3995bcb47ee9badcd3a6c431cd384b37a48f7e1e483d30a2b087. 
Jan 29 11:15:53.486889 containerd[1464]: time="2025-01-29T11:15:53.486498450Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-65c99dc6fb-xsr5m,Uid:0c2b41a2-08c4-4b60-9a67-d75bd55f7076,Namespace:calico-system,Attempt:3,} returns sandbox id \"e76013cf360502e7dc3276451e014adac4ae4ee34e3ed04b3c203ba85b683d6f\"" Jan 29 11:15:53.544082 containerd[1464]: time="2025-01-29T11:15:53.543937644Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-hgrqg,Uid:32d19bf8-e39c-40fe-82db-979ee14e9bd5,Namespace:default,Attempt:7,} returns sandbox id \"c11ffecc0edc3995bcb47ee9badcd3a6c431cd384b37a48f7e1e483d30a2b087\"" Jan 29 11:15:54.102788 kubelet[1802]: E0129 11:15:54.102660 1802 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:15:54.944932 systemd-networkd[1376]: cali18889e5a753: Gained IPv6LL Jan 29 11:15:55.073824 systemd-networkd[1376]: calic74e4046e20: Gained IPv6LL Jan 29 11:15:55.103045 kubelet[1802]: E0129 11:15:55.102973 1802 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:15:55.265149 systemd-networkd[1376]: cali807fb810345: Gained IPv6LL Jan 29 11:15:55.399480 containerd[1464]: time="2025-01-29T11:15:55.399417010Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:15:55.401319 containerd[1464]: time="2025-01-29T11:15:55.401261739Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.29.1: active requests=0, bytes read=29850141" Jan 29 11:15:55.403083 containerd[1464]: time="2025-01-29T11:15:55.403016815Z" level=info msg="ImageCreate event name:\"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:15:55.406199 containerd[1464]: time="2025-01-29T11:15:55.406151964Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:15:55.407046 containerd[1464]: time="2025-01-29T11:15:55.407017046Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.29.1\" with image id \"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\", repo tag \"ghcr.io/flatcar/calico/typha:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:768a194e1115c73bcbf35edb7afd18a63e16e08d940c79993565b6a3cca2da7c\", size \"31343217\" in 3.383228522s" Jan 29 11:15:55.407118 containerd[1464]: time="2025-01-29T11:15:55.407053303Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.29.1\" returns image reference \"sha256:4cb3738506f5a9c530033d1e24fd6b9ec618518a2ec8b012ded33572be06ab44\"" Jan 29 11:15:55.409520 containerd[1464]: time="2025-01-29T11:15:55.408804842Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\"" Jan 29 11:15:55.428606 containerd[1464]: time="2025-01-29T11:15:55.428561258Z" level=info msg="CreateContainer within sandbox \"7167b6d934dcbb0292d3a171bb26ee8340faa932e87cb1d960a5019e8ed1f52e\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jan 29 11:15:55.457559 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1102385414.mount: Deactivated successfully. 
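Note: the typha pull above reports two concrete numbers, size "31343217" bytes and elapsed "3.383228522s". A quick sketch that derives the effective pull rate from just those two logged values; nothing beyond simple arithmetic is assumed.

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	const pulledBytes = 31343217 // repo-digest size reported for calico/typha:v3.29.1
	elapsed, err := time.ParseDuration("3.383228522s")
	if err != nil {
		panic(err)
	}
	mib := float64(pulledBytes) / (1 << 20)
	fmt.Printf("pulled %.1f MiB in %s (~%.1f MiB/s)\n", mib, elapsed, mib/elapsed.Seconds())
}
```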
Jan 29 11:15:55.460742 containerd[1464]: time="2025-01-29T11:15:55.460420182Z" level=info msg="CreateContainer within sandbox \"7167b6d934dcbb0292d3a171bb26ee8340faa932e87cb1d960a5019e8ed1f52e\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"bcd70a0f9ed292b31ef7715abe0042a2c042079eeffd8449c2d57c04630a757a\"" Jan 29 11:15:55.461614 containerd[1464]: time="2025-01-29T11:15:55.461562377Z" level=info msg="StartContainer for \"bcd70a0f9ed292b31ef7715abe0042a2c042079eeffd8449c2d57c04630a757a\"" Jan 29 11:15:55.523027 systemd[1]: Started cri-containerd-bcd70a0f9ed292b31ef7715abe0042a2c042079eeffd8449c2d57c04630a757a.scope - libcontainer container bcd70a0f9ed292b31ef7715abe0042a2c042079eeffd8449c2d57c04630a757a. Jan 29 11:15:55.635626 containerd[1464]: time="2025-01-29T11:15:55.635408563Z" level=info msg="StartContainer for \"bcd70a0f9ed292b31ef7715abe0042a2c042079eeffd8449c2d57c04630a757a\" returns successfully" Jan 29 11:15:56.104217 kubelet[1802]: E0129 11:15:56.104104 1802 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:15:56.615015 kubelet[1802]: E0129 11:15:56.614981 1802 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 29 11:15:56.665031 kubelet[1802]: I0129 11:15:56.664960 1802 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-f8b88c8c-lcllq" podStartSLOduration=2.540394145 podStartE2EDuration="9.664867088s" podCreationTimestamp="2025-01-29 11:15:47 +0000 UTC" firstStartedPulling="2025-01-29 11:15:48.284163336 +0000 UTC m=+21.742499627" lastFinishedPulling="2025-01-29 11:15:55.408636283 +0000 UTC m=+28.866972570" observedRunningTime="2025-01-29 11:15:56.645921306 +0000 UTC m=+30.104257614" watchObservedRunningTime="2025-01-29 11:15:56.664867088 +0000 UTC m=+30.123203388" Jan 29 11:15:57.057281 update_engine[1450]: I20250129 11:15:57.057169 1450 update_attempter.cc:509] Updating boot flags... 
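Note: the pod_startup_latency_tracker entry above carries Go time.Time strings with a monotonic suffix ("m=+21.742..."), which time.Parse will not accept directly. A small sketch that strips that suffix and recomputes the image-pull window (lastFinishedPulling minus firstStartedPulling) from the logged values; parseKubeletTime is an illustrative helper, not kubelet code.

```go
package main

import (
	"fmt"
	"strings"
	"time"
)

// parseKubeletTime parses timestamps the way kubelet prints them, e.g.
// "2025-01-29 11:15:48.284163336 +0000 UTC m=+21.742499627",
// dropping the monotonic-clock reading before calling time.Parse.
func parseKubeletTime(s string) (time.Time, error) {
	if i := strings.Index(s, " m="); i >= 0 {
		s = s[:i]
	}
	return time.Parse("2006-01-02 15:04:05.999999999 -0700 MST", s)
}

func main() {
	started, err := parseKubeletTime("2025-01-29 11:15:48.284163336 +0000 UTC m=+21.742499627")
	if err != nil {
		panic(err)
	}
	finished, err := parseKubeletTime("2025-01-29 11:15:55.408636283 +0000 UTC m=+28.866972570")
	if err != nil {
		panic(err)
	}
	// Roughly 7.12s of the logged 9.664867088s podStartE2EDuration was image pulling.
	fmt.Println("image pull window:", finished.Sub(started))
}
```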
Jan 29 11:15:57.092759 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 44 scanned by (udev-worker) (3462) Jan 29 11:15:57.104340 kubelet[1802]: E0129 11:15:57.104299 1802 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:15:57.157819 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 44 scanned by (udev-worker) (3462) Jan 29 11:15:57.237815 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 44 scanned by (udev-worker) (3462) Jan 29 11:15:57.616400 kubelet[1802]: E0129 11:15:57.616012 1802 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 29 11:15:57.626753 containerd[1464]: time="2025-01-29T11:15:57.626141175Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:15:57.627464 containerd[1464]: time="2025-01-29T11:15:57.627421090Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.1: active requests=0, bytes read=7902632" Jan 29 11:15:57.628518 containerd[1464]: time="2025-01-29T11:15:57.628442582Z" level=info msg="ImageCreate event name:\"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:15:57.630670 containerd[1464]: time="2025-01-29T11:15:57.630598325Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:15:57.632154 containerd[1464]: time="2025-01-29T11:15:57.631548856Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.1\" with image id \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\", size \"9395716\" in 2.222710647s" Jan 29 11:15:57.632154 containerd[1464]: time="2025-01-29T11:15:57.631583810Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\" returns image reference \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\"" Jan 29 11:15:57.633745 containerd[1464]: time="2025-01-29T11:15:57.633552444Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\"" Jan 29 11:15:57.635393 containerd[1464]: time="2025-01-29T11:15:57.635205932Z" level=info msg="CreateContainer within sandbox \"afdb88dc71461429ad73dadb70900f7fe6bf1d11e300aae85fa9f91cf6db29ff\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Jan 29 11:15:57.657287 containerd[1464]: time="2025-01-29T11:15:57.657109956Z" level=info msg="CreateContainer within sandbox \"afdb88dc71461429ad73dadb70900f7fe6bf1d11e300aae85fa9f91cf6db29ff\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"82a168689e3bb15f1124ae21379a9dc81e5f412de0161b90ad9f1acd02540a2b\"" Jan 29 11:15:57.659586 containerd[1464]: time="2025-01-29T11:15:57.659250844Z" level=info msg="StartContainer for \"82a168689e3bb15f1124ae21379a9dc81e5f412de0161b90ad9f1acd02540a2b\"" Jan 29 11:15:57.717146 systemd[1]: Started cri-containerd-82a168689e3bb15f1124ae21379a9dc81e5f412de0161b90ad9f1acd02540a2b.scope - libcontainer container 
82a168689e3bb15f1124ae21379a9dc81e5f412de0161b90ad9f1acd02540a2b. Jan 29 11:15:57.771370 containerd[1464]: time="2025-01-29T11:15:57.770627541Z" level=info msg="StartContainer for \"82a168689e3bb15f1124ae21379a9dc81e5f412de0161b90ad9f1acd02540a2b\" returns successfully" Jan 29 11:15:57.866765 kernel: bpftool[3533]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Jan 29 11:15:57.921695 containerd[1464]: time="2025-01-29T11:15:57.921026319Z" level=info msg="Kill container \"f605a409c3baf2339d89156c4fc8e1f1ba179aab267686b7998abdb10d5cde79\"" Jan 29 11:15:57.948643 systemd[1]: cri-containerd-f605a409c3baf2339d89156c4fc8e1f1ba179aab267686b7998abdb10d5cde79.scope: Deactivated successfully. Jan 29 11:15:57.949489 systemd[1]: cri-containerd-f605a409c3baf2339d89156c4fc8e1f1ba179aab267686b7998abdb10d5cde79.scope: Consumed 1.816s CPU time. Jan 29 11:15:57.987746 containerd[1464]: time="2025-01-29T11:15:57.986264221Z" level=info msg="shim disconnected" id=f605a409c3baf2339d89156c4fc8e1f1ba179aab267686b7998abdb10d5cde79 namespace=k8s.io Jan 29 11:15:57.987746 containerd[1464]: time="2025-01-29T11:15:57.986342751Z" level=warning msg="cleaning up after shim disconnected" id=f605a409c3baf2339d89156c4fc8e1f1ba179aab267686b7998abdb10d5cde79 namespace=k8s.io Jan 29 11:15:57.987746 containerd[1464]: time="2025-01-29T11:15:57.986355487Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 11:15:57.987651 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f605a409c3baf2339d89156c4fc8e1f1ba179aab267686b7998abdb10d5cde79-rootfs.mount: Deactivated successfully. Jan 29 11:15:58.042882 containerd[1464]: time="2025-01-29T11:15:58.042785952Z" level=info msg="StopContainer for \"f605a409c3baf2339d89156c4fc8e1f1ba179aab267686b7998abdb10d5cde79\" returns successfully" Jan 29 11:15:58.043746 containerd[1464]: time="2025-01-29T11:15:58.043625349Z" level=info msg="StopPodSandbox for \"4cc322cd7270c80953a1acf59e6052831f87eaffea7477eb67cea5bc2b8c9a71\"" Jan 29 11:15:58.043746 containerd[1464]: time="2025-01-29T11:15:58.043695859Z" level=info msg="Container to stop \"87bbf7460fd0e33a158abfa139e56cfda31487af1728b5ddde635ec9334f7f0b\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 29 11:15:58.043746 containerd[1464]: time="2025-01-29T11:15:58.043760117Z" level=info msg="Container to stop \"a274e65437928a68fbe1a2f79c74312df27af4c97b0bcedca169b20826e3ccb4\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 29 11:15:58.043746 containerd[1464]: time="2025-01-29T11:15:58.043775315Z" level=info msg="Container to stop \"f605a409c3baf2339d89156c4fc8e1f1ba179aab267686b7998abdb10d5cde79\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jan 29 11:15:58.047430 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-4cc322cd7270c80953a1acf59e6052831f87eaffea7477eb67cea5bc2b8c9a71-shm.mount: Deactivated successfully. Jan 29 11:15:58.052166 systemd[1]: cri-containerd-4cc322cd7270c80953a1acf59e6052831f87eaffea7477eb67cea5bc2b8c9a71.scope: Deactivated successfully. 
Jan 29 11:15:58.076695 containerd[1464]: time="2025-01-29T11:15:58.076478450Z" level=info msg="shim disconnected" id=4cc322cd7270c80953a1acf59e6052831f87eaffea7477eb67cea5bc2b8c9a71 namespace=k8s.io Jan 29 11:15:58.076695 containerd[1464]: time="2025-01-29T11:15:58.076535010Z" level=warning msg="cleaning up after shim disconnected" id=4cc322cd7270c80953a1acf59e6052831f87eaffea7477eb67cea5bc2b8c9a71 namespace=k8s.io Jan 29 11:15:58.076695 containerd[1464]: time="2025-01-29T11:15:58.076543329Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 11:15:58.101697 containerd[1464]: time="2025-01-29T11:15:58.101414108Z" level=info msg="TearDown network for sandbox \"4cc322cd7270c80953a1acf59e6052831f87eaffea7477eb67cea5bc2b8c9a71\" successfully" Jan 29 11:15:58.101697 containerd[1464]: time="2025-01-29T11:15:58.101456687Z" level=info msg="StopPodSandbox for \"4cc322cd7270c80953a1acf59e6052831f87eaffea7477eb67cea5bc2b8c9a71\" returns successfully" Jan 29 11:15:58.108030 kubelet[1802]: E0129 11:15:58.107981 1802 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:15:58.182255 kubelet[1802]: E0129 11:15:58.182081 1802 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="906fa76e-f7b7-43cd-a33c-1b0e711ee459" containerName="flexvol-driver" Jan 29 11:15:58.182255 kubelet[1802]: E0129 11:15:58.182119 1802 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="906fa76e-f7b7-43cd-a33c-1b0e711ee459" containerName="install-cni" Jan 29 11:15:58.182255 kubelet[1802]: E0129 11:15:58.182126 1802 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="906fa76e-f7b7-43cd-a33c-1b0e711ee459" containerName="calico-node" Jan 29 11:15:58.182255 kubelet[1802]: I0129 11:15:58.182148 1802 memory_manager.go:354] "RemoveStaleState removing state" podUID="906fa76e-f7b7-43cd-a33c-1b0e711ee459" containerName="calico-node" Jan 29 11:15:58.190631 systemd[1]: Created slice kubepods-besteffort-pod899ffc51_a054_4878_a834_3b034cab1edc.slice - libcontainer container kubepods-besteffort-pod899ffc51_a054_4878_a834_3b034cab1edc.slice. 
Jan 29 11:15:58.237240 kubelet[1802]: I0129 11:15:58.237188 1802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/906fa76e-f7b7-43cd-a33c-1b0e711ee459-node-certs\") pod \"906fa76e-f7b7-43cd-a33c-1b0e711ee459\" (UID: \"906fa76e-f7b7-43cd-a33c-1b0e711ee459\") " Jan 29 11:15:58.237483 kubelet[1802]: I0129 11:15:58.237467 1802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/906fa76e-f7b7-43cd-a33c-1b0e711ee459-tigera-ca-bundle\") pod \"906fa76e-f7b7-43cd-a33c-1b0e711ee459\" (UID: \"906fa76e-f7b7-43cd-a33c-1b0e711ee459\") " Jan 29 11:15:58.237648 kubelet[1802]: I0129 11:15:58.237635 1802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/906fa76e-f7b7-43cd-a33c-1b0e711ee459-cni-log-dir\") pod \"906fa76e-f7b7-43cd-a33c-1b0e711ee459\" (UID: \"906fa76e-f7b7-43cd-a33c-1b0e711ee459\") " Jan 29 11:15:58.237781 kubelet[1802]: I0129 11:15:58.237763 1802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j9vfh\" (UniqueName: \"kubernetes.io/projected/906fa76e-f7b7-43cd-a33c-1b0e711ee459-kube-api-access-j9vfh\") pod \"906fa76e-f7b7-43cd-a33c-1b0e711ee459\" (UID: \"906fa76e-f7b7-43cd-a33c-1b0e711ee459\") " Jan 29 11:15:58.237864 kubelet[1802]: I0129 11:15:58.237854 1802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/906fa76e-f7b7-43cd-a33c-1b0e711ee459-var-run-calico\") pod \"906fa76e-f7b7-43cd-a33c-1b0e711ee459\" (UID: \"906fa76e-f7b7-43cd-a33c-1b0e711ee459\") " Jan 29 11:15:58.237948 kubelet[1802]: I0129 11:15:58.237934 1802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/906fa76e-f7b7-43cd-a33c-1b0e711ee459-xtables-lock\") pod \"906fa76e-f7b7-43cd-a33c-1b0e711ee459\" (UID: \"906fa76e-f7b7-43cd-a33c-1b0e711ee459\") " Jan 29 11:15:58.238072 kubelet[1802]: I0129 11:15:58.238054 1802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/906fa76e-f7b7-43cd-a33c-1b0e711ee459-flexvol-driver-host\") pod \"906fa76e-f7b7-43cd-a33c-1b0e711ee459\" (UID: \"906fa76e-f7b7-43cd-a33c-1b0e711ee459\") " Jan 29 11:15:58.238217 kubelet[1802]: I0129 11:15:58.238195 1802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/906fa76e-f7b7-43cd-a33c-1b0e711ee459-lib-modules\") pod \"906fa76e-f7b7-43cd-a33c-1b0e711ee459\" (UID: \"906fa76e-f7b7-43cd-a33c-1b0e711ee459\") " Jan 29 11:15:58.238335 kubelet[1802]: I0129 11:15:58.238321 1802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/906fa76e-f7b7-43cd-a33c-1b0e711ee459-policysync\") pod \"906fa76e-f7b7-43cd-a33c-1b0e711ee459\" (UID: \"906fa76e-f7b7-43cd-a33c-1b0e711ee459\") " Jan 29 11:15:58.238445 kubelet[1802]: I0129 11:15:58.238432 1802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/906fa76e-f7b7-43cd-a33c-1b0e711ee459-var-lib-calico\") pod \"906fa76e-f7b7-43cd-a33c-1b0e711ee459\" (UID: \"906fa76e-f7b7-43cd-a33c-1b0e711ee459\") " Jan 29 11:15:58.238674 kubelet[1802]: I0129 
11:15:58.238656 1802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/906fa76e-f7b7-43cd-a33c-1b0e711ee459-cni-bin-dir\") pod \"906fa76e-f7b7-43cd-a33c-1b0e711ee459\" (UID: \"906fa76e-f7b7-43cd-a33c-1b0e711ee459\") " Jan 29 11:15:58.238857 kubelet[1802]: I0129 11:15:58.238811 1802 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/906fa76e-f7b7-43cd-a33c-1b0e711ee459-cni-net-dir\") pod \"906fa76e-f7b7-43cd-a33c-1b0e711ee459\" (UID: \"906fa76e-f7b7-43cd-a33c-1b0e711ee459\") " Jan 29 11:15:58.239187 kubelet[1802]: I0129 11:15:58.239047 1802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/906fa76e-f7b7-43cd-a33c-1b0e711ee459-cni-net-dir" (OuterVolumeSpecName: "cni-net-dir") pod "906fa76e-f7b7-43cd-a33c-1b0e711ee459" (UID: "906fa76e-f7b7-43cd-a33c-1b0e711ee459"). InnerVolumeSpecName "cni-net-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 11:15:58.240055 kubelet[1802]: I0129 11:15:58.239949 1802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/906fa76e-f7b7-43cd-a33c-1b0e711ee459-flexvol-driver-host" (OuterVolumeSpecName: "flexvol-driver-host") pod "906fa76e-f7b7-43cd-a33c-1b0e711ee459" (UID: "906fa76e-f7b7-43cd-a33c-1b0e711ee459"). InnerVolumeSpecName "flexvol-driver-host". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 11:15:58.240055 kubelet[1802]: I0129 11:15:58.240012 1802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/906fa76e-f7b7-43cd-a33c-1b0e711ee459-cni-log-dir" (OuterVolumeSpecName: "cni-log-dir") pod "906fa76e-f7b7-43cd-a33c-1b0e711ee459" (UID: "906fa76e-f7b7-43cd-a33c-1b0e711ee459"). InnerVolumeSpecName "cni-log-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 11:15:58.240359 kubelet[1802]: I0129 11:15:58.240226 1802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/906fa76e-f7b7-43cd-a33c-1b0e711ee459-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "906fa76e-f7b7-43cd-a33c-1b0e711ee459" (UID: "906fa76e-f7b7-43cd-a33c-1b0e711ee459"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 11:15:58.240359 kubelet[1802]: I0129 11:15:58.240260 1802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/906fa76e-f7b7-43cd-a33c-1b0e711ee459-policysync" (OuterVolumeSpecName: "policysync") pod "906fa76e-f7b7-43cd-a33c-1b0e711ee459" (UID: "906fa76e-f7b7-43cd-a33c-1b0e711ee459"). InnerVolumeSpecName "policysync". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 11:15:58.240359 kubelet[1802]: I0129 11:15:58.240281 1802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/906fa76e-f7b7-43cd-a33c-1b0e711ee459-var-lib-calico" (OuterVolumeSpecName: "var-lib-calico") pod "906fa76e-f7b7-43cd-a33c-1b0e711ee459" (UID: "906fa76e-f7b7-43cd-a33c-1b0e711ee459"). InnerVolumeSpecName "var-lib-calico". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 11:15:58.240359 kubelet[1802]: I0129 11:15:58.240305 1802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/906fa76e-f7b7-43cd-a33c-1b0e711ee459-cni-bin-dir" (OuterVolumeSpecName: "cni-bin-dir") pod "906fa76e-f7b7-43cd-a33c-1b0e711ee459" (UID: "906fa76e-f7b7-43cd-a33c-1b0e711ee459"). InnerVolumeSpecName "cni-bin-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 11:15:58.240792 kubelet[1802]: I0129 11:15:58.240689 1802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/906fa76e-f7b7-43cd-a33c-1b0e711ee459-var-run-calico" (OuterVolumeSpecName: "var-run-calico") pod "906fa76e-f7b7-43cd-a33c-1b0e711ee459" (UID: "906fa76e-f7b7-43cd-a33c-1b0e711ee459"). InnerVolumeSpecName "var-run-calico". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 11:15:58.240792 kubelet[1802]: I0129 11:15:58.240766 1802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/906fa76e-f7b7-43cd-a33c-1b0e711ee459-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "906fa76e-f7b7-43cd-a33c-1b0e711ee459" (UID: "906fa76e-f7b7-43cd-a33c-1b0e711ee459"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 29 11:15:58.244041 kubelet[1802]: I0129 11:15:58.243879 1802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/906fa76e-f7b7-43cd-a33c-1b0e711ee459-node-certs" (OuterVolumeSpecName: "node-certs") pod "906fa76e-f7b7-43cd-a33c-1b0e711ee459" (UID: "906fa76e-f7b7-43cd-a33c-1b0e711ee459"). InnerVolumeSpecName "node-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 29 11:15:58.246269 kubelet[1802]: I0129 11:15:58.246182 1802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/906fa76e-f7b7-43cd-a33c-1b0e711ee459-kube-api-access-j9vfh" (OuterVolumeSpecName: "kube-api-access-j9vfh") pod "906fa76e-f7b7-43cd-a33c-1b0e711ee459" (UID: "906fa76e-f7b7-43cd-a33c-1b0e711ee459"). InnerVolumeSpecName "kube-api-access-j9vfh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 29 11:15:58.246836 kubelet[1802]: I0129 11:15:58.246791 1802 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/906fa76e-f7b7-43cd-a33c-1b0e711ee459-tigera-ca-bundle" (OuterVolumeSpecName: "tigera-ca-bundle") pod "906fa76e-f7b7-43cd-a33c-1b0e711ee459" (UID: "906fa76e-f7b7-43cd-a33c-1b0e711ee459"). InnerVolumeSpecName "tigera-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 29 11:15:58.339512 kubelet[1802]: I0129 11:15:58.339435 1802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/899ffc51-a054-4878-a834-3b034cab1edc-cni-log-dir\") pod \"calico-node-l67s5\" (UID: \"899ffc51-a054-4878-a834-3b034cab1edc\") " pod="calico-system/calico-node-l67s5" Jan 29 11:15:58.339512 kubelet[1802]: I0129 11:15:58.339512 1802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/899ffc51-a054-4878-a834-3b034cab1edc-policysync\") pod \"calico-node-l67s5\" (UID: \"899ffc51-a054-4878-a834-3b034cab1edc\") " pod="calico-system/calico-node-l67s5" Jan 29 11:15:58.339848 kubelet[1802]: I0129 11:15:58.339539 1802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/899ffc51-a054-4878-a834-3b034cab1edc-lib-modules\") pod \"calico-node-l67s5\" (UID: \"899ffc51-a054-4878-a834-3b034cab1edc\") " pod="calico-system/calico-node-l67s5" Jan 29 11:15:58.339848 kubelet[1802]: I0129 11:15:58.339561 1802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/899ffc51-a054-4878-a834-3b034cab1edc-xtables-lock\") pod \"calico-node-l67s5\" (UID: \"899ffc51-a054-4878-a834-3b034cab1edc\") " pod="calico-system/calico-node-l67s5" Jan 29 11:15:58.339848 kubelet[1802]: I0129 11:15:58.339584 1802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/899ffc51-a054-4878-a834-3b034cab1edc-cni-bin-dir\") pod \"calico-node-l67s5\" (UID: \"899ffc51-a054-4878-a834-3b034cab1edc\") " pod="calico-system/calico-node-l67s5" Jan 29 11:15:58.339848 kubelet[1802]: I0129 11:15:58.339608 1802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rx555\" (UniqueName: \"kubernetes.io/projected/899ffc51-a054-4878-a834-3b034cab1edc-kube-api-access-rx555\") pod \"calico-node-l67s5\" (UID: \"899ffc51-a054-4878-a834-3b034cab1edc\") " pod="calico-system/calico-node-l67s5" Jan 29 11:15:58.339848 kubelet[1802]: I0129 11:15:58.339637 1802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/899ffc51-a054-4878-a834-3b034cab1edc-var-run-calico\") pod \"calico-node-l67s5\" (UID: \"899ffc51-a054-4878-a834-3b034cab1edc\") " pod="calico-system/calico-node-l67s5" Jan 29 11:15:58.340089 kubelet[1802]: I0129 11:15:58.339662 1802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/899ffc51-a054-4878-a834-3b034cab1edc-flexvol-driver-host\") pod \"calico-node-l67s5\" (UID: \"899ffc51-a054-4878-a834-3b034cab1edc\") " pod="calico-system/calico-node-l67s5" Jan 29 11:15:58.340089 kubelet[1802]: I0129 11:15:58.339686 1802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/899ffc51-a054-4878-a834-3b034cab1edc-cni-net-dir\") pod \"calico-node-l67s5\" (UID: \"899ffc51-a054-4878-a834-3b034cab1edc\") " pod="calico-system/calico-node-l67s5" Jan 29 11:15:58.340089 
kubelet[1802]: I0129 11:15:58.339728 1802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/899ffc51-a054-4878-a834-3b034cab1edc-node-certs\") pod \"calico-node-l67s5\" (UID: \"899ffc51-a054-4878-a834-3b034cab1edc\") " pod="calico-system/calico-node-l67s5" Jan 29 11:15:58.340089 kubelet[1802]: I0129 11:15:58.339753 1802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/899ffc51-a054-4878-a834-3b034cab1edc-var-lib-calico\") pod \"calico-node-l67s5\" (UID: \"899ffc51-a054-4878-a834-3b034cab1edc\") " pod="calico-system/calico-node-l67s5" Jan 29 11:15:58.340089 kubelet[1802]: I0129 11:15:58.339778 1802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/899ffc51-a054-4878-a834-3b034cab1edc-tigera-ca-bundle\") pod \"calico-node-l67s5\" (UID: \"899ffc51-a054-4878-a834-3b034cab1edc\") " pod="calico-system/calico-node-l67s5" Jan 29 11:15:58.340089 kubelet[1802]: I0129 11:15:58.339807 1802 reconciler_common.go:288] "Volume detached for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/906fa76e-f7b7-43cd-a33c-1b0e711ee459-node-certs\") on node \"143.198.151.197\" DevicePath \"\"" Jan 29 11:15:58.340347 kubelet[1802]: I0129 11:15:58.339822 1802 reconciler_common.go:288] "Volume detached for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/906fa76e-f7b7-43cd-a33c-1b0e711ee459-tigera-ca-bundle\") on node \"143.198.151.197\" DevicePath \"\"" Jan 29 11:15:58.340347 kubelet[1802]: I0129 11:15:58.339835 1802 reconciler_common.go:288] "Volume detached for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/906fa76e-f7b7-43cd-a33c-1b0e711ee459-cni-log-dir\") on node \"143.198.151.197\" DevicePath \"\"" Jan 29 11:15:58.340347 kubelet[1802]: I0129 11:15:58.339850 1802 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-j9vfh\" (UniqueName: \"kubernetes.io/projected/906fa76e-f7b7-43cd-a33c-1b0e711ee459-kube-api-access-j9vfh\") on node \"143.198.151.197\" DevicePath \"\"" Jan 29 11:15:58.340347 kubelet[1802]: I0129 11:15:58.339863 1802 reconciler_common.go:288] "Volume detached for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/906fa76e-f7b7-43cd-a33c-1b0e711ee459-var-run-calico\") on node \"143.198.151.197\" DevicePath \"\"" Jan 29 11:15:58.340347 kubelet[1802]: I0129 11:15:58.339876 1802 reconciler_common.go:288] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/906fa76e-f7b7-43cd-a33c-1b0e711ee459-xtables-lock\") on node \"143.198.151.197\" DevicePath \"\"" Jan 29 11:15:58.340347 kubelet[1802]: I0129 11:15:58.339894 1802 reconciler_common.go:288] "Volume detached for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/906fa76e-f7b7-43cd-a33c-1b0e711ee459-flexvol-driver-host\") on node \"143.198.151.197\" DevicePath \"\"" Jan 29 11:15:58.340347 kubelet[1802]: I0129 11:15:58.339907 1802 reconciler_common.go:288] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/906fa76e-f7b7-43cd-a33c-1b0e711ee459-lib-modules\") on node \"143.198.151.197\" DevicePath \"\"" Jan 29 11:15:58.340347 kubelet[1802]: I0129 11:15:58.339920 1802 reconciler_common.go:288] "Volume detached for volume \"policysync\" (UniqueName: 
\"kubernetes.io/host-path/906fa76e-f7b7-43cd-a33c-1b0e711ee459-policysync\") on node \"143.198.151.197\" DevicePath \"\"" Jan 29 11:15:58.340695 kubelet[1802]: I0129 11:15:58.339932 1802 reconciler_common.go:288] "Volume detached for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/906fa76e-f7b7-43cd-a33c-1b0e711ee459-var-lib-calico\") on node \"143.198.151.197\" DevicePath \"\"" Jan 29 11:15:58.340695 kubelet[1802]: I0129 11:15:58.339944 1802 reconciler_common.go:288] "Volume detached for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/906fa76e-f7b7-43cd-a33c-1b0e711ee459-cni-bin-dir\") on node \"143.198.151.197\" DevicePath \"\"" Jan 29 11:15:58.340695 kubelet[1802]: I0129 11:15:58.339956 1802 reconciler_common.go:288] "Volume detached for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/906fa76e-f7b7-43cd-a33c-1b0e711ee459-cni-net-dir\") on node \"143.198.151.197\" DevicePath \"\"" Jan 29 11:15:58.497006 kubelet[1802]: E0129 11:15:58.496144 1802 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 29 11:15:58.498231 containerd[1464]: time="2025-01-29T11:15:58.497254629Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-l67s5,Uid:899ffc51-a054-4878-a834-3b034cab1edc,Namespace:calico-system,Attempt:0,}" Jan 29 11:15:58.534140 containerd[1464]: time="2025-01-29T11:15:58.533650831Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 11:15:58.534140 containerd[1464]: time="2025-01-29T11:15:58.533816789Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 11:15:58.534140 containerd[1464]: time="2025-01-29T11:15:58.533838796Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:15:58.534140 containerd[1464]: time="2025-01-29T11:15:58.533969105Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:15:58.552968 systemd[1]: Started cri-containerd-f58d68f234dd3c30de55056e116daa34d925bdecb5e0189478040e6b5b0416a2.scope - libcontainer container f58d68f234dd3c30de55056e116daa34d925bdecb5e0189478040e6b5b0416a2. 
Jan 29 11:15:58.595980 containerd[1464]: time="2025-01-29T11:15:58.595926880Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-l67s5,Uid:899ffc51-a054-4878-a834-3b034cab1edc,Namespace:calico-system,Attempt:0,} returns sandbox id \"f58d68f234dd3c30de55056e116daa34d925bdecb5e0189478040e6b5b0416a2\"" Jan 29 11:15:58.598176 kubelet[1802]: E0129 11:15:58.597695 1802 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 29 11:15:58.601853 containerd[1464]: time="2025-01-29T11:15:58.601801486Z" level=info msg="CreateContainer within sandbox \"f58d68f234dd3c30de55056e116daa34d925bdecb5e0189478040e6b5b0416a2\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jan 29 11:15:58.621829 containerd[1464]: time="2025-01-29T11:15:58.621756578Z" level=info msg="CreateContainer within sandbox \"f58d68f234dd3c30de55056e116daa34d925bdecb5e0189478040e6b5b0416a2\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"4d8645de5a175fb9142cfb2665c6a36467aa703a66df406f213ee9710749fcb2\"" Jan 29 11:15:58.622757 containerd[1464]: time="2025-01-29T11:15:58.622703642Z" level=info msg="StartContainer for \"4d8645de5a175fb9142cfb2665c6a36467aa703a66df406f213ee9710749fcb2\"" Jan 29 11:15:58.633288 kubelet[1802]: I0129 11:15:58.633245 1802 scope.go:117] "RemoveContainer" containerID="f605a409c3baf2339d89156c4fc8e1f1ba179aab267686b7998abdb10d5cde79" Jan 29 11:15:58.635053 kubelet[1802]: E0129 11:15:58.634267 1802 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 29 11:15:58.639161 containerd[1464]: time="2025-01-29T11:15:58.639108330Z" level=info msg="RemoveContainer for \"f605a409c3baf2339d89156c4fc8e1f1ba179aab267686b7998abdb10d5cde79\"" Jan 29 11:15:58.644256 systemd[1]: Removed slice kubepods-besteffort-pod906fa76e_f7b7_43cd_a33c_1b0e711ee459.slice - libcontainer container kubepods-besteffort-pod906fa76e_f7b7_43cd_a33c_1b0e711ee459.slice. Jan 29 11:15:58.645338 containerd[1464]: time="2025-01-29T11:15:58.644545713Z" level=info msg="RemoveContainer for \"f605a409c3baf2339d89156c4fc8e1f1ba179aab267686b7998abdb10d5cde79\" returns successfully" Jan 29 11:15:58.645473 systemd[1]: kubepods-besteffort-pod906fa76e_f7b7_43cd_a33c_1b0e711ee459.slice: Consumed 2.656s CPU time. Jan 29 11:15:58.648615 kubelet[1802]: I0129 11:15:58.648395 1802 scope.go:117] "RemoveContainer" containerID="a274e65437928a68fbe1a2f79c74312df27af4c97b0bcedca169b20826e3ccb4" Jan 29 11:15:58.658416 containerd[1464]: time="2025-01-29T11:15:58.657026889Z" level=info msg="RemoveContainer for \"a274e65437928a68fbe1a2f79c74312df27af4c97b0bcedca169b20826e3ccb4\"" Jan 29 11:15:58.665959 systemd[1]: var-lib-kubelet-pods-906fa76e\x2df7b7\x2d43cd\x2da33c\x2d1b0e711ee459-volume\x2dsubpaths-tigera\x2dca\x2dbundle-calico\x2dnode-1.mount: Deactivated successfully. Jan 29 11:15:58.666097 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4cc322cd7270c80953a1acf59e6052831f87eaffea7477eb67cea5bc2b8c9a71-rootfs.mount: Deactivated successfully. Jan 29 11:15:58.666178 systemd[1]: var-lib-kubelet-pods-906fa76e\x2df7b7\x2d43cd\x2da33c\x2d1b0e711ee459-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dj9vfh.mount: Deactivated successfully. 
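Note: the recurring dns.go:153 error says the node's resolv.conf lists more nameservers than the usual limit of three, so kubelet drops the extras, and the applied line it reports even repeats 67.207.67.2. One way to keep the list within that limit is to deduplicate before truncating, sketched below; capNameservers is a hypothetical helper for illustration, not kubelet's DNS configurer.

```go
package main

import "fmt"

// capNameservers keeps at most limit unique nameservers, preserving order.
func capNameservers(servers []string, limit int) []string {
	seen := make(map[string]bool)
	var out []string
	for _, s := range servers {
		if seen[s] || len(out) == limit {
			continue
		}
		seen[s] = true
		out = append(out, s)
	}
	return out
}

func main() {
	// Applied nameserver line from the kubelet warning above.
	applied := []string{"67.207.67.2", "67.207.67.3", "67.207.67.2"}
	fmt.Println(capNameservers(applied, 3)) // [67.207.67.2 67.207.67.3]
}
```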
Jan 29 11:15:58.666244 systemd[1]: var-lib-kubelet-pods-906fa76e\x2df7b7\x2d43cd\x2da33c\x2d1b0e711ee459-volumes-kubernetes.io\x7esecret-node\x2dcerts.mount: Deactivated successfully. Jan 29 11:15:58.680575 containerd[1464]: time="2025-01-29T11:15:58.680292659Z" level=info msg="RemoveContainer for \"a274e65437928a68fbe1a2f79c74312df27af4c97b0bcedca169b20826e3ccb4\" returns successfully" Jan 29 11:15:58.682012 kubelet[1802]: I0129 11:15:58.681802 1802 scope.go:117] "RemoveContainer" containerID="87bbf7460fd0e33a158abfa139e56cfda31487af1728b5ddde635ec9334f7f0b" Jan 29 11:15:58.686065 containerd[1464]: time="2025-01-29T11:15:58.685977456Z" level=info msg="RemoveContainer for \"87bbf7460fd0e33a158abfa139e56cfda31487af1728b5ddde635ec9334f7f0b\"" Jan 29 11:15:58.697729 containerd[1464]: time="2025-01-29T11:15:58.697591979Z" level=info msg="RemoveContainer for \"87bbf7460fd0e33a158abfa139e56cfda31487af1728b5ddde635ec9334f7f0b\" returns successfully" Jan 29 11:15:58.699661 kubelet[1802]: I0129 11:15:58.698381 1802 scope.go:117] "RemoveContainer" containerID="f605a409c3baf2339d89156c4fc8e1f1ba179aab267686b7998abdb10d5cde79" Jan 29 11:15:58.702233 containerd[1464]: time="2025-01-29T11:15:58.702160144Z" level=error msg="ContainerStatus for \"f605a409c3baf2339d89156c4fc8e1f1ba179aab267686b7998abdb10d5cde79\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"f605a409c3baf2339d89156c4fc8e1f1ba179aab267686b7998abdb10d5cde79\": not found" Jan 29 11:15:58.702454 kubelet[1802]: E0129 11:15:58.702419 1802 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"f605a409c3baf2339d89156c4fc8e1f1ba179aab267686b7998abdb10d5cde79\": not found" containerID="f605a409c3baf2339d89156c4fc8e1f1ba179aab267686b7998abdb10d5cde79" Jan 29 11:15:58.702785 kubelet[1802]: I0129 11:15:58.702471 1802 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"f605a409c3baf2339d89156c4fc8e1f1ba179aab267686b7998abdb10d5cde79"} err="failed to get container status \"f605a409c3baf2339d89156c4fc8e1f1ba179aab267686b7998abdb10d5cde79\": rpc error: code = NotFound desc = an error occurred when try to find container \"f605a409c3baf2339d89156c4fc8e1f1ba179aab267686b7998abdb10d5cde79\": not found" Jan 29 11:15:58.702785 kubelet[1802]: I0129 11:15:58.702534 1802 scope.go:117] "RemoveContainer" containerID="a274e65437928a68fbe1a2f79c74312df27af4c97b0bcedca169b20826e3ccb4" Jan 29 11:15:58.703492 containerd[1464]: time="2025-01-29T11:15:58.702935674Z" level=error msg="ContainerStatus for \"a274e65437928a68fbe1a2f79c74312df27af4c97b0bcedca169b20826e3ccb4\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"a274e65437928a68fbe1a2f79c74312df27af4c97b0bcedca169b20826e3ccb4\": not found" Jan 29 11:15:58.703612 kubelet[1802]: E0129 11:15:58.703118 1802 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"a274e65437928a68fbe1a2f79c74312df27af4c97b0bcedca169b20826e3ccb4\": not found" containerID="a274e65437928a68fbe1a2f79c74312df27af4c97b0bcedca169b20826e3ccb4" Jan 29 11:15:58.703612 kubelet[1802]: I0129 11:15:58.703142 1802 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"a274e65437928a68fbe1a2f79c74312df27af4c97b0bcedca169b20826e3ccb4"} err="failed to get container status 
\"a274e65437928a68fbe1a2f79c74312df27af4c97b0bcedca169b20826e3ccb4\": rpc error: code = NotFound desc = an error occurred when try to find container \"a274e65437928a68fbe1a2f79c74312df27af4c97b0bcedca169b20826e3ccb4\": not found" Jan 29 11:15:58.703612 kubelet[1802]: I0129 11:15:58.703163 1802 scope.go:117] "RemoveContainer" containerID="87bbf7460fd0e33a158abfa139e56cfda31487af1728b5ddde635ec9334f7f0b" Jan 29 11:15:58.704642 containerd[1464]: time="2025-01-29T11:15:58.704552601Z" level=error msg="ContainerStatus for \"87bbf7460fd0e33a158abfa139e56cfda31487af1728b5ddde635ec9334f7f0b\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"87bbf7460fd0e33a158abfa139e56cfda31487af1728b5ddde635ec9334f7f0b\": not found" Jan 29 11:15:58.705220 kubelet[1802]: E0129 11:15:58.705119 1802 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"87bbf7460fd0e33a158abfa139e56cfda31487af1728b5ddde635ec9334f7f0b\": not found" containerID="87bbf7460fd0e33a158abfa139e56cfda31487af1728b5ddde635ec9334f7f0b" Jan 29 11:15:58.705220 kubelet[1802]: I0129 11:15:58.705152 1802 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"87bbf7460fd0e33a158abfa139e56cfda31487af1728b5ddde635ec9334f7f0b"} err="failed to get container status \"87bbf7460fd0e33a158abfa139e56cfda31487af1728b5ddde635ec9334f7f0b\": rpc error: code = NotFound desc = an error occurred when try to find container \"87bbf7460fd0e33a158abfa139e56cfda31487af1728b5ddde635ec9334f7f0b\": not found" Jan 29 11:15:58.719032 systemd[1]: Started cri-containerd-4d8645de5a175fb9142cfb2665c6a36467aa703a66df406f213ee9710749fcb2.scope - libcontainer container 4d8645de5a175fb9142cfb2665c6a36467aa703a66df406f213ee9710749fcb2. Jan 29 11:15:58.767552 containerd[1464]: time="2025-01-29T11:15:58.767307203Z" level=info msg="StartContainer for \"4d8645de5a175fb9142cfb2665c6a36467aa703a66df406f213ee9710749fcb2\" returns successfully" Jan 29 11:15:58.807177 systemd[1]: cri-containerd-4d8645de5a175fb9142cfb2665c6a36467aa703a66df406f213ee9710749fcb2.scope: Deactivated successfully. Jan 29 11:15:58.838512 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4d8645de5a175fb9142cfb2665c6a36467aa703a66df406f213ee9710749fcb2-rootfs.mount: Deactivated successfully. 
Jan 29 11:15:58.850477 containerd[1464]: time="2025-01-29T11:15:58.850413559Z" level=info msg="shim disconnected" id=4d8645de5a175fb9142cfb2665c6a36467aa703a66df406f213ee9710749fcb2 namespace=k8s.io Jan 29 11:15:58.851086 containerd[1464]: time="2025-01-29T11:15:58.850768092Z" level=warning msg="cleaning up after shim disconnected" id=4d8645de5a175fb9142cfb2665c6a36467aa703a66df406f213ee9710749fcb2 namespace=k8s.io Jan 29 11:15:58.851086 containerd[1464]: time="2025-01-29T11:15:58.850787927Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 11:15:59.108731 kubelet[1802]: E0129 11:15:59.108610 1802 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:15:59.258636 kubelet[1802]: I0129 11:15:59.258571 1802 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="906fa76e-f7b7-43cd-a33c-1b0e711ee459" path="/var/lib/kubelet/pods/906fa76e-f7b7-43cd-a33c-1b0e711ee459/volumes" Jan 29 11:15:59.639641 kubelet[1802]: E0129 11:15:59.639573 1802 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 29 11:15:59.642638 containerd[1464]: time="2025-01-29T11:15:59.642449182Z" level=info msg="CreateContainer within sandbox \"f58d68f234dd3c30de55056e116daa34d925bdecb5e0189478040e6b5b0416a2\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jan 29 11:15:59.663835 containerd[1464]: time="2025-01-29T11:15:59.663662706Z" level=info msg="CreateContainer within sandbox \"f58d68f234dd3c30de55056e116daa34d925bdecb5e0189478040e6b5b0416a2\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"9225548e98b33382660e2d5c7976e5c5671f13af9d07ede188a7555127190066\"" Jan 29 11:15:59.665003 containerd[1464]: time="2025-01-29T11:15:59.664948692Z" level=info msg="StartContainer for \"9225548e98b33382660e2d5c7976e5c5671f13af9d07ede188a7555127190066\"" Jan 29 11:15:59.727052 systemd[1]: Started cri-containerd-9225548e98b33382660e2d5c7976e5c5671f13af9d07ede188a7555127190066.scope - libcontainer container 9225548e98b33382660e2d5c7976e5c5671f13af9d07ede188a7555127190066. 
Jan 29 11:15:59.786553 containerd[1464]: time="2025-01-29T11:15:59.785813298Z" level=info msg="StartContainer for \"9225548e98b33382660e2d5c7976e5c5671f13af9d07ede188a7555127190066\" returns successfully" Jan 29 11:16:00.110830 kubelet[1802]: E0129 11:16:00.110271 1802 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:16:00.672466 kubelet[1802]: E0129 11:16:00.672266 1802 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 29 11:16:01.111469 kubelet[1802]: E0129 11:16:01.111206 1802 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:16:01.294417 containerd[1464]: time="2025-01-29T11:16:01.292559273Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:16:01.297343 containerd[1464]: time="2025-01-29T11:16:01.296691575Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.29.1: active requests=0, bytes read=34141192" Jan 29 11:16:01.298778 containerd[1464]: time="2025-01-29T11:16:01.298673542Z" level=info msg="ImageCreate event name:\"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:16:01.305927 containerd[1464]: time="2025-01-29T11:16:01.305863598Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:16:01.308432 containerd[1464]: time="2025-01-29T11:16:01.307005129Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" with image id \"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\", repo tag \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/kube-controllers@sha256:1072d6a98167a14ca361e9ce757733f9bae36d1f1c6a9621ea10934b6b1e10d9\", size \"35634244\" in 3.673415981s" Jan 29 11:16:01.308432 containerd[1464]: time="2025-01-29T11:16:01.307054449Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.29.1\" returns image reference \"sha256:6331715a2ae96b18a770a395cac108321d108e445e08b616e5bc9fbd1f9c21da\"" Jan 29 11:16:01.311260 containerd[1464]: time="2025-01-29T11:16:01.311138700Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Jan 29 11:16:01.344392 containerd[1464]: time="2025-01-29T11:16:01.343202113Z" level=info msg="CreateContainer within sandbox \"e76013cf360502e7dc3276451e014adac4ae4ee34e3ed04b3c203ba85b683d6f\" for container &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,}" Jan 29 11:16:01.372515 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3539809504.mount: Deactivated successfully. 
Jan 29 11:16:01.377475 containerd[1464]: time="2025-01-29T11:16:01.377403788Z" level=info msg="CreateContainer within sandbox \"e76013cf360502e7dc3276451e014adac4ae4ee34e3ed04b3c203ba85b683d6f\" for &ContainerMetadata{Name:calico-kube-controllers,Attempt:0,} returns container id \"779868c62175e73affc53d1cf35c13d6e1bf8a95c02e91c75c80cd77f614e6f1\"" Jan 29 11:16:01.378519 containerd[1464]: time="2025-01-29T11:16:01.378458341Z" level=info msg="StartContainer for \"779868c62175e73affc53d1cf35c13d6e1bf8a95c02e91c75c80cd77f614e6f1\"" Jan 29 11:16:01.461326 systemd[1]: Started cri-containerd-779868c62175e73affc53d1cf35c13d6e1bf8a95c02e91c75c80cd77f614e6f1.scope - libcontainer container 779868c62175e73affc53d1cf35c13d6e1bf8a95c02e91c75c80cd77f614e6f1. Jan 29 11:16:01.531572 systemd[1]: cri-containerd-9225548e98b33382660e2d5c7976e5c5671f13af9d07ede188a7555127190066.scope: Deactivated successfully. Jan 29 11:16:01.532311 systemd[1]: cri-containerd-9225548e98b33382660e2d5c7976e5c5671f13af9d07ede188a7555127190066.scope: Consumed 1.088s CPU time. Jan 29 11:16:01.570486 containerd[1464]: time="2025-01-29T11:16:01.570249448Z" level=info msg="StartContainer for \"779868c62175e73affc53d1cf35c13d6e1bf8a95c02e91c75c80cd77f614e6f1\" returns successfully" Jan 29 11:16:01.615762 containerd[1464]: time="2025-01-29T11:16:01.615554690Z" level=info msg="shim disconnected" id=9225548e98b33382660e2d5c7976e5c5671f13af9d07ede188a7555127190066 namespace=k8s.io Jan 29 11:16:01.615762 containerd[1464]: time="2025-01-29T11:16:01.615755057Z" level=warning msg="cleaning up after shim disconnected" id=9225548e98b33382660e2d5c7976e5c5671f13af9d07ede188a7555127190066 namespace=k8s.io Jan 29 11:16:01.616118 containerd[1464]: time="2025-01-29T11:16:01.615788586Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 29 11:16:01.699684 kubelet[1802]: E0129 11:16:01.698564 1802 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 29 11:16:01.724213 containerd[1464]: time="2025-01-29T11:16:01.723856562Z" level=info msg="CreateContainer within sandbox \"f58d68f234dd3c30de55056e116daa34d925bdecb5e0189478040e6b5b0416a2\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jan 29 11:16:01.769288 containerd[1464]: time="2025-01-29T11:16:01.769072432Z" level=info msg="CreateContainer within sandbox \"f58d68f234dd3c30de55056e116daa34d925bdecb5e0189478040e6b5b0416a2\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"ffc78e4a62b0b1ca6e10122f442d0eb9d0e267719dbc4af2c4e78390d27e35d4\"" Jan 29 11:16:01.770605 containerd[1464]: time="2025-01-29T11:16:01.770513394Z" level=info msg="StartContainer for \"ffc78e4a62b0b1ca6e10122f442d0eb9d0e267719dbc4af2c4e78390d27e35d4\"" Jan 29 11:16:01.787030 kubelet[1802]: I0129 11:16:01.786453 1802 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-kube-controllers-65c99dc6fb-xsr5m" podStartSLOduration=4.967035382 podStartE2EDuration="12.7864058s" podCreationTimestamp="2025-01-29 11:15:49 +0000 UTC" firstStartedPulling="2025-01-29 11:15:53.491198477 +0000 UTC m=+26.949534773" lastFinishedPulling="2025-01-29 11:16:01.310568884 +0000 UTC m=+34.768905191" observedRunningTime="2025-01-29 11:16:01.745731578 +0000 UTC m=+35.204067920" watchObservedRunningTime="2025-01-29 11:16:01.7864058 +0000 UTC m=+35.244742112" Jan 29 11:16:01.838513 systemd[1]: Started 
cri-containerd-ffc78e4a62b0b1ca6e10122f442d0eb9d0e267719dbc4af2c4e78390d27e35d4.scope - libcontainer container ffc78e4a62b0b1ca6e10122f442d0eb9d0e267719dbc4af2c4e78390d27e35d4. Jan 29 11:16:01.953146 containerd[1464]: time="2025-01-29T11:16:01.952099451Z" level=info msg="StartContainer for \"ffc78e4a62b0b1ca6e10122f442d0eb9d0e267719dbc4af2c4e78390d27e35d4\" returns successfully" Jan 29 11:16:02.112035 kubelet[1802]: E0129 11:16:02.111929 1802 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:16:02.342244 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9225548e98b33382660e2d5c7976e5c5671f13af9d07ede188a7555127190066-rootfs.mount: Deactivated successfully. Jan 29 11:16:02.708255 kubelet[1802]: I0129 11:16:02.708086 1802 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 29 11:16:02.709766 kubelet[1802]: E0129 11:16:02.709731 1802 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 29 11:16:02.764269 systemd[1]: run-containerd-runc-k8s.io-ffc78e4a62b0b1ca6e10122f442d0eb9d0e267719dbc4af2c4e78390d27e35d4-runc.XbdUPk.mount: Deactivated successfully. Jan 29 11:16:03.112497 kubelet[1802]: E0129 11:16:03.112428 1802 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:16:03.755064 kubelet[1802]: E0129 11:16:03.752344 1802 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 29 11:16:03.891436 systemd[1]: run-containerd-runc-k8s.io-ffc78e4a62b0b1ca6e10122f442d0eb9d0e267719dbc4af2c4e78390d27e35d4-runc.yidr6t.mount: Deactivated successfully. Jan 29 11:16:04.113319 kubelet[1802]: E0129 11:16:04.113245 1802 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:16:05.040147 systemd-networkd[1376]: vxlan.calico: Link UP Jan 29 11:16:05.040161 systemd-networkd[1376]: vxlan.calico: Gained carrier Jan 29 11:16:05.114824 kubelet[1802]: E0129 11:16:05.113477 1802 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:16:06.115407 kubelet[1802]: E0129 11:16:06.115277 1802 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:16:06.323964 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount692712641.mount: Deactivated successfully. 
Jan 29 11:16:06.913041 systemd-networkd[1376]: vxlan.calico: Gained IPv6LL Jan 29 11:16:07.075964 kubelet[1802]: E0129 11:16:07.075866 1802 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:16:07.115905 kubelet[1802]: E0129 11:16:07.115816 1802 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:16:07.783121 containerd[1464]: time="2025-01-29T11:16:07.783026554Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:16:07.784296 containerd[1464]: time="2025-01-29T11:16:07.784243900Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=71015561" Jan 29 11:16:07.785799 containerd[1464]: time="2025-01-29T11:16:07.785557531Z" level=info msg="ImageCreate event name:\"sha256:0dcfd986e814f68db775fba6b61fbaec3761562dc2ae3043d38dbff123e1bb1e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:16:07.790433 containerd[1464]: time="2025-01-29T11:16:07.790042749Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx@sha256:2ffeb5a7ca68f2017f0bc48251750a6e40fcd3c341b94a22fc7812dcabbb84db\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:16:07.792299 containerd[1464]: time="2025-01-29T11:16:07.792262547Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:0dcfd986e814f68db775fba6b61fbaec3761562dc2ae3043d38dbff123e1bb1e\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:2ffeb5a7ca68f2017f0bc48251750a6e40fcd3c341b94a22fc7812dcabbb84db\", size \"71015439\" in 6.481073416s" Jan 29 11:16:07.792775 containerd[1464]: time="2025-01-29T11:16:07.792598312Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:0dcfd986e814f68db775fba6b61fbaec3761562dc2ae3043d38dbff123e1bb1e\"" Jan 29 11:16:07.794407 containerd[1464]: time="2025-01-29T11:16:07.793996255Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\"" Jan 29 11:16:07.805906 containerd[1464]: time="2025-01-29T11:16:07.805867328Z" level=info msg="CreateContainer within sandbox \"c11ffecc0edc3995bcb47ee9badcd3a6c431cd384b37a48f7e1e483d30a2b087\" for container &ContainerMetadata{Name:nginx,Attempt:0,}" Jan 29 11:16:07.829738 containerd[1464]: time="2025-01-29T11:16:07.828800545Z" level=info msg="CreateContainer within sandbox \"c11ffecc0edc3995bcb47ee9badcd3a6c431cd384b37a48f7e1e483d30a2b087\" for &ContainerMetadata{Name:nginx,Attempt:0,} returns container id \"0dd83aad3024615baeca227bc00ed168ff77a21e0610a9bf6b0c9baed3c4e6be\"" Jan 29 11:16:07.830166 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1682801567.mount: Deactivated successfully. Jan 29 11:16:07.832297 containerd[1464]: time="2025-01-29T11:16:07.830991905Z" level=info msg="StartContainer for \"0dd83aad3024615baeca227bc00ed168ff77a21e0610a9bf6b0c9baed3c4e6be\"" Jan 29 11:16:07.879988 systemd[1]: Started cri-containerd-0dd83aad3024615baeca227bc00ed168ff77a21e0610a9bf6b0c9baed3c4e6be.scope - libcontainer container 0dd83aad3024615baeca227bc00ed168ff77a21e0610a9bf6b0c9baed3c4e6be. 
Jan 29 11:16:07.922457 containerd[1464]: time="2025-01-29T11:16:07.922405063Z" level=info msg="StartContainer for \"0dd83aad3024615baeca227bc00ed168ff77a21e0610a9bf6b0c9baed3c4e6be\" returns successfully" Jan 29 11:16:08.116086 kubelet[1802]: E0129 11:16:08.116008 1802 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:16:08.784948 kubelet[1802]: I0129 11:16:08.784549 1802 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-l67s5" podStartSLOduration=10.78452977 podStartE2EDuration="10.78452977s" podCreationTimestamp="2025-01-29 11:15:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-01-29 11:16:02.735458743 +0000 UTC m=+36.193795054" watchObservedRunningTime="2025-01-29 11:16:08.78452977 +0000 UTC m=+42.242866070" Jan 29 11:16:09.116549 kubelet[1802]: E0129 11:16:09.116380 1802 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:16:10.117316 kubelet[1802]: E0129 11:16:10.117235 1802 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:16:10.129032 containerd[1464]: time="2025-01-29T11:16:10.127950175Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:16:10.129032 containerd[1464]: time="2025-01-29T11:16:10.128974354Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1: active requests=0, bytes read=10501081" Jan 29 11:16:10.129983 containerd[1464]: time="2025-01-29T11:16:10.129914570Z" level=info msg="ImageCreate event name:\"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:16:10.132827 containerd[1464]: time="2025-01-29T11:16:10.132746441Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:16:10.134240 containerd[1464]: time="2025-01-29T11:16:10.133538738Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" with image id \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\", size \"11994117\" in 2.339511972s" Jan 29 11:16:10.134240 containerd[1464]: time="2025-01-29T11:16:10.133581242Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" returns image reference \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\"" Jan 29 11:16:10.138371 containerd[1464]: time="2025-01-29T11:16:10.137965542Z" level=info msg="CreateContainer within sandbox \"afdb88dc71461429ad73dadb70900f7fe6bf1d11e300aae85fa9f91cf6db29ff\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Jan 29 11:16:10.163297 containerd[1464]: time="2025-01-29T11:16:10.163221416Z" level=info msg="CreateContainer within sandbox \"afdb88dc71461429ad73dadb70900f7fe6bf1d11e300aae85fa9f91cf6db29ff\" for 
&ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"6f16de33399aca7f62ddb6ffc2f320511dbbfb50aa6a4b2ada6750da3ef30ab8\"" Jan 29 11:16:10.164219 containerd[1464]: time="2025-01-29T11:16:10.164176182Z" level=info msg="StartContainer for \"6f16de33399aca7f62ddb6ffc2f320511dbbfb50aa6a4b2ada6750da3ef30ab8\"" Jan 29 11:16:10.215022 systemd[1]: Started cri-containerd-6f16de33399aca7f62ddb6ffc2f320511dbbfb50aa6a4b2ada6750da3ef30ab8.scope - libcontainer container 6f16de33399aca7f62ddb6ffc2f320511dbbfb50aa6a4b2ada6750da3ef30ab8. Jan 29 11:16:10.257978 containerd[1464]: time="2025-01-29T11:16:10.257906791Z" level=info msg="StartContainer for \"6f16de33399aca7f62ddb6ffc2f320511dbbfb50aa6a4b2ada6750da3ef30ab8\" returns successfully" Jan 29 11:16:10.805168 kubelet[1802]: I0129 11:16:10.804805 1802 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-dx4jg" podStartSLOduration=26.983854626 podStartE2EDuration="43.804783464s" podCreationTimestamp="2025-01-29 11:15:27 +0000 UTC" firstStartedPulling="2025-01-29 11:15:53.31465788 +0000 UTC m=+26.772994183" lastFinishedPulling="2025-01-29 11:16:10.135586732 +0000 UTC m=+43.593923021" observedRunningTime="2025-01-29 11:16:10.804699802 +0000 UTC m=+44.263036111" watchObservedRunningTime="2025-01-29 11:16:10.804783464 +0000 UTC m=+44.263119773" Jan 29 11:16:10.805168 kubelet[1802]: I0129 11:16:10.804948 1802 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nginx-deployment-8587fbcb89-hgrqg" podStartSLOduration=11.558208631 podStartE2EDuration="25.804944411s" podCreationTimestamp="2025-01-29 11:15:45 +0000 UTC" firstStartedPulling="2025-01-29 11:15:53.547100485 +0000 UTC m=+27.005436771" lastFinishedPulling="2025-01-29 11:16:07.793836248 +0000 UTC m=+41.252172551" observedRunningTime="2025-01-29 11:16:08.785047528 +0000 UTC m=+42.243383837" watchObservedRunningTime="2025-01-29 11:16:10.804944411 +0000 UTC m=+44.263280719" Jan 29 11:16:11.117585 kubelet[1802]: E0129 11:16:11.117497 1802 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:16:11.276826 kubelet[1802]: I0129 11:16:11.276548 1802 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Jan 29 11:16:11.276826 kubelet[1802]: I0129 11:16:11.276583 1802 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Jan 29 11:16:12.117979 kubelet[1802]: E0129 11:16:12.117869 1802 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:16:12.712531 kubelet[1802]: I0129 11:16:12.712429 1802 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 29 11:16:13.118837 kubelet[1802]: E0129 11:16:13.118761 1802 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:16:14.119347 kubelet[1802]: E0129 11:16:14.119264 1802 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:16:15.119608 kubelet[1802]: E0129 11:16:15.119484 1802 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:16:16.120330 kubelet[1802]: E0129 11:16:16.120245 1802 
file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:16:16.214627 systemd[1]: Created slice kubepods-besteffort-podfb1c5b5c_85e1_4efb_bdb3_2849f3441aaf.slice - libcontainer container kubepods-besteffort-podfb1c5b5c_85e1_4efb_bdb3_2849f3441aaf.slice. Jan 29 11:16:16.278155 kubelet[1802]: I0129 11:16:16.278086 1802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/fb1c5b5c-85e1-4efb-bdb3-2849f3441aaf-data\") pod \"nfs-server-provisioner-0\" (UID: \"fb1c5b5c-85e1-4efb-bdb3-2849f3441aaf\") " pod="default/nfs-server-provisioner-0" Jan 29 11:16:16.278389 kubelet[1802]: I0129 11:16:16.278184 1802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hktlx\" (UniqueName: \"kubernetes.io/projected/fb1c5b5c-85e1-4efb-bdb3-2849f3441aaf-kube-api-access-hktlx\") pod \"nfs-server-provisioner-0\" (UID: \"fb1c5b5c-85e1-4efb-bdb3-2849f3441aaf\") " pod="default/nfs-server-provisioner-0" Jan 29 11:16:16.521873 containerd[1464]: time="2025-01-29T11:16:16.521280016Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:fb1c5b5c-85e1-4efb-bdb3-2849f3441aaf,Namespace:default,Attempt:0,}" Jan 29 11:16:16.855179 systemd-networkd[1376]: cali60e51b789ff: Link UP Jan 29 11:16:16.856927 systemd-networkd[1376]: cali60e51b789ff: Gained carrier Jan 29 11:16:16.883836 containerd[1464]: 2025-01-29 11:16:16.706 [INFO][4292] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {143.198.151.197-k8s-nfs--server--provisioner--0-eth0 nfs-server-provisioner- default fb1c5b5c-85e1-4efb-bdb3-2849f3441aaf 1463 0 2025-01-29 11:16:16 +0000 UTC map[app:nfs-server-provisioner apps.kubernetes.io/pod-index:0 chart:nfs-server-provisioner-1.8.0 controller-revision-hash:nfs-server-provisioner-d5cbb7f57 heritage:Helm projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:nfs-server-provisioner release:nfs-server-provisioner statefulset.kubernetes.io/pod-name:nfs-server-provisioner-0] map[] [] [] []} {k8s 143.198.151.197 nfs-server-provisioner-0 eth0 nfs-server-provisioner [] [] [kns.default ksa.default.nfs-server-provisioner] cali60e51b789ff [{nfs TCP 2049 0 } {nfs-udp UDP 2049 0 } {nlockmgr TCP 32803 0 } {nlockmgr-udp UDP 32803 0 } {mountd TCP 20048 0 } {mountd-udp UDP 20048 0 } {rquotad TCP 875 0 } {rquotad-udp UDP 875 0 } {rpcbind TCP 111 0 } {rpcbind-udp UDP 111 0 } {statd TCP 662 0 } {statd-udp UDP 662 0 }] []}} ContainerID="e9211dcd4eb536aa9024630041c1201ab6f5452657ca53f14fe8481b5934a239" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="143.198.151.197-k8s-nfs--server--provisioner--0-" Jan 29 11:16:16.883836 containerd[1464]: 2025-01-29 11:16:16.708 [INFO][4292] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="e9211dcd4eb536aa9024630041c1201ab6f5452657ca53f14fe8481b5934a239" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="143.198.151.197-k8s-nfs--server--provisioner--0-eth0" Jan 29 11:16:16.883836 containerd[1464]: 2025-01-29 11:16:16.753 [INFO][4303] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e9211dcd4eb536aa9024630041c1201ab6f5452657ca53f14fe8481b5934a239" HandleID="k8s-pod-network.e9211dcd4eb536aa9024630041c1201ab6f5452657ca53f14fe8481b5934a239" 
Workload="143.198.151.197-k8s-nfs--server--provisioner--0-eth0" Jan 29 11:16:16.883836 containerd[1464]: 2025-01-29 11:16:16.769 [INFO][4303] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="e9211dcd4eb536aa9024630041c1201ab6f5452657ca53f14fe8481b5934a239" HandleID="k8s-pod-network.e9211dcd4eb536aa9024630041c1201ab6f5452657ca53f14fe8481b5934a239" Workload="143.198.151.197-k8s-nfs--server--provisioner--0-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002916d0), Attrs:map[string]string{"namespace":"default", "node":"143.198.151.197", "pod":"nfs-server-provisioner-0", "timestamp":"2025-01-29 11:16:16.753274369 +0000 UTC"}, Hostname:"143.198.151.197", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 29 11:16:16.883836 containerd[1464]: 2025-01-29 11:16:16.770 [INFO][4303] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 11:16:16.883836 containerd[1464]: 2025-01-29 11:16:16.770 [INFO][4303] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 29 11:16:16.883836 containerd[1464]: 2025-01-29 11:16:16.770 [INFO][4303] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '143.198.151.197' Jan 29 11:16:16.883836 containerd[1464]: 2025-01-29 11:16:16.777 [INFO][4303] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.e9211dcd4eb536aa9024630041c1201ab6f5452657ca53f14fe8481b5934a239" host="143.198.151.197" Jan 29 11:16:16.883836 containerd[1464]: 2025-01-29 11:16:16.789 [INFO][4303] ipam/ipam.go 372: Looking up existing affinities for host host="143.198.151.197" Jan 29 11:16:16.883836 containerd[1464]: 2025-01-29 11:16:16.803 [INFO][4303] ipam/ipam.go 489: Trying affinity for 192.168.42.64/26 host="143.198.151.197" Jan 29 11:16:16.883836 containerd[1464]: 2025-01-29 11:16:16.812 [INFO][4303] ipam/ipam.go 155: Attempting to load block cidr=192.168.42.64/26 host="143.198.151.197" Jan 29 11:16:16.883836 containerd[1464]: 2025-01-29 11:16:16.818 [INFO][4303] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.42.64/26 host="143.198.151.197" Jan 29 11:16:16.883836 containerd[1464]: 2025-01-29 11:16:16.818 [INFO][4303] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.42.64/26 handle="k8s-pod-network.e9211dcd4eb536aa9024630041c1201ab6f5452657ca53f14fe8481b5934a239" host="143.198.151.197" Jan 29 11:16:16.883836 containerd[1464]: 2025-01-29 11:16:16.821 [INFO][4303] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.e9211dcd4eb536aa9024630041c1201ab6f5452657ca53f14fe8481b5934a239 Jan 29 11:16:16.883836 containerd[1464]: 2025-01-29 11:16:16.832 [INFO][4303] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.42.64/26 handle="k8s-pod-network.e9211dcd4eb536aa9024630041c1201ab6f5452657ca53f14fe8481b5934a239" host="143.198.151.197" Jan 29 11:16:16.883836 containerd[1464]: 2025-01-29 11:16:16.848 [INFO][4303] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.42.68/26] block=192.168.42.64/26 handle="k8s-pod-network.e9211dcd4eb536aa9024630041c1201ab6f5452657ca53f14fe8481b5934a239" host="143.198.151.197" Jan 29 11:16:16.883836 containerd[1464]: 2025-01-29 11:16:16.849 [INFO][4303] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.42.68/26] handle="k8s-pod-network.e9211dcd4eb536aa9024630041c1201ab6f5452657ca53f14fe8481b5934a239" host="143.198.151.197" Jan 29 11:16:16.883836 
containerd[1464]: 2025-01-29 11:16:16.849 [INFO][4303] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 11:16:16.883836 containerd[1464]: 2025-01-29 11:16:16.849 [INFO][4303] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.42.68/26] IPv6=[] ContainerID="e9211dcd4eb536aa9024630041c1201ab6f5452657ca53f14fe8481b5934a239" HandleID="k8s-pod-network.e9211dcd4eb536aa9024630041c1201ab6f5452657ca53f14fe8481b5934a239" Workload="143.198.151.197-k8s-nfs--server--provisioner--0-eth0" Jan 29 11:16:16.884644 containerd[1464]: 2025-01-29 11:16:16.851 [INFO][4292] cni-plugin/k8s.go 386: Populated endpoint ContainerID="e9211dcd4eb536aa9024630041c1201ab6f5452657ca53f14fe8481b5934a239" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="143.198.151.197-k8s-nfs--server--provisioner--0-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"143.198.151.197-k8s-nfs--server--provisioner--0-eth0", GenerateName:"nfs-server-provisioner-", Namespace:"default", SelfLink:"", UID:"fb1c5b5c-85e1-4efb-bdb3-2849f3441aaf", ResourceVersion:"1463", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 11, 16, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nfs-server-provisioner", "apps.kubernetes.io/pod-index":"0", "chart":"nfs-server-provisioner-1.8.0", "controller-revision-hash":"nfs-server-provisioner-d5cbb7f57", "heritage":"Helm", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"nfs-server-provisioner", "release":"nfs-server-provisioner", "statefulset.kubernetes.io/pod-name":"nfs-server-provisioner-0"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"143.198.151.197", ContainerID:"", Pod:"nfs-server-provisioner-0", Endpoint:"eth0", ServiceAccountName:"nfs-server-provisioner", IPNetworks:[]string{"192.168.42.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.nfs-server-provisioner"}, InterfaceName:"cali60e51b789ff", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"nfs", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nfs-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x36b, HostPort:0x0, HostIP:""}, 
v3.WorkloadEndpointPort{Name:"rpcbind", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x296, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x296, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 11:16:16.884644 containerd[1464]: 2025-01-29 11:16:16.852 [INFO][4292] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.42.68/32] ContainerID="e9211dcd4eb536aa9024630041c1201ab6f5452657ca53f14fe8481b5934a239" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="143.198.151.197-k8s-nfs--server--provisioner--0-eth0" Jan 29 11:16:16.884644 containerd[1464]: 2025-01-29 11:16:16.852 [INFO][4292] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali60e51b789ff ContainerID="e9211dcd4eb536aa9024630041c1201ab6f5452657ca53f14fe8481b5934a239" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="143.198.151.197-k8s-nfs--server--provisioner--0-eth0" Jan 29 11:16:16.884644 containerd[1464]: 2025-01-29 11:16:16.856 [INFO][4292] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e9211dcd4eb536aa9024630041c1201ab6f5452657ca53f14fe8481b5934a239" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="143.198.151.197-k8s-nfs--server--provisioner--0-eth0" Jan 29 11:16:16.884876 containerd[1464]: 2025-01-29 11:16:16.857 [INFO][4292] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="e9211dcd4eb536aa9024630041c1201ab6f5452657ca53f14fe8481b5934a239" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="143.198.151.197-k8s-nfs--server--provisioner--0-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"143.198.151.197-k8s-nfs--server--provisioner--0-eth0", GenerateName:"nfs-server-provisioner-", Namespace:"default", SelfLink:"", UID:"fb1c5b5c-85e1-4efb-bdb3-2849f3441aaf", ResourceVersion:"1463", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 11, 16, 16, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nfs-server-provisioner", "apps.kubernetes.io/pod-index":"0", "chart":"nfs-server-provisioner-1.8.0", "controller-revision-hash":"nfs-server-provisioner-d5cbb7f57", "heritage":"Helm", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"nfs-server-provisioner", "release":"nfs-server-provisioner", "statefulset.kubernetes.io/pod-name":"nfs-server-provisioner-0"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"143.198.151.197", ContainerID:"e9211dcd4eb536aa9024630041c1201ab6f5452657ca53f14fe8481b5934a239", Pod:"nfs-server-provisioner-0", Endpoint:"eth0", ServiceAccountName:"nfs-server-provisioner", IPNetworks:[]string{"192.168.42.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", 
"ksa.default.nfs-server-provisioner"}, InterfaceName:"cali60e51b789ff", MAC:"0e:21:23:1d:ac:7c", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"nfs", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nfs-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x296, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x296, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 11:16:16.884876 containerd[1464]: 2025-01-29 11:16:16.879 [INFO][4292] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="e9211dcd4eb536aa9024630041c1201ab6f5452657ca53f14fe8481b5934a239" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="143.198.151.197-k8s-nfs--server--provisioner--0-eth0" Jan 29 11:16:16.945732 containerd[1464]: time="2025-01-29T11:16:16.945468507Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 11:16:16.945732 containerd[1464]: time="2025-01-29T11:16:16.945626661Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 11:16:16.946144 containerd[1464]: time="2025-01-29T11:16:16.945762684Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:16:16.946144 containerd[1464]: time="2025-01-29T11:16:16.945953205Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:16:16.980047 systemd[1]: run-containerd-runc-k8s.io-e9211dcd4eb536aa9024630041c1201ab6f5452657ca53f14fe8481b5934a239-runc.JocJ0d.mount: Deactivated successfully. Jan 29 11:16:16.988016 systemd[1]: Started cri-containerd-e9211dcd4eb536aa9024630041c1201ab6f5452657ca53f14fe8481b5934a239.scope - libcontainer container e9211dcd4eb536aa9024630041c1201ab6f5452657ca53f14fe8481b5934a239. 
Jan 29 11:16:17.042230 containerd[1464]: time="2025-01-29T11:16:17.042107205Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:fb1c5b5c-85e1-4efb-bdb3-2849f3441aaf,Namespace:default,Attempt:0,} returns sandbox id \"e9211dcd4eb536aa9024630041c1201ab6f5452657ca53f14fe8481b5934a239\"" Jan 29 11:16:17.045547 containerd[1464]: time="2025-01-29T11:16:17.045132927Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\"" Jan 29 11:16:17.121132 kubelet[1802]: E0129 11:16:17.120973 1802 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:16:18.123944 kubelet[1802]: E0129 11:16:18.123841 1802 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:16:18.497791 systemd-networkd[1376]: cali60e51b789ff: Gained IPv6LL Jan 29 11:16:19.124446 kubelet[1802]: E0129 11:16:19.124390 1802 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:16:19.428393 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount454345497.mount: Deactivated successfully. Jan 29 11:16:20.125243 kubelet[1802]: E0129 11:16:20.124783 1802 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:16:21.126108 kubelet[1802]: E0129 11:16:21.126065 1802 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:16:21.841212 containerd[1464]: time="2025-01-29T11:16:21.841160557Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:16:21.842874 containerd[1464]: time="2025-01-29T11:16:21.842812969Z" level=info msg="stop pulling image registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8: active requests=0, bytes read=91039406" Jan 29 11:16:21.843879 containerd[1464]: time="2025-01-29T11:16:21.843846890Z" level=info msg="ImageCreate event name:\"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:16:21.851729 containerd[1464]: time="2025-01-29T11:16:21.851660949Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:16:21.852599 containerd[1464]: time="2025-01-29T11:16:21.852555293Z" level=info msg="Pulled image \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" with image id \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\", repo tag \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\", repo digest \"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\", size \"91036984\" in 4.807380798s" Jan 29 11:16:21.853660 containerd[1464]: time="2025-01-29T11:16:21.853617920Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" returns image reference \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\"" Jan 29 11:16:21.856484 containerd[1464]: time="2025-01-29T11:16:21.856454248Z" level=info msg="CreateContainer within sandbox \"e9211dcd4eb536aa9024630041c1201ab6f5452657ca53f14fe8481b5934a239\" for container 
&ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,}" Jan 29 11:16:21.878922 containerd[1464]: time="2025-01-29T11:16:21.878743570Z" level=info msg="CreateContainer within sandbox \"e9211dcd4eb536aa9024630041c1201ab6f5452657ca53f14fe8481b5934a239\" for &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,} returns container id \"ac45eeb1a56122ad66dd06e86e09028089fcb41893c371e09a47ca599afbf4a9\"" Jan 29 11:16:21.880253 containerd[1464]: time="2025-01-29T11:16:21.879822542Z" level=info msg="StartContainer for \"ac45eeb1a56122ad66dd06e86e09028089fcb41893c371e09a47ca599afbf4a9\"" Jan 29 11:16:21.929575 systemd[1]: Started cri-containerd-ac45eeb1a56122ad66dd06e86e09028089fcb41893c371e09a47ca599afbf4a9.scope - libcontainer container ac45eeb1a56122ad66dd06e86e09028089fcb41893c371e09a47ca599afbf4a9. Jan 29 11:16:21.990463 containerd[1464]: time="2025-01-29T11:16:21.989730563Z" level=info msg="StartContainer for \"ac45eeb1a56122ad66dd06e86e09028089fcb41893c371e09a47ca599afbf4a9\" returns successfully" Jan 29 11:16:22.127018 kubelet[1802]: E0129 11:16:22.126843 1802 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:16:22.861631 kubelet[1802]: I0129 11:16:22.861482 1802 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nfs-server-provisioner-0" podStartSLOduration=2.051221163 podStartE2EDuration="6.861464859s" podCreationTimestamp="2025-01-29 11:16:16 +0000 UTC" firstStartedPulling="2025-01-29 11:16:17.04466535 +0000 UTC m=+50.503001661" lastFinishedPulling="2025-01-29 11:16:21.854909057 +0000 UTC m=+55.313245357" observedRunningTime="2025-01-29 11:16:22.858294338 +0000 UTC m=+56.316630646" watchObservedRunningTime="2025-01-29 11:16:22.861464859 +0000 UTC m=+56.319801164" Jan 29 11:16:23.127542 kubelet[1802]: E0129 11:16:23.127371 1802 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:16:24.128341 kubelet[1802]: E0129 11:16:24.128273 1802 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:16:25.129160 kubelet[1802]: E0129 11:16:25.129080 1802 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:16:26.129842 kubelet[1802]: E0129 11:16:26.129749 1802 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:16:27.075974 kubelet[1802]: E0129 11:16:27.075879 1802 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:16:27.131553 kubelet[1802]: E0129 11:16:27.130880 1802 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:16:27.138080 containerd[1464]: time="2025-01-29T11:16:27.137851063Z" level=info msg="StopPodSandbox for \"7a7869065ef017da3ba67d92cbf3b6687aba6cd84f0e74ead54d96570f46440c\"" Jan 29 11:16:27.138080 containerd[1464]: time="2025-01-29T11:16:27.137995830Z" level=info msg="TearDown network for sandbox \"7a7869065ef017da3ba67d92cbf3b6687aba6cd84f0e74ead54d96570f46440c\" successfully" Jan 29 11:16:27.138080 containerd[1464]: time="2025-01-29T11:16:27.138008130Z" level=info msg="StopPodSandbox for \"7a7869065ef017da3ba67d92cbf3b6687aba6cd84f0e74ead54d96570f46440c\" returns successfully" Jan 29 11:16:27.143195 containerd[1464]: 
time="2025-01-29T11:16:27.143073647Z" level=info msg="RemovePodSandbox for \"7a7869065ef017da3ba67d92cbf3b6687aba6cd84f0e74ead54d96570f46440c\"" Jan 29 11:16:27.151475 containerd[1464]: time="2025-01-29T11:16:27.151412735Z" level=info msg="Forcibly stopping sandbox \"7a7869065ef017da3ba67d92cbf3b6687aba6cd84f0e74ead54d96570f46440c\"" Jan 29 11:16:27.151758 containerd[1464]: time="2025-01-29T11:16:27.151677142Z" level=info msg="TearDown network for sandbox \"7a7869065ef017da3ba67d92cbf3b6687aba6cd84f0e74ead54d96570f46440c\" successfully" Jan 29 11:16:27.161793 containerd[1464]: time="2025-01-29T11:16:27.161729919Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"7a7869065ef017da3ba67d92cbf3b6687aba6cd84f0e74ead54d96570f46440c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 29 11:16:27.162194 containerd[1464]: time="2025-01-29T11:16:27.162032536Z" level=info msg="RemovePodSandbox \"7a7869065ef017da3ba67d92cbf3b6687aba6cd84f0e74ead54d96570f46440c\" returns successfully" Jan 29 11:16:27.162971 containerd[1464]: time="2025-01-29T11:16:27.162908361Z" level=info msg="StopPodSandbox for \"021b32372830a8acef9731288e439f32f3a2446b86ea886ee3197c29b4574b6b\"" Jan 29 11:16:27.163174 containerd[1464]: time="2025-01-29T11:16:27.163067211Z" level=info msg="TearDown network for sandbox \"021b32372830a8acef9731288e439f32f3a2446b86ea886ee3197c29b4574b6b\" successfully" Jan 29 11:16:27.163174 containerd[1464]: time="2025-01-29T11:16:27.163086169Z" level=info msg="StopPodSandbox for \"021b32372830a8acef9731288e439f32f3a2446b86ea886ee3197c29b4574b6b\" returns successfully" Jan 29 11:16:27.165358 containerd[1464]: time="2025-01-29T11:16:27.163732953Z" level=info msg="RemovePodSandbox for \"021b32372830a8acef9731288e439f32f3a2446b86ea886ee3197c29b4574b6b\"" Jan 29 11:16:27.165358 containerd[1464]: time="2025-01-29T11:16:27.163777201Z" level=info msg="Forcibly stopping sandbox \"021b32372830a8acef9731288e439f32f3a2446b86ea886ee3197c29b4574b6b\"" Jan 29 11:16:27.165358 containerd[1464]: time="2025-01-29T11:16:27.163897410Z" level=info msg="TearDown network for sandbox \"021b32372830a8acef9731288e439f32f3a2446b86ea886ee3197c29b4574b6b\" successfully" Jan 29 11:16:27.167871 containerd[1464]: time="2025-01-29T11:16:27.167826879Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"021b32372830a8acef9731288e439f32f3a2446b86ea886ee3197c29b4574b6b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 29 11:16:27.168153 containerd[1464]: time="2025-01-29T11:16:27.168131473Z" level=info msg="RemovePodSandbox \"021b32372830a8acef9731288e439f32f3a2446b86ea886ee3197c29b4574b6b\" returns successfully" Jan 29 11:16:27.169125 containerd[1464]: time="2025-01-29T11:16:27.169089327Z" level=info msg="StopPodSandbox for \"c75ace98faa2013de60346c00859f04d782d70d98d51e0702c5b6c8189e6402a\"" Jan 29 11:16:27.169279 containerd[1464]: time="2025-01-29T11:16:27.169244705Z" level=info msg="TearDown network for sandbox \"c75ace98faa2013de60346c00859f04d782d70d98d51e0702c5b6c8189e6402a\" successfully" Jan 29 11:16:27.169279 containerd[1464]: time="2025-01-29T11:16:27.169259479Z" level=info msg="StopPodSandbox for \"c75ace98faa2013de60346c00859f04d782d70d98d51e0702c5b6c8189e6402a\" returns successfully" Jan 29 11:16:27.171270 containerd[1464]: time="2025-01-29T11:16:27.169765505Z" level=info msg="RemovePodSandbox for \"c75ace98faa2013de60346c00859f04d782d70d98d51e0702c5b6c8189e6402a\"" Jan 29 11:16:27.171270 containerd[1464]: time="2025-01-29T11:16:27.169794624Z" level=info msg="Forcibly stopping sandbox \"c75ace98faa2013de60346c00859f04d782d70d98d51e0702c5b6c8189e6402a\"" Jan 29 11:16:27.171270 containerd[1464]: time="2025-01-29T11:16:27.169872590Z" level=info msg="TearDown network for sandbox \"c75ace98faa2013de60346c00859f04d782d70d98d51e0702c5b6c8189e6402a\" successfully" Jan 29 11:16:27.175550 containerd[1464]: time="2025-01-29T11:16:27.175367875Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"c75ace98faa2013de60346c00859f04d782d70d98d51e0702c5b6c8189e6402a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 29 11:16:27.175550 containerd[1464]: time="2025-01-29T11:16:27.175443174Z" level=info msg="RemovePodSandbox \"c75ace98faa2013de60346c00859f04d782d70d98d51e0702c5b6c8189e6402a\" returns successfully" Jan 29 11:16:27.176419 containerd[1464]: time="2025-01-29T11:16:27.176339609Z" level=info msg="StopPodSandbox for \"2c1334128a0d27d07bf18056a7b43695cc28de03a2ad9d841b11b977ef628510\"" Jan 29 11:16:27.176575 containerd[1464]: time="2025-01-29T11:16:27.176494030Z" level=info msg="TearDown network for sandbox \"2c1334128a0d27d07bf18056a7b43695cc28de03a2ad9d841b11b977ef628510\" successfully" Jan 29 11:16:27.176575 containerd[1464]: time="2025-01-29T11:16:27.176507535Z" level=info msg="StopPodSandbox for \"2c1334128a0d27d07bf18056a7b43695cc28de03a2ad9d841b11b977ef628510\" returns successfully" Jan 29 11:16:27.176975 containerd[1464]: time="2025-01-29T11:16:27.176945741Z" level=info msg="RemovePodSandbox for \"2c1334128a0d27d07bf18056a7b43695cc28de03a2ad9d841b11b977ef628510\"" Jan 29 11:16:27.177072 containerd[1464]: time="2025-01-29T11:16:27.176978724Z" level=info msg="Forcibly stopping sandbox \"2c1334128a0d27d07bf18056a7b43695cc28de03a2ad9d841b11b977ef628510\"" Jan 29 11:16:27.177125 containerd[1464]: time="2025-01-29T11:16:27.177070989Z" level=info msg="TearDown network for sandbox \"2c1334128a0d27d07bf18056a7b43695cc28de03a2ad9d841b11b977ef628510\" successfully" Jan 29 11:16:27.180490 containerd[1464]: time="2025-01-29T11:16:27.180399815Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"2c1334128a0d27d07bf18056a7b43695cc28de03a2ad9d841b11b977ef628510\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 29 11:16:27.180648 containerd[1464]: time="2025-01-29T11:16:27.180554992Z" level=info msg="RemovePodSandbox \"2c1334128a0d27d07bf18056a7b43695cc28de03a2ad9d841b11b977ef628510\" returns successfully" Jan 29 11:16:27.181592 containerd[1464]: time="2025-01-29T11:16:27.181325633Z" level=info msg="StopPodSandbox for \"2f6cbe49924bea5d1c7307438c648140c766d9f75d565f4c776302d4612397e4\"" Jan 29 11:16:27.181592 containerd[1464]: time="2025-01-29T11:16:27.181475819Z" level=info msg="TearDown network for sandbox \"2f6cbe49924bea5d1c7307438c648140c766d9f75d565f4c776302d4612397e4\" successfully" Jan 29 11:16:27.181592 containerd[1464]: time="2025-01-29T11:16:27.181496155Z" level=info msg="StopPodSandbox for \"2f6cbe49924bea5d1c7307438c648140c766d9f75d565f4c776302d4612397e4\" returns successfully" Jan 29 11:16:27.182482 containerd[1464]: time="2025-01-29T11:16:27.182203182Z" level=info msg="RemovePodSandbox for \"2f6cbe49924bea5d1c7307438c648140c766d9f75d565f4c776302d4612397e4\"" Jan 29 11:16:27.182482 containerd[1464]: time="2025-01-29T11:16:27.182241935Z" level=info msg="Forcibly stopping sandbox \"2f6cbe49924bea5d1c7307438c648140c766d9f75d565f4c776302d4612397e4\"" Jan 29 11:16:27.182482 containerd[1464]: time="2025-01-29T11:16:27.182349544Z" level=info msg="TearDown network for sandbox \"2f6cbe49924bea5d1c7307438c648140c766d9f75d565f4c776302d4612397e4\" successfully" Jan 29 11:16:27.186799 containerd[1464]: time="2025-01-29T11:16:27.186574623Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"2f6cbe49924bea5d1c7307438c648140c766d9f75d565f4c776302d4612397e4\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 29 11:16:27.187273 containerd[1464]: time="2025-01-29T11:16:27.187119766Z" level=info msg="RemovePodSandbox \"2f6cbe49924bea5d1c7307438c648140c766d9f75d565f4c776302d4612397e4\" returns successfully" Jan 29 11:16:27.187806 containerd[1464]: time="2025-01-29T11:16:27.187763505Z" level=info msg="StopPodSandbox for \"9ac6324a6d926a0b4b2520382f0bd13e24771692ee25cc86f1447b278916565e\"" Jan 29 11:16:27.187939 containerd[1464]: time="2025-01-29T11:16:27.187909320Z" level=info msg="TearDown network for sandbox \"9ac6324a6d926a0b4b2520382f0bd13e24771692ee25cc86f1447b278916565e\" successfully" Jan 29 11:16:27.187939 containerd[1464]: time="2025-01-29T11:16:27.187931937Z" level=info msg="StopPodSandbox for \"9ac6324a6d926a0b4b2520382f0bd13e24771692ee25cc86f1447b278916565e\" returns successfully" Jan 29 11:16:27.188791 containerd[1464]: time="2025-01-29T11:16:27.188752431Z" level=info msg="RemovePodSandbox for \"9ac6324a6d926a0b4b2520382f0bd13e24771692ee25cc86f1447b278916565e\"" Jan 29 11:16:27.188791 containerd[1464]: time="2025-01-29T11:16:27.188790593Z" level=info msg="Forcibly stopping sandbox \"9ac6324a6d926a0b4b2520382f0bd13e24771692ee25cc86f1447b278916565e\"" Jan 29 11:16:27.188933 containerd[1464]: time="2025-01-29T11:16:27.188884285Z" level=info msg="TearDown network for sandbox \"9ac6324a6d926a0b4b2520382f0bd13e24771692ee25cc86f1447b278916565e\" successfully" Jan 29 11:16:27.192156 containerd[1464]: time="2025-01-29T11:16:27.192103595Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"9ac6324a6d926a0b4b2520382f0bd13e24771692ee25cc86f1447b278916565e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 29 11:16:27.192320 containerd[1464]: time="2025-01-29T11:16:27.192182399Z" level=info msg="RemovePodSandbox \"9ac6324a6d926a0b4b2520382f0bd13e24771692ee25cc86f1447b278916565e\" returns successfully" Jan 29 11:16:27.193085 containerd[1464]: time="2025-01-29T11:16:27.193035757Z" level=info msg="StopPodSandbox for \"8dcd1f169270dbe7baa4197057251cacc78bbcaccb42a480aae8ccadd268b7ef\"" Jan 29 11:16:27.193203 containerd[1464]: time="2025-01-29T11:16:27.193187386Z" level=info msg="TearDown network for sandbox \"8dcd1f169270dbe7baa4197057251cacc78bbcaccb42a480aae8ccadd268b7ef\" successfully" Jan 29 11:16:27.193255 containerd[1464]: time="2025-01-29T11:16:27.193204205Z" level=info msg="StopPodSandbox for \"8dcd1f169270dbe7baa4197057251cacc78bbcaccb42a480aae8ccadd268b7ef\" returns successfully" Jan 29 11:16:27.193802 containerd[1464]: time="2025-01-29T11:16:27.193761140Z" level=info msg="RemovePodSandbox for \"8dcd1f169270dbe7baa4197057251cacc78bbcaccb42a480aae8ccadd268b7ef\"" Jan 29 11:16:27.193802 containerd[1464]: time="2025-01-29T11:16:27.193798337Z" level=info msg="Forcibly stopping sandbox \"8dcd1f169270dbe7baa4197057251cacc78bbcaccb42a480aae8ccadd268b7ef\"" Jan 29 11:16:27.193963 containerd[1464]: time="2025-01-29T11:16:27.193888386Z" level=info msg="TearDown network for sandbox \"8dcd1f169270dbe7baa4197057251cacc78bbcaccb42a480aae8ccadd268b7ef\" successfully" Jan 29 11:16:27.198096 containerd[1464]: time="2025-01-29T11:16:27.198022204Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"8dcd1f169270dbe7baa4197057251cacc78bbcaccb42a480aae8ccadd268b7ef\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 29 11:16:27.198287 containerd[1464]: time="2025-01-29T11:16:27.198115260Z" level=info msg="RemovePodSandbox \"8dcd1f169270dbe7baa4197057251cacc78bbcaccb42a480aae8ccadd268b7ef\" returns successfully" Jan 29 11:16:27.199301 containerd[1464]: time="2025-01-29T11:16:27.198996674Z" level=info msg="StopPodSandbox for \"4c36c44beb11662fda744c07a063bfa58d15a876a89663cab410190e8e7f300a\"" Jan 29 11:16:27.199301 containerd[1464]: time="2025-01-29T11:16:27.199146715Z" level=info msg="TearDown network for sandbox \"4c36c44beb11662fda744c07a063bfa58d15a876a89663cab410190e8e7f300a\" successfully" Jan 29 11:16:27.199301 containerd[1464]: time="2025-01-29T11:16:27.199168856Z" level=info msg="StopPodSandbox for \"4c36c44beb11662fda744c07a063bfa58d15a876a89663cab410190e8e7f300a\" returns successfully" Jan 29 11:16:27.200162 containerd[1464]: time="2025-01-29T11:16:27.199819971Z" level=info msg="RemovePodSandbox for \"4c36c44beb11662fda744c07a063bfa58d15a876a89663cab410190e8e7f300a\"" Jan 29 11:16:27.200162 containerd[1464]: time="2025-01-29T11:16:27.199853728Z" level=info msg="Forcibly stopping sandbox \"4c36c44beb11662fda744c07a063bfa58d15a876a89663cab410190e8e7f300a\"" Jan 29 11:16:27.200162 containerd[1464]: time="2025-01-29T11:16:27.199951167Z" level=info msg="TearDown network for sandbox \"4c36c44beb11662fda744c07a063bfa58d15a876a89663cab410190e8e7f300a\" successfully" Jan 29 11:16:27.204358 containerd[1464]: time="2025-01-29T11:16:27.204276041Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"4c36c44beb11662fda744c07a063bfa58d15a876a89663cab410190e8e7f300a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 29 11:16:27.204358 containerd[1464]: time="2025-01-29T11:16:27.204364032Z" level=info msg="RemovePodSandbox \"4c36c44beb11662fda744c07a063bfa58d15a876a89663cab410190e8e7f300a\" returns successfully" Jan 29 11:16:27.205073 containerd[1464]: time="2025-01-29T11:16:27.205006255Z" level=info msg="StopPodSandbox for \"1bd38506e406807638a6e4800c9f1c66a764bba3c9ae2fd51e672243ad5f74dd\"" Jan 29 11:16:27.205210 containerd[1464]: time="2025-01-29T11:16:27.205145707Z" level=info msg="TearDown network for sandbox \"1bd38506e406807638a6e4800c9f1c66a764bba3c9ae2fd51e672243ad5f74dd\" successfully" Jan 29 11:16:27.205210 containerd[1464]: time="2025-01-29T11:16:27.205192514Z" level=info msg="StopPodSandbox for \"1bd38506e406807638a6e4800c9f1c66a764bba3c9ae2fd51e672243ad5f74dd\" returns successfully" Jan 29 11:16:27.207256 containerd[1464]: time="2025-01-29T11:16:27.205643139Z" level=info msg="RemovePodSandbox for \"1bd38506e406807638a6e4800c9f1c66a764bba3c9ae2fd51e672243ad5f74dd\"" Jan 29 11:16:27.207256 containerd[1464]: time="2025-01-29T11:16:27.205667127Z" level=info msg="Forcibly stopping sandbox \"1bd38506e406807638a6e4800c9f1c66a764bba3c9ae2fd51e672243ad5f74dd\"" Jan 29 11:16:27.207256 containerd[1464]: time="2025-01-29T11:16:27.205798616Z" level=info msg="TearDown network for sandbox \"1bd38506e406807638a6e4800c9f1c66a764bba3c9ae2fd51e672243ad5f74dd\" successfully" Jan 29 11:16:27.213257 containerd[1464]: time="2025-01-29T11:16:27.212687146Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"1bd38506e406807638a6e4800c9f1c66a764bba3c9ae2fd51e672243ad5f74dd\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 29 11:16:27.213257 containerd[1464]: time="2025-01-29T11:16:27.212803102Z" level=info msg="RemovePodSandbox \"1bd38506e406807638a6e4800c9f1c66a764bba3c9ae2fd51e672243ad5f74dd\" returns successfully" Jan 29 11:16:27.213509 containerd[1464]: time="2025-01-29T11:16:27.213376843Z" level=info msg="StopPodSandbox for \"2b7347c504ff34aee22bb696c3e04408c3950a75ae649d5c113683ae0058296a\"" Jan 29 11:16:27.213992 containerd[1464]: time="2025-01-29T11:16:27.213552635Z" level=info msg="TearDown network for sandbox \"2b7347c504ff34aee22bb696c3e04408c3950a75ae649d5c113683ae0058296a\" successfully" Jan 29 11:16:27.213992 containerd[1464]: time="2025-01-29T11:16:27.213577617Z" level=info msg="StopPodSandbox for \"2b7347c504ff34aee22bb696c3e04408c3950a75ae649d5c113683ae0058296a\" returns successfully" Jan 29 11:16:27.214168 containerd[1464]: time="2025-01-29T11:16:27.214139232Z" level=info msg="RemovePodSandbox for \"2b7347c504ff34aee22bb696c3e04408c3950a75ae649d5c113683ae0058296a\"" Jan 29 11:16:27.214268 containerd[1464]: time="2025-01-29T11:16:27.214174450Z" level=info msg="Forcibly stopping sandbox \"2b7347c504ff34aee22bb696c3e04408c3950a75ae649d5c113683ae0058296a\"" Jan 29 11:16:27.214347 containerd[1464]: time="2025-01-29T11:16:27.214287902Z" level=info msg="TearDown network for sandbox \"2b7347c504ff34aee22bb696c3e04408c3950a75ae649d5c113683ae0058296a\" successfully" Jan 29 11:16:27.220621 containerd[1464]: time="2025-01-29T11:16:27.220557280Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"2b7347c504ff34aee22bb696c3e04408c3950a75ae649d5c113683ae0058296a\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 29 11:16:27.221784 containerd[1464]: time="2025-01-29T11:16:27.220650497Z" level=info msg="RemovePodSandbox \"2b7347c504ff34aee22bb696c3e04408c3950a75ae649d5c113683ae0058296a\" returns successfully" Jan 29 11:16:27.222375 containerd[1464]: time="2025-01-29T11:16:27.222335434Z" level=info msg="StopPodSandbox for \"31a62b309d975cd861719c6af5efffd74a754679330349f57644795939a0f1be\"" Jan 29 11:16:27.223054 containerd[1464]: time="2025-01-29T11:16:27.222915350Z" level=info msg="TearDown network for sandbox \"31a62b309d975cd861719c6af5efffd74a754679330349f57644795939a0f1be\" successfully" Jan 29 11:16:27.223054 containerd[1464]: time="2025-01-29T11:16:27.222941514Z" level=info msg="StopPodSandbox for \"31a62b309d975cd861719c6af5efffd74a754679330349f57644795939a0f1be\" returns successfully" Jan 29 11:16:27.223689 containerd[1464]: time="2025-01-29T11:16:27.223664479Z" level=info msg="RemovePodSandbox for \"31a62b309d975cd861719c6af5efffd74a754679330349f57644795939a0f1be\"" Jan 29 11:16:27.224943 containerd[1464]: time="2025-01-29T11:16:27.224115332Z" level=info msg="Forcibly stopping sandbox \"31a62b309d975cd861719c6af5efffd74a754679330349f57644795939a0f1be\"" Jan 29 11:16:27.224943 containerd[1464]: time="2025-01-29T11:16:27.224209439Z" level=info msg="TearDown network for sandbox \"31a62b309d975cd861719c6af5efffd74a754679330349f57644795939a0f1be\" successfully" Jan 29 11:16:27.228188 containerd[1464]: time="2025-01-29T11:16:27.228006927Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"31a62b309d975cd861719c6af5efffd74a754679330349f57644795939a0f1be\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 29 11:16:27.228468 containerd[1464]: time="2025-01-29T11:16:27.228437030Z" level=info msg="RemovePodSandbox \"31a62b309d975cd861719c6af5efffd74a754679330349f57644795939a0f1be\" returns successfully" Jan 29 11:16:27.230572 containerd[1464]: time="2025-01-29T11:16:27.230521161Z" level=info msg="StopPodSandbox for \"0709f80bc5966761aedb0dc61b586715c95725377df7dc711e012896d144d93b\"" Jan 29 11:16:27.231357 containerd[1464]: time="2025-01-29T11:16:27.230941819Z" level=info msg="TearDown network for sandbox \"0709f80bc5966761aedb0dc61b586715c95725377df7dc711e012896d144d93b\" successfully" Jan 29 11:16:27.231357 containerd[1464]: time="2025-01-29T11:16:27.230968635Z" level=info msg="StopPodSandbox for \"0709f80bc5966761aedb0dc61b586715c95725377df7dc711e012896d144d93b\" returns successfully" Jan 29 11:16:27.232254 containerd[1464]: time="2025-01-29T11:16:27.231890320Z" level=info msg="RemovePodSandbox for \"0709f80bc5966761aedb0dc61b586715c95725377df7dc711e012896d144d93b\"" Jan 29 11:16:27.232254 containerd[1464]: time="2025-01-29T11:16:27.231934355Z" level=info msg="Forcibly stopping sandbox \"0709f80bc5966761aedb0dc61b586715c95725377df7dc711e012896d144d93b\"" Jan 29 11:16:27.232254 containerd[1464]: time="2025-01-29T11:16:27.232057191Z" level=info msg="TearDown network for sandbox \"0709f80bc5966761aedb0dc61b586715c95725377df7dc711e012896d144d93b\" successfully" Jan 29 11:16:27.237562 containerd[1464]: time="2025-01-29T11:16:27.237196153Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"0709f80bc5966761aedb0dc61b586715c95725377df7dc711e012896d144d93b\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 29 11:16:27.237983 containerd[1464]: time="2025-01-29T11:16:27.237923776Z" level=info msg="RemovePodSandbox \"0709f80bc5966761aedb0dc61b586715c95725377df7dc711e012896d144d93b\" returns successfully" Jan 29 11:16:27.239878 containerd[1464]: time="2025-01-29T11:16:27.239601337Z" level=info msg="StopPodSandbox for \"8f1b0b3a61da9da16eef82e4162e4a150070e879ab8337db7e4885de85ef7c25\"" Jan 29 11:16:27.239878 containerd[1464]: time="2025-01-29T11:16:27.239751827Z" level=info msg="TearDown network for sandbox \"8f1b0b3a61da9da16eef82e4162e4a150070e879ab8337db7e4885de85ef7c25\" successfully" Jan 29 11:16:27.239878 containerd[1464]: time="2025-01-29T11:16:27.239764456Z" level=info msg="StopPodSandbox for \"8f1b0b3a61da9da16eef82e4162e4a150070e879ab8337db7e4885de85ef7c25\" returns successfully" Jan 29 11:16:27.240655 containerd[1464]: time="2025-01-29T11:16:27.240417110Z" level=info msg="RemovePodSandbox for \"8f1b0b3a61da9da16eef82e4162e4a150070e879ab8337db7e4885de85ef7c25\"" Jan 29 11:16:27.240655 containerd[1464]: time="2025-01-29T11:16:27.240448613Z" level=info msg="Forcibly stopping sandbox \"8f1b0b3a61da9da16eef82e4162e4a150070e879ab8337db7e4885de85ef7c25\"" Jan 29 11:16:27.240655 containerd[1464]: time="2025-01-29T11:16:27.240552689Z" level=info msg="TearDown network for sandbox \"8f1b0b3a61da9da16eef82e4162e4a150070e879ab8337db7e4885de85ef7c25\" successfully" Jan 29 11:16:27.245068 containerd[1464]: time="2025-01-29T11:16:27.244862204Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"8f1b0b3a61da9da16eef82e4162e4a150070e879ab8337db7e4885de85ef7c25\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 29 11:16:27.245068 containerd[1464]: time="2025-01-29T11:16:27.244971953Z" level=info msg="RemovePodSandbox \"8f1b0b3a61da9da16eef82e4162e4a150070e879ab8337db7e4885de85ef7c25\" returns successfully" Jan 29 11:16:27.246393 containerd[1464]: time="2025-01-29T11:16:27.246192890Z" level=info msg="StopPodSandbox for \"71f64eb10a7b257869d96577c945d3493c8d2f5f223e7f1adc9e9dda65f3d004\"" Jan 29 11:16:27.246393 containerd[1464]: time="2025-01-29T11:16:27.246310084Z" level=info msg="TearDown network for sandbox \"71f64eb10a7b257869d96577c945d3493c8d2f5f223e7f1adc9e9dda65f3d004\" successfully" Jan 29 11:16:27.246393 containerd[1464]: time="2025-01-29T11:16:27.246320978Z" level=info msg="StopPodSandbox for \"71f64eb10a7b257869d96577c945d3493c8d2f5f223e7f1adc9e9dda65f3d004\" returns successfully" Jan 29 11:16:27.247189 containerd[1464]: time="2025-01-29T11:16:27.247074158Z" level=info msg="RemovePodSandbox for \"71f64eb10a7b257869d96577c945d3493c8d2f5f223e7f1adc9e9dda65f3d004\"" Jan 29 11:16:27.247189 containerd[1464]: time="2025-01-29T11:16:27.247128740Z" level=info msg="Forcibly stopping sandbox \"71f64eb10a7b257869d96577c945d3493c8d2f5f223e7f1adc9e9dda65f3d004\"" Jan 29 11:16:27.247655 containerd[1464]: time="2025-01-29T11:16:27.247383684Z" level=info msg="TearDown network for sandbox \"71f64eb10a7b257869d96577c945d3493c8d2f5f223e7f1adc9e9dda65f3d004\" successfully" Jan 29 11:16:27.251681 containerd[1464]: time="2025-01-29T11:16:27.251506585Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"71f64eb10a7b257869d96577c945d3493c8d2f5f223e7f1adc9e9dda65f3d004\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 29 11:16:27.251681 containerd[1464]: time="2025-01-29T11:16:27.251606743Z" level=info msg="RemovePodSandbox \"71f64eb10a7b257869d96577c945d3493c8d2f5f223e7f1adc9e9dda65f3d004\" returns successfully" Jan 29 11:16:27.252799 containerd[1464]: time="2025-01-29T11:16:27.252517697Z" level=info msg="StopPodSandbox for \"3cc97b2aa257a29f8f434726f4d077e1d32bb1bb205e7ba773b3ba7bd1f3c9f1\"" Jan 29 11:16:27.252799 containerd[1464]: time="2025-01-29T11:16:27.252639217Z" level=info msg="TearDown network for sandbox \"3cc97b2aa257a29f8f434726f4d077e1d32bb1bb205e7ba773b3ba7bd1f3c9f1\" successfully" Jan 29 11:16:27.252799 containerd[1464]: time="2025-01-29T11:16:27.252693346Z" level=info msg="StopPodSandbox for \"3cc97b2aa257a29f8f434726f4d077e1d32bb1bb205e7ba773b3ba7bd1f3c9f1\" returns successfully" Jan 29 11:16:27.253377 containerd[1464]: time="2025-01-29T11:16:27.253358727Z" level=info msg="RemovePodSandbox for \"3cc97b2aa257a29f8f434726f4d077e1d32bb1bb205e7ba773b3ba7bd1f3c9f1\"" Jan 29 11:16:27.253841 containerd[1464]: time="2025-01-29T11:16:27.253506564Z" level=info msg="Forcibly stopping sandbox \"3cc97b2aa257a29f8f434726f4d077e1d32bb1bb205e7ba773b3ba7bd1f3c9f1\"" Jan 29 11:16:27.253841 containerd[1464]: time="2025-01-29T11:16:27.253618484Z" level=info msg="TearDown network for sandbox \"3cc97b2aa257a29f8f434726f4d077e1d32bb1bb205e7ba773b3ba7bd1f3c9f1\" successfully" Jan 29 11:16:27.259821 containerd[1464]: time="2025-01-29T11:16:27.259636675Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"3cc97b2aa257a29f8f434726f4d077e1d32bb1bb205e7ba773b3ba7bd1f3c9f1\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 29 11:16:27.259821 containerd[1464]: time="2025-01-29T11:16:27.259780813Z" level=info msg="RemovePodSandbox \"3cc97b2aa257a29f8f434726f4d077e1d32bb1bb205e7ba773b3ba7bd1f3c9f1\" returns successfully" Jan 29 11:16:27.262458 containerd[1464]: time="2025-01-29T11:16:27.261787319Z" level=info msg="StopPodSandbox for \"3708a282a0232a58354b9287a4b6bfb4e7aa79b277857ad6b90c8bea3f8114a7\"" Jan 29 11:16:27.262458 containerd[1464]: time="2025-01-29T11:16:27.262351578Z" level=info msg="TearDown network for sandbox \"3708a282a0232a58354b9287a4b6bfb4e7aa79b277857ad6b90c8bea3f8114a7\" successfully" Jan 29 11:16:27.262458 containerd[1464]: time="2025-01-29T11:16:27.262375964Z" level=info msg="StopPodSandbox for \"3708a282a0232a58354b9287a4b6bfb4e7aa79b277857ad6b90c8bea3f8114a7\" returns successfully" Jan 29 11:16:27.271803 containerd[1464]: time="2025-01-29T11:16:27.271288617Z" level=info msg="RemovePodSandbox for \"3708a282a0232a58354b9287a4b6bfb4e7aa79b277857ad6b90c8bea3f8114a7\"" Jan 29 11:16:27.271803 containerd[1464]: time="2025-01-29T11:16:27.271330391Z" level=info msg="Forcibly stopping sandbox \"3708a282a0232a58354b9287a4b6bfb4e7aa79b277857ad6b90c8bea3f8114a7\"" Jan 29 11:16:27.271803 containerd[1464]: time="2025-01-29T11:16:27.271438260Z" level=info msg="TearDown network for sandbox \"3708a282a0232a58354b9287a4b6bfb4e7aa79b277857ad6b90c8bea3f8114a7\" successfully" Jan 29 11:16:27.276338 containerd[1464]: time="2025-01-29T11:16:27.276169508Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"3708a282a0232a58354b9287a4b6bfb4e7aa79b277857ad6b90c8bea3f8114a7\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 29 11:16:27.276338 containerd[1464]: time="2025-01-29T11:16:27.276287360Z" level=info msg="RemovePodSandbox \"3708a282a0232a58354b9287a4b6bfb4e7aa79b277857ad6b90c8bea3f8114a7\" returns successfully" Jan 29 11:16:27.277789 containerd[1464]: time="2025-01-29T11:16:27.277142278Z" level=info msg="StopPodSandbox for \"6430a46e1a2783b0e78023458134250679bf710c8048344508227c8042d860fe\"" Jan 29 11:16:27.277789 containerd[1464]: time="2025-01-29T11:16:27.277282811Z" level=info msg="TearDown network for sandbox \"6430a46e1a2783b0e78023458134250679bf710c8048344508227c8042d860fe\" successfully" Jan 29 11:16:27.277789 containerd[1464]: time="2025-01-29T11:16:27.277298717Z" level=info msg="StopPodSandbox for \"6430a46e1a2783b0e78023458134250679bf710c8048344508227c8042d860fe\" returns successfully" Jan 29 11:16:27.277789 containerd[1464]: time="2025-01-29T11:16:27.277688275Z" level=info msg="RemovePodSandbox for \"6430a46e1a2783b0e78023458134250679bf710c8048344508227c8042d860fe\"" Jan 29 11:16:27.278390 containerd[1464]: time="2025-01-29T11:16:27.278150993Z" level=info msg="Forcibly stopping sandbox \"6430a46e1a2783b0e78023458134250679bf710c8048344508227c8042d860fe\"" Jan 29 11:16:27.278390 containerd[1464]: time="2025-01-29T11:16:27.278320994Z" level=info msg="TearDown network for sandbox \"6430a46e1a2783b0e78023458134250679bf710c8048344508227c8042d860fe\" successfully" Jan 29 11:16:27.288792 containerd[1464]: time="2025-01-29T11:16:27.287882692Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"6430a46e1a2783b0e78023458134250679bf710c8048344508227c8042d860fe\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 29 11:16:27.290763 containerd[1464]: time="2025-01-29T11:16:27.289108288Z" level=info msg="RemovePodSandbox \"6430a46e1a2783b0e78023458134250679bf710c8048344508227c8042d860fe\" returns successfully" Jan 29 11:16:27.292791 containerd[1464]: time="2025-01-29T11:16:27.292741154Z" level=info msg="StopPodSandbox for \"d0e32342c90f7ff6ebb24da9d9acb23fb07faf1cac09ad99713dfceff4239283\"" Jan 29 11:16:27.292977 containerd[1464]: time="2025-01-29T11:16:27.292943748Z" level=info msg="TearDown network for sandbox \"d0e32342c90f7ff6ebb24da9d9acb23fb07faf1cac09ad99713dfceff4239283\" successfully" Jan 29 11:16:27.293067 containerd[1464]: time="2025-01-29T11:16:27.292974898Z" level=info msg="StopPodSandbox for \"d0e32342c90f7ff6ebb24da9d9acb23fb07faf1cac09ad99713dfceff4239283\" returns successfully" Jan 29 11:16:27.303446 containerd[1464]: time="2025-01-29T11:16:27.303379006Z" level=info msg="RemovePodSandbox for \"d0e32342c90f7ff6ebb24da9d9acb23fb07faf1cac09ad99713dfceff4239283\"" Jan 29 11:16:27.303446 containerd[1464]: time="2025-01-29T11:16:27.303448452Z" level=info msg="Forcibly stopping sandbox \"d0e32342c90f7ff6ebb24da9d9acb23fb07faf1cac09ad99713dfceff4239283\"" Jan 29 11:16:27.303674 containerd[1464]: time="2025-01-29T11:16:27.303584728Z" level=info msg="TearDown network for sandbox \"d0e32342c90f7ff6ebb24da9d9acb23fb07faf1cac09ad99713dfceff4239283\" successfully" Jan 29 11:16:27.307326 containerd[1464]: time="2025-01-29T11:16:27.307267955Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"d0e32342c90f7ff6ebb24da9d9acb23fb07faf1cac09ad99713dfceff4239283\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 29 11:16:27.307900 containerd[1464]: time="2025-01-29T11:16:27.307349532Z" level=info msg="RemovePodSandbox \"d0e32342c90f7ff6ebb24da9d9acb23fb07faf1cac09ad99713dfceff4239283\" returns successfully" Jan 29 11:16:27.308227 containerd[1464]: time="2025-01-29T11:16:27.308156076Z" level=info msg="StopPodSandbox for \"4cc322cd7270c80953a1acf59e6052831f87eaffea7477eb67cea5bc2b8c9a71\"" Jan 29 11:16:27.308652 containerd[1464]: time="2025-01-29T11:16:27.308587337Z" level=info msg="TearDown network for sandbox \"4cc322cd7270c80953a1acf59e6052831f87eaffea7477eb67cea5bc2b8c9a71\" successfully" Jan 29 11:16:27.308751 containerd[1464]: time="2025-01-29T11:16:27.308737976Z" level=info msg="StopPodSandbox for \"4cc322cd7270c80953a1acf59e6052831f87eaffea7477eb67cea5bc2b8c9a71\" returns successfully" Jan 29 11:16:27.309295 containerd[1464]: time="2025-01-29T11:16:27.309272062Z" level=info msg="RemovePodSandbox for \"4cc322cd7270c80953a1acf59e6052831f87eaffea7477eb67cea5bc2b8c9a71\"" Jan 29 11:16:27.309714 containerd[1464]: time="2025-01-29T11:16:27.309504837Z" level=info msg="Forcibly stopping sandbox \"4cc322cd7270c80953a1acf59e6052831f87eaffea7477eb67cea5bc2b8c9a71\"" Jan 29 11:16:27.309714 containerd[1464]: time="2025-01-29T11:16:27.309609600Z" level=info msg="TearDown network for sandbox \"4cc322cd7270c80953a1acf59e6052831f87eaffea7477eb67cea5bc2b8c9a71\" successfully" Jan 29 11:16:27.314219 containerd[1464]: time="2025-01-29T11:16:27.314160610Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"4cc322cd7270c80953a1acf59e6052831f87eaffea7477eb67cea5bc2b8c9a71\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 29 11:16:27.314830 containerd[1464]: time="2025-01-29T11:16:27.314470353Z" level=info msg="RemovePodSandbox \"4cc322cd7270c80953a1acf59e6052831f87eaffea7477eb67cea5bc2b8c9a71\" returns successfully" Jan 29 11:16:28.133140 kubelet[1802]: E0129 11:16:28.133075 1802 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:16:28.532554 systemd[1]: run-containerd-runc-k8s.io-ffc78e4a62b0b1ca6e10122f442d0eb9d0e267719dbc4af2c4e78390d27e35d4-runc.ll3aj3.mount: Deactivated successfully. 
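[Editor's note] The 11:16:27 burst above appears to be the kubelet's periodic sandbox cleanup replaying StopPodSandbox/RemovePodSandbox for sandboxes whose network was already torn down, which is why containerd logs the harmless "not found" warning before each removal still returns successfully. The sketch below only illustrates the same two CRI calls made directly against containerd's socket with the gRPC client from k8s.io/cri-api; the socket path is the usual containerd default and the sandbox ID is copied from the log, everything else is an assumption, not the kubelet's actual code path.

    package main

    import (
        "context"
        "log"
        "time"

        "google.golang.org/grpc"
        "google.golang.org/grpc/credentials/insecure"
        runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    func main() {
        // Assumption: containerd's CRI endpoint at its default socket path.
        conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
            grpc.WithTransportCredentials(insecure.NewCredentials()))
        if err != nil {
            log.Fatal(err)
        }
        defer conn.Close()
        rt := runtimeapi.NewRuntimeServiceClient(conn)

        ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
        defer cancel()

        // Sandbox ID taken from the log above; any other ID would do for illustration.
        id := "7a7869065ef017da3ba67d92cbf3b6687aba6cd84f0e74ead54d96570f46440c"

        // Stop tears down the sandbox network first; the call is idempotent, so
        // "forcibly stopping" an already-gone sandbox still succeeds, as logged above.
        if _, err := rt.StopPodSandbox(ctx, &runtimeapi.StopPodSandboxRequest{PodSandboxId: id}); err != nil {
            log.Printf("StopPodSandbox: %v", err)
        }
        // Remove then deletes the sandbox record itself.
        if _, err := rt.RemovePodSandbox(ctx, &runtimeapi.RemovePodSandboxRequest{PodSandboxId: id}); err != nil {
            log.Printf("RemovePodSandbox: %v", err)
        }
    }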
Jan 29 11:16:29.133279 kubelet[1802]: E0129 11:16:29.133199 1802 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:16:30.134304 kubelet[1802]: E0129 11:16:30.134234 1802 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:16:31.135252 kubelet[1802]: E0129 11:16:31.135116 1802 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:16:32.136209 kubelet[1802]: E0129 11:16:32.136122 1802 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:16:33.137141 kubelet[1802]: E0129 11:16:33.137080 1802 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:16:34.138547 kubelet[1802]: E0129 11:16:34.138420 1802 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:16:35.139210 kubelet[1802]: E0129 11:16:35.139129 1802 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:16:36.140444 kubelet[1802]: E0129 11:16:36.140359 1802 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:16:37.141473 kubelet[1802]: E0129 11:16:37.141339 1802 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:16:38.141967 kubelet[1802]: E0129 11:16:38.141901 1802 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:16:39.142912 kubelet[1802]: E0129 11:16:39.142746 1802 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:16:40.143596 kubelet[1802]: E0129 11:16:40.143386 1802 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:16:41.144437 kubelet[1802]: E0129 11:16:41.144357 1802 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:16:42.145285 kubelet[1802]: E0129 11:16:42.145202 1802 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:16:43.145530 kubelet[1802]: E0129 11:16:43.145424 1802 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:16:44.146303 kubelet[1802]: E0129 11:16:44.146228 1802 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:16:45.147349 kubelet[1802]: E0129 11:16:45.147274 1802 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:16:46.147518 kubelet[1802]: E0129 11:16:46.147441 1802 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:16:47.076654 kubelet[1802]: E0129 11:16:47.076592 1802 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:16:47.147653 kubelet[1802]: E0129 11:16:47.147595 1802 file_linux.go:61] "Unable to read config path" 
err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:16:47.209588 systemd[1]: Created slice kubepods-besteffort-pod3c44e4be_db90_4632_bcbd_b473b89dcd52.slice - libcontainer container kubepods-besteffort-pod3c44e4be_db90_4632_bcbd_b473b89dcd52.slice. Jan 29 11:16:47.307604 kubelet[1802]: I0129 11:16:47.307419 1802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-17a26f4d-ee5a-42a4-9c02-0c3725f071fc\" (UniqueName: \"kubernetes.io/nfs/3c44e4be-db90-4632-bcbd-b473b89dcd52-pvc-17a26f4d-ee5a-42a4-9c02-0c3725f071fc\") pod \"test-pod-1\" (UID: \"3c44e4be-db90-4632-bcbd-b473b89dcd52\") " pod="default/test-pod-1" Jan 29 11:16:47.307604 kubelet[1802]: I0129 11:16:47.307510 1802 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s42x7\" (UniqueName: \"kubernetes.io/projected/3c44e4be-db90-4632-bcbd-b473b89dcd52-kube-api-access-s42x7\") pod \"test-pod-1\" (UID: \"3c44e4be-db90-4632-bcbd-b473b89dcd52\") " pod="default/test-pod-1" Jan 29 11:16:47.455476 kernel: FS-Cache: Loaded Jan 29 11:16:47.534102 kernel: RPC: Registered named UNIX socket transport module. Jan 29 11:16:47.534276 kernel: RPC: Registered udp transport module. Jan 29 11:16:47.534302 kernel: RPC: Registered tcp transport module. Jan 29 11:16:47.534321 kernel: RPC: Registered tcp-with-tls transport module. Jan 29 11:16:47.534803 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module. Jan 29 11:16:47.776781 kernel: NFS: Registering the id_resolver key type Jan 29 11:16:47.778853 kernel: Key type id_resolver registered Jan 29 11:16:47.779014 kernel: Key type id_legacy registered Jan 29 11:16:47.819278 nfsidmap[4548]: nss_getpwnam: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain '1.0-b-c9bf0051f1' Jan 29 11:16:47.824889 nfsidmap[4549]: nss_name_to_gid: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain '1.0-b-c9bf0051f1' Jan 29 11:16:48.114900 containerd[1464]: time="2025-01-29T11:16:48.114540244Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:3c44e4be-db90-4632-bcbd-b473b89dcd52,Namespace:default,Attempt:0,}" Jan 29 11:16:48.148205 kubelet[1802]: E0129 11:16:48.148129 1802 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:16:48.406479 systemd-networkd[1376]: cali5ec59c6bf6e: Link UP Jan 29 11:16:48.408653 systemd-networkd[1376]: cali5ec59c6bf6e: Gained carrier Jan 29 11:16:48.428174 containerd[1464]: 2025-01-29 11:16:48.220 [INFO][4551] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {143.198.151.197-k8s-test--pod--1-eth0 default 3c44e4be-db90-4632-bcbd-b473b89dcd52 1564 0 2025-01-29 11:16:17 +0000 UTC map[projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s 143.198.151.197 test-pod-1 eth0 default [] [] [kns.default ksa.default.default] cali5ec59c6bf6e [] []}} ContainerID="081b79b29b6499d627fcc439e90f26125db05fe6f787e6623d08879f8a13732a" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="143.198.151.197-k8s-test--pod--1-" Jan 29 11:16:48.428174 containerd[1464]: 2025-01-29 11:16:48.221 [INFO][4551] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="081b79b29b6499d627fcc439e90f26125db05fe6f787e6623d08879f8a13732a" Namespace="default" Pod="test-pod-1" 
WorkloadEndpoint="143.198.151.197-k8s-test--pod--1-eth0" Jan 29 11:16:48.428174 containerd[1464]: 2025-01-29 11:16:48.335 [INFO][4561] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="081b79b29b6499d627fcc439e90f26125db05fe6f787e6623d08879f8a13732a" HandleID="k8s-pod-network.081b79b29b6499d627fcc439e90f26125db05fe6f787e6623d08879f8a13732a" Workload="143.198.151.197-k8s-test--pod--1-eth0" Jan 29 11:16:48.428174 containerd[1464]: 2025-01-29 11:16:48.351 [INFO][4561] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="081b79b29b6499d627fcc439e90f26125db05fe6f787e6623d08879f8a13732a" HandleID="k8s-pod-network.081b79b29b6499d627fcc439e90f26125db05fe6f787e6623d08879f8a13732a" Workload="143.198.151.197-k8s-test--pod--1-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000334760), Attrs:map[string]string{"namespace":"default", "node":"143.198.151.197", "pod":"test-pod-1", "timestamp":"2025-01-29 11:16:48.335640971 +0000 UTC"}, Hostname:"143.198.151.197", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 29 11:16:48.428174 containerd[1464]: 2025-01-29 11:16:48.352 [INFO][4561] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Jan 29 11:16:48.428174 containerd[1464]: 2025-01-29 11:16:48.352 [INFO][4561] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Jan 29 11:16:48.428174 containerd[1464]: 2025-01-29 11:16:48.352 [INFO][4561] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '143.198.151.197' Jan 29 11:16:48.428174 containerd[1464]: 2025-01-29 11:16:48.356 [INFO][4561] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.081b79b29b6499d627fcc439e90f26125db05fe6f787e6623d08879f8a13732a" host="143.198.151.197" Jan 29 11:16:48.428174 containerd[1464]: 2025-01-29 11:16:48.364 [INFO][4561] ipam/ipam.go 372: Looking up existing affinities for host host="143.198.151.197" Jan 29 11:16:48.428174 containerd[1464]: 2025-01-29 11:16:48.370 [INFO][4561] ipam/ipam.go 489: Trying affinity for 192.168.42.64/26 host="143.198.151.197" Jan 29 11:16:48.428174 containerd[1464]: 2025-01-29 11:16:48.374 [INFO][4561] ipam/ipam.go 155: Attempting to load block cidr=192.168.42.64/26 host="143.198.151.197" Jan 29 11:16:48.428174 containerd[1464]: 2025-01-29 11:16:48.378 [INFO][4561] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.42.64/26 host="143.198.151.197" Jan 29 11:16:48.428174 containerd[1464]: 2025-01-29 11:16:48.378 [INFO][4561] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.42.64/26 handle="k8s-pod-network.081b79b29b6499d627fcc439e90f26125db05fe6f787e6623d08879f8a13732a" host="143.198.151.197" Jan 29 11:16:48.428174 containerd[1464]: 2025-01-29 11:16:48.381 [INFO][4561] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.081b79b29b6499d627fcc439e90f26125db05fe6f787e6623d08879f8a13732a Jan 29 11:16:48.428174 containerd[1464]: 2025-01-29 11:16:48.387 [INFO][4561] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.42.64/26 handle="k8s-pod-network.081b79b29b6499d627fcc439e90f26125db05fe6f787e6623d08879f8a13732a" host="143.198.151.197" Jan 29 11:16:48.428174 containerd[1464]: 2025-01-29 11:16:48.400 [INFO][4561] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.42.69/26] block=192.168.42.64/26 
handle="k8s-pod-network.081b79b29b6499d627fcc439e90f26125db05fe6f787e6623d08879f8a13732a" host="143.198.151.197" Jan 29 11:16:48.428174 containerd[1464]: 2025-01-29 11:16:48.400 [INFO][4561] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.42.69/26] handle="k8s-pod-network.081b79b29b6499d627fcc439e90f26125db05fe6f787e6623d08879f8a13732a" host="143.198.151.197" Jan 29 11:16:48.428174 containerd[1464]: 2025-01-29 11:16:48.400 [INFO][4561] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Jan 29 11:16:48.428174 containerd[1464]: 2025-01-29 11:16:48.400 [INFO][4561] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.42.69/26] IPv6=[] ContainerID="081b79b29b6499d627fcc439e90f26125db05fe6f787e6623d08879f8a13732a" HandleID="k8s-pod-network.081b79b29b6499d627fcc439e90f26125db05fe6f787e6623d08879f8a13732a" Workload="143.198.151.197-k8s-test--pod--1-eth0" Jan 29 11:16:48.428174 containerd[1464]: 2025-01-29 11:16:48.401 [INFO][4551] cni-plugin/k8s.go 386: Populated endpoint ContainerID="081b79b29b6499d627fcc439e90f26125db05fe6f787e6623d08879f8a13732a" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="143.198.151.197-k8s-test--pod--1-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"143.198.151.197-k8s-test--pod--1-eth0", GenerateName:"", Namespace:"default", SelfLink:"", UID:"3c44e4be-db90-4632-bcbd-b473b89dcd52", ResourceVersion:"1564", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 11, 16, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"143.198.151.197", ContainerID:"", Pod:"test-pod-1", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.42.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali5ec59c6bf6e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 11:16:48.429469 containerd[1464]: 2025-01-29 11:16:48.402 [INFO][4551] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.42.69/32] ContainerID="081b79b29b6499d627fcc439e90f26125db05fe6f787e6623d08879f8a13732a" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="143.198.151.197-k8s-test--pod--1-eth0" Jan 29 11:16:48.429469 containerd[1464]: 2025-01-29 11:16:48.402 [INFO][4551] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5ec59c6bf6e ContainerID="081b79b29b6499d627fcc439e90f26125db05fe6f787e6623d08879f8a13732a" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="143.198.151.197-k8s-test--pod--1-eth0" Jan 29 11:16:48.429469 containerd[1464]: 2025-01-29 11:16:48.406 [INFO][4551] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="081b79b29b6499d627fcc439e90f26125db05fe6f787e6623d08879f8a13732a" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="143.198.151.197-k8s-test--pod--1-eth0" Jan 29 11:16:48.429469 containerd[1464]: 2025-01-29 11:16:48.407 [INFO][4551] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="081b79b29b6499d627fcc439e90f26125db05fe6f787e6623d08879f8a13732a" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="143.198.151.197-k8s-test--pod--1-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"143.198.151.197-k8s-test--pod--1-eth0", GenerateName:"", Namespace:"default", SelfLink:"", UID:"3c44e4be-db90-4632-bcbd-b473b89dcd52", ResourceVersion:"1564", Generation:0, CreationTimestamp:time.Date(2025, time.January, 29, 11, 16, 17, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"143.198.151.197", ContainerID:"081b79b29b6499d627fcc439e90f26125db05fe6f787e6623d08879f8a13732a", Pod:"test-pod-1", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.42.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali5ec59c6bf6e", MAC:"e2:a0:33:5f:55:cd", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Jan 29 11:16:48.429469 containerd[1464]: 2025-01-29 11:16:48.425 [INFO][4551] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="081b79b29b6499d627fcc439e90f26125db05fe6f787e6623d08879f8a13732a" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="143.198.151.197-k8s-test--pod--1-eth0" Jan 29 11:16:48.474315 containerd[1464]: time="2025-01-29T11:16:48.474151878Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 29 11:16:48.474853 containerd[1464]: time="2025-01-29T11:16:48.474623210Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 29 11:16:48.475298 containerd[1464]: time="2025-01-29T11:16:48.475253453Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:16:48.475993 containerd[1464]: time="2025-01-29T11:16:48.475937822Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 29 11:16:48.507202 systemd[1]: run-containerd-runc-k8s.io-081b79b29b6499d627fcc439e90f26125db05fe6f787e6623d08879f8a13732a-runc.Yy4t0e.mount: Deactivated successfully. Jan 29 11:16:48.526028 systemd[1]: Started cri-containerd-081b79b29b6499d627fcc439e90f26125db05fe6f787e6623d08879f8a13732a.scope - libcontainer container 081b79b29b6499d627fcc439e90f26125db05fe6f787e6623d08879f8a13732a. 
Jan 29 11:16:48.585848 containerd[1464]: time="2025-01-29T11:16:48.585786152Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:3c44e4be-db90-4632-bcbd-b473b89dcd52,Namespace:default,Attempt:0,} returns sandbox id \"081b79b29b6499d627fcc439e90f26125db05fe6f787e6623d08879f8a13732a\"" Jan 29 11:16:48.588453 containerd[1464]: time="2025-01-29T11:16:48.588421214Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Jan 29 11:16:49.149075 kubelet[1802]: E0129 11:16:49.149011 1802 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:16:49.257350 containerd[1464]: time="2025-01-29T11:16:49.252895815Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=61" Jan 29 11:16:49.257350 containerd[1464]: time="2025-01-29T11:16:49.256139242Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:0dcfd986e814f68db775fba6b61fbaec3761562dc2ae3043d38dbff123e1bb1e\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:2ffeb5a7ca68f2017f0bc48251750a6e40fcd3c341b94a22fc7812dcabbb84db\", size \"71015439\" in 667.45824ms" Jan 29 11:16:49.257350 containerd[1464]: time="2025-01-29T11:16:49.256177037Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:0dcfd986e814f68db775fba6b61fbaec3761562dc2ae3043d38dbff123e1bb1e\"" Jan 29 11:16:49.259360 containerd[1464]: time="2025-01-29T11:16:49.259312810Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 29 11:16:49.260397 containerd[1464]: time="2025-01-29T11:16:49.260349138Z" level=info msg="CreateContainer within sandbox \"081b79b29b6499d627fcc439e90f26125db05fe6f787e6623d08879f8a13732a\" for container &ContainerMetadata{Name:test,Attempt:0,}" Jan 29 11:16:49.287850 containerd[1464]: time="2025-01-29T11:16:49.287791742Z" level=info msg="CreateContainer within sandbox \"081b79b29b6499d627fcc439e90f26125db05fe6f787e6623d08879f8a13732a\" for &ContainerMetadata{Name:test,Attempt:0,} returns container id \"0c5c4a13f64256ea31774d82d74ad4da178a5905d32ea576397260e49d356183\"" Jan 29 11:16:49.289119 containerd[1464]: time="2025-01-29T11:16:49.289053704Z" level=info msg="StartContainer for \"0c5c4a13f64256ea31774d82d74ad4da178a5905d32ea576397260e49d356183\"" Jan 29 11:16:49.327007 systemd[1]: Started cri-containerd-0c5c4a13f64256ea31774d82d74ad4da178a5905d32ea576397260e49d356183.scope - libcontainer container 0c5c4a13f64256ea31774d82d74ad4da178a5905d32ea576397260e49d356183. 
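[Editor's note] Once the sandbox was up, the kubelet pulled ghcr.io/flatcar/nginx:latest, created the "test" container inside it, and started it. A minimal sketch of the same pull using the containerd Go client follows; it is not the exact code path the CRI plugin takes, and the socket path and the "k8s.io" namespace are assumed containerd defaults for Kubernetes-managed images.

    package main

    import (
        "context"
        "log"

        "github.com/containerd/containerd"
        "github.com/containerd/containerd/namespaces"
    )

    func main() {
        client, err := containerd.New("/run/containerd/containerd.sock")
        if err != nil {
            log.Fatal(err)
        }
        defer client.Close()

        // The CRI plugin keeps Kubernetes images in the "k8s.io" namespace.
        ctx := namespaces.WithNamespace(context.Background(), "k8s.io")
        img, err := client.Pull(ctx, "ghcr.io/flatcar/nginx:latest", containerd.WithPullUnpack)
        if err != nil {
            log.Fatal(err)
        }
        log.Printf("pulled %s (%s)", img.Name(), img.Target().Digest)
    }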
Jan 29 11:16:49.369878 containerd[1464]: time="2025-01-29T11:16:49.369826145Z" level=info msg="StartContainer for \"0c5c4a13f64256ea31774d82d74ad4da178a5905d32ea576397260e49d356183\" returns successfully" Jan 29 11:16:50.049469 systemd-networkd[1376]: cali5ec59c6bf6e: Gained IPv6LL Jan 29 11:16:50.149464 kubelet[1802]: E0129 11:16:50.149348 1802 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:16:51.150293 kubelet[1802]: E0129 11:16:51.150175 1802 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:16:52.151315 kubelet[1802]: E0129 11:16:52.151247 1802 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:16:53.151999 kubelet[1802]: E0129 11:16:53.151921 1802 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:16:54.152611 kubelet[1802]: E0129 11:16:54.152539 1802 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:16:55.153361 kubelet[1802]: E0129 11:16:55.153300 1802 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:16:55.255088 kubelet[1802]: E0129 11:16:55.254645 1802 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Jan 29 11:16:56.154419 kubelet[1802]: E0129 11:16:56.154352 1802 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Jan 29 11:16:57.155285 kubelet[1802]: E0129 11:16:57.155206 1802 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
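[Editor's note] The "Unable to read config path" errors repeated throughout this section come from the kubelet's static-pod file source polling /etc/kubernetes/manifests, a directory that simply does not exist on this node; the single "Nameserver limits exceeded" message is the related warning that resolv.conf carried more nameserver entries than the three the libc resolver honors. Under the assumption that the file source effectively reduces to a stat-and-log loop, a small sketch:

    package main

    import (
        "log"
        "os"
        "time"
    )

    func main() {
        const staticPodPath = "/etc/kubernetes/manifests" // path from the log
        for {
            if _, err := os.Stat(staticPodPath); err != nil {
                log.Printf("Unable to read config path %q: %v (ignoring)", staticPodPath, err)
            }
            time.Sleep(time.Second) // the log shows roughly one message per second
        }
    }

Creating the directory, or pointing the kubelet's staticPodPath somewhere that exists, silences the message.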