Feb 13 20:15:08.009346 kernel: Linux version 6.6.74-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Thu Feb 13 18:03:41 -00 2025
Feb 13 20:15:08.009400 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=a8740cbac5121ade856b040634ad9badacd879298c24f899668a59d96c178b13
Feb 13 20:15:08.009419 kernel: BIOS-provided physical RAM map:
Feb 13 20:15:08.009430 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Feb 13 20:15:08.009436 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Feb 13 20:15:08.009443 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Feb 13 20:15:08.009450 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdafff] usable
Feb 13 20:15:08.009457 kernel: BIOS-e820: [mem 0x000000007ffdb000-0x000000007fffffff] reserved
Feb 13 20:15:08.009464 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Feb 13 20:15:08.009478 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Feb 13 20:15:08.009489 kernel: NX (Execute Disable) protection: active
Feb 13 20:15:08.009500 kernel: APIC: Static calls initialized
Feb 13 20:15:08.009513 kernel: SMBIOS 2.8 present.
Feb 13 20:15:08.009520 kernel: DMI: DigitalOcean Droplet/Droplet, BIOS 20171212 12/12/2017
Feb 13 20:15:08.009529 kernel: Hypervisor detected: KVM
Feb 13 20:15:08.009540 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Feb 13 20:15:08.009551 kernel: kvm-clock: using sched offset of 4063121788 cycles
Feb 13 20:15:08.009560 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Feb 13 20:15:08.009569 kernel: tsc: Detected 2494.138 MHz processor
Feb 13 20:15:08.009577 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Feb 13 20:15:08.009585 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Feb 13 20:15:08.009593 kernel: last_pfn = 0x7ffdb max_arch_pfn = 0x400000000
Feb 13 20:15:08.009601 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Feb 13 20:15:08.009609 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Feb 13 20:15:08.009620 kernel: ACPI: Early table checksum verification disabled
Feb 13 20:15:08.009628 kernel: ACPI: RSDP 0x00000000000F5950 000014 (v00 BOCHS )
Feb 13 20:15:08.009636 kernel: ACPI: RSDT 0x000000007FFE1986 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 20:15:08.009674 kernel: ACPI: FACP 0x000000007FFE176A 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 20:15:08.009681 kernel: ACPI: DSDT 0x000000007FFE0040 00172A (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 20:15:08.009689 kernel: ACPI: FACS 0x000000007FFE0000 000040
Feb 13 20:15:08.009697 kernel: ACPI: APIC 0x000000007FFE17DE 000080 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 20:15:08.009704 kernel: ACPI: HPET 0x000000007FFE185E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 20:15:08.009712 kernel: ACPI: SRAT 0x000000007FFE1896 0000C8 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 20:15:08.009724 kernel: ACPI: WAET 0x000000007FFE195E 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Feb 13 20:15:08.009731 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe176a-0x7ffe17dd]
Feb 13 20:15:08.009742 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffe0040-0x7ffe1769]
Feb 13 20:15:08.009755 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffe0000-0x7ffe003f]
Feb 13 20:15:08.009767 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe17de-0x7ffe185d]
Feb 13 20:15:08.009777 kernel: ACPI: Reserving HPET table memory at [mem 0x7ffe185e-0x7ffe1895]
Feb 13 20:15:08.009788 kernel: ACPI: Reserving SRAT table memory at [mem 0x7ffe1896-0x7ffe195d]
Feb 13 20:15:08.009818 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe195e-0x7ffe1985]
Feb 13 20:15:08.009829 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Feb 13 20:15:08.009841 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Feb 13 20:15:08.009852 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff]
Feb 13 20:15:08.009865 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff]
Feb 13 20:15:08.009880 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0x7ffdafff] -> [mem 0x00000000-0x7ffdafff]
Feb 13 20:15:08.009892 kernel: NODE_DATA(0) allocated [mem 0x7ffd5000-0x7ffdafff]
Feb 13 20:15:08.009911 kernel: Zone ranges:
Feb 13 20:15:08.009924 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Feb 13 20:15:08.009935 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffdafff]
Feb 13 20:15:08.009944 kernel: Normal empty
Feb 13 20:15:08.009952 kernel: Movable zone start for each node
Feb 13 20:15:08.009960 kernel: Early memory node ranges
Feb 13 20:15:08.009969 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Feb 13 20:15:08.009978 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdafff]
Feb 13 20:15:08.009986 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffdafff]
Feb 13 20:15:08.010000 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Feb 13 20:15:08.010009 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Feb 13 20:15:08.010019 kernel: On node 0, zone DMA32: 37 pages in unavailable ranges
Feb 13 20:15:08.010027 kernel: ACPI: PM-Timer IO Port: 0x608
Feb 13 20:15:08.010035 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Feb 13 20:15:08.010043 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Feb 13 20:15:08.010052 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Feb 13 20:15:08.010060 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Feb 13 20:15:08.010068 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Feb 13 20:15:08.010082 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Feb 13 20:15:08.010090 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Feb 13 20:15:08.010098 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Feb 13 20:15:08.010106 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Feb 13 20:15:08.010115 kernel: TSC deadline timer available
Feb 13 20:15:08.010123 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Feb 13 20:15:08.010131 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Feb 13 20:15:08.010139 kernel: [mem 0x80000000-0xfeffbfff] available for PCI devices
Feb 13 20:15:08.010150 kernel: Booting paravirtualized kernel on KVM
Feb 13 20:15:08.010158 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Feb 13 20:15:08.010172 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Feb 13 20:15:08.010180 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u1048576
Feb 13 20:15:08.010189 kernel: pcpu-alloc: s197032 r8192 d32344 u1048576 alloc=1*2097152
Feb 13 20:15:08.010197 kernel: pcpu-alloc: [0] 0 1
Feb 13 20:15:08.010207 kernel: kvm-guest: PV spinlocks disabled, no host support
Feb 13 20:15:08.010227 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=a8740cbac5121ade856b040634ad9badacd879298c24f899668a59d96c178b13
Feb 13 20:15:08.010241 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Feb 13 20:15:08.010251 kernel: random: crng init done
Feb 13 20:15:08.010271 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Feb 13 20:15:08.010281 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Feb 13 20:15:08.010292 kernel: Fallback order for Node 0: 0
Feb 13 20:15:08.010305 kernel: Built 1 zonelists, mobility grouping on. Total pages: 515803
Feb 13 20:15:08.010313 kernel: Policy zone: DMA32
Feb 13 20:15:08.010322 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Feb 13 20:15:08.010330 kernel: Memory: 1971204K/2096612K available (12288K kernel code, 2301K rwdata, 22728K rodata, 42840K init, 2352K bss, 125148K reserved, 0K cma-reserved)
Feb 13 20:15:08.010339 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Feb 13 20:15:08.010353 kernel: Kernel/User page tables isolation: enabled
Feb 13 20:15:08.010362 kernel: ftrace: allocating 37921 entries in 149 pages
Feb 13 20:15:08.010370 kernel: ftrace: allocated 149 pages with 4 groups
Feb 13 20:15:08.010378 kernel: Dynamic Preempt: voluntary
Feb 13 20:15:08.010386 kernel: rcu: Preemptible hierarchical RCU implementation.
Feb 13 20:15:08.010396 kernel: rcu: RCU event tracing is enabled.
Feb 13 20:15:08.010405 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Feb 13 20:15:08.010414 kernel: Trampoline variant of Tasks RCU enabled.
Feb 13 20:15:08.010422 kernel: Rude variant of Tasks RCU enabled.
Feb 13 20:15:08.010430 kernel: Tracing variant of Tasks RCU enabled.
Feb 13 20:15:08.010445 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Feb 13 20:15:08.010453 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Feb 13 20:15:08.010461 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Feb 13 20:15:08.010469 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Feb 13 20:15:08.010480 kernel: Console: colour VGA+ 80x25
Feb 13 20:15:08.010488 kernel: printk: console [tty0] enabled
Feb 13 20:15:08.010497 kernel: printk: console [ttyS0] enabled
Feb 13 20:15:08.010505 kernel: ACPI: Core revision 20230628
Feb 13 20:15:08.010513 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Feb 13 20:15:08.010527 kernel: APIC: Switch to symmetric I/O mode setup
Feb 13 20:15:08.010535 kernel: x2apic enabled
Feb 13 20:15:08.010543 kernel: APIC: Switched APIC routing to: physical x2apic
Feb 13 20:15:08.010552 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Feb 13 20:15:08.010561 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x23f39838d43, max_idle_ns: 440795267131 ns
Feb 13 20:15:08.010586 kernel: Calibrating delay loop (skipped) preset value.. 4988.27 BogoMIPS (lpj=2494138)
Feb 13 20:15:08.010599 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Feb 13 20:15:08.010612 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Feb 13 20:15:08.010676 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Feb 13 20:15:08.010693 kernel: Spectre V2 : Mitigation: Retpolines
Feb 13 20:15:08.010711 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Feb 13 20:15:08.010734 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Feb 13 20:15:08.010750 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls
Feb 13 20:15:08.010768 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Feb 13 20:15:08.010784 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Feb 13 20:15:08.010802 kernel: MDS: Mitigation: Clear CPU buffers
Feb 13 20:15:08.010819 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Feb 13 20:15:08.010845 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Feb 13 20:15:08.010861 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Feb 13 20:15:08.010877 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Feb 13 20:15:08.010887 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Feb 13 20:15:08.010896 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
Feb 13 20:15:08.010906 kernel: Freeing SMP alternatives memory: 32K
Feb 13 20:15:08.010915 kernel: pid_max: default: 32768 minimum: 301
Feb 13 20:15:08.010924 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Feb 13 20:15:08.010939 kernel: landlock: Up and running.
Feb 13 20:15:08.010948 kernel: SELinux: Initializing.
Feb 13 20:15:08.010957 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Feb 13 20:15:08.010967 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Feb 13 20:15:08.010976 kernel: smpboot: CPU0: Intel DO-Regular (family: 0x6, model: 0x4f, stepping: 0x1)
Feb 13 20:15:08.010985 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Feb 13 20:15:08.010996 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Feb 13 20:15:08.011009 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Feb 13 20:15:08.011031 kernel: Performance Events: unsupported p6 CPU model 79 no PMU driver, software events only.
Feb 13 20:15:08.011047 kernel: signal: max sigframe size: 1776
Feb 13 20:15:08.011063 kernel: rcu: Hierarchical SRCU implementation.
Feb 13 20:15:08.011079 kernel: rcu: Max phase no-delay instances is 400.
Feb 13 20:15:08.011094 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Feb 13 20:15:08.011108 kernel: smp: Bringing up secondary CPUs ...
Feb 13 20:15:08.011123 kernel: smpboot: x86: Booting SMP configuration:
Feb 13 20:15:08.011136 kernel: .... node #0, CPUs: #1
Feb 13 20:15:08.011146 kernel: smp: Brought up 1 node, 2 CPUs
Feb 13 20:15:08.011158 kernel: smpboot: Max logical packages: 1
Feb 13 20:15:08.011175 kernel: smpboot: Total of 2 processors activated (9976.55 BogoMIPS)
Feb 13 20:15:08.011184 kernel: devtmpfs: initialized
Feb 13 20:15:08.011194 kernel: x86/mm: Memory block size: 128MB
Feb 13 20:15:08.011204 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Feb 13 20:15:08.011213 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Feb 13 20:15:08.011222 kernel: pinctrl core: initialized pinctrl subsystem
Feb 13 20:15:08.011232 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Feb 13 20:15:08.011241 kernel: audit: initializing netlink subsys (disabled)
Feb 13 20:15:08.011250 kernel: thermal_sys: Registered thermal governor 'step_wise'
Feb 13 20:15:08.011265 kernel: thermal_sys: Registered thermal governor 'user_space'
Feb 13 20:15:08.011274 kernel: audit: type=2000 audit(1739477706.809:1): state=initialized audit_enabled=0 res=1
Feb 13 20:15:08.011284 kernel: cpuidle: using governor menu
Feb 13 20:15:08.011293 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Feb 13 20:15:08.011302 kernel: dca service started, version 1.12.1
Feb 13 20:15:08.011312 kernel: PCI: Using configuration type 1 for base access
Feb 13 20:15:08.011321 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Feb 13 20:15:08.011334 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Feb 13 20:15:08.011350 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Feb 13 20:15:08.011371 kernel: ACPI: Added _OSI(Module Device)
Feb 13 20:15:08.011384 kernel: ACPI: Added _OSI(Processor Device)
Feb 13 20:15:08.011397 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Feb 13 20:15:08.011410 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Feb 13 20:15:08.011424 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Feb 13 20:15:08.011447 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Feb 13 20:15:08.011461 kernel: ACPI: Interpreter enabled
Feb 13 20:15:08.011475 kernel: ACPI: PM: (supports S0 S5)
Feb 13 20:15:08.011486 kernel: ACPI: Using IOAPIC for interrupt routing
Feb 13 20:15:08.011503 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Feb 13 20:15:08.011512 kernel: PCI: Using E820 reservations for host bridge windows
Feb 13 20:15:08.011522 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Feb 13 20:15:08.011531 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Feb 13 20:15:08.011802 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Feb 13 20:15:08.011913 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI]
Feb 13 20:15:08.012011 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
Feb 13 20:15:08.012040 kernel: acpiphp: Slot [3] registered
Feb 13 20:15:08.012054 kernel: acpiphp: Slot [4] registered
Feb 13 20:15:08.012068 kernel: acpiphp: Slot [5] registered
Feb 13 20:15:08.012080 kernel: acpiphp: Slot [6] registered
Feb 13 20:15:08.012096 kernel: acpiphp: Slot [7] registered
Feb 13 20:15:08.012110 kernel: acpiphp: Slot [8] registered
Feb 13 20:15:08.012138 kernel: acpiphp: Slot [9] registered
Feb 13 20:15:08.012151 kernel: acpiphp: Slot [10] registered
Feb 13 20:15:08.012165 kernel: acpiphp: Slot [11] registered
Feb 13 20:15:08.012186 kernel: acpiphp: Slot [12] registered
Feb 13 20:15:08.012199 kernel: acpiphp: Slot [13] registered
Feb 13 20:15:08.012212 kernel: acpiphp: Slot [14] registered
Feb 13 20:15:08.012225 kernel: acpiphp: Slot [15] registered
Feb 13 20:15:08.012239 kernel: acpiphp: Slot [16] registered
Feb 13 20:15:08.012253 kernel: acpiphp: Slot [17] registered
Feb 13 20:15:08.012266 kernel: acpiphp: Slot [18] registered
Feb 13 20:15:08.012279 kernel: acpiphp: Slot [19] registered
Feb 13 20:15:08.012293 kernel: acpiphp: Slot [20] registered
Feb 13 20:15:08.012306 kernel: acpiphp: Slot [21] registered
Feb 13 20:15:08.012354 kernel: acpiphp: Slot [22] registered
Feb 13 20:15:08.012367 kernel: acpiphp: Slot [23] registered
Feb 13 20:15:08.012376 kernel: acpiphp: Slot [24] registered
Feb 13 20:15:08.012385 kernel: acpiphp: Slot [25] registered
Feb 13 20:15:08.012394 kernel: acpiphp: Slot [26] registered
Feb 13 20:15:08.012403 kernel: acpiphp: Slot [27] registered
Feb 13 20:15:08.012412 kernel: acpiphp: Slot [28] registered
Feb 13 20:15:08.012421 kernel: acpiphp: Slot [29] registered
Feb 13 20:15:08.012430 kernel: acpiphp: Slot [30] registered
Feb 13 20:15:08.012445 kernel: acpiphp: Slot [31] registered
Feb 13 20:15:08.012454 kernel: PCI host bridge to bus 0000:00
Feb 13 20:15:08.012671 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Feb 13 20:15:08.012808 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Feb 13 20:15:08.012899 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Feb 13 20:15:08.013030 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window]
Feb 13 20:15:08.013121 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x17fffffff window]
Feb 13 20:15:08.013207 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Feb 13 20:15:08.013369 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Feb 13 20:15:08.013511 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100
Feb 13 20:15:08.013636 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180
Feb 13 20:15:08.014529 kernel: pci 0000:00:01.1: reg 0x20: [io 0xc1e0-0xc1ef]
Feb 13 20:15:08.016757 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7]
Feb 13 20:15:08.016927 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io 0x03f6]
Feb 13 20:15:08.017075 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177]
Feb 13 20:15:08.017210 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io 0x0376]
Feb 13 20:15:08.017363 kernel: pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300
Feb 13 20:15:08.017524 kernel: pci 0000:00:01.2: reg 0x20: [io 0xc180-0xc19f]
Feb 13 20:15:08.018699 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000
Feb 13 20:15:08.018923 kernel: pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI
Feb 13 20:15:08.019110 kernel: pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB
Feb 13 20:15:08.019253 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000
Feb 13 20:15:08.019363 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfe000000-0xfe7fffff pref]
Feb 13 20:15:08.019506 kernel: pci 0000:00:02.0: reg 0x18: [mem 0xfe800000-0xfe803fff 64bit pref]
Feb 13 20:15:08.020964 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfebf0000-0xfebf0fff]
Feb 13 20:15:08.021187 kernel: pci 0000:00:02.0: reg 0x30: [mem 0xfebe0000-0xfebeffff pref]
Feb 13 20:15:08.021353 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Feb 13 20:15:08.021505 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000
Feb 13 20:15:08.021628 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc1a0-0xc1bf]
Feb 13 20:15:08.021824 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebf1000-0xfebf1fff]
Feb 13 20:15:08.021991 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe804000-0xfe807fff 64bit pref]
Feb 13 20:15:08.022180 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Feb 13 20:15:08.022326 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc1c0-0xc1df]
Feb 13 20:15:08.022441 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebf2000-0xfebf2fff]
Feb 13 20:15:08.022592 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe808000-0xfe80bfff 64bit pref]
Feb 13 20:15:08.023914 kernel: pci 0000:00:05.0: [1af4:1004] type 00 class 0x010000
Feb 13 20:15:08.024106 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc100-0xc13f]
Feb 13 20:15:08.024285 kernel: pci 0000:00:05.0: reg 0x14: [mem 0xfebf3000-0xfebf3fff]
Feb 13 20:15:08.024446 kernel: pci 0000:00:05.0: reg 0x20: [mem 0xfe80c000-0xfe80ffff 64bit pref]
Feb 13 20:15:08.024604 kernel: pci 0000:00:06.0: [1af4:1001] type 00 class 0x010000
Feb 13 20:15:08.026838 kernel: pci 0000:00:06.0: reg 0x10: [io 0xc000-0xc07f]
Feb 13 20:15:08.027010 kernel: pci 0000:00:06.0: reg 0x14: [mem 0xfebf4000-0xfebf4fff]
Feb 13 20:15:08.027131 kernel: pci 0000:00:06.0: reg 0x20: [mem 0xfe810000-0xfe813fff 64bit pref]
Feb 13 20:15:08.027275 kernel: pci 0000:00:07.0: [1af4:1001] type 00 class 0x010000
Feb 13 20:15:08.027404 kernel: pci 0000:00:07.0: reg 0x10: [io 0xc080-0xc0ff]
Feb 13 20:15:08.027529 kernel: pci 0000:00:07.0: reg 0x14: [mem 0xfebf5000-0xfebf5fff]
Feb 13 20:15:08.029664 kernel: pci 0000:00:07.0: reg 0x20: [mem 0xfe814000-0xfe817fff 64bit pref]
Feb 13 20:15:08.029986 kernel: pci 0000:00:08.0: [1af4:1002] type 00 class 0x00ff00
Feb 13 20:15:08.030189 kernel: pci 0000:00:08.0: reg 0x10: [io 0xc140-0xc17f]
Feb 13 20:15:08.030340 kernel: pci 0000:00:08.0: reg 0x20: [mem 0xfe818000-0xfe81bfff 64bit pref]
Feb 13 20:15:08.030361 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Feb 13 20:15:08.030376 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Feb 13 20:15:08.030392 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Feb 13 20:15:08.030409 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Feb 13 20:15:08.030426 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Feb 13 20:15:08.030457 kernel: iommu: Default domain type: Translated
Feb 13 20:15:08.030474 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Feb 13 20:15:08.030491 kernel: PCI: Using ACPI for IRQ routing
Feb 13 20:15:08.030508 kernel: PCI: pci_cache_line_size set to 64 bytes
Feb 13 20:15:08.030524 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Feb 13 20:15:08.030541 kernel: e820: reserve RAM buffer [mem 0x7ffdb000-0x7fffffff]
Feb 13 20:15:08.030726 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Feb 13 20:15:08.030884 kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Feb 13 20:15:08.031019 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Feb 13 20:15:08.031032 kernel: vgaarb: loaded
Feb 13 20:15:08.031042 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Feb 13 20:15:08.031051 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Feb 13 20:15:08.031061 kernel: clocksource: Switched to clocksource kvm-clock
Feb 13 20:15:08.031070 kernel: VFS: Disk quotas dquot_6.6.0
Feb 13 20:15:08.031080 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Feb 13 20:15:08.031089 kernel: pnp: PnP ACPI init
Feb 13 20:15:08.031098 kernel: pnp: PnP ACPI: found 4 devices
Feb 13 20:15:08.031115 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Feb 13 20:15:08.031125 kernel: NET: Registered PF_INET protocol family
Feb 13 20:15:08.031134 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Feb 13 20:15:08.031144 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Feb 13 20:15:08.031153 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Feb 13 20:15:08.031162 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Feb 13 20:15:08.031172 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
Feb 13 20:15:08.031184 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Feb 13 20:15:08.031197 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Feb 13 20:15:08.031218 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Feb 13 20:15:08.031232 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Feb 13 20:15:08.031244 kernel: NET: Registered PF_XDP protocol family
Feb 13 20:15:08.031395 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Feb 13 20:15:08.031493 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Feb 13 20:15:08.031582 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Feb 13 20:15:08.031729 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window]
Feb 13 20:15:08.031831 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x17fffffff window]
Feb 13 20:15:08.032056 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Feb 13 20:15:08.032197 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Feb 13 20:15:08.032216 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Feb 13 20:15:08.032354 kernel: pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x7a0 took 40937 usecs
Feb 13 20:15:08.032367 kernel: PCI: CLS 0 bytes, default 64
Feb 13 20:15:08.032377 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Feb 13 20:15:08.032389 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x23f39838d43, max_idle_ns: 440795267131 ns
Feb 13 20:15:08.032405 kernel: Initialise system trusted keyrings
Feb 13 20:15:08.032433 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
Feb 13 20:15:08.032446 kernel: Key type asymmetric registered
Feb 13 20:15:08.032461 kernel: Asymmetric key parser 'x509' registered
Feb 13 20:15:08.032476 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Feb 13 20:15:08.032485 kernel: io scheduler mq-deadline registered
Feb 13 20:15:08.032494 kernel: io scheduler kyber registered
Feb 13 20:15:08.032503 kernel: io scheduler bfq registered
Feb 13 20:15:08.032512 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Feb 13 20:15:08.032522 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10
Feb 13 20:15:08.032531 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Feb 13 20:15:08.032547 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Feb 13 20:15:08.032556 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Feb 13 20:15:08.032564 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Feb 13 20:15:08.032574 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Feb 13 20:15:08.032583 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Feb 13 20:15:08.033040 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Feb 13 20:15:08.033253 kernel: rtc_cmos 00:03: RTC can wake from S4
Feb 13 20:15:08.033276 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Feb 13 20:15:08.033432 kernel: rtc_cmos 00:03: registered as rtc0
Feb 13 20:15:08.033555 kernel: rtc_cmos 00:03: setting system clock to 2025-02-13T20:15:07 UTC (1739477707)
Feb 13 20:15:08.034850 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram
Feb 13 20:15:08.034881 kernel: intel_pstate: CPU model not supported
Feb 13 20:15:08.034892 kernel: NET: Registered PF_INET6 protocol family
Feb 13 20:15:08.034904 kernel: Segment Routing with IPv6
Feb 13 20:15:08.034920 kernel: In-situ OAM (IOAM) with IPv6
Feb 13 20:15:08.034935 kernel: NET: Registered PF_PACKET protocol family
Feb 13 20:15:08.034963 kernel: Key type dns_resolver registered
Feb 13 20:15:08.034972 kernel: IPI shorthand broadcast: enabled
Feb 13 20:15:08.034982 kernel: sched_clock: Marking stable (1153005712, 97987796)->(1272615422, -21621914)
Feb 13 20:15:08.034991 kernel: registered taskstats version 1
Feb 13 20:15:08.034999 kernel: Loading compiled-in X.509 certificates
Feb 13 20:15:08.035009 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.74-flatcar: 6e17590ca2768b672aa48f3e0cedc4061febfe93'
Feb 13 20:15:08.035017 kernel: Key type .fscrypt registered
Feb 13 20:15:08.035026 kernel: Key type fscrypt-provisioning registered
Feb 13 20:15:08.035035 kernel: ima: No TPM chip found, activating TPM-bypass!
Feb 13 20:15:08.035049 kernel: ima: Allocated hash algorithm: sha1
Feb 13 20:15:08.035058 kernel: ima: No architecture policies found
Feb 13 20:15:08.035067 kernel: clk: Disabling unused clocks
Feb 13 20:15:08.035075 kernel: Freeing unused kernel image (initmem) memory: 42840K
Feb 13 20:15:08.035085 kernel: Write protecting the kernel read-only data: 36864k
Feb 13 20:15:08.035123 kernel: Freeing unused kernel image (rodata/data gap) memory: 1848K
Feb 13 20:15:08.035137 kernel: Run /init as init process
Feb 13 20:15:08.035146 kernel: with arguments:
Feb 13 20:15:08.035156 kernel: /init
Feb 13 20:15:08.035169 kernel: with environment:
Feb 13 20:15:08.035178 kernel: HOME=/
Feb 13 20:15:08.035189 kernel: TERM=linux
Feb 13 20:15:08.035204 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Feb 13 20:15:08.035222 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Feb 13 20:15:08.035247 systemd[1]: Detected virtualization kvm.
Feb 13 20:15:08.035259 systemd[1]: Detected architecture x86-64.
Feb 13 20:15:08.035269 systemd[1]: Running in initrd.
Feb 13 20:15:08.035284 systemd[1]: No hostname configured, using default hostname.
Feb 13 20:15:08.035293 systemd[1]: Hostname set to .
Feb 13 20:15:08.035303 systemd[1]: Initializing machine ID from VM UUID.
Feb 13 20:15:08.035313 systemd[1]: Queued start job for default target initrd.target.
Feb 13 20:15:08.035323 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Feb 13 20:15:08.035334 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Feb 13 20:15:08.035345 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Feb 13 20:15:08.035355 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Feb 13 20:15:08.035370 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Feb 13 20:15:08.035380 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Feb 13 20:15:08.035391 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Feb 13 20:15:08.035401 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Feb 13 20:15:08.035411 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Feb 13 20:15:08.035421 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Feb 13 20:15:08.035435 systemd[1]: Reached target paths.target - Path Units.
Feb 13 20:15:08.035445 systemd[1]: Reached target slices.target - Slice Units.
Feb 13 20:15:08.035456 systemd[1]: Reached target swap.target - Swaps.
Feb 13 20:15:08.035470 systemd[1]: Reached target timers.target - Timer Units.
Feb 13 20:15:08.035480 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Feb 13 20:15:08.035491 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Feb 13 20:15:08.035506 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Feb 13 20:15:08.035517 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Feb 13 20:15:08.035527 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Feb 13 20:15:08.035537 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Feb 13 20:15:08.035547 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Feb 13 20:15:08.035557 systemd[1]: Reached target sockets.target - Socket Units.
Feb 13 20:15:08.035567 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Feb 13 20:15:08.035578 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Feb 13 20:15:08.035594 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Feb 13 20:15:08.035603 systemd[1]: Starting systemd-fsck-usr.service...
Feb 13 20:15:08.035613 systemd[1]: Starting systemd-journald.service - Journal Service...
Feb 13 20:15:08.035623 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Feb 13 20:15:08.035639 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 20:15:08.036718 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Feb 13 20:15:08.036783 systemd-journald[182]: Collecting audit messages is disabled.
Feb 13 20:15:08.036831 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Feb 13 20:15:08.036841 systemd[1]: Finished systemd-fsck-usr.service.
Feb 13 20:15:08.036853 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Feb 13 20:15:08.036871 systemd-journald[182]: Journal started
Feb 13 20:15:08.036893 systemd-journald[182]: Runtime Journal (/run/log/journal/9119f44d585347b5852e73f0602439fd) is 4.9M, max 39.3M, 34.4M free.
Feb 13 20:15:08.034892 systemd-modules-load[183]: Inserted module 'overlay'
Feb 13 20:15:08.041681 systemd[1]: Started systemd-journald.service - Journal Service.
Feb 13 20:15:08.081338 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 20:15:08.088537 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Feb 13 20:15:08.088580 kernel: Bridge firewalling registered
Feb 13 20:15:08.086559 systemd-modules-load[183]: Inserted module 'br_netfilter'
Feb 13 20:15:08.088359 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Feb 13 20:15:08.112071 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 20:15:08.114853 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Feb 13 20:15:08.127917 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Feb 13 20:15:08.128887 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Feb 13 20:15:08.143019 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Feb 13 20:15:08.151772 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 20:15:08.153683 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Feb 13 20:15:08.159905 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Feb 13 20:15:08.163202 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Feb 13 20:15:08.174930 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Feb 13 20:15:08.177716 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Feb 13 20:15:08.197978 dracut-cmdline[214]: dracut-dracut-053
Feb 13 20:15:08.208693 dracut-cmdline[214]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=a8740cbac5121ade856b040634ad9badacd879298c24f899668a59d96c178b13
Feb 13 20:15:08.239846 systemd-resolved[217]: Positive Trust Anchors:
Feb 13 20:15:08.240799 systemd-resolved[217]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Feb 13 20:15:08.241539 systemd-resolved[217]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Feb 13 20:15:08.247681 systemd-resolved[217]: Defaulting to hostname 'linux'.
Feb 13 20:15:08.250198 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Feb 13 20:15:08.250854 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Feb 13 20:15:08.320693 kernel: SCSI subsystem initialized
Feb 13 20:15:08.331682 kernel: Loading iSCSI transport class v2.0-870.
Feb 13 20:15:08.348143 kernel: iscsi: registered transport (tcp)
Feb 13 20:15:08.373815 kernel: iscsi: registered transport (qla4xxx)
Feb 13 20:15:08.373919 kernel: QLogic iSCSI HBA Driver
Feb 13 20:15:08.432498 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Feb 13 20:15:08.440030 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Feb 13 20:15:08.491898 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Feb 13 20:15:08.492005 kernel: device-mapper: uevent: version 1.0.3
Feb 13 20:15:08.492020 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Feb 13 20:15:08.552781 kernel: raid6: avx2x4   gen() 15221 MB/s
Feb 13 20:15:08.568740 kernel: raid6: avx2x2   gen() 18764 MB/s
Feb 13 20:15:08.585799 kernel: raid6: avx2x1   gen() 17516 MB/s
Feb 13 20:15:08.585943 kernel: raid6: using algorithm avx2x2 gen() 18764 MB/s
Feb 13 20:15:08.604017 kernel: raid6: .... xor() 17845 MB/s, rmw enabled
Feb 13 20:15:08.604242 kernel: raid6: using avx2x2 recovery algorithm
Feb 13 20:15:08.628697 kernel: xor: automatically using best checksumming function   avx
Feb 13 20:15:08.832722 kernel: Btrfs loaded, zoned=no, fsverity=no
Feb 13 20:15:08.852031 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Feb 13 20:15:08.859062 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Feb 13 20:15:08.891785 systemd-udevd[401]: Using default interface naming scheme 'v255'.
Feb 13 20:15:08.898499 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Feb 13 20:15:08.906962 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Feb 13 20:15:08.941418 dracut-pre-trigger[409]: rd.md=0: removing MD RAID activation
Feb 13 20:15:08.995191 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Feb 13 20:15:09.001006 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Feb 13 20:15:09.093511 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Feb 13 20:15:09.102116 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Feb 13 20:15:09.147669 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Feb 13 20:15:09.150996 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Feb 13 20:15:09.152314 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Feb 13 20:15:09.153703 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Feb 13 20:15:09.159977 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Feb 13 20:15:09.206642 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Feb 13 20:15:09.215685 kernel: scsi host0: Virtio SCSI HBA
Feb 13 20:15:09.228691 kernel: virtio_blk virtio4: 1/0/0 default/read/poll queues
Feb 13 20:15:09.322569 kernel: libata version 3.00 loaded.
Feb 13 20:15:09.322598 kernel: cryptd: max_cpu_qlen set to 1000
Feb 13 20:15:09.322612 kernel: ata_piix 0000:00:01.1: version 2.13
Feb 13 20:15:09.323187 kernel: virtio_blk virtio4: [vda] 125829120 512-byte logical blocks (64.4 GB/60.0 GiB)
Feb 13 20:15:09.323355 kernel: scsi host1: ata_piix
Feb 13 20:15:09.323575 kernel: scsi host2: ata_piix
Feb 13 20:15:09.323813 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc1e0 irq 14
Feb 13 20:15:09.323830 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc1e8 irq 15
Feb 13 20:15:09.323843 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Feb 13 20:15:09.323856 kernel: GPT:9289727 != 125829119
Feb 13 20:15:09.323868 kernel: GPT:Alternate GPT header not at the end of the disk.
Feb 13 20:15:09.323880 kernel: GPT:9289727 != 125829119
Feb 13 20:15:09.323893 kernel: GPT: Use GNU Parted to correct GPT errors.
Feb 13 20:15:09.323913 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Feb 13 20:15:09.323949 kernel: ACPI: bus type USB registered
Feb 13 20:15:09.323962 kernel: usbcore: registered new interface driver usbfs
Feb 13 20:15:09.323975 kernel: virtio_blk virtio5: 1/0/0 default/read/poll queues
Feb 13 20:15:09.336272 kernel: usbcore: registered new interface driver hub
Feb 13 20:15:09.336306 kernel: usbcore: registered new device driver usb
Feb 13 20:15:09.336327 kernel: virtio_blk virtio5: [vdb] 932 512-byte logical blocks (477 kB/466 KiB)
Feb 13 20:15:09.339786 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Feb 13 20:15:09.339972 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 20:15:09.342332 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 20:15:09.343933 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Feb 13 20:15:09.344262 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 20:15:09.344917 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 20:15:09.360732 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Feb 13 20:15:09.422732 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 20:15:09.431328 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Feb 13 20:15:09.489325 kernel: AVX2 version of gcm_enc/dec engaged.
Feb 13 20:15:09.489423 kernel: AES CTR mode by8 optimization enabled
Feb 13 20:15:09.488243 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 20:15:09.524689 kernel: BTRFS: device fsid 892c7470-7713-4b0f-880a-4c5f7bf5b72d devid 1 transid 37 /dev/vda3 scanned by (udev-worker) (458)
Feb 13 20:15:09.534322 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (457)
Feb 13 20:15:09.556766 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Feb 13 20:15:09.571412 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Feb 13 20:15:09.588697 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Feb 13 20:15:09.591689 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
Feb 13 20:15:09.620690 kernel: uhci_hcd 0000:00:01.2: UHCI Host Controller
Feb 13 20:15:09.621126 kernel: uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1
Feb 13 20:15:09.621348 kernel: uhci_hcd 0000:00:01.2: detected 2 ports
Feb 13 20:15:09.622705 kernel: uhci_hcd 0000:00:01.2: irq 11, io port 0x0000c180
Feb 13 20:15:09.623036 kernel: hub 1-0:1.0: USB hub found
Feb 13 20:15:09.623265 kernel: hub 1-0:1.0: 2 ports detected
Feb 13 20:15:09.612574 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Feb 13 20:15:09.627082 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Feb 13 20:15:09.648751 disk-uuid[548]: Primary Header is updated.
Feb 13 20:15:09.648751 disk-uuid[548]: Secondary Entries is updated.
Feb 13 20:15:09.648751 disk-uuid[548]: Secondary Header is updated.
Feb 13 20:15:09.664775 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Feb 13 20:15:09.673613 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Feb 13 20:15:10.689707 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Feb 13 20:15:10.690324 disk-uuid[549]: The operation has completed successfully.
Feb 13 20:15:10.760239 systemd[1]: disk-uuid.service: Deactivated successfully.
Feb 13 20:15:10.760382 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Feb 13 20:15:10.772275 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Feb 13 20:15:10.782486 sh[564]: Success
Feb 13 20:15:10.800758 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
Feb 13 20:15:10.893176 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Feb 13 20:15:10.896856 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Feb 13 20:15:10.898043 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Feb 13 20:15:10.937718 kernel: BTRFS info (device dm-0): first mount of filesystem 892c7470-7713-4b0f-880a-4c5f7bf5b72d
Feb 13 20:15:10.937841 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Feb 13 20:15:10.937864 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Feb 13 20:15:10.937883 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Feb 13 20:15:10.938196 kernel: BTRFS info (device dm-0): using free space tree
Feb 13 20:15:10.949817 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Feb 13 20:15:10.951493 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Feb 13 20:15:10.958040 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Feb 13 20:15:10.960876 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Feb 13 20:15:10.985681 kernel: BTRFS info (device vda6): first mount of filesystem b405b664-b121-4411-9ed3-1128bc9da790
Feb 13 20:15:10.985782 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Feb 13 20:15:10.985847 kernel: BTRFS info (device vda6): using free space tree
Feb 13 20:15:10.992703 kernel: BTRFS info (device vda6): auto enabling async discard
Feb 13 20:15:11.009805 systemd[1]: mnt-oem.mount: Deactivated successfully.
Feb 13 20:15:11.012845 kernel: BTRFS info (device vda6): last unmount of filesystem b405b664-b121-4411-9ed3-1128bc9da790
Feb 13 20:15:11.025495 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Feb 13 20:15:11.036062 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Feb 13 20:15:11.168722 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Feb 13 20:15:11.180182 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Feb 13 20:15:11.215424 systemd-networkd[748]: lo: Link UP
Feb 13 20:15:11.216508 systemd-networkd[748]: lo: Gained carrier
Feb 13 20:15:11.221084 systemd-networkd[748]: Enumeration completed
Feb 13 20:15:11.222075 systemd[1]: Started systemd-networkd.service - Network Configuration.
Feb 13 20:15:11.223310 systemd[1]: Reached target network.target - Network.
Feb 13 20:15:11.224364 systemd-networkd[748]: eth0: found matching network '/usr/lib/systemd/network/yy-digitalocean.network', based on potentially unpredictable interface name.
Feb 13 20:15:11.224371 systemd-networkd[748]: eth0: Configuring with /usr/lib/systemd/network/yy-digitalocean.network.
Feb 13 20:15:11.227750 systemd-networkd[748]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 20:15:11.227757 systemd-networkd[748]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network.
Feb 13 20:15:11.230058 systemd-networkd[748]: eth0: Link UP
Feb 13 20:15:11.230064 systemd-networkd[748]: eth0: Gained carrier
Feb 13 20:15:11.230080 systemd-networkd[748]: eth0: found matching network '/usr/lib/systemd/network/yy-digitalocean.network', based on potentially unpredictable interface name.
Feb 13 20:15:11.236402 ignition[658]: Ignition 2.19.0
Feb 13 20:15:11.236904 systemd-networkd[748]: eth1: Link UP
Feb 13 20:15:11.236456 ignition[658]: Stage: fetch-offline
Feb 13 20:15:11.236912 systemd-networkd[748]: eth1: Gained carrier
Feb 13 20:15:11.236540 ignition[658]: no configs at "/usr/lib/ignition/base.d"
Feb 13 20:15:11.236933 systemd-networkd[748]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Feb 13 20:15:11.236556 ignition[658]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Feb 13 20:15:11.243438 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Feb 13 20:15:11.237759 ignition[658]: parsed url from cmdline: ""
Feb 13 20:15:11.237766 ignition[658]: no config URL provided
Feb 13 20:15:11.237778 ignition[658]: reading system config file "/usr/lib/ignition/user.ign"
Feb 13 20:15:11.237797 ignition[658]: no config at "/usr/lib/ignition/user.ign"
Feb 13 20:15:11.237818 ignition[658]: failed to fetch config: resource requires networking
Feb 13 20:15:11.238366 ignition[658]: Ignition finished successfully
Feb 13 20:15:11.251826 systemd-networkd[748]: eth0: DHCPv4 address 147.182.243.214/20, gateway 147.182.240.1 acquired from 169.254.169.253
Feb 13 20:15:11.253064 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Feb 13 20:15:11.256814 systemd-networkd[748]: eth1: DHCPv4 address 10.124.0.10/20 acquired from 169.254.169.253
Feb 13 20:15:11.285867 ignition[756]: Ignition 2.19.0
Feb 13 20:15:11.285895 ignition[756]: Stage: fetch
Feb 13 20:15:11.286215 ignition[756]: no configs at "/usr/lib/ignition/base.d"
Feb 13 20:15:11.286233 ignition[756]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Feb 13 20:15:11.286396 ignition[756]: parsed url from cmdline: ""
Feb 13 20:15:11.286403 ignition[756]: no config URL provided
Feb 13 20:15:11.286411 ignition[756]: reading system config file "/usr/lib/ignition/user.ign"
Feb 13 20:15:11.286425 ignition[756]: no config at "/usr/lib/ignition/user.ign"
Feb 13 20:15:11.286452 ignition[756]: GET http://169.254.169.254/metadata/v1/user-data: attempt #1
Feb 13 20:15:11.304902 ignition[756]: GET result: OK
Feb 13 20:15:11.305032 ignition[756]: parsing config with SHA512: af1d60d48d2dcf2ab621b26bf86ee5429c231fdfe948f39e06919ce2eea6e00a58c3e9576c9fce4fd144f54fb9c34602a2763d7c2dc6a793eb4f0c03c91f699e
Feb 13 20:15:11.312961 unknown[756]: fetched base config from "system"
Feb 13 20:15:11.312976 unknown[756]: fetched base config from "system"
Feb 13 20:15:11.313318 ignition[756]: fetch: fetch complete
Feb 13 20:15:11.312984 unknown[756]: fetched user config from "digitalocean"
Feb 13 20:15:11.313328 ignition[756]: fetch: fetch passed
Feb 13 20:15:11.316302 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Feb 13 20:15:11.313396 ignition[756]: Ignition finished successfully
Feb 13 20:15:11.324225 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Feb 13 20:15:11.373545 ignition[763]: Ignition 2.19.0
Feb 13 20:15:11.373559 ignition[763]: Stage: kargs
Feb 13 20:15:11.373911 ignition[763]: no configs at "/usr/lib/ignition/base.d"
Feb 13 20:15:11.373927 ignition[763]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Feb 13 20:15:11.374907 ignition[763]: kargs: kargs passed
Feb 13 20:15:11.374990 ignition[763]: Ignition finished successfully
Feb 13 20:15:11.377848 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Feb 13 20:15:11.385021 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Feb 13 20:15:11.428837 ignition[769]: Ignition 2.19.0
Feb 13 20:15:11.428854 ignition[769]: Stage: disks
Feb 13 20:15:11.429210 ignition[769]: no configs at "/usr/lib/ignition/base.d"
Feb 13 20:15:11.429229 ignition[769]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Feb 13 20:15:11.430539 ignition[769]: disks: disks passed
Feb 13 20:15:11.432439 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Feb 13 20:15:11.430672 ignition[769]: Ignition finished successfully
Feb 13 20:15:11.438747 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Feb 13 20:15:11.440057 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Feb 13 20:15:11.441394 systemd[1]: Reached target local-fs.target - Local File Systems.
Feb 13 20:15:11.442408 systemd[1]: Reached target sysinit.target - System Initialization.
Feb 13 20:15:11.443234 systemd[1]: Reached target basic.target - Basic System.
Feb 13 20:15:11.453102 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Feb 13 20:15:11.478780 systemd-fsck[778]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Feb 13 20:15:11.482402 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Feb 13 20:15:11.492011 systemd[1]: Mounting sysroot.mount - /sysroot...
Feb 13 20:15:11.619695 kernel: EXT4-fs (vda9): mounted filesystem 85215ce4-0be3-4782-863e-8dde129924f0 r/w with ordered data mode. Quota mode: none.
Feb 13 20:15:11.620675 systemd[1]: Mounted sysroot.mount - /sysroot.
Feb 13 20:15:11.622017 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Feb 13 20:15:11.631890 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Feb 13 20:15:11.635912 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Feb 13 20:15:11.643033 systemd[1]: Starting flatcar-digitalocean-network.service - Flatcar DigitalOcean Network Agent...
Feb 13 20:15:11.651074 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
Feb 13 20:15:11.653081 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (786)
Feb 13 20:15:11.651803 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Feb 13 20:15:11.651862 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Feb 13 20:15:11.657411 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Feb 13 20:15:11.665381 kernel: BTRFS info (device vda6): first mount of filesystem b405b664-b121-4411-9ed3-1128bc9da790
Feb 13 20:15:11.665427 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Feb 13 20:15:11.665445 kernel: BTRFS info (device vda6): using free space tree
Feb 13 20:15:11.668727 kernel: BTRFS info (device vda6): auto enabling async discard
Feb 13 20:15:11.674089 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Feb 13 20:15:11.678579 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Feb 13 20:15:11.771423 coreos-metadata[788]: Feb 13 20:15:11.771 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1
Feb 13 20:15:11.787686 coreos-metadata[788]: Feb 13 20:15:11.785 INFO Fetch successful
Feb 13 20:15:11.799771 systemd[1]: flatcar-digitalocean-network.service: Deactivated successfully.
Feb 13 20:15:11.800262 systemd[1]: Finished flatcar-digitalocean-network.service - Flatcar DigitalOcean Network Agent.
Feb 13 20:15:11.803523 coreos-metadata[789]: Feb 13 20:15:11.800 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1
Feb 13 20:15:11.805306 initrd-setup-root[816]: cut: /sysroot/etc/passwd: No such file or directory
Feb 13 20:15:11.810789 initrd-setup-root[824]: cut: /sysroot/etc/group: No such file or directory
Feb 13 20:15:11.816700 coreos-metadata[789]: Feb 13 20:15:11.815 INFO Fetch successful
Feb 13 20:15:11.820537 initrd-setup-root[831]: cut: /sysroot/etc/shadow: No such file or directory
Feb 13 20:15:11.822846 coreos-metadata[789]: Feb 13 20:15:11.821 INFO wrote hostname ci-4081.3.1-9-0c9fce155b to /sysroot/etc/hostname
Feb 13 20:15:11.824236 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Feb 13 20:15:11.833040 initrd-setup-root[839]: cut: /sysroot/etc/gshadow: No such file or directory
Feb 13 20:15:11.996068 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Feb 13 20:15:12.002905 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Feb 13 20:15:12.005821 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Feb 13 20:15:12.038697 kernel: BTRFS info (device vda6): last unmount of filesystem b405b664-b121-4411-9ed3-1128bc9da790
Feb 13 20:15:12.038754 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Feb 13 20:15:12.064328 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Feb 13 20:15:12.084678 ignition[907]: INFO : Ignition 2.19.0
Feb 13 20:15:12.084678 ignition[907]: INFO : Stage: mount
Feb 13 20:15:12.086406 ignition[907]: INFO : no configs at "/usr/lib/ignition/base.d"
Feb 13 20:15:12.086406 ignition[907]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Feb 13 20:15:12.086406 ignition[907]: INFO : mount: mount passed
Feb 13 20:15:12.088637 ignition[907]: INFO : Ignition finished successfully
Feb 13 20:15:12.088194 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Feb 13 20:15:12.094918 systemd[1]: Starting ignition-files.service - Ignition (files)...
Feb 13 20:15:12.136566 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Feb 13 20:15:12.154728 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (919)
Feb 13 20:15:12.158900 kernel: BTRFS info (device vda6): first mount of filesystem b405b664-b121-4411-9ed3-1128bc9da790
Feb 13 20:15:12.159035 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
Feb 13 20:15:12.160758 kernel: BTRFS info (device vda6): using free space tree
Feb 13 20:15:12.165727 kernel: BTRFS info (device vda6): auto enabling async discard
Feb 13 20:15:12.169039 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Feb 13 20:15:12.207100 ignition[936]: INFO : Ignition 2.19.0
Feb 13 20:15:12.207100 ignition[936]: INFO : Stage: files
Feb 13 20:15:12.208915 ignition[936]: INFO : no configs at "/usr/lib/ignition/base.d"
Feb 13 20:15:12.208915 ignition[936]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Feb 13 20:15:12.208915 ignition[936]: DEBUG : files: compiled without relabeling support, skipping
Feb 13 20:15:12.215141 ignition[936]: INFO : files: ensureUsers: op(1): [started]  creating or modifying user "core"
Feb 13 20:15:12.215141 ignition[936]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Feb 13 20:15:12.220290 ignition[936]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Feb 13 20:15:12.221098 ignition[936]: INFO : files: ensureUsers: op(2): [started]  adding ssh keys to user "core"
Feb 13 20:15:12.222313 ignition[936]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Feb 13 20:15:12.221106 unknown[936]: wrote ssh authorized keys file for user: core
Feb 13 20:15:12.224033 ignition[936]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started]  writing file "/sysroot/home/core/install.sh"
Feb 13 20:15:12.224033 ignition[936]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/home/core/install.sh"
Feb 13 20:15:12.224033 ignition[936]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started]  writing file "/sysroot/etc/flatcar/update.conf"
Feb 13 20:15:12.228232 ignition[936]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Feb 13 20:15:12.228232 ignition[936]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started]  writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw"
Feb 13 20:15:12.228232 ignition[936]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw"
Feb 13 20:15:12.228232 ignition[936]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started]  writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw"
Feb 13 20:15:12.228232 ignition[936]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.32.0-x86-64.raw: attempt #1
Feb 13 20:15:12.393212 systemd-networkd[748]: eth0: Gained IPv6LL
Feb 13 20:15:12.724996 ignition[936]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET result: OK
Feb 13 20:15:13.072374 ignition[936]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.0-x86-64.raw"
Feb 13 20:15:13.072374 ignition[936]: INFO : files: createResultFile: createFiles: op(7): [started]  writing file "/sysroot/etc/.ignition-result.json"
Feb 13 20:15:13.074451 ignition[936]: INFO : files: createResultFile: createFiles: op(7): [finished] writing file "/sysroot/etc/.ignition-result.json"
Feb 13 20:15:13.074451 ignition[936]: INFO : files: files passed
Feb 13 20:15:13.074451 ignition[936]: INFO : Ignition finished successfully
Feb 13 20:15:13.074528 systemd[1]: Finished ignition-files.service - Ignition (files).
Feb 13 20:15:13.082024 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Feb 13 20:15:13.083921 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Feb 13 20:15:13.098102 systemd-networkd[748]: eth1: Gained IPv6LL
Feb 13 20:15:13.106479 systemd[1]: ignition-quench.service: Deactivated successfully.
Feb 13 20:15:13.106728 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Feb 13 20:15:13.117049 initrd-setup-root-after-ignition[964]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Feb 13 20:15:13.117049 initrd-setup-root-after-ignition[964]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Feb 13 20:15:13.120454 initrd-setup-root-after-ignition[968]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Feb 13 20:15:13.123198 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Feb 13 20:15:13.125044 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Feb 13 20:15:13.132278 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Feb 13 20:15:13.201897 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Feb 13 20:15:13.202091 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Feb 13 20:15:13.204485 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Feb 13 20:15:13.205455 systemd[1]: Reached target initrd.target - Initrd Default Target.
Feb 13 20:15:13.206523 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Feb 13 20:15:13.219117 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Feb 13 20:15:13.242683 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Feb 13 20:15:13.250053 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Feb 13 20:15:13.279447 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Feb 13 20:15:13.280195 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Feb 13 20:15:13.281317 systemd[1]: Stopped target timers.target - Timer Units.
Feb 13 20:15:13.282285 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Feb 13 20:15:13.282617 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Feb 13 20:15:13.284003 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Feb 13 20:15:13.284735 systemd[1]: Stopped target basic.target - Basic System.
Feb 13 20:15:13.285534 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Feb 13 20:15:13.286349 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Feb 13 20:15:13.287361 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Feb 13 20:15:13.288447 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Feb 13 20:15:13.289309 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Feb 13 20:15:13.290347 systemd[1]: Stopped target sysinit.target - System Initialization.
Feb 13 20:15:13.291236 systemd[1]: Stopped target local-fs.target - Local File Systems.
Feb 13 20:15:13.292287 systemd[1]: Stopped target swap.target - Swaps.
Feb 13 20:15:13.292908 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Feb 13 20:15:13.293213 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Feb 13 20:15:13.294699 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Feb 13 20:15:13.295736 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Feb 13 20:15:13.296442 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Feb 13 20:15:13.296576 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Feb 13 20:15:13.297571 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Feb 13 20:15:13.297865 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Feb 13 20:15:13.299289 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Feb 13 20:15:13.299517 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Feb 13 20:15:13.300816 systemd[1]: ignition-files.service: Deactivated successfully.
Feb 13 20:15:13.301055 systemd[1]: Stopped ignition-files.service - Ignition (files).
Feb 13 20:15:13.301958 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Feb 13 20:15:13.302147 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
Feb 13 20:15:13.316354 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Feb 13 20:15:13.322113 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Feb 13 20:15:13.322785 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Feb 13 20:15:13.323062 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Feb 13 20:15:13.326119 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Feb 13 20:15:13.326362 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Feb 13 20:15:13.337816 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Feb 13 20:15:13.338083 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Feb 13 20:15:13.351554 ignition[988]: INFO : Ignition 2.19.0
Feb 13 20:15:13.351554 ignition[988]: INFO : Stage: umount
Feb 13 20:15:13.354746 ignition[988]: INFO : no configs at "/usr/lib/ignition/base.d"
Feb 13 20:15:13.354746 ignition[988]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
Feb 13 20:15:13.354746 ignition[988]: INFO : umount: umount passed
Feb 13 20:15:13.354746 ignition[988]: INFO : Ignition finished successfully
Feb 13 20:15:13.354939 systemd[1]: ignition-mount.service: Deactivated successfully.
Feb 13 20:15:13.355105 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Feb 13 20:15:13.358079 systemd[1]: ignition-disks.service: Deactivated successfully.
Feb 13 20:15:13.358275 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Feb 13 20:15:13.361905 systemd[1]: ignition-kargs.service: Deactivated successfully.
Feb 13 20:15:13.362040 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Feb 13 20:15:13.363194 systemd[1]: ignition-fetch.service: Deactivated successfully.
Feb 13 20:15:13.363271 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Feb 13 20:15:13.364587 systemd[1]: Stopped target network.target - Network.
Feb 13 20:15:13.365636 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Feb 13 20:15:13.365782 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Feb 13 20:15:13.366929 systemd[1]: Stopped target paths.target - Path Units.
Feb 13 20:15:13.368312 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Feb 13 20:15:13.373445 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Feb 13 20:15:13.374784 systemd[1]: Stopped target slices.target - Slice Units.
Feb 13 20:15:13.391939 systemd[1]: Stopped target sockets.target - Socket Units.
Feb 13 20:15:13.393260 systemd[1]: iscsid.socket: Deactivated successfully.
Feb 13 20:15:13.393350 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Feb 13 20:15:13.393954 systemd[1]: iscsiuio.socket: Deactivated successfully.
Feb 13 20:15:13.394004 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Feb 13 20:15:13.394675 systemd[1]: ignition-setup.service: Deactivated successfully.
Feb 13 20:15:13.394771 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Feb 13 20:15:13.395418 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Feb 13 20:15:13.395494 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Feb 13 20:15:13.396532 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Feb 13 20:15:13.397163 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Feb 13 20:15:13.400195 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Feb 13 20:15:13.401390 systemd[1]: sysroot-boot.service: Deactivated successfully.
Feb 13 20:15:13.401616 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Feb 13 20:15:13.402795 systemd-networkd[748]: eth1: DHCPv6 lease lost
Feb 13 20:15:13.406111 systemd[1]: systemd-resolved.service: Deactivated successfully.
Feb 13 20:15:13.406243 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Feb 13 20:15:13.407080 systemd-networkd[748]: eth0: DHCPv6 lease lost
Feb 13 20:15:13.410135 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Feb 13 20:15:13.410280 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Feb 13 20:15:13.412066 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Feb 13 20:15:13.412314 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Feb 13 20:15:13.413828 systemd[1]: systemd-networkd.service: Deactivated successfully.
Feb 13 20:15:13.414039 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Feb 13 20:15:13.415935 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Feb 13 20:15:13.416026 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Feb 13 20:15:13.423975 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Feb 13 20:15:13.424611 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Feb 13 20:15:13.425906 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Feb 13 20:15:13.426570 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Feb 13 20:15:13.426705 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Feb 13 20:15:13.428607 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Feb 13 20:15:13.428721 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Feb 13 20:15:13.430105 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Feb 13 20:15:13.450241 systemd[1]: systemd-udevd.service: Deactivated successfully.
Feb 13 20:15:13.457500 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Feb 13 20:15:13.459473 systemd[1]: network-cleanup.service: Deactivated successfully.
Feb 13 20:15:13.459692 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Feb 13 20:15:13.462362 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Feb 13 20:15:13.462450 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Feb 13 20:15:13.462965 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Feb 13 20:15:13.463024 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Feb 13 20:15:13.463965 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Feb 13 20:15:13.464058 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Feb 13 20:15:13.465710 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Feb 13 20:15:13.465816 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Feb 13 20:15:13.467024 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Feb 13 20:15:13.467112 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Feb 13 20:15:13.475060 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Feb 13 20:15:13.475765 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Feb 13 20:15:13.475912 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Feb 13 20:15:13.479267 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Feb 13 20:15:13.479386 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Feb 13 20:15:13.481037 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Feb 13 20:15:13.481164 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Feb 13 20:15:13.482866 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Feb 13 20:15:13.482973 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Feb 13 20:15:13.488877 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Feb 13 20:15:13.489783 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Feb 13 20:15:13.491306 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Feb 13 20:15:13.498088 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Feb 13 20:15:13.513588 systemd[1]: Switching root.
Feb 13 20:15:13.558890 systemd-journald[182]: Journal stopped
Feb 13 20:15:14.880116 systemd-journald[182]: Received SIGTERM from PID 1 (systemd).
Feb 13 20:15:14.880206 kernel: SELinux: policy capability network_peer_controls=1
Feb 13 20:15:14.880229 kernel: SELinux: policy capability open_perms=1
Feb 13 20:15:14.880241 kernel: SELinux: policy capability extended_socket_class=1
Feb 13 20:15:14.880270 kernel: SELinux: policy capability always_check_network=0
Feb 13 20:15:14.880282 kernel: SELinux: policy capability cgroup_seclabel=1
Feb 13 20:15:14.880294 kernel: SELinux: policy capability nnp_nosuid_transition=1
Feb 13 20:15:14.880306 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Feb 13 20:15:14.880330 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Feb 13 20:15:14.880349 kernel: audit: type=1403 audit(1739477713.717:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Feb 13 20:15:14.880374 systemd[1]: Successfully loaded SELinux policy in 42.448ms.
Feb 13 20:15:14.880406 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 16.507ms.
Feb 13 20:15:14.880428 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Feb 13 20:15:14.880465 systemd[1]: Detected virtualization kvm.
Feb 13 20:15:14.880487 systemd[1]: Detected architecture x86-64.
Feb 13 20:15:14.880508 systemd[1]: Detected first boot.
Feb 13 20:15:14.880534 systemd[1]: Hostname set to .
Feb 13 20:15:14.880548 systemd[1]: Initializing machine ID from VM UUID.
Feb 13 20:15:14.880565 zram_generator::config[1038]: No configuration found.
Feb 13 20:15:14.880580 systemd[1]: Populated /etc with preset unit settings.
Feb 13 20:15:14.880600 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Feb 13 20:15:14.880614 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Feb 13 20:15:14.880629 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Feb 13 20:15:14.889275 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Feb 13 20:15:14.889357 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Feb 13 20:15:14.889374 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Feb 13 20:15:14.889387 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Feb 13 20:15:14.889402 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Feb 13 20:15:14.889415 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Feb 13 20:15:14.889458 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Feb 13 20:15:14.889478 systemd[1]: Created slice user.slice - User and Session Slice.
Feb 13 20:15:14.889496 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Feb 13 20:15:14.889515 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Feb 13 20:15:14.889532 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Feb 13 20:15:14.889550 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Feb 13 20:15:14.889568 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Feb 13 20:15:14.889591 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Feb 13 20:15:14.889611 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Feb 13 20:15:14.889640 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Feb 13 20:15:14.893396 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Feb 13 20:15:14.893419 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Feb 13 20:15:14.893433 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Feb 13 20:15:14.893447 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Feb 13 20:15:14.893460 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Feb 13 20:15:14.893493 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Feb 13 20:15:14.893508 systemd[1]: Reached target slices.target - Slice Units.
Feb 13 20:15:14.893521 systemd[1]: Reached target swap.target - Swaps.
Feb 13 20:15:14.893536 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Feb 13 20:15:14.893549 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Feb 13 20:15:14.893563 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Feb 13 20:15:14.893578 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Feb 13 20:15:14.893591 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Feb 13 20:15:14.893604 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Feb 13 20:15:14.893625 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Feb 13 20:15:14.893639 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Feb 13 20:15:14.893667 systemd[1]: Mounting media.mount - External Media Directory...
Feb 13 20:15:14.893681 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 13 20:15:14.893694 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Feb 13 20:15:14.893706 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Feb 13 20:15:14.893720 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Feb 13 20:15:14.893735 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Feb 13 20:15:14.893749 systemd[1]: Reached target machines.target - Containers.
Feb 13 20:15:14.893769 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Feb 13 20:15:14.893782 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Feb 13 20:15:14.893795 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Feb 13 20:15:14.893808 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Feb 13 20:15:14.893822 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Feb 13 20:15:14.893837 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Feb 13 20:15:14.893851 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Feb 13 20:15:14.893863 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Feb 13 20:15:14.893883 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Feb 13 20:15:14.893898 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Feb 13 20:15:14.893912 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Feb 13 20:15:14.893926 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Feb 13 20:15:14.893939 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Feb 13 20:15:14.893952 systemd[1]: Stopped systemd-fsck-usr.service.
Feb 13 20:15:14.893966 systemd[1]: Starting systemd-journald.service - Journal Service...
Feb 13 20:15:14.893978 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Feb 13 20:15:14.894008 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Feb 13 20:15:14.894032 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Feb 13 20:15:14.894046 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Feb 13 20:15:14.894061 systemd[1]: verity-setup.service: Deactivated successfully.
Feb 13 20:15:14.894083 kernel: loop: module loaded
Feb 13 20:15:14.894104 systemd[1]: Stopped verity-setup.service.
Feb 13 20:15:14.894124 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 13 20:15:14.894145 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Feb 13 20:15:14.894164 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Feb 13 20:15:14.894182 systemd[1]: Mounted media.mount - External Media Directory.
Feb 13 20:15:14.894206 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Feb 13 20:15:14.894219 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Feb 13 20:15:14.894232 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Feb 13 20:15:14.894247 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Feb 13 20:15:14.894320 systemd-journald[1100]: Collecting audit messages is disabled.
Feb 13 20:15:14.894351 kernel: fuse: init (API version 7.39)
Feb 13 20:15:14.894364 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Feb 13 20:15:14.894377 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Feb 13 20:15:14.894390 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Feb 13 20:15:14.894423 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Feb 13 20:15:14.894463 systemd-journald[1100]: Journal started
Feb 13 20:15:14.894499 systemd-journald[1100]: Runtime Journal (/run/log/journal/9119f44d585347b5852e73f0602439fd) is 4.9M, max 39.3M, 34.4M free.
Feb 13 20:15:14.910531 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Feb 13 20:15:14.534097 systemd[1]: Queued start job for default target multi-user.target.
Feb 13 20:15:14.559904 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Feb 13 20:15:14.913165 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Feb 13 20:15:14.560586 systemd[1]: systemd-journald.service: Deactivated successfully.
Feb 13 20:15:14.923429 systemd[1]: Started systemd-journald.service - Journal Service.
Feb 13 20:15:14.917023 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Feb 13 20:15:14.918766 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Feb 13 20:15:14.920190 systemd[1]: modprobe@loop.service: Deactivated successfully.
Feb 13 20:15:14.920438 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Feb 13 20:15:14.922545 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Feb 13 20:15:14.925533 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Feb 13 20:15:14.941777 kernel: ACPI: bus type drm_connector registered
Feb 13 20:15:14.942923 systemd[1]: modprobe@drm.service: Deactivated successfully.
Feb 13 20:15:14.943412 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Feb 13 20:15:14.945664 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Feb 13 20:15:14.979440 systemd[1]: Reached target network-pre.target - Preparation for Network.
Feb 13 20:15:14.987889 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Feb 13 20:15:15.001905 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Feb 13 20:15:15.004422 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Feb 13 20:15:15.004487 systemd[1]: Reached target local-fs.target - Local File Systems.
Feb 13 20:15:15.016731 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Feb 13 20:15:15.024990 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Feb 13 20:15:15.034988 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Feb 13 20:15:15.036037 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Feb 13 20:15:15.042969 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Feb 13 20:15:15.049038 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Feb 13 20:15:15.050291 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Feb 13 20:15:15.059887 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Feb 13 20:15:15.060443 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Feb 13 20:15:15.064281 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Feb 13 20:15:15.069855 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Feb 13 20:15:15.076534 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Feb 13 20:15:15.085325 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Feb 13 20:15:15.092083 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Feb 13 20:15:15.093195 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Feb 13 20:15:15.105293 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Feb 13 20:15:15.123354 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Feb 13 20:15:15.140632 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Feb 13 20:15:15.150305 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Feb 13 20:15:15.162144 systemd-journald[1100]: Time spent on flushing to /var/log/journal/9119f44d585347b5852e73f0602439fd is 121.987ms for 975 entries.
Feb 13 20:15:15.162144 systemd-journald[1100]: System Journal (/var/log/journal/9119f44d585347b5852e73f0602439fd) is 8.0M, max 195.6M, 187.6M free.
Feb 13 20:15:15.312108 systemd-journald[1100]: Received client request to flush runtime journal.
Feb 13 20:15:15.312238 kernel: loop0: detected capacity change from 0 to 142488
Feb 13 20:15:15.312263 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Feb 13 20:15:15.163810 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Feb 13 20:15:15.173940 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Feb 13 20:15:15.244205 udevadm[1160]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Feb 13 20:15:15.261307 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Feb 13 20:15:15.312424 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Feb 13 20:15:15.313527 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Feb 13 20:15:15.321779 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Feb 13 20:15:15.324032 systemd-tmpfiles[1151]: ACLs are not supported, ignoring.
Feb 13 20:15:15.324055 systemd-tmpfiles[1151]: ACLs are not supported, ignoring.
Feb 13 20:15:15.336994 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Feb 13 20:15:15.349389 kernel: loop1: detected capacity change from 0 to 140768
Feb 13 20:15:15.347130 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Feb 13 20:15:15.411002 kernel: loop2: detected capacity change from 0 to 218376
Feb 13 20:15:15.420298 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Feb 13 20:15:15.435148 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Feb 13 20:15:15.472209 kernel: loop3: detected capacity change from 0 to 8
Feb 13 20:15:15.513858 kernel: loop4: detected capacity change from 0 to 142488
Feb 13 20:15:15.541818 kernel: loop5: detected capacity change from 0 to 140768
Feb 13 20:15:15.558709 systemd-tmpfiles[1175]: ACLs are not supported, ignoring.
Feb 13 20:15:15.559264 systemd-tmpfiles[1175]: ACLs are not supported, ignoring.
Feb 13 20:15:15.572354 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Feb 13 20:15:15.581758 kernel: loop6: detected capacity change from 0 to 218376
Feb 13 20:15:15.609680 kernel: loop7: detected capacity change from 0 to 8
Feb 13 20:15:15.611626 (sd-merge)[1178]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-digitalocean'.
Feb 13 20:15:15.612548 (sd-merge)[1178]: Merged extensions into '/usr'.
Feb 13 20:15:15.623187 systemd[1]: Reloading requested from client PID 1150 ('systemd-sysext') (unit systemd-sysext.service)...
Feb 13 20:15:15.623485 systemd[1]: Reloading...
Feb 13 20:15:15.811873 zram_generator::config[1209]: No configuration found.
Feb 13 20:15:16.071039 ldconfig[1145]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Feb 13 20:15:16.075025 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb 13 20:15:16.140890 systemd[1]: Reloading finished in 516 ms.
Feb 13 20:15:16.179361 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Feb 13 20:15:16.184658 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Feb 13 20:15:16.194097 systemd[1]: Starting ensure-sysext.service...
Feb 13 20:15:16.206603 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Feb 13 20:15:16.229937 systemd[1]: Reloading requested from client PID 1249 ('systemctl') (unit ensure-sysext.service)...
Feb 13 20:15:16.229961 systemd[1]: Reloading...
Feb 13 20:15:16.290404 systemd-tmpfiles[1250]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Feb 13 20:15:16.291210 systemd-tmpfiles[1250]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Feb 13 20:15:16.296228 systemd-tmpfiles[1250]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Feb 13 20:15:16.300763 systemd-tmpfiles[1250]: ACLs are not supported, ignoring.
Feb 13 20:15:16.300964 systemd-tmpfiles[1250]: ACLs are not supported, ignoring.
Feb 13 20:15:16.315563 systemd-tmpfiles[1250]: Detected autofs mount point /boot during canonicalization of boot.
Feb 13 20:15:16.315580 systemd-tmpfiles[1250]: Skipping /boot
Feb 13 20:15:16.361942 systemd-tmpfiles[1250]: Detected autofs mount point /boot during canonicalization of boot.
Feb 13 20:15:16.361958 systemd-tmpfiles[1250]: Skipping /boot
Feb 13 20:15:16.376715 zram_generator::config[1276]: No configuration found.
Feb 13 20:15:16.608032 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb 13 20:15:16.682083 systemd[1]: Reloading finished in 451 ms.
Feb 13 20:15:16.702370 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Feb 13 20:15:16.725800 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Feb 13 20:15:16.740869 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Feb 13 20:15:16.746091 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Feb 13 20:15:16.754218 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Feb 13 20:15:16.766222 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Feb 13 20:15:16.769513 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Feb 13 20:15:16.793520 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Feb 13 20:15:16.799382 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 13 20:15:16.799796 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Feb 13 20:15:16.808349 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Feb 13 20:15:16.817393 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Feb 13 20:15:16.821539 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Feb 13 20:15:16.823009 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Feb 13 20:15:16.833187 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Feb 13 20:15:16.833747 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 13 20:15:16.841774 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 13 20:15:16.842142 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Feb 13 20:15:16.842505 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Feb 13 20:15:16.844875 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 13 20:15:16.849988 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Feb 13 20:15:16.860579 systemd[1]: Finished ensure-sysext.service. Feb 13 20:15:16.865970 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 13 20:15:16.866272 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Feb 13 20:15:16.877153 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Feb 13 20:15:16.878057 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 20:15:16.890189 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Feb 13 20:15:16.892157 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 13 20:15:16.895087 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Feb 13 20:15:16.896628 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 13 20:15:16.898176 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Feb 13 20:15:16.907216 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 13 20:15:16.919990 systemd[1]: Starting systemd-update-done.service - Update is Completed... Feb 13 20:15:16.921748 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. 
Feb 13 20:15:16.925415 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Feb 13 20:15:16.951384 systemd-udevd[1336]: Using default interface naming scheme 'v255'. Feb 13 20:15:16.955305 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 13 20:15:16.955615 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 13 20:15:16.975226 systemd[1]: modprobe@drm.service: Deactivated successfully. Feb 13 20:15:16.978376 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Feb 13 20:15:16.997823 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 13 20:15:16.999800 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Feb 13 20:15:17.005864 augenrules[1357]: No rules Feb 13 20:15:17.002582 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Feb 13 20:15:17.007107 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Feb 13 20:15:17.008607 systemd[1]: Finished systemd-update-done.service - Update is Completed. Feb 13 20:15:17.015174 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Feb 13 20:15:17.027989 systemd[1]: Starting systemd-networkd.service - Network Configuration... Feb 13 20:15:17.073048 systemd[1]: Started systemd-userdbd.service - User Database Manager. Feb 13 20:15:17.224004 systemd-networkd[1367]: lo: Link UP Feb 13 20:15:17.224721 systemd-networkd[1367]: lo: Gained carrier Feb 13 20:15:17.226376 systemd-networkd[1367]: Enumeration completed Feb 13 20:15:17.226736 systemd[1]: Started systemd-networkd.service - Network Configuration. Feb 13 20:15:17.235996 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... 
Feb 13 20:15:17.335263 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Feb 13 20:15:17.336139 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Feb 13 20:15:17.337186 systemd[1]: Reached target time-set.target - System Time Set. Feb 13 20:15:17.339462 systemd-resolved[1329]: Positive Trust Anchors: Feb 13 20:15:17.339482 systemd-resolved[1329]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 13 20:15:17.339547 systemd-resolved[1329]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Feb 13 20:15:17.354498 systemd-resolved[1329]: Using system hostname 'ci-4081.3.1-9-0c9fce155b'. Feb 13 20:15:17.359378 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (1382) Feb 13 20:15:17.367579 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Feb 13 20:15:17.371064 systemd[1]: Reached target network.target - Network. Feb 13 20:15:17.371659 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Feb 13 20:15:17.383928 systemd[1]: Mounting media-configdrive.mount - /media/configdrive... Feb 13 20:15:17.384775 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 13 20:15:17.385139 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. 
Feb 13 20:15:17.393063 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Feb 13 20:15:17.401049 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Feb 13 20:15:17.411025 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Feb 13 20:15:17.411824 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Feb 13 20:15:17.411896 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Feb 13 20:15:17.411922 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 13 20:15:17.444990 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 13 20:15:17.445759 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Feb 13 20:15:17.461104 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 13 20:15:17.461431 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Feb 13 20:15:17.463394 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 13 20:15:17.464374 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Feb 13 20:15:17.472449 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 13 20:15:17.472582 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Feb 13 20:15:17.494712 systemd-networkd[1367]: eth1: Configuring with /run/systemd/network/10-0a:32:3c:50:22:df.network. 
Feb 13 20:15:17.497860 systemd-networkd[1367]: eth1: Link UP Feb 13 20:15:17.497877 systemd-networkd[1367]: eth1: Gained carrier Feb 13 20:15:17.499843 systemd-timesyncd[1347]: Network configuration changed, trying to establish connection. Feb 13 20:15:17.520833 systemd-networkd[1367]: eth0: Configuring with /run/systemd/network/10-9a:65:b8:9a:4f:49.network. Feb 13 20:15:17.523384 systemd-networkd[1367]: eth0: Link UP Feb 13 20:15:17.523542 systemd-networkd[1367]: eth0: Gained carrier Feb 13 20:15:17.543953 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Feb 13 20:15:17.557105 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Feb 13 20:15:17.578823 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0 Feb 13 20:15:17.601839 kernel: ISO 9660 Extensions: RRIP_1991A Feb 13 20:15:17.610563 systemd[1]: Mounted media-configdrive.mount - /media/configdrive. Feb 13 20:15:17.617019 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Feb 13 20:15:17.617507 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Feb 13 20:15:17.628863 kernel: ACPI: button: Power Button [PWRF] Feb 13 20:15:17.646688 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Feb 13 20:15:17.758704 kernel: mousedev: PS/2 mouse device common for all mice Feb 13 20:15:17.765397 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
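The networkd messages above show eth1 being configured from /run/systemd/network/10-0a:32:3c:50:22:df.network, i.e. a unit file keyed to the interface's MAC address. A hedged sketch of what such a MAC-matched unit can look like (illustrative field values, not the droplet's real file, which would carry the provider-supplied addressing):

```shell
# Write a minimal MAC-matched .network unit into a scratch directory.
dir=$(mktemp -d)
cat > "$dir/10-0a:32:3c:50:22:df.network" <<'EOF'
[Match]
MACAddress=0a:32:3c:50:22:df

[Network]
DHCP=ipv4
EOF
# The [Match] section is what ties this unit to eth1 in the log above.
grep MACAddress "$dir/10-0a:32:3c:50:22:df.network"
# prints: MACAddress=0a:32:3c:50:22:df
rm -rf "$dir"
```

Matching on MACAddress rather than interface name keeps the config stable even if kernel enumeration order changes between boots.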
Feb 13 20:15:17.775197 kernel: [drm] pci: virtio-vga detected at 0000:00:02.0 Feb 13 20:15:17.775319 kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console Feb 13 20:15:17.787355 kernel: Console: switching to colour dummy device 80x25 Feb 13 20:15:17.787502 kernel: [drm] features: -virgl +edid -resource_blob -host_visible Feb 13 20:15:17.787558 kernel: [drm] features: -context_init Feb 13 20:15:17.810822 kernel: [drm] number of scanouts: 1 Feb 13 20:15:17.810977 kernel: [drm] number of cap sets: 0 Feb 13 20:15:17.814683 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:02.0 on minor 0 Feb 13 20:15:17.836287 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device Feb 13 20:15:17.836425 kernel: Console: switching to colour frame buffer device 128x48 Feb 13 20:15:17.855136 kernel: virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device Feb 13 20:15:17.859456 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Feb 13 20:15:17.860302 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 20:15:17.884228 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Feb 13 20:15:17.946501 kernel: EDAC MC: Ver: 3.0.0 Feb 13 20:15:17.974003 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Feb 13 20:15:17.989190 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Feb 13 20:15:18.012000 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Feb 13 20:15:18.024698 lvm[1427]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 13 20:15:18.071455 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Feb 13 20:15:18.074848 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Feb 13 20:15:18.075066 systemd[1]: Reached target sysinit.target - System Initialization. 
Feb 13 20:15:18.075385 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Feb 13 20:15:18.075563 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Feb 13 20:15:18.076388 systemd[1]: Started logrotate.timer - Daily rotation of log files. Feb 13 20:15:18.077700 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Feb 13 20:15:18.077875 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Feb 13 20:15:18.077987 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Feb 13 20:15:18.078028 systemd[1]: Reached target paths.target - Path Units. Feb 13 20:15:18.078120 systemd[1]: Reached target timers.target - Timer Units. Feb 13 20:15:18.082556 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Feb 13 20:15:18.086812 systemd[1]: Starting docker.socket - Docker Socket for the API... Feb 13 20:15:18.096855 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Feb 13 20:15:18.112434 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Feb 13 20:15:18.114159 systemd[1]: Listening on docker.socket - Docker Socket for the API. Feb 13 20:15:18.119932 systemd[1]: Reached target sockets.target - Socket Units. Feb 13 20:15:18.121061 systemd[1]: Reached target basic.target - Basic System. Feb 13 20:15:18.122691 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Feb 13 20:15:18.122973 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Feb 13 20:15:18.150981 systemd[1]: Starting containerd.service - containerd container runtime... Feb 13 20:15:18.160821 lvm[1433]: WARNING: Failed to connect to lvmetad. 
Falling back to device scanning. Feb 13 20:15:18.167109 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Feb 13 20:15:18.182118 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Feb 13 20:15:18.194981 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Feb 13 20:15:18.203208 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Feb 13 20:15:18.205839 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Feb 13 20:15:18.211291 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Feb 13 20:15:18.236147 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Feb 13 20:15:18.267106 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Feb 13 20:15:18.296639 jq[1437]: false Feb 13 20:15:18.290164 systemd[1]: Starting systemd-logind.service - User Login Management... 
Feb 13 20:15:18.318144 extend-filesystems[1438]: Found loop4 Feb 13 20:15:18.318144 extend-filesystems[1438]: Found loop5 Feb 13 20:15:18.318144 extend-filesystems[1438]: Found loop6 Feb 13 20:15:18.318144 extend-filesystems[1438]: Found loop7 Feb 13 20:15:18.318144 extend-filesystems[1438]: Found vda Feb 13 20:15:18.318144 extend-filesystems[1438]: Found vda1 Feb 13 20:15:18.318144 extend-filesystems[1438]: Found vda2 Feb 13 20:15:18.318144 extend-filesystems[1438]: Found vda3 Feb 13 20:15:18.318144 extend-filesystems[1438]: Found usr Feb 13 20:15:18.318144 extend-filesystems[1438]: Found vda4 Feb 13 20:15:18.318144 extend-filesystems[1438]: Found vda6 Feb 13 20:15:18.318144 extend-filesystems[1438]: Found vda7 Feb 13 20:15:18.318144 extend-filesystems[1438]: Found vda9 Feb 13 20:15:18.318144 extend-filesystems[1438]: Checking size of /dev/vda9 Feb 13 20:15:18.510823 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 15121403 blocks Feb 13 20:15:18.294558 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Feb 13 20:15:18.488241 dbus-daemon[1436]: [system] SELinux support is enabled Feb 13 20:15:18.511798 extend-filesystems[1438]: Resized partition /dev/vda9 Feb 13 20:15:18.520558 coreos-metadata[1435]: Feb 13 20:15:18.487 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Feb 13 20:15:18.520558 coreos-metadata[1435]: Feb 13 20:15:18.514 INFO Fetch successful Feb 13 20:15:18.299196 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Feb 13 20:15:18.521407 extend-filesystems[1461]: resize2fs 1.47.1 (20-May-2024) Feb 13 20:15:18.307224 systemd[1]: Starting update-engine.service - Update Engine... Feb 13 20:15:18.327955 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... 
Feb 13 20:15:18.534900 jq[1450]: true Feb 13 20:15:18.341898 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Feb 13 20:15:18.365736 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Feb 13 20:15:18.366846 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Feb 13 20:15:18.367617 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Feb 13 20:15:18.369167 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Feb 13 20:15:18.409502 systemd[1]: motdgen.service: Deactivated successfully. Feb 13 20:15:18.410519 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Feb 13 20:15:18.463628 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Feb 13 20:15:18.499351 systemd[1]: Started dbus.service - D-Bus System Message Bus. Feb 13 20:15:18.524282 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Feb 13 20:15:18.524340 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Feb 13 20:15:18.526636 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Feb 13 20:15:18.526844 systemd[1]: user-configdrive.service - Load cloud-config from /media/configdrive was skipped because of an unmet condition check (ConditionKernelCommandLine=!flatcar.oem.id=digitalocean). Feb 13 20:15:18.526883 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Feb 13 20:15:18.544474 systemd-logind[1444]: New seat seat0. 
Feb 13 20:15:18.550297 systemd-logind[1444]: Watching system buttons on /dev/input/event1 (Power Button) Feb 13 20:15:18.550332 systemd-logind[1444]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Feb 13 20:15:18.551002 systemd[1]: Started systemd-logind.service - User Login Management. Feb 13 20:15:18.584850 update_engine[1448]: I20250213 20:15:18.584625 1448 main.cc:92] Flatcar Update Engine starting Feb 13 20:15:18.596774 jq[1469]: true Feb 13 20:15:18.599181 (ntainerd)[1470]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Feb 13 20:15:18.615413 systemd[1]: Started update-engine.service - Update Engine. Feb 13 20:15:18.625913 update_engine[1448]: I20250213 20:15:18.622371 1448 update_check_scheduler.cc:74] Next update check in 3m20s Feb 13 20:15:18.630411 systemd[1]: Started locksmithd.service - Cluster reboot manager. Feb 13 20:15:18.656005 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (1375) Feb 13 20:15:18.700473 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Feb 13 20:15:18.709034 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Feb 13 20:15:18.736870 kernel: EXT4-fs (vda9): resized filesystem to 15121403 Feb 13 20:15:18.772004 extend-filesystems[1461]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Feb 13 20:15:18.772004 extend-filesystems[1461]: old_desc_blocks = 1, new_desc_blocks = 8 Feb 13 20:15:18.772004 extend-filesystems[1461]: The filesystem on /dev/vda9 is now 15121403 (4k) blocks long. Feb 13 20:15:18.771998 systemd[1]: extend-filesystems.service: Deactivated successfully. 
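The resize messages above give the filesystem size in 4k blocks (553472 before, 15121403 after). Multiplying out the block counts shows the online resize grew / from roughly 2.1 GiB to roughly 57.7 GiB:

```shell
# ext4 block size is 4096 bytes, per the "(4k) blocks" note in the log.
old_bytes=$((553472 * 4096))
new_bytes=$((15121403 * 4096))
echo "$old_bytes -> $new_bytes"
# prints: 2267021312 -> 61937266688
```

Because ext4 supports online growth, resize2fs can do this while / stays mounted read-write, which is why no remount appears in the log.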
Feb 13 20:15:18.790793 extend-filesystems[1438]: Resized filesystem in /dev/vda9 Feb 13 20:15:18.790793 extend-filesystems[1438]: Found vdb Feb 13 20:15:18.773427 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Feb 13 20:15:18.851702 bash[1497]: Updated "/home/core/.ssh/authorized_keys" Feb 13 20:15:18.855282 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Feb 13 20:15:18.871443 systemd[1]: Starting sshkeys.service... Feb 13 20:15:18.940700 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Feb 13 20:15:18.957536 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Feb 13 20:15:18.997144 locksmithd[1477]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Feb 13 20:15:19.084251 coreos-metadata[1504]: Feb 13 20:15:19.084 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Feb 13 20:15:19.096985 coreos-metadata[1504]: Feb 13 20:15:19.096 INFO Fetch successful Feb 13 20:15:19.146844 unknown[1504]: wrote ssh authorized keys file for user: core Feb 13 20:15:19.195077 update-ssh-keys[1512]: Updated "/home/core/.ssh/authorized_keys" Feb 13 20:15:19.198062 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Feb 13 20:15:19.201533 sshd_keygen[1465]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Feb 13 20:15:19.204154 systemd[1]: Finished sshkeys.service. Feb 13 20:15:19.232737 containerd[1470]: time="2025-02-13T20:15:19.231893786Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Feb 13 20:15:19.250214 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Feb 13 20:15:19.269911 systemd[1]: Starting issuegen.service - Generate /run/issue... 
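Both metadata fetches above hit http://169.254.169.254/metadata/v1.json, the DigitalOcean link-local metadata endpoint. An offline sketch with a hypothetical payload (only the hostname value is taken from the log; the rest is made up for illustration), extracting the hostname the way a simple agent might:

```shell
# Simulate the metadata document locally instead of calling the endpoint.
f=$(mktemp)
cat > "$f" <<'EOF'
{"droplet_id": 1, "hostname": "ci-4081.3.1-9-0c9fce155b"}
EOF
# Pull out the hostname field with a small sed capture.
sed -n 's/.*"hostname": *"\([^"]*\)".*/\1/p' "$f"
# prints: ci-4081.3.1-9-0c9fce155b
rm -f "$f"
```

This matches the hostname systemd-resolved settled on earlier in the log ('ci-4081.3.1-9-0c9fce155b'), which the metadata agent and the sshkeys unit both derive from the same endpoint.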
Feb 13 20:15:19.281939 systemd[1]: Started sshd@0-147.182.243.214:22-147.75.109.163:42582.service - OpenSSH per-connection server daemon (147.75.109.163:42582). Feb 13 20:15:19.294584 containerd[1470]: time="2025-02-13T20:15:19.293764806Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Feb 13 20:15:19.302871 containerd[1470]: time="2025-02-13T20:15:19.301048182Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.74-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Feb 13 20:15:19.302871 containerd[1470]: time="2025-02-13T20:15:19.301116272Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Feb 13 20:15:19.302871 containerd[1470]: time="2025-02-13T20:15:19.301145594Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Feb 13 20:15:19.302871 containerd[1470]: time="2025-02-13T20:15:19.301405555Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Feb 13 20:15:19.302871 containerd[1470]: time="2025-02-13T20:15:19.301436165Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Feb 13 20:15:19.302871 containerd[1470]: time="2025-02-13T20:15:19.301579324Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 20:15:19.302871 containerd[1470]: time="2025-02-13T20:15:19.301616049Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." 
type=io.containerd.snapshotter.v1 Feb 13 20:15:19.303600 containerd[1470]: time="2025-02-13T20:15:19.303559595Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 20:15:19.303854 systemd[1]: issuegen.service: Deactivated successfully. Feb 13 20:15:19.303972 containerd[1470]: time="2025-02-13T20:15:19.303837938Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Feb 13 20:15:19.304108 containerd[1470]: time="2025-02-13T20:15:19.304076071Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 20:15:19.304209 systemd[1]: Finished issuegen.service - Generate /run/issue. Feb 13 20:15:19.306349 containerd[1470]: time="2025-02-13T20:15:19.305484197Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Feb 13 20:15:19.306349 containerd[1470]: time="2025-02-13T20:15:19.305830826Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Feb 13 20:15:19.306349 containerd[1470]: time="2025-02-13T20:15:19.306273383Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Feb 13 20:15:19.307694 containerd[1470]: time="2025-02-13T20:15:19.306849186Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 13 20:15:19.307694 containerd[1470]: time="2025-02-13T20:15:19.306875838Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Feb 13 20:15:19.307694 containerd[1470]: time="2025-02-13T20:15:19.307042995Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Feb 13 20:15:19.307694 containerd[1470]: time="2025-02-13T20:15:19.307135159Z" level=info msg="metadata content store policy set" policy=shared Feb 13 20:15:19.317349 containerd[1470]: time="2025-02-13T20:15:19.317258297Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Feb 13 20:15:19.320662 containerd[1470]: time="2025-02-13T20:15:19.317821954Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Feb 13 20:15:19.320662 containerd[1470]: time="2025-02-13T20:15:19.319903369Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Feb 13 20:15:19.320662 containerd[1470]: time="2025-02-13T20:15:19.320025676Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Feb 13 20:15:19.320662 containerd[1470]: time="2025-02-13T20:15:19.320120335Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Feb 13 20:15:19.320662 containerd[1470]: time="2025-02-13T20:15:19.320528847Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Feb 13 20:15:19.319943 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... 
Feb 13 20:15:19.321753 containerd[1470]: time="2025-02-13T20:15:19.321678411Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Feb 13 20:15:19.322266 containerd[1470]: time="2025-02-13T20:15:19.322227154Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Feb 13 20:15:19.322789 containerd[1470]: time="2025-02-13T20:15:19.322748387Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Feb 13 20:15:19.322988 containerd[1470]: time="2025-02-13T20:15:19.322958948Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Feb 13 20:15:19.323107 containerd[1470]: time="2025-02-13T20:15:19.323082647Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Feb 13 20:15:19.323567 containerd[1470]: time="2025-02-13T20:15:19.323466255Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Feb 13 20:15:19.323720 containerd[1470]: time="2025-02-13T20:15:19.323505728Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Feb 13 20:15:19.323859 containerd[1470]: time="2025-02-13T20:15:19.323835018Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Feb 13 20:15:19.323950 containerd[1470]: time="2025-02-13T20:15:19.323931181Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Feb 13 20:15:19.325433 containerd[1470]: time="2025-02-13T20:15:19.324719668Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." 
type=io.containerd.service.v1
Feb 13 20:15:19.325433 containerd[1470]: time="2025-02-13T20:15:19.324817799Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Feb 13 20:15:19.325433 containerd[1470]: time="2025-02-13T20:15:19.324843315Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Feb 13 20:15:19.325433 containerd[1470]: time="2025-02-13T20:15:19.324920889Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Feb 13 20:15:19.325433 containerd[1470]: time="2025-02-13T20:15:19.324952125Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Feb 13 20:15:19.325433 containerd[1470]: time="2025-02-13T20:15:19.324972676Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Feb 13 20:15:19.325433 containerd[1470]: time="2025-02-13T20:15:19.324996297Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Feb 13 20:15:19.325433 containerd[1470]: time="2025-02-13T20:15:19.325016683Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Feb 13 20:15:19.325433 containerd[1470]: time="2025-02-13T20:15:19.325039366Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Feb 13 20:15:19.325433 containerd[1470]: time="2025-02-13T20:15:19.325058352Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Feb 13 20:15:19.325433 containerd[1470]: time="2025-02-13T20:15:19.325077571Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Feb 13 20:15:19.325433 containerd[1470]: time="2025-02-13T20:15:19.325099494Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Feb 13 20:15:19.325433 containerd[1470]: time="2025-02-13T20:15:19.325125785Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Feb 13 20:15:19.325433 containerd[1470]: time="2025-02-13T20:15:19.325143835Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Feb 13 20:15:19.326043 containerd[1470]: time="2025-02-13T20:15:19.325163614Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Feb 13 20:15:19.326043 containerd[1470]: time="2025-02-13T20:15:19.325186676Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Feb 13 20:15:19.326043 containerd[1470]: time="2025-02-13T20:15:19.325245192Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Feb 13 20:15:19.326043 containerd[1470]: time="2025-02-13T20:15:19.325293177Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Feb 13 20:15:19.326043 containerd[1470]: time="2025-02-13T20:15:19.325311847Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Feb 13 20:15:19.326043 containerd[1470]: time="2025-02-13T20:15:19.325341181Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Feb 13 20:15:19.329077 containerd[1470]: time="2025-02-13T20:15:19.326718524Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Feb 13 20:15:19.329077 containerd[1470]: time="2025-02-13T20:15:19.326928113Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Feb 13 20:15:19.329077 containerd[1470]: time="2025-02-13T20:15:19.326963198Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Feb 13 20:15:19.329077 containerd[1470]: time="2025-02-13T20:15:19.326985234Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Feb 13 20:15:19.329077 containerd[1470]: time="2025-02-13T20:15:19.327002244Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Feb 13 20:15:19.329077 containerd[1470]: time="2025-02-13T20:15:19.327047942Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Feb 13 20:15:19.329077 containerd[1470]: time="2025-02-13T20:15:19.327078589Z" level=info msg="NRI interface is disabled by configuration."
Feb 13 20:15:19.329077 containerd[1470]: time="2025-02-13T20:15:19.327099177Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Feb 13 20:15:19.330214 containerd[1470]: time="2025-02-13T20:15:19.329976260Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Feb 13 20:15:19.330500 containerd[1470]: time="2025-02-13T20:15:19.330210409Z" level=info msg="Connect containerd service"
Feb 13 20:15:19.330500 containerd[1470]: time="2025-02-13T20:15:19.330304293Z" level=info msg="using legacy CRI server"
Feb 13 20:15:19.330500 containerd[1470]: time="2025-02-13T20:15:19.330318325Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this"
Feb 13 20:15:19.330638 containerd[1470]: time="2025-02-13T20:15:19.330512184Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Feb 13 20:15:19.332473 containerd[1470]: time="2025-02-13T20:15:19.332249094Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Feb 13 20:15:19.333773 containerd[1470]: time="2025-02-13T20:15:19.332744767Z" level=info msg="Start subscribing containerd event"
Feb 13 20:15:19.333773 containerd[1470]: time="2025-02-13T20:15:19.332850001Z" level=info msg="Start recovering state"
Feb 13 20:15:19.333773 containerd[1470]: time="2025-02-13T20:15:19.333009274Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Feb 13 20:15:19.334200 containerd[1470]: time="2025-02-13T20:15:19.333986121Z" level=info msg="Start event monitor"
Feb 13 20:15:19.334200 containerd[1470]: time="2025-02-13T20:15:19.334050005Z" level=info msg="Start snapshots syncer"
Feb 13 20:15:19.334200 containerd[1470]: time="2025-02-13T20:15:19.334067554Z" level=info msg="Start cni network conf syncer for default"
Feb 13 20:15:19.334200 containerd[1470]: time="2025-02-13T20:15:19.334079996Z" level=info msg="Start streaming server"
Feb 13 20:15:19.335696 containerd[1470]: time="2025-02-13T20:15:19.335011808Z" level=info msg=serving... address=/run/containerd/containerd.sock
Feb 13 20:15:19.336697 containerd[1470]: time="2025-02-13T20:15:19.335931033Z" level=info msg="containerd successfully booted in 0.105706s"
Feb 13 20:15:19.336252 systemd[1]: Started containerd.service - containerd container runtime.
Feb 13 20:15:19.365013 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions.
Feb 13 20:15:19.378413 systemd[1]: Started getty@tty1.service - Getty on tty1.
Feb 13 20:15:19.392785 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0.
Feb 13 20:15:19.397000 systemd[1]: Reached target getty.target - Login Prompts.
Feb 13 20:15:19.420994 sshd[1527]: Accepted publickey for core from 147.75.109.163 port 42582 ssh2: RSA SHA256:ogQi+1D4BqELHLhQXBcvBaiNphQ6EARorU8jzxcV0O4
Feb 13 20:15:19.425840 sshd[1527]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:15:19.435843 systemd-networkd[1367]: eth1: Gained IPv6LL
Feb 13 20:15:19.436562 systemd-networkd[1367]: eth0: Gained IPv6LL
Feb 13 20:15:19.440924 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Feb 13 20:15:19.450174 systemd-logind[1444]: New session 1 of user core.
Feb 13 20:15:19.453392 systemd[1]: Created slice user-500.slice - User Slice of UID 500.
Feb 13 20:15:19.458466 systemd[1]: Reached target network-online.target - Network is Online.
Feb 13 20:15:19.467235 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Feb 13 20:15:19.478223 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Feb 13 20:15:19.484268 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500...
Feb 13 20:15:19.540626 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Feb 13 20:15:19.545482 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
Feb 13 20:15:19.562557 systemd[1]: Starting user@500.service - User Manager for UID 500...
Feb 13 20:15:19.584477 (systemd)[1552]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Feb 13 20:15:19.747374 systemd[1552]: Queued start job for default target default.target.
Feb 13 20:15:19.757330 systemd[1552]: Created slice app.slice - User Application Slice.
Feb 13 20:15:19.757682 systemd[1552]: Reached target paths.target - Paths.
Feb 13 20:15:19.757711 systemd[1552]: Reached target timers.target - Timers.
Feb 13 20:15:19.762935 systemd[1552]: Starting dbus.socket - D-Bus User Message Bus Socket...
Feb 13 20:15:19.784208 systemd[1552]: Listening on dbus.socket - D-Bus User Message Bus Socket.
Feb 13 20:15:19.785316 systemd[1552]: Reached target sockets.target - Sockets.
Feb 13 20:15:19.785351 systemd[1552]: Reached target basic.target - Basic System.
Feb 13 20:15:19.785474 systemd[1552]: Reached target default.target - Main User Target.
Feb 13 20:15:19.785573 systemd[1552]: Startup finished in 185ms.
Feb 13 20:15:19.786637 systemd[1]: Started user@500.service - User Manager for UID 500.
Feb 13 20:15:19.803098 systemd[1]: Started session-1.scope - Session 1 of User core.
Feb 13 20:15:19.900366 systemd[1]: Started sshd@1-147.182.243.214:22-147.75.109.163:33286.service - OpenSSH per-connection server daemon (147.75.109.163:33286).
Feb 13 20:15:19.984821 sshd[1563]: Accepted publickey for core from 147.75.109.163 port 33286 ssh2: RSA SHA256:ogQi+1D4BqELHLhQXBcvBaiNphQ6EARorU8jzxcV0O4
Feb 13 20:15:19.987928 sshd[1563]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:15:19.999587 systemd-logind[1444]: New session 2 of user core.
Feb 13 20:15:20.004230 systemd[1]: Started session-2.scope - Session 2 of User core.
Feb 13 20:15:20.082943 sshd[1563]: pam_unix(sshd:session): session closed for user core
Feb 13 20:15:20.095536 systemd[1]: sshd@1-147.182.243.214:22-147.75.109.163:33286.service: Deactivated successfully.
Feb 13 20:15:20.099794 systemd[1]: session-2.scope: Deactivated successfully.
Feb 13 20:15:20.104051 systemd-logind[1444]: Session 2 logged out. Waiting for processes to exit.
Feb 13 20:15:20.114313 systemd[1]: Started sshd@2-147.182.243.214:22-147.75.109.163:33294.service - OpenSSH per-connection server daemon (147.75.109.163:33294).
Feb 13 20:15:20.121019 systemd-logind[1444]: Removed session 2.
Feb 13 20:15:20.175998 sshd[1570]: Accepted publickey for core from 147.75.109.163 port 33294 ssh2: RSA SHA256:ogQi+1D4BqELHLhQXBcvBaiNphQ6EARorU8jzxcV0O4
Feb 13 20:15:20.179197 sshd[1570]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:15:20.188601 systemd-logind[1444]: New session 3 of user core.
Feb 13 20:15:20.195390 systemd[1]: Started session-3.scope - Session 3 of User core.
Feb 13 20:15:20.272025 sshd[1570]: pam_unix(sshd:session): session closed for user core
Feb 13 20:15:20.281177 systemd[1]: sshd@2-147.182.243.214:22-147.75.109.163:33294.service: Deactivated successfully.
Feb 13 20:15:20.284733 systemd[1]: session-3.scope: Deactivated successfully.
Feb 13 20:15:20.286451 systemd-logind[1444]: Session 3 logged out. Waiting for processes to exit.
Feb 13 20:15:20.289250 systemd-logind[1444]: Removed session 3.
Feb 13 20:15:20.912301 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Feb 13 20:15:20.916112 systemd[1]: Reached target multi-user.target - Multi-User System.
Feb 13 20:15:20.919160 systemd[1]: Startup finished in 1.303s (kernel) + 6.000s (initrd) + 7.242s (userspace) = 14.546s.
Feb 13 20:15:20.921957 (kubelet)[1581]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Feb 13 20:15:21.830879 kubelet[1581]: E0213 20:15:21.830801 1581 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Feb 13 20:15:21.834820 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Feb 13 20:15:21.835057 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Feb 13 20:15:21.835622 systemd[1]: kubelet.service: Consumed 1.511s CPU time.
Feb 13 20:15:24.366457 systemd-timesyncd[1347]: Contacted time server 71.123.46.186:123 (1.flatcar.pool.ntp.org).
Feb 13 20:15:24.366458 systemd-resolved[1329]: Clock change detected. Flushing caches.
Feb 13 20:15:24.366540 systemd-timesyncd[1347]: Initial clock synchronization to Thu 2025-02-13 20:15:24.366158 UTC.
Feb 13 20:15:30.746905 systemd[1]: Started sshd@3-147.182.243.214:22-147.75.109.163:33006.service - OpenSSH per-connection server daemon (147.75.109.163:33006).
Feb 13 20:15:30.807076 sshd[1594]: Accepted publickey for core from 147.75.109.163 port 33006 ssh2: RSA SHA256:ogQi+1D4BqELHLhQXBcvBaiNphQ6EARorU8jzxcV0O4
Feb 13 20:15:30.809695 sshd[1594]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:15:30.818765 systemd-logind[1444]: New session 4 of user core.
Feb 13 20:15:30.832529 systemd[1]: Started session-4.scope - Session 4 of User core.
Feb 13 20:15:30.898839 sshd[1594]: pam_unix(sshd:session): session closed for user core
Feb 13 20:15:30.916872 systemd[1]: sshd@3-147.182.243.214:22-147.75.109.163:33006.service: Deactivated successfully.
Feb 13 20:15:30.919422 systemd[1]: session-4.scope: Deactivated successfully.
Feb 13 20:15:30.921493 systemd-logind[1444]: Session 4 logged out. Waiting for processes to exit.
Feb 13 20:15:30.927598 systemd[1]: Started sshd@4-147.182.243.214:22-147.75.109.163:33018.service - OpenSSH per-connection server daemon (147.75.109.163:33018).
Feb 13 20:15:30.929799 systemd-logind[1444]: Removed session 4.
Feb 13 20:15:30.986418 sshd[1601]: Accepted publickey for core from 147.75.109.163 port 33018 ssh2: RSA SHA256:ogQi+1D4BqELHLhQXBcvBaiNphQ6EARorU8jzxcV0O4
Feb 13 20:15:30.988416 sshd[1601]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:15:30.995524 systemd-logind[1444]: New session 5 of user core.
Feb 13 20:15:31.009482 systemd[1]: Started session-5.scope - Session 5 of User core.
Feb 13 20:15:31.092096 sshd[1601]: pam_unix(sshd:session): session closed for user core
Feb 13 20:15:31.109954 systemd[1]: sshd@4-147.182.243.214:22-147.75.109.163:33018.service: Deactivated successfully.
Feb 13 20:15:31.112845 systemd[1]: session-5.scope: Deactivated successfully.
Feb 13 20:15:31.113841 systemd-logind[1444]: Session 5 logged out. Waiting for processes to exit.
Feb 13 20:15:31.121733 systemd[1]: Started sshd@5-147.182.243.214:22-147.75.109.163:33032.service - OpenSSH per-connection server daemon (147.75.109.163:33032).
Feb 13 20:15:31.124259 systemd-logind[1444]: Removed session 5.
Feb 13 20:15:31.179232 sshd[1608]: Accepted publickey for core from 147.75.109.163 port 33032 ssh2: RSA SHA256:ogQi+1D4BqELHLhQXBcvBaiNphQ6EARorU8jzxcV0O4
Feb 13 20:15:31.181852 sshd[1608]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:15:31.189682 systemd-logind[1444]: New session 6 of user core.
Feb 13 20:15:31.199447 systemd[1]: Started session-6.scope - Session 6 of User core.
Feb 13 20:15:31.271417 sshd[1608]: pam_unix(sshd:session): session closed for user core
Feb 13 20:15:31.280792 systemd[1]: sshd@5-147.182.243.214:22-147.75.109.163:33032.service: Deactivated successfully.
Feb 13 20:15:31.283769 systemd[1]: session-6.scope: Deactivated successfully.
Feb 13 20:15:31.287431 systemd-logind[1444]: Session 6 logged out. Waiting for processes to exit.
Feb 13 20:15:31.292861 systemd[1]: Started sshd@6-147.182.243.214:22-147.75.109.163:33038.service - OpenSSH per-connection server daemon (147.75.109.163:33038).
Feb 13 20:15:31.294908 systemd-logind[1444]: Removed session 6.
Feb 13 20:15:31.340547 sshd[1615]: Accepted publickey for core from 147.75.109.163 port 33038 ssh2: RSA SHA256:ogQi+1D4BqELHLhQXBcvBaiNphQ6EARorU8jzxcV0O4
Feb 13 20:15:31.342753 sshd[1615]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:15:31.350504 systemd-logind[1444]: New session 7 of user core.
Feb 13 20:15:31.358357 systemd[1]: Started session-7.scope - Session 7 of User core.
Feb 13 20:15:31.434669 sudo[1618]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1
Feb 13 20:15:31.435237 sudo[1618]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Feb 13 20:15:31.452134 sudo[1618]: pam_unix(sudo:session): session closed for user root
Feb 13 20:15:31.457594 sshd[1615]: pam_unix(sshd:session): session closed for user core
Feb 13 20:15:31.472316 systemd[1]: sshd@6-147.182.243.214:22-147.75.109.163:33038.service: Deactivated successfully.
Feb 13 20:15:31.475773 systemd[1]: session-7.scope: Deactivated successfully.
Feb 13 20:15:31.479320 systemd-logind[1444]: Session 7 logged out. Waiting for processes to exit.
Feb 13 20:15:31.484533 systemd[1]: Started sshd@7-147.182.243.214:22-147.75.109.163:33052.service - OpenSSH per-connection server daemon (147.75.109.163:33052).
Feb 13 20:15:31.487731 systemd-logind[1444]: Removed session 7.
Feb 13 20:15:31.543123 sshd[1623]: Accepted publickey for core from 147.75.109.163 port 33052 ssh2: RSA SHA256:ogQi+1D4BqELHLhQXBcvBaiNphQ6EARorU8jzxcV0O4
Feb 13 20:15:31.546143 sshd[1623]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:15:31.554642 systemd-logind[1444]: New session 8 of user core.
Feb 13 20:15:31.560415 systemd[1]: Started session-8.scope - Session 8 of User core.
Feb 13 20:15:31.626141 sudo[1627]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules
Feb 13 20:15:31.626581 sudo[1627]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Feb 13 20:15:31.632391 sudo[1627]: pam_unix(sudo:session): session closed for user root
Feb 13 20:15:31.641587 sudo[1626]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules
Feb 13 20:15:31.642531 sudo[1626]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Feb 13 20:15:31.666769 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules...
Feb 13 20:15:31.669159 auditctl[1630]: No rules
Feb 13 20:15:31.669795 systemd[1]: audit-rules.service: Deactivated successfully.
Feb 13 20:15:31.670101 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules.
Feb 13 20:15:31.674217 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Feb 13 20:15:31.722940 augenrules[1648]: No rules
Feb 13 20:15:31.725374 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Feb 13 20:15:31.727675 sudo[1626]: pam_unix(sudo:session): session closed for user root
Feb 13 20:15:31.731761 sshd[1623]: pam_unix(sshd:session): session closed for user core
Feb 13 20:15:31.740886 systemd[1]: sshd@7-147.182.243.214:22-147.75.109.163:33052.service: Deactivated successfully.
Feb 13 20:15:31.743809 systemd[1]: session-8.scope: Deactivated successfully.
Feb 13 20:15:31.746534 systemd-logind[1444]: Session 8 logged out. Waiting for processes to exit.
Feb 13 20:15:31.751541 systemd[1]: Started sshd@8-147.182.243.214:22-147.75.109.163:33062.service - OpenSSH per-connection server daemon (147.75.109.163:33062).
Feb 13 20:15:31.753728 systemd-logind[1444]: Removed session 8.
Feb 13 20:15:31.807857 sshd[1656]: Accepted publickey for core from 147.75.109.163 port 33062 ssh2: RSA SHA256:ogQi+1D4BqELHLhQXBcvBaiNphQ6EARorU8jzxcV0O4
Feb 13 20:15:31.809464 sshd[1656]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Feb 13 20:15:31.819255 systemd-logind[1444]: New session 9 of user core.
Feb 13 20:15:31.821457 systemd[1]: Started session-9.scope - Session 9 of User core.
Feb 13 20:15:31.884352 sudo[1659]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Feb 13 20:15:31.884727 sudo[1659]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500)
Feb 13 20:15:32.320938 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Feb 13 20:15:32.330055 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Feb 13 20:15:32.540506 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Feb 13 20:15:32.552721 (kubelet)[1691]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Feb 13 20:15:32.621075 kubelet[1691]: E0213 20:15:32.619810 1691 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Feb 13 20:15:32.623268 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Feb 13 20:15:32.623453 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Feb 13 20:15:32.706553 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Feb 13 20:15:32.722715 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Feb 13 20:15:32.765840 systemd[1]: Reloading requested from client PID 1707 ('systemctl') (unit session-9.scope)...
Feb 13 20:15:32.765864 systemd[1]: Reloading...
Feb 13 20:15:32.964858 zram_generator::config[1757]: No configuration found.
Feb 13 20:15:33.145450 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb 13 20:15:33.242458 systemd[1]: Reloading finished in 476 ms.
Feb 13 20:15:33.302681 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Feb 13 20:15:33.302838 systemd[1]: kubelet.service: Failed with result 'signal'.
Feb 13 20:15:33.303305 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Feb 13 20:15:33.312509 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Feb 13 20:15:33.494361 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Feb 13 20:15:33.506825 (kubelet)[1798]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Feb 13 20:15:33.573233 kubelet[1798]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 13 20:15:33.573233 kubelet[1798]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Feb 13 20:15:33.573233 kubelet[1798]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 13 20:15:33.573991 kubelet[1798]: I0213 20:15:33.573417 1798 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Feb 13 20:15:34.060684 kubelet[1798]: I0213 20:15:34.060596 1798 server.go:520] "Kubelet version" kubeletVersion="v1.32.0"
Feb 13 20:15:34.062015 kubelet[1798]: I0213 20:15:34.060909 1798 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Feb 13 20:15:34.062015 kubelet[1798]: I0213 20:15:34.061350 1798 server.go:954] "Client rotation is on, will bootstrap in background"
Feb 13 20:15:34.088512 kubelet[1798]: I0213 20:15:34.088444 1798 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Feb 13 20:15:34.101872 kubelet[1798]: E0213 20:15:34.101781 1798 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService"
Feb 13 20:15:34.101872 kubelet[1798]: I0213 20:15:34.101840 1798 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config."
Feb 13 20:15:34.105807 kubelet[1798]: I0213 20:15:34.105331 1798 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Feb 13 20:15:34.105807 kubelet[1798]: I0213 20:15:34.105645 1798 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Feb 13 20:15:34.106116 kubelet[1798]: I0213 20:15:34.105683 1798 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"147.182.243.214","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Feb 13 20:15:34.106116 kubelet[1798]: I0213 20:15:34.105863 1798 topology_manager.go:138] "Creating topology manager with none policy"
Feb 13 20:15:34.106116 kubelet[1798]: I0213 20:15:34.105876 1798 container_manager_linux.go:304] "Creating device plugin manager"
Feb 13 20:15:34.107011 kubelet[1798]: I0213 20:15:34.106586 1798 state_mem.go:36] "Initialized new in-memory state store"
Feb 13 20:15:34.112019 kubelet[1798]: I0213 20:15:34.111498 1798 kubelet.go:446] "Attempting to sync node with API server"
Feb 13 20:15:34.112019 kubelet[1798]: I0213 20:15:34.111555 1798 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests"
Feb 13 20:15:34.112019 kubelet[1798]: I0213 20:15:34.111587 1798 kubelet.go:352] "Adding apiserver pod source"
Feb 13 20:15:34.112019 kubelet[1798]: I0213 20:15:34.111599 1798 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Feb 13 20:15:34.117557 kubelet[1798]: E0213 20:15:34.117063 1798 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 20:15:34.117557 kubelet[1798]: E0213 20:15:34.117126 1798 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 20:15:34.118882 kubelet[1798]: I0213 20:15:34.118831 1798 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1"
Feb 13 20:15:34.120011 kubelet[1798]: I0213 20:15:34.119343 1798 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Feb 13 20:15:34.120011 kubelet[1798]: W0213 20:15:34.119432 1798 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Feb 13 20:15:34.121927 kubelet[1798]: I0213 20:15:34.121571 1798 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Feb 13 20:15:34.121927 kubelet[1798]: I0213 20:15:34.121637 1798 server.go:1287] "Started kubelet"
Feb 13 20:15:34.122908 kubelet[1798]: I0213 20:15:34.122367 1798 server.go:169] "Starting to listen" address="0.0.0.0" port=10250
Feb 13 20:15:34.123771 kubelet[1798]: I0213 20:15:34.123715 1798 server.go:490] "Adding debug handlers to kubelet server"
Feb 13 20:15:34.125227 kubelet[1798]: I0213 20:15:34.125157 1798 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Feb 13 20:15:34.125857 kubelet[1798]: I0213 20:15:34.125833 1798 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Feb 13 20:15:34.129645 kubelet[1798]: I0213 20:15:34.128363 1798 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Feb 13 20:15:34.132343 kubelet[1798]: E0213 20:15:34.129389 1798 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{147.182.243.214.1823ddc875050272 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:147.182.243.214,UID:147.182.243.214,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:147.182.243.214,},FirstTimestamp:2025-02-13 20:15:34.121603698 +0000 UTC m=+0.608154967,LastTimestamp:2025-02-13 20:15:34.121603698 +0000 UTC m=+0.608154967,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:147.182.243.214,}"
Feb 13 20:15:34.133447 kubelet[1798]: W0213 20:15:34.133381 1798 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope
Feb 13 20:15:34.133680 kubelet[1798]: E0213 20:15:34.133660 1798 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
Feb 13 20:15:34.135822 kubelet[1798]: I0213 20:15:34.135782 1798 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Feb 13 20:15:34.141890 kubelet[1798]: W0213 20:15:34.141836 1798 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes "147.182.243.214" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope
Feb 13 20:15:34.142239 kubelet[1798]: E0213 20:15:34.142204 1798 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes \"147.182.243.214\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
Feb 13 20:15:34.143115 kubelet[1798]: I0213 20:15:34.143073 1798 volume_manager.go:297] "Starting Kubelet Volume Manager"
Feb 13 20:15:34.143471 kubelet[1798]: E0213 20:15:34.143450 1798 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"147.182.243.214\" not found"
Feb 13 20:15:34.145018 kubelet[1798]: I0213 20:15:34.144946 1798 desired_state_of_world_populator.go:149] "Desired state populator starts to run"
Feb 13 20:15:34.145129 kubelet[1798]: I0213 20:15:34.145086 1798 reconciler.go:26] "Reconciler: start to sync state"
Feb 13 20:15:34.147028 kubelet[1798]: I0213 20:15:34.146694 1798 factory.go:221] Registration of the systemd container factory successfully
Feb 13 20:15:34.147028 kubelet[1798]: I0213 20:15:34.146828 1798 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Feb 13 20:15:34.149806 kubelet[1798]: E0213 20:15:34.149735 1798 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"147.182.243.214\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="200ms"
Feb 13 20:15:34.150784 kubelet[1798]: E0213 20:15:34.150747 1798 kubelet.go:1561] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Feb 13 20:15:34.152617 kubelet[1798]: I0213 20:15:34.152586 1798 factory.go:221] Registration of the containerd container factory successfully
Feb 13 20:15:34.195103 kubelet[1798]: I0213 20:15:34.195063 1798 cpu_manager.go:221] "Starting CPU manager" policy="none"
Feb 13 20:15:34.195320 kubelet[1798]: I0213 20:15:34.195305 1798 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Feb 13 20:15:34.195413 kubelet[1798]: I0213 20:15:34.195402 1798 state_mem.go:36] "Initialized new in-memory state store"
Feb 13 20:15:34.198228 kubelet[1798]: I0213 20:15:34.198188 1798 policy_none.go:49] "None policy: Start"
Feb 13 20:15:34.198448 kubelet[1798]: I0213 20:15:34.198431 1798 memory_manager.go:186] "Starting memorymanager" policy="None"
Feb 13 20:15:34.198568 kubelet[1798]: I0213 20:15:34.198555 1798 state_mem.go:35] "Initializing new in-memory state store"
Feb 13 20:15:34.209291 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Feb 13 20:15:34.225054 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Feb 13 20:15:34.232700 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Feb 13 20:15:34.244942 kubelet[1798]: E0213 20:15:34.244161 1798 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"147.182.243.214\" not found"
Feb 13 20:15:34.248719 kubelet[1798]: I0213 20:15:34.246908 1798 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Feb 13 20:15:34.248719 kubelet[1798]: I0213 20:15:34.247276 1798 eviction_manager.go:189] "Eviction manager: starting control loop"
Feb 13 20:15:34.248719 kubelet[1798]: I0213 20:15:34.247298 1798 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Feb 13 20:15:34.249197 kubelet[1798]: I0213 20:15:34.249166 1798 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Feb 13 20:15:34.256113 kubelet[1798]: E0213 20:15:34.255416 1798 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Feb 13 20:15:34.256113 kubelet[1798]: E0213 20:15:34.255483 1798 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"147.182.243.214\" not found"
Feb 13 20:15:34.259589 kubelet[1798]: I0213 20:15:34.259541 1798 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Feb 13 20:15:34.262528 kubelet[1798]: I0213 20:15:34.262477 1798 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Feb 13 20:15:34.262789 kubelet[1798]: I0213 20:15:34.262769 1798 status_manager.go:227] "Starting to sync pod status with apiserver"
Feb 13 20:15:34.262930 kubelet[1798]: I0213 20:15:34.262916 1798 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Feb 13 20:15:34.263053 kubelet[1798]: I0213 20:15:34.263040 1798 kubelet.go:2388] "Starting kubelet main sync loop"
Feb 13 20:15:34.263349 kubelet[1798]: E0213 20:15:34.263326 1798 kubelet.go:2412] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful"
Feb 13 20:15:34.354348 kubelet[1798]: I0213 20:15:34.354043 1798 kubelet_node_status.go:76] "Attempting to register node" node="147.182.243.214"
Feb 13 20:15:34.357415 kubelet[1798]: E0213 20:15:34.357205 1798 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"147.182.243.214\" not found" node="147.182.243.214"
Feb 13 20:15:34.360832 kubelet[1798]: I0213 20:15:34.360628 1798 kubelet_node_status.go:79] "Successfully registered node" node="147.182.243.214"
Feb 13 20:15:34.360832 kubelet[1798]: E0213 20:15:34.360678 1798 kubelet_node_status.go:549] "Error updating node status, will retry" err="error getting node \"147.182.243.214\": node \"147.182.243.214\" not found"
Feb 13 20:15:34.367260 kubelet[1798]: E0213 20:15:34.367199 1798 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"147.182.243.214\" not found"
Feb 13 20:15:34.467424 kubelet[1798]: E0213 20:15:34.467336 1798 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"147.182.243.214\" not found"
Feb 13 20:15:34.542548 sudo[1659]: pam_unix(sudo:session): session closed for user root
Feb 13 20:15:34.546926 sshd[1656]: pam_unix(sshd:session): session closed for user core
Feb 13 20:15:34.554325 systemd[1]: sshd@8-147.182.243.214:22-147.75.109.163:33062.service: Deactivated successfully.
Feb 13 20:15:34.557677 systemd[1]: session-9.scope: Deactivated successfully.
Feb 13 20:15:34.559264 systemd-logind[1444]: Session 9 logged out. Waiting for processes to exit.
Feb 13 20:15:34.561304 systemd-logind[1444]: Removed session 9.
Feb 13 20:15:34.568366 kubelet[1798]: E0213 20:15:34.568288 1798 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"147.182.243.214\" not found"
Feb 13 20:15:34.669304 kubelet[1798]: E0213 20:15:34.669054 1798 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"147.182.243.214\" not found"
Feb 13 20:15:34.770177 kubelet[1798]: E0213 20:15:34.770093 1798 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"147.182.243.214\" not found"
Feb 13 20:15:34.870365 kubelet[1798]: E0213 20:15:34.870273 1798 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"147.182.243.214\" not found"
Feb 13 20:15:34.971529 kubelet[1798]: E0213 20:15:34.971285 1798 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"147.182.243.214\" not found"
Feb 13 20:15:35.065649 kubelet[1798]: I0213 20:15:35.065544 1798 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials"
Feb 13 20:15:35.065854 kubelet[1798]: W0213 20:15:35.065802 1798 reflector.go:492] k8s.io/client-go/informers/factory.go:160: watch of *v1.CSIDriver ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received
Feb 13 20:15:35.065912 kubelet[1798]: W0213 20:15:35.065872 1798 reflector.go:492] k8s.io/client-go/informers/factory.go:160: watch of *v1.RuntimeClass ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received
Feb 13 20:15:35.072345 kubelet[1798]: E0213 20:15:35.072236 1798 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"147.182.243.214\" not found"
Feb 13 20:15:35.118196 kubelet[1798]: E0213 20:15:35.118096 1798 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 20:15:35.173236 kubelet[1798]: E0213 20:15:35.173158 1798 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"147.182.243.214\" not found"
Feb 13 20:15:35.274587 kubelet[1798]: E0213 20:15:35.274361 1798 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"147.182.243.214\" not found"
Feb 13 20:15:35.375685 kubelet[1798]: E0213 20:15:35.375574 1798 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"147.182.243.214\" not found"
Feb 13 20:15:35.476812 kubelet[1798]: E0213 20:15:35.476705 1798 kubelet_node_status.go:467] "Error getting the current node from lister" err="node \"147.182.243.214\" not found"
Feb 13 20:15:35.578761 kubelet[1798]: I0213 20:15:35.578707 1798 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24"
Feb 13 20:15:35.579260 containerd[1470]: time="2025-02-13T20:15:35.579198881Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Feb 13 20:15:35.580343 kubelet[1798]: I0213 20:15:35.579478 1798 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24"
Feb 13 20:15:36.118717 kubelet[1798]: E0213 20:15:36.118622 1798 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 20:15:36.118717 kubelet[1798]: I0213 20:15:36.118640 1798 apiserver.go:52] "Watching apiserver"
Feb 13 20:15:36.124130 kubelet[1798]: E0213 20:15:36.123140 1798 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-cn6j8" podUID="b7ff3472-c8b7-4078-b230-5a0559383bf9"
Feb 13 20:15:36.141065 systemd[1]: Created slice kubepods-besteffort-pod9b469396_d36c_431a_ba11_4e77366c4686.slice - libcontainer container kubepods-besteffort-pod9b469396_d36c_431a_ba11_4e77366c4686.slice.
Feb 13 20:15:36.146048 kubelet[1798]: I0213 20:15:36.145536 1798 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
Feb 13 20:15:36.159852 kubelet[1798]: I0213 20:15:36.159788 1798 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/9b469396-d36c-431a-ba11-4e77366c4686-cni-net-dir\") pod \"calico-node-c52cx\" (UID: \"9b469396-d36c-431a-ba11-4e77366c4686\") " pod="calico-system/calico-node-c52cx"
Feb 13 20:15:36.159852 kubelet[1798]: I0213 20:15:36.159842 1798 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b7ff3472-c8b7-4078-b230-5a0559383bf9-kubelet-dir\") pod \"csi-node-driver-cn6j8\" (UID: \"b7ff3472-c8b7-4078-b230-5a0559383bf9\") " pod="calico-system/csi-node-driver-cn6j8"
Feb 13 20:15:36.160101 kubelet[1798]: I0213 20:15:36.159875 1798 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/4fca6fc6-8073-4093-a09a-92a93b47ae8f-kube-proxy\") pod \"kube-proxy-bdwd5\" (UID: \"4fca6fc6-8073-4093-a09a-92a93b47ae8f\") " pod="kube-system/kube-proxy-bdwd5"
Feb 13 20:15:36.160101 kubelet[1798]: I0213 20:15:36.159896 1798 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/4fca6fc6-8073-4093-a09a-92a93b47ae8f-lib-modules\") pod \"kube-proxy-bdwd5\" (UID: \"4fca6fc6-8073-4093-a09a-92a93b47ae8f\") " pod="kube-system/kube-proxy-bdwd5"
Feb 13 20:15:36.160101 kubelet[1798]: I0213 20:15:36.159922 1798 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vhzlg\" (UniqueName: \"kubernetes.io/projected/4fca6fc6-8073-4093-a09a-92a93b47ae8f-kube-api-access-vhzlg\") pod \"kube-proxy-bdwd5\" (UID: \"4fca6fc6-8073-4093-a09a-92a93b47ae8f\") " pod="kube-system/kube-proxy-bdwd5"
Feb 13 20:15:36.160101 kubelet[1798]: I0213 20:15:36.159941 1798 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/9b469396-d36c-431a-ba11-4e77366c4686-policysync\") pod \"calico-node-c52cx\" (UID: \"9b469396-d36c-431a-ba11-4e77366c4686\") " pod="calico-system/calico-node-c52cx"
Feb 13 20:15:36.160101 kubelet[1798]: I0213 20:15:36.159956 1798 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9b469396-d36c-431a-ba11-4e77366c4686-tigera-ca-bundle\") pod \"calico-node-c52cx\" (UID: \"9b469396-d36c-431a-ba11-4e77366c4686\") " pod="calico-system/calico-node-c52cx"
Feb 13 20:15:36.160326 kubelet[1798]: I0213 20:15:36.160126 1798 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/9b469396-d36c-431a-ba11-4e77366c4686-cni-bin-dir\") pod \"calico-node-c52cx\" (UID: \"9b469396-d36c-431a-ba11-4e77366c4686\") " pod="calico-system/calico-node-c52cx"
Feb 13 20:15:36.160326 kubelet[1798]: I0213 20:15:36.160178 1798 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/9b469396-d36c-431a-ba11-4e77366c4686-cni-log-dir\") pod \"calico-node-c52cx\" (UID: \"9b469396-d36c-431a-ba11-4e77366c4686\") " pod="calico-system/calico-node-c52cx"
Feb 13 20:15:36.160326 kubelet[1798]: I0213 20:15:36.160233 1798 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/9b469396-d36c-431a-ba11-4e77366c4686-flexvol-driver-host\") pod \"calico-node-c52cx\" (UID: \"9b469396-d36c-431a-ba11-4e77366c4686\") " pod="calico-system/calico-node-c52cx"
Feb 13 20:15:36.160326 kubelet[1798]: I0213 20:15:36.160254 1798 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cmhhr\" (UniqueName: \"kubernetes.io/projected/b7ff3472-c8b7-4078-b230-5a0559383bf9-kube-api-access-cmhhr\") pod \"csi-node-driver-cn6j8\" (UID: \"b7ff3472-c8b7-4078-b230-5a0559383bf9\") " pod="calico-system/csi-node-driver-cn6j8"
Feb 13 20:15:36.160326 kubelet[1798]: I0213 20:15:36.160271 1798 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/4fca6fc6-8073-4093-a09a-92a93b47ae8f-xtables-lock\") pod \"kube-proxy-bdwd5\" (UID: \"4fca6fc6-8073-4093-a09a-92a93b47ae8f\") " pod="kube-system/kube-proxy-bdwd5"
Feb 13 20:15:36.160564 kubelet[1798]: I0213 20:15:36.160309 1798 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9b469396-d36c-431a-ba11-4e77366c4686-xtables-lock\") pod \"calico-node-c52cx\" (UID: \"9b469396-d36c-431a-ba11-4e77366c4686\") " pod="calico-system/calico-node-c52cx"
Feb 13 20:15:36.160564 kubelet[1798]: I0213 20:15:36.160337 1798 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/9b469396-d36c-431a-ba11-4e77366c4686-var-run-calico\") pod \"calico-node-c52cx\" (UID: \"9b469396-d36c-431a-ba11-4e77366c4686\") " pod="calico-system/calico-node-c52cx"
Feb 13 20:15:36.160564 kubelet[1798]: I0213 20:15:36.160382 1798 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-62mjw\" (UniqueName: \"kubernetes.io/projected/9b469396-d36c-431a-ba11-4e77366c4686-kube-api-access-62mjw\") pod \"calico-node-c52cx\" (UID: \"9b469396-d36c-431a-ba11-4e77366c4686\") " pod="calico-system/calico-node-c52cx"
Feb 13 20:15:36.160564 kubelet[1798]: I0213 20:15:36.160397 1798 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/b7ff3472-c8b7-4078-b230-5a0559383bf9-socket-dir\") pod \"csi-node-driver-cn6j8\" (UID: \"b7ff3472-c8b7-4078-b230-5a0559383bf9\") " pod="calico-system/csi-node-driver-cn6j8"
Feb 13 20:15:36.160564 kubelet[1798]: I0213 20:15:36.160415 1798 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/b7ff3472-c8b7-4078-b230-5a0559383bf9-registration-dir\") pod \"csi-node-driver-cn6j8\" (UID: \"b7ff3472-c8b7-4078-b230-5a0559383bf9\") " pod="calico-system/csi-node-driver-cn6j8"
Feb 13 20:15:36.160789 kubelet[1798]: I0213 20:15:36.160470 1798 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9b469396-d36c-431a-ba11-4e77366c4686-lib-modules\") pod \"calico-node-c52cx\" (UID: \"9b469396-d36c-431a-ba11-4e77366c4686\") " pod="calico-system/calico-node-c52cx"
Feb 13 20:15:36.160789 kubelet[1798]: I0213 20:15:36.160551 1798 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/9b469396-d36c-431a-ba11-4e77366c4686-node-certs\") pod \"calico-node-c52cx\" (UID: \"9b469396-d36c-431a-ba11-4e77366c4686\") " pod="calico-system/calico-node-c52cx"
Feb 13 20:15:36.160789 kubelet[1798]: I0213 20:15:36.160577 1798 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/9b469396-d36c-431a-ba11-4e77366c4686-var-lib-calico\") pod \"calico-node-c52cx\" (UID: \"9b469396-d36c-431a-ba11-4e77366c4686\") " pod="calico-system/calico-node-c52cx"
Feb 13 20:15:36.160789 kubelet[1798]: I0213 20:15:36.160630 1798 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/b7ff3472-c8b7-4078-b230-5a0559383bf9-varrun\") pod \"csi-node-driver-cn6j8\" (UID: \"b7ff3472-c8b7-4078-b230-5a0559383bf9\") " pod="calico-system/csi-node-driver-cn6j8"
Feb 13 20:15:36.163406 systemd[1]: Created slice kubepods-besteffort-pod4fca6fc6_8073_4093_a09a_92a93b47ae8f.slice - libcontainer container kubepods-besteffort-pod4fca6fc6_8073_4093_a09a_92a93b47ae8f.slice.
Feb 13 20:15:36.268919 kubelet[1798]: E0213 20:15:36.268783 1798 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 20:15:36.268919 kubelet[1798]: W0213 20:15:36.268870 1798 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 20:15:36.269822 kubelet[1798]: E0213 20:15:36.269371 1798 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 20:15:36.281100 kubelet[1798]: E0213 20:15:36.280792 1798 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 20:15:36.281100 kubelet[1798]: W0213 20:15:36.280849 1798 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 20:15:36.281100 kubelet[1798]: E0213 20:15:36.280886 1798 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 20:15:36.287413 kubelet[1798]: E0213 20:15:36.286316 1798 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 20:15:36.287413 kubelet[1798]: W0213 20:15:36.286462 1798 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 20:15:36.287413 kubelet[1798]: E0213 20:15:36.286510 1798 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 20:15:36.290951 kubelet[1798]: E0213 20:15:36.290368 1798 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 20:15:36.297073 kubelet[1798]: W0213 20:15:36.294876 1798 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 20:15:36.299354 kubelet[1798]: E0213 20:15:36.299284 1798 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 20:15:36.300360 kubelet[1798]: E0213 20:15:36.299997 1798 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 20:15:36.300360 kubelet[1798]: W0213 20:15:36.300029 1798 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 20:15:36.300360 kubelet[1798]: E0213 20:15:36.300138 1798 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 20:15:36.302765 kubelet[1798]: E0213 20:15:36.300667 1798 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 20:15:36.302765 kubelet[1798]: W0213 20:15:36.300689 1798 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 20:15:36.302765 kubelet[1798]: E0213 20:15:36.300787 1798 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 20:15:36.302765 kubelet[1798]: E0213 20:15:36.301138 1798 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 20:15:36.302765 kubelet[1798]: W0213 20:15:36.301157 1798 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 20:15:36.302765 kubelet[1798]: E0213 20:15:36.301297 1798 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 20:15:36.302765 kubelet[1798]: E0213 20:15:36.301561 1798 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 20:15:36.302765 kubelet[1798]: W0213 20:15:36.301576 1798 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 20:15:36.302765 kubelet[1798]: E0213 20:15:36.301811 1798 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 20:15:36.302765 kubelet[1798]: E0213 20:15:36.302236 1798 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 20:15:36.305712 kubelet[1798]: W0213 20:15:36.302254 1798 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 20:15:36.305712 kubelet[1798]: E0213 20:15:36.302348 1798 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 20:15:36.305712 kubelet[1798]: E0213 20:15:36.302675 1798 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 20:15:36.305712 kubelet[1798]: W0213 20:15:36.302689 1798 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 20:15:36.305712 kubelet[1798]: E0213 20:15:36.302886 1798 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 20:15:36.305712 kubelet[1798]: E0213 20:15:36.303135 1798 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 20:15:36.305712 kubelet[1798]: W0213 20:15:36.303150 1798 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 20:15:36.305712 kubelet[1798]: E0213 20:15:36.303297 1798 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 20:15:36.305712 kubelet[1798]: E0213 20:15:36.303491 1798 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 20:15:36.305712 kubelet[1798]: W0213 20:15:36.303509 1798 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 20:15:36.306213 kubelet[1798]: E0213 20:15:36.303531 1798 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 20:15:36.306213 kubelet[1798]: E0213 20:15:36.303907 1798 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 20:15:36.306213 kubelet[1798]: W0213 20:15:36.303922 1798 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 20:15:36.306213 kubelet[1798]: E0213 20:15:36.303949 1798 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 20:15:36.306213 kubelet[1798]: E0213 20:15:36.305286 1798 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 20:15:36.306213 kubelet[1798]: W0213 20:15:36.305327 1798 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 20:15:36.306213 kubelet[1798]: E0213 20:15:36.305349 1798 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 20:15:36.308425 kubelet[1798]: E0213 20:15:36.306411 1798 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 20:15:36.308425 kubelet[1798]: W0213 20:15:36.306431 1798 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 20:15:36.308425 kubelet[1798]: E0213 20:15:36.306459 1798 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 20:15:36.308425 kubelet[1798]: E0213 20:15:36.306791 1798 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 20:15:36.308425 kubelet[1798]: W0213 20:15:36.306808 1798 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 20:15:36.308425 kubelet[1798]: E0213 20:15:36.306826 1798 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 20:15:36.316685 kubelet[1798]: E0213 20:15:36.316528 1798 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 20:15:36.316685 kubelet[1798]: W0213 20:15:36.316574 1798 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 20:15:36.316685 kubelet[1798]: E0213 20:15:36.316608 1798 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 20:15:36.458222 kubelet[1798]: E0213 20:15:36.457697 1798 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Feb 13 20:15:36.458899 containerd[1470]: time="2025-02-13T20:15:36.458842794Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-c52cx,Uid:9b469396-d36c-431a-ba11-4e77366c4686,Namespace:calico-system,Attempt:0,}"
Feb 13 20:15:36.468960 kubelet[1798]: E0213 20:15:36.468468 1798 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Feb 13 20:15:36.469679 containerd[1470]: time="2025-02-13T20:15:36.469582916Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-bdwd5,Uid:4fca6fc6-8073-4093-a09a-92a93b47ae8f,Namespace:kube-system,Attempt:0,}"
Feb 13 20:15:36.470757 systemd-resolved[1329]: Using degraded feature set UDP instead of UDP+EDNS0 for DNS server 67.207.67.3.
Feb 13 20:15:37.099793 containerd[1470]: time="2025-02-13T20:15:37.099532160Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Feb 13 20:15:37.110476 containerd[1470]: time="2025-02-13T20:15:37.109900941Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056"
Feb 13 20:15:37.110476 containerd[1470]: time="2025-02-13T20:15:37.110024749Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Feb 13 20:15:37.111657 containerd[1470]: time="2025-02-13T20:15:37.111581477Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0"
Feb 13 20:15:37.112477 containerd[1470]: time="2025-02-13T20:15:37.112215670Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Feb 13 20:15:37.115995 containerd[1470]: time="2025-02-13T20:15:37.115889345Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Feb 13 20:15:37.118629 containerd[1470]: time="2025-02-13T20:15:37.117172434Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 657.643085ms"
Feb 13 20:15:37.119052 containerd[1470]: time="2025-02-13T20:15:37.118965300Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 649.180871ms"
Feb 13 20:15:37.119595 kubelet[1798]: E0213 20:15:37.119500 1798 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 20:15:37.280784 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount341291288.mount: Deactivated successfully.
Feb 13 20:15:37.307067 containerd[1470]: time="2025-02-13T20:15:37.306759832Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 13 20:15:37.307067 containerd[1470]: time="2025-02-13T20:15:37.306943288Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 13 20:15:37.307508 containerd[1470]: time="2025-02-13T20:15:37.306815297Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 13 20:15:37.307508 containerd[1470]: time="2025-02-13T20:15:37.307128071Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 13 20:15:37.307508 containerd[1470]: time="2025-02-13T20:15:37.307155038Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 20:15:37.307508 containerd[1470]: time="2025-02-13T20:15:37.307341273Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 20:15:37.308185 containerd[1470]: time="2025-02-13T20:15:37.307013335Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 20:15:37.309098 containerd[1470]: time="2025-02-13T20:15:37.308961825Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 20:15:37.429665 systemd[1]: run-containerd-runc-k8s.io-6988c1f0aa3a0bba8b15cfb2d665466219847cd274a58a769820c290da78e105-runc.RCra5T.mount: Deactivated successfully.
Feb 13 20:15:37.448579 systemd[1]: Started cri-containerd-6988c1f0aa3a0bba8b15cfb2d665466219847cd274a58a769820c290da78e105.scope - libcontainer container 6988c1f0aa3a0bba8b15cfb2d665466219847cd274a58a769820c290da78e105.
Feb 13 20:15:37.460560 systemd[1]: Started cri-containerd-54e982419b0230904d8ff19f127293320e304a7000f5815a1237e32580057590.scope - libcontainer container 54e982419b0230904d8ff19f127293320e304a7000f5815a1237e32580057590.
Feb 13 20:15:37.525902 containerd[1470]: time="2025-02-13T20:15:37.525409926Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-bdwd5,Uid:4fca6fc6-8073-4093-a09a-92a93b47ae8f,Namespace:kube-system,Attempt:0,} returns sandbox id \"6988c1f0aa3a0bba8b15cfb2d665466219847cd274a58a769820c290da78e105\""
Feb 13 20:15:37.528669 kubelet[1798]: E0213 20:15:37.528599 1798 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Feb 13 20:15:37.533475 containerd[1470]: time="2025-02-13T20:15:37.533183299Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-c52cx,Uid:9b469396-d36c-431a-ba11-4e77366c4686,Namespace:calico-system,Attempt:0,} returns sandbox id \"54e982419b0230904d8ff19f127293320e304a7000f5815a1237e32580057590\""
Feb 13 20:15:37.534840 containerd[1470]: time="2025-02-13T20:15:37.533218335Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.2\""
Feb 13 20:15:37.538223 kubelet[1798]: E0213 20:15:37.538169 1798 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Feb 13 20:15:38.120403 kubelet[1798]: E0213 20:15:38.120349 1798 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 20:15:38.267844 kubelet[1798]: E0213 20:15:38.267778 1798 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-cn6j8" podUID="b7ff3472-c8b7-4078-b230-5a0559383bf9"
Feb 13 20:15:39.121697 kubelet[1798]: E0213 20:15:39.121548 1798 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 20:15:39.148142 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1166125044.mount: Deactivated successfully.
Feb 13 20:15:39.539127 systemd-resolved[1329]: Using degraded feature set UDP instead of UDP+EDNS0 for DNS server 67.207.67.2.
Feb 13 20:15:39.846338 containerd[1470]: time="2025-02-13T20:15:39.846248971Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 20:15:39.847930 containerd[1470]: time="2025-02-13T20:15:39.847846970Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.2: active requests=0, bytes read=30908839"
Feb 13 20:15:39.848363 containerd[1470]: time="2025-02-13T20:15:39.848325706Z" level=info msg="ImageCreate event name:\"sha256:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 20:15:39.851660 containerd[1470]: time="2025-02-13T20:15:39.851609666Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:83c025f0faa6799fab6645102a98138e39a9a7db2be3bc792c79d72659b1805d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 20:15:39.852575 containerd[1470]: time="2025-02-13T20:15:39.852513310Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.2\" with image id \"sha256:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5\", repo tag \"registry.k8s.io/kube-proxy:v1.32.2\", repo digest \"registry.k8s.io/kube-proxy@sha256:83c025f0faa6799fab6645102a98138e39a9a7db2be3bc792c79d72659b1805d\", size \"30907858\" in 2.318497673s"
Feb 13 20:15:39.852750 containerd[1470]: time="2025-02-13T20:15:39.852721515Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.2\" returns image reference \"sha256:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5\""
Feb 13 20:15:39.854603 containerd[1470]: time="2025-02-13T20:15:39.854550526Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\""
Feb 13 20:15:39.857674 containerd[1470]: time="2025-02-13T20:15:39.857395349Z" level=info msg="CreateContainer within sandbox \"6988c1f0aa3a0bba8b15cfb2d665466219847cd274a58a769820c290da78e105\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Feb 13 20:15:39.882332 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1401887717.mount: Deactivated successfully.
Feb 13 20:15:39.885453 containerd[1470]: time="2025-02-13T20:15:39.885349262Z" level=info msg="CreateContainer within sandbox \"6988c1f0aa3a0bba8b15cfb2d665466219847cd274a58a769820c290da78e105\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"f4a1a49d85022dfee923eaa2ea430e941fa3d4f15175f37f330513a4fd84ab9a\""
Feb 13 20:15:39.886692 containerd[1470]: time="2025-02-13T20:15:39.886602270Z" level=info msg="StartContainer for \"f4a1a49d85022dfee923eaa2ea430e941fa3d4f15175f37f330513a4fd84ab9a\""
Feb 13 20:15:39.950250 systemd[1]: Started cri-containerd-f4a1a49d85022dfee923eaa2ea430e941fa3d4f15175f37f330513a4fd84ab9a.scope - libcontainer container f4a1a49d85022dfee923eaa2ea430e941fa3d4f15175f37f330513a4fd84ab9a.
Feb 13 20:15:40.028732 containerd[1470]: time="2025-02-13T20:15:40.028563184Z" level=info msg="StartContainer for \"f4a1a49d85022dfee923eaa2ea430e941fa3d4f15175f37f330513a4fd84ab9a\" returns successfully"
Feb 13 20:15:40.124504 kubelet[1798]: E0213 20:15:40.124245 1798 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 20:15:40.265074 kubelet[1798]: E0213 20:15:40.264429 1798 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-cn6j8" podUID="b7ff3472-c8b7-4078-b230-5a0559383bf9"
Feb 13 20:15:40.304240 kubelet[1798]: E0213 20:15:40.304131 1798 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Feb 13 20:15:40.318547 kubelet[1798]: I0213 20:15:40.318411 1798 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-bdwd5" podStartSLOduration=3.994941771 podStartE2EDuration="6.318387693s" podCreationTimestamp="2025-02-13 20:15:34 +0000 UTC" firstStartedPulling="2025-02-13 20:15:37.530821564 +0000 UTC m=+4.017372832" lastFinishedPulling="2025-02-13 20:15:39.854267486 +0000 UTC m=+6.340818754" observedRunningTime="2025-02-13 20:15:40.318125039 +0000 UTC m=+6.804676314" watchObservedRunningTime="2025-02-13 20:15:40.318387693 +0000 UTC m=+6.804938997"
Feb 13 20:15:40.387884 kubelet[1798]: E0213 20:15:40.387503 1798 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 20:15:40.387884 kubelet[1798]: W0213 20:15:40.387550 1798 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds,
args: [init], error: executable file not found in $PATH, output: ""
Feb 13 20:15:40.387884 kubelet[1798]: E0213 20:15:40.387581 1798 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Feb 13 20:15:41.125378 kubelet[1798]: E0213 20:15:41.125286 1798 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 20:15:41.314492 kubelet[1798]: E0213 20:15:41.314413 1798 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Feb 13 20:15:41.387182 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1825228568.mount: Deactivated successfully.
Feb 13 20:15:41.405704 kubelet[1798]: E0213 20:15:41.405661 1798 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Feb 13 20:15:41.405877 kubelet[1798]: W0213 20:15:41.405847 1798 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Feb 13 20:15:41.406037 kubelet[1798]: E0213 20:15:41.405889 1798 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping.
Error: unexpected end of JSON input" Feb 13 20:15:41.406609 kubelet[1798]: E0213 20:15:41.406582 1798 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:15:41.406609 kubelet[1798]: W0213 20:15:41.406603 1798 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:15:41.406712 kubelet[1798]: E0213 20:15:41.406622 1798 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:15:41.407373 kubelet[1798]: E0213 20:15:41.407149 1798 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:15:41.407373 kubelet[1798]: W0213 20:15:41.407168 1798 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:15:41.407373 kubelet[1798]: E0213 20:15:41.407185 1798 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 20:15:41.407812 kubelet[1798]: E0213 20:15:41.407784 1798 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:15:41.407812 kubelet[1798]: W0213 20:15:41.407802 1798 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:15:41.407937 kubelet[1798]: E0213 20:15:41.407818 1798 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:15:41.408428 kubelet[1798]: E0213 20:15:41.408390 1798 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:15:41.408428 kubelet[1798]: W0213 20:15:41.408417 1798 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:15:41.408523 kubelet[1798]: E0213 20:15:41.408433 1798 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 20:15:41.409362 kubelet[1798]: E0213 20:15:41.409319 1798 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:15:41.409362 kubelet[1798]: W0213 20:15:41.409337 1798 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:15:41.409362 kubelet[1798]: E0213 20:15:41.409358 1798 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:15:41.409747 kubelet[1798]: E0213 20:15:41.409721 1798 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:15:41.409747 kubelet[1798]: W0213 20:15:41.409739 1798 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:15:41.409825 kubelet[1798]: E0213 20:15:41.409755 1798 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 20:15:41.410152 kubelet[1798]: E0213 20:15:41.410116 1798 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:15:41.410152 kubelet[1798]: W0213 20:15:41.410143 1798 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:15:41.410240 kubelet[1798]: E0213 20:15:41.410163 1798 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:15:41.410843 kubelet[1798]: E0213 20:15:41.410822 1798 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:15:41.410843 kubelet[1798]: W0213 20:15:41.410842 1798 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:15:41.410932 kubelet[1798]: E0213 20:15:41.410864 1798 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 20:15:41.411932 kubelet[1798]: E0213 20:15:41.411897 1798 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:15:41.411932 kubelet[1798]: W0213 20:15:41.411920 1798 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:15:41.411932 kubelet[1798]: E0213 20:15:41.411937 1798 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:15:41.412333 kubelet[1798]: E0213 20:15:41.412312 1798 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:15:41.412333 kubelet[1798]: W0213 20:15:41.412331 1798 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:15:41.412480 kubelet[1798]: E0213 20:15:41.412349 1798 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 20:15:41.413105 kubelet[1798]: E0213 20:15:41.413080 1798 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:15:41.413105 kubelet[1798]: W0213 20:15:41.413102 1798 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:15:41.413472 kubelet[1798]: E0213 20:15:41.413123 1798 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:15:41.413594 kubelet[1798]: E0213 20:15:41.413574 1798 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:15:41.413625 kubelet[1798]: W0213 20:15:41.413596 1798 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:15:41.413625 kubelet[1798]: E0213 20:15:41.413613 1798 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 20:15:41.414388 kubelet[1798]: E0213 20:15:41.414357 1798 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:15:41.414388 kubelet[1798]: W0213 20:15:41.414378 1798 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:15:41.414509 kubelet[1798]: E0213 20:15:41.414395 1798 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:15:41.415021 kubelet[1798]: E0213 20:15:41.414958 1798 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:15:41.415021 kubelet[1798]: W0213 20:15:41.415002 1798 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:15:41.415021 kubelet[1798]: E0213 20:15:41.415021 1798 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 20:15:41.416798 kubelet[1798]: E0213 20:15:41.416751 1798 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:15:41.416944 kubelet[1798]: W0213 20:15:41.416780 1798 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:15:41.416944 kubelet[1798]: E0213 20:15:41.416895 1798 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:15:41.417839 kubelet[1798]: E0213 20:15:41.417816 1798 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:15:41.417839 kubelet[1798]: W0213 20:15:41.417837 1798 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:15:41.418251 kubelet[1798]: E0213 20:15:41.417857 1798 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 20:15:41.418937 kubelet[1798]: E0213 20:15:41.418429 1798 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:15:41.418937 kubelet[1798]: W0213 20:15:41.418448 1798 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:15:41.418937 kubelet[1798]: E0213 20:15:41.418466 1798 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:15:41.419396 kubelet[1798]: E0213 20:15:41.419375 1798 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:15:41.419396 kubelet[1798]: W0213 20:15:41.419394 1798 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:15:41.419512 kubelet[1798]: E0213 20:15:41.419419 1798 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 20:15:41.420078 kubelet[1798]: E0213 20:15:41.420055 1798 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:15:41.420078 kubelet[1798]: W0213 20:15:41.420074 1798 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:15:41.420437 kubelet[1798]: E0213 20:15:41.420124 1798 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:15:41.420486 kubelet[1798]: E0213 20:15:41.420363 1798 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:15:41.420486 kubelet[1798]: W0213 20:15:41.420476 1798 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:15:41.421017 kubelet[1798]: E0213 20:15:41.420591 1798 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 20:15:41.421100 kubelet[1798]: E0213 20:15:41.421018 1798 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:15:41.421100 kubelet[1798]: W0213 20:15:41.421036 1798 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:15:41.421445 kubelet[1798]: E0213 20:15:41.421301 1798 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:15:41.421870 kubelet[1798]: E0213 20:15:41.421616 1798 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:15:41.421870 kubelet[1798]: W0213 20:15:41.421633 1798 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:15:41.421870 kubelet[1798]: E0213 20:15:41.421657 1798 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 20:15:41.422601 kubelet[1798]: E0213 20:15:41.422271 1798 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:15:41.422601 kubelet[1798]: W0213 20:15:41.422296 1798 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:15:41.422601 kubelet[1798]: E0213 20:15:41.422320 1798 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:15:41.423311 kubelet[1798]: E0213 20:15:41.423276 1798 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:15:41.423311 kubelet[1798]: W0213 20:15:41.423308 1798 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:15:41.423564 kubelet[1798]: E0213 20:15:41.423349 1798 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 20:15:41.423829 kubelet[1798]: E0213 20:15:41.423627 1798 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:15:41.423829 kubelet[1798]: W0213 20:15:41.423644 1798 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:15:41.423829 kubelet[1798]: E0213 20:15:41.423662 1798 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:15:41.424092 kubelet[1798]: E0213 20:15:41.424071 1798 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:15:41.424157 kubelet[1798]: W0213 20:15:41.424095 1798 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:15:41.424157 kubelet[1798]: E0213 20:15:41.424112 1798 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Feb 13 20:15:41.425005 kubelet[1798]: E0213 20:15:41.424808 1798 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Feb 13 20:15:41.425005 kubelet[1798]: W0213 20:15:41.424849 1798 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Feb 13 20:15:41.425005 kubelet[1798]: E0213 20:15:41.424866 1798 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Feb 13 20:15:41.565079 containerd[1470]: time="2025-02-13T20:15:41.562483181Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:15:41.566868 containerd[1470]: time="2025-02-13T20:15:41.566761538Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1: active requests=0, bytes read=6855343" Feb 13 20:15:41.568083 containerd[1470]: time="2025-02-13T20:15:41.568009024Z" level=info msg="ImageCreate event name:\"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:15:41.571071 containerd[1470]: time="2025-02-13T20:15:41.570947423Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:15:41.572322 containerd[1470]: time="2025-02-13T20:15:41.572247152Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" with image id \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\", repo tag 
\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:a63f8b4ff531912d12d143664eb263fdbc6cd7b3ff4aa777dfb6e318a090462c\", size \"6855165\" in 1.717647656s" Feb 13 20:15:41.572322 containerd[1470]: time="2025-02-13T20:15:41.572317289Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.1\" returns image reference \"sha256:2b7452b763ec8833ca0386ada5fd066e552a9b3b02b8538a5e34cc3d6d3840a6\"" Feb 13 20:15:41.577529 containerd[1470]: time="2025-02-13T20:15:41.577446954Z" level=info msg="CreateContainer within sandbox \"54e982419b0230904d8ff19f127293320e304a7000f5815a1237e32580057590\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Feb 13 20:15:41.599149 containerd[1470]: time="2025-02-13T20:15:41.598935767Z" level=info msg="CreateContainer within sandbox \"54e982419b0230904d8ff19f127293320e304a7000f5815a1237e32580057590\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"1c07fc8e1c4de53c9ee9d619d693e72eea3228160086a79382dd0cc5c043c8a1\"" Feb 13 20:15:41.601466 containerd[1470]: time="2025-02-13T20:15:41.599985183Z" level=info msg="StartContainer for \"1c07fc8e1c4de53c9ee9d619d693e72eea3228160086a79382dd0cc5c043c8a1\"" Feb 13 20:15:41.658314 systemd[1]: Started cri-containerd-1c07fc8e1c4de53c9ee9d619d693e72eea3228160086a79382dd0cc5c043c8a1.scope - libcontainer container 1c07fc8e1c4de53c9ee9d619d693e72eea3228160086a79382dd0cc5c043c8a1. Feb 13 20:15:41.720381 containerd[1470]: time="2025-02-13T20:15:41.720296361Z" level=info msg="StartContainer for \"1c07fc8e1c4de53c9ee9d619d693e72eea3228160086a79382dd0cc5c043c8a1\" returns successfully" Feb 13 20:15:41.736087 systemd[1]: cri-containerd-1c07fc8e1c4de53c9ee9d619d693e72eea3228160086a79382dd0cc5c043c8a1.scope: Deactivated successfully. 
Feb 13 20:15:41.799573 containerd[1470]: time="2025-02-13T20:15:41.799440025Z" level=info msg="shim disconnected" id=1c07fc8e1c4de53c9ee9d619d693e72eea3228160086a79382dd0cc5c043c8a1 namespace=k8s.io Feb 13 20:15:41.799573 containerd[1470]: time="2025-02-13T20:15:41.799567192Z" level=warning msg="cleaning up after shim disconnected" id=1c07fc8e1c4de53c9ee9d619d693e72eea3228160086a79382dd0cc5c043c8a1 namespace=k8s.io Feb 13 20:15:41.799573 containerd[1470]: time="2025-02-13T20:15:41.799583732Z" level=info msg="cleaning up dead shim" namespace=k8s.io Feb 13 20:15:42.126103 kubelet[1798]: E0213 20:15:42.126030 1798 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:15:42.264900 kubelet[1798]: E0213 20:15:42.264443 1798 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-cn6j8" podUID="b7ff3472-c8b7-4078-b230-5a0559383bf9" Feb 13 20:15:42.318363 kubelet[1798]: E0213 20:15:42.318169 1798 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Feb 13 20:15:42.319234 containerd[1470]: time="2025-02-13T20:15:42.319016060Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\"" Feb 13 20:15:42.332597 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1c07fc8e1c4de53c9ee9d619d693e72eea3228160086a79382dd0cc5c043c8a1-rootfs.mount: Deactivated successfully. Feb 13 20:15:42.610393 systemd-resolved[1329]: Using degraded feature set TCP instead of UDP for DNS server 67.207.67.3. 
Feb 13 20:15:43.127235 kubelet[1798]: E0213 20:15:43.127157 1798 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 20:15:44.128500 kubelet[1798]: E0213 20:15:44.128350 1798 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 20:15:44.265473 kubelet[1798]: E0213 20:15:44.264719 1798 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-cn6j8" podUID="b7ff3472-c8b7-4078-b230-5a0559383bf9"
Feb 13 20:15:45.128969 kubelet[1798]: E0213 20:15:45.128912 1798 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 20:15:46.129551 kubelet[1798]: E0213 20:15:46.129499 1798 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 20:15:46.265590 kubelet[1798]: E0213 20:15:46.265520 1798 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-cn6j8" podUID="b7ff3472-c8b7-4078-b230-5a0559383bf9"
Feb 13 20:15:47.119716 containerd[1470]: time="2025-02-13T20:15:47.119647322Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 20:15:47.121088 containerd[1470]: time="2025-02-13T20:15:47.121012536Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.1: active requests=0, bytes read=96154154"
Feb 13 20:15:47.122219 containerd[1470]: time="2025-02-13T20:15:47.121820884Z" level=info msg="ImageCreate event name:\"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 20:15:47.125078 containerd[1470]: time="2025-02-13T20:15:47.125022594Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 20:15:47.126963 containerd[1470]: time="2025-02-13T20:15:47.126768745Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.1\" with image id \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:21e759d51c90dfb34fc1397dc180dd3a3fb564c2b0580d2f61ffe108f2a3c94b\", size \"97647238\" in 4.807706933s"
Feb 13 20:15:47.126963 containerd[1470]: time="2025-02-13T20:15:47.126823937Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.1\" returns image reference \"sha256:7dd6ea186aba0d7a1791a79d426fe854527ca95192b26bbd19e8baf8373f7d0e\""
Feb 13 20:15:47.130469 kubelet[1798]: E0213 20:15:47.130378 1798 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 20:15:47.131192 containerd[1470]: time="2025-02-13T20:15:47.130519636Z" level=info msg="CreateContainer within sandbox \"54e982419b0230904d8ff19f127293320e304a7000f5815a1237e32580057590\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}"
Feb 13 20:15:47.145863 containerd[1470]: time="2025-02-13T20:15:47.145802510Z" level=info msg="CreateContainer within sandbox \"54e982419b0230904d8ff19f127293320e304a7000f5815a1237e32580057590\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"e3d83fe4c481f454a49cb24fc45a9e5fad8926193641d942086b7cc3150c424d\""
Feb 13 20:15:47.146814 containerd[1470]: time="2025-02-13T20:15:47.146778908Z" level=info msg="StartContainer for \"e3d83fe4c481f454a49cb24fc45a9e5fad8926193641d942086b7cc3150c424d\""
Feb 13 20:15:47.196316 systemd[1]: Started cri-containerd-e3d83fe4c481f454a49cb24fc45a9e5fad8926193641d942086b7cc3150c424d.scope - libcontainer container e3d83fe4c481f454a49cb24fc45a9e5fad8926193641d942086b7cc3150c424d.
Feb 13 20:15:47.239174 containerd[1470]: time="2025-02-13T20:15:47.238930431Z" level=info msg="StartContainer for \"e3d83fe4c481f454a49cb24fc45a9e5fad8926193641d942086b7cc3150c424d\" returns successfully"
Feb 13 20:15:47.336957 kubelet[1798]: E0213 20:15:47.336310 1798 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3"
Feb 13 20:15:47.932581 containerd[1470]: time="2025-02-13T20:15:47.932500941Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Feb 13 20:15:47.936530 systemd[1]: cri-containerd-e3d83fe4c481f454a49cb24fc45a9e5fad8926193641d942086b7cc3150c424d.scope: Deactivated successfully.
Feb 13 20:15:47.968736 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e3d83fe4c481f454a49cb24fc45a9e5fad8926193641d942086b7cc3150c424d-rootfs.mount: Deactivated successfully.
Feb 13 20:15:48.019110 kubelet[1798]: I0213 20:15:48.018845 1798 kubelet_node_status.go:502] "Fast updating node status as it just became ready"
Feb 13 20:15:48.032099 containerd[1470]: time="2025-02-13T20:15:48.032018846Z" level=info msg="shim disconnected" id=e3d83fe4c481f454a49cb24fc45a9e5fad8926193641d942086b7cc3150c424d namespace=k8s.io
Feb 13 20:15:48.032099 containerd[1470]: time="2025-02-13T20:15:48.032090514Z" level=warning msg="cleaning up after shim disconnected" id=e3d83fe4c481f454a49cb24fc45a9e5fad8926193641d942086b7cc3150c424d namespace=k8s.io
Feb 13 20:15:48.032099 containerd[1470]: time="2025-02-13T20:15:48.032103899Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Feb 13 20:15:48.130754 kubelet[1798]: E0213 20:15:48.130683 1798 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 20:15:48.273081 systemd[1]: Created slice kubepods-besteffort-podb7ff3472_c8b7_4078_b230_5a0559383bf9.slice - libcontainer container kubepods-besteffort-podb7ff3472_c8b7_4078_b230_5a0559383bf9.slice.
Feb 13 20:15:48.277044 containerd[1470]: time="2025-02-13T20:15:48.276965088Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-cn6j8,Uid:b7ff3472-c8b7-4078-b230-5a0559383bf9,Namespace:calico-system,Attempt:0,}" Feb 13 20:15:48.343147 kubelet[1798]: E0213 20:15:48.342523 1798 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Feb 13 20:15:48.345936 containerd[1470]: time="2025-02-13T20:15:48.345637186Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\"" Feb 13 20:15:48.387402 containerd[1470]: time="2025-02-13T20:15:48.387328022Z" level=error msg="Failed to destroy network for sandbox \"163eaa512ebd99053eeafac3ac9fb4eb2a5885c79247102d529a32c63e4e6b9d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:15:48.391636 containerd[1470]: time="2025-02-13T20:15:48.387873640Z" level=error msg="encountered an error cleaning up failed sandbox \"163eaa512ebd99053eeafac3ac9fb4eb2a5885c79247102d529a32c63e4e6b9d\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:15:48.391636 containerd[1470]: time="2025-02-13T20:15:48.387953692Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-cn6j8,Uid:b7ff3472-c8b7-4078-b230-5a0559383bf9,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"163eaa512ebd99053eeafac3ac9fb4eb2a5885c79247102d529a32c63e4e6b9d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" Feb 13 20:15:48.390449 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-163eaa512ebd99053eeafac3ac9fb4eb2a5885c79247102d529a32c63e4e6b9d-shm.mount: Deactivated successfully. Feb 13 20:15:48.392097 kubelet[1798]: E0213 20:15:48.391946 1798 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"163eaa512ebd99053eeafac3ac9fb4eb2a5885c79247102d529a32c63e4e6b9d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:15:48.392223 kubelet[1798]: E0213 20:15:48.392168 1798 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"163eaa512ebd99053eeafac3ac9fb4eb2a5885c79247102d529a32c63e4e6b9d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-cn6j8" Feb 13 20:15:48.392282 kubelet[1798]: E0213 20:15:48.392212 1798 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"163eaa512ebd99053eeafac3ac9fb4eb2a5885c79247102d529a32c63e4e6b9d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-cn6j8" Feb 13 20:15:48.393879 kubelet[1798]: E0213 20:15:48.392856 1798 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-cn6j8_calico-system(b7ff3472-c8b7-4078-b230-5a0559383bf9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"csi-node-driver-cn6j8_calico-system(b7ff3472-c8b7-4078-b230-5a0559383bf9)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"163eaa512ebd99053eeafac3ac9fb4eb2a5885c79247102d529a32c63e4e6b9d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-cn6j8" podUID="b7ff3472-c8b7-4078-b230-5a0559383bf9" Feb 13 20:15:49.131600 kubelet[1798]: E0213 20:15:49.131398 1798 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:15:49.346148 kubelet[1798]: I0213 20:15:49.345370 1798 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="163eaa512ebd99053eeafac3ac9fb4eb2a5885c79247102d529a32c63e4e6b9d" Feb 13 20:15:49.346844 containerd[1470]: time="2025-02-13T20:15:49.346796526Z" level=info msg="StopPodSandbox for \"163eaa512ebd99053eeafac3ac9fb4eb2a5885c79247102d529a32c63e4e6b9d\"" Feb 13 20:15:49.347991 containerd[1470]: time="2025-02-13T20:15:49.347642605Z" level=info msg="Ensure that sandbox 163eaa512ebd99053eeafac3ac9fb4eb2a5885c79247102d529a32c63e4e6b9d in task-service has been cleanup successfully" Feb 13 20:15:49.389828 containerd[1470]: time="2025-02-13T20:15:49.388710474Z" level=error msg="StopPodSandbox for \"163eaa512ebd99053eeafac3ac9fb4eb2a5885c79247102d529a32c63e4e6b9d\" failed" error="failed to destroy network for sandbox \"163eaa512ebd99053eeafac3ac9fb4eb2a5885c79247102d529a32c63e4e6b9d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:15:49.389998 kubelet[1798]: E0213 20:15:49.389098 1798 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox 
\"163eaa512ebd99053eeafac3ac9fb4eb2a5885c79247102d529a32c63e4e6b9d\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="163eaa512ebd99053eeafac3ac9fb4eb2a5885c79247102d529a32c63e4e6b9d" Feb 13 20:15:49.389998 kubelet[1798]: E0213 20:15:49.389294 1798 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"163eaa512ebd99053eeafac3ac9fb4eb2a5885c79247102d529a32c63e4e6b9d"} Feb 13 20:15:49.389998 kubelet[1798]: E0213 20:15:49.389407 1798 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"b7ff3472-c8b7-4078-b230-5a0559383bf9\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"163eaa512ebd99053eeafac3ac9fb4eb2a5885c79247102d529a32c63e4e6b9d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 13 20:15:49.389998 kubelet[1798]: E0213 20:15:49.389441 1798 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"b7ff3472-c8b7-4078-b230-5a0559383bf9\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"163eaa512ebd99053eeafac3ac9fb4eb2a5885c79247102d529a32c63e4e6b9d\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-cn6j8" podUID="b7ff3472-c8b7-4078-b230-5a0559383bf9" Feb 13 20:15:50.132537 kubelet[1798]: E0213 20:15:50.132446 1798 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:15:51.133112 kubelet[1798]: E0213 
20:15:51.133057 1798 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:15:51.625042 systemd[1]: Created slice kubepods-besteffort-pod7106fb93_418a_4cbb_8a68_40fd69a99bfe.slice - libcontainer container kubepods-besteffort-pod7106fb93_418a_4cbb_8a68_40fd69a99bfe.slice. Feb 13 20:15:51.694023 kubelet[1798]: I0213 20:15:51.693383 1798 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kvfg7\" (UniqueName: \"kubernetes.io/projected/7106fb93-418a-4cbb-8a68-40fd69a99bfe-kube-api-access-kvfg7\") pod \"nginx-deployment-7fcdb87857-mwpdh\" (UID: \"7106fb93-418a-4cbb-8a68-40fd69a99bfe\") " pod="default/nginx-deployment-7fcdb87857-mwpdh" Feb 13 20:15:51.932326 containerd[1470]: time="2025-02-13T20:15:51.932139894Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-mwpdh,Uid:7106fb93-418a-4cbb-8a68-40fd69a99bfe,Namespace:default,Attempt:0,}" Feb 13 20:15:52.074803 containerd[1470]: time="2025-02-13T20:15:52.074747590Z" level=error msg="Failed to destroy network for sandbox \"bbc7a6142d4a699625dff47564182de19184864b2e609232e57099e1933afc61\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:15:52.077728 containerd[1470]: time="2025-02-13T20:15:52.077422487Z" level=error msg="encountered an error cleaning up failed sandbox \"bbc7a6142d4a699625dff47564182de19184864b2e609232e57099e1933afc61\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:15:52.077728 containerd[1470]: time="2025-02-13T20:15:52.077516872Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-mwpdh,Uid:7106fb93-418a-4cbb-8a68-40fd69a99bfe,Namespace:default,Attempt:0,} failed, error" error="failed to setup network for sandbox \"bbc7a6142d4a699625dff47564182de19184864b2e609232e57099e1933afc61\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:15:52.079480 kubelet[1798]: E0213 20:15:52.078044 1798 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bbc7a6142d4a699625dff47564182de19184864b2e609232e57099e1933afc61\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:15:52.079480 kubelet[1798]: E0213 20:15:52.078150 1798 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bbc7a6142d4a699625dff47564182de19184864b2e609232e57099e1933afc61\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-7fcdb87857-mwpdh" Feb 13 20:15:52.079480 kubelet[1798]: E0213 20:15:52.078184 1798 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bbc7a6142d4a699625dff47564182de19184864b2e609232e57099e1933afc61\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-7fcdb87857-mwpdh" Feb 13 20:15:52.078434 systemd[1]: 
run-containerd-io.containerd.grpc.v1.cri-sandboxes-bbc7a6142d4a699625dff47564182de19184864b2e609232e57099e1933afc61-shm.mount: Deactivated successfully. Feb 13 20:15:52.080194 kubelet[1798]: E0213 20:15:52.078373 1798 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-7fcdb87857-mwpdh_default(7106fb93-418a-4cbb-8a68-40fd69a99bfe)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-7fcdb87857-mwpdh_default(7106fb93-418a-4cbb-8a68-40fd69a99bfe)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"bbc7a6142d4a699625dff47564182de19184864b2e609232e57099e1933afc61\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-7fcdb87857-mwpdh" podUID="7106fb93-418a-4cbb-8a68-40fd69a99bfe" Feb 13 20:15:52.135006 kubelet[1798]: E0213 20:15:52.134928 1798 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:15:52.359082 kubelet[1798]: I0213 20:15:52.359017 1798 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bbc7a6142d4a699625dff47564182de19184864b2e609232e57099e1933afc61" Feb 13 20:15:52.360665 containerd[1470]: time="2025-02-13T20:15:52.360078974Z" level=info msg="StopPodSandbox for \"bbc7a6142d4a699625dff47564182de19184864b2e609232e57099e1933afc61\"" Feb 13 20:15:52.360665 containerd[1470]: time="2025-02-13T20:15:52.360325926Z" level=info msg="Ensure that sandbox bbc7a6142d4a699625dff47564182de19184864b2e609232e57099e1933afc61 in task-service has been cleanup successfully" Feb 13 20:15:52.420093 containerd[1470]: time="2025-02-13T20:15:52.420030692Z" level=error msg="StopPodSandbox for \"bbc7a6142d4a699625dff47564182de19184864b2e609232e57099e1933afc61\" failed" error="failed to destroy network 
for sandbox \"bbc7a6142d4a699625dff47564182de19184864b2e609232e57099e1933afc61\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Feb 13 20:15:52.420734 kubelet[1798]: E0213 20:15:52.420686 1798 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"bbc7a6142d4a699625dff47564182de19184864b2e609232e57099e1933afc61\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="bbc7a6142d4a699625dff47564182de19184864b2e609232e57099e1933afc61" Feb 13 20:15:52.420934 kubelet[1798]: E0213 20:15:52.420911 1798 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"bbc7a6142d4a699625dff47564182de19184864b2e609232e57099e1933afc61"} Feb 13 20:15:52.421076 kubelet[1798]: E0213 20:15:52.421057 1798 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"7106fb93-418a-4cbb-8a68-40fd69a99bfe\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"bbc7a6142d4a699625dff47564182de19184864b2e609232e57099e1933afc61\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Feb 13 20:15:52.421412 kubelet[1798]: E0213 20:15:52.421357 1798 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"7106fb93-418a-4cbb-8a68-40fd69a99bfe\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"bbc7a6142d4a699625dff47564182de19184864b2e609232e57099e1933afc61\\\": plugin type=\\\"calico\\\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-7fcdb87857-mwpdh" podUID="7106fb93-418a-4cbb-8a68-40fd69a99bfe" Feb 13 20:15:53.136023 kubelet[1798]: E0213 20:15:53.135917 1798 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:15:54.112553 kubelet[1798]: E0213 20:15:54.112452 1798 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:15:54.137233 kubelet[1798]: E0213 20:15:54.137069 1798 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:15:54.663929 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2207584374.mount: Deactivated successfully. Feb 13 20:15:54.707568 containerd[1470]: time="2025-02-13T20:15:54.706172677Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:15:54.707568 containerd[1470]: time="2025-02-13T20:15:54.706982092Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.1: active requests=0, bytes read=142742010" Feb 13 20:15:54.707568 containerd[1470]: time="2025-02-13T20:15:54.707349928Z" level=info msg="ImageCreate event name:\"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:15:54.710117 containerd[1470]: time="2025-02-13T20:15:54.710050506Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:15:54.710862 containerd[1470]: time="2025-02-13T20:15:54.710807413Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.1\" with 
image id \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node@sha256:99c3917516efe1f807a0cfdf2d14b628b7c5cc6bd8a9ee5a253154f31756bea1\", size \"142741872\" in 6.365114042s" Feb 13 20:15:54.710862 containerd[1470]: time="2025-02-13T20:15:54.710861436Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.1\" returns image reference \"sha256:feb26d4585d68e875d9bd9bd6c27ea9f2d5c9ed9ef70f8b8cb0ebb0559a1d664\"" Feb 13 20:15:54.738543 containerd[1470]: time="2025-02-13T20:15:54.738501860Z" level=info msg="CreateContainer within sandbox \"54e982419b0230904d8ff19f127293320e304a7000f5815a1237e32580057590\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Feb 13 20:15:54.759732 containerd[1470]: time="2025-02-13T20:15:54.759679297Z" level=info msg="CreateContainer within sandbox \"54e982419b0230904d8ff19f127293320e304a7000f5815a1237e32580057590\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"c24afd7e4f99935fde440dcd06bc05441febcc59e1cfca5423f3f094907cb275\"" Feb 13 20:15:54.761243 containerd[1470]: time="2025-02-13T20:15:54.760716971Z" level=info msg="StartContainer for \"c24afd7e4f99935fde440dcd06bc05441febcc59e1cfca5423f3f094907cb275\"" Feb 13 20:15:54.857480 systemd[1]: Started cri-containerd-c24afd7e4f99935fde440dcd06bc05441febcc59e1cfca5423f3f094907cb275.scope - libcontainer container c24afd7e4f99935fde440dcd06bc05441febcc59e1cfca5423f3f094907cb275. Feb 13 20:15:54.899382 containerd[1470]: time="2025-02-13T20:15:54.899244518Z" level=info msg="StartContainer for \"c24afd7e4f99935fde440dcd06bc05441febcc59e1cfca5423f3f094907cb275\" returns successfully" Feb 13 20:15:55.005738 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Feb 13 20:15:55.005916 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
Feb 13 20:15:55.137765 kubelet[1798]: E0213 20:15:55.137700 1798 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:15:55.370905 kubelet[1798]: E0213 20:15:55.369063 1798 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Feb 13 20:15:55.400022 kubelet[1798]: I0213 20:15:55.399834 1798 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-c52cx" podStartSLOduration=4.228236336 podStartE2EDuration="21.399815699s" podCreationTimestamp="2025-02-13 20:15:34 +0000 UTC" firstStartedPulling="2025-02-13 20:15:37.541147787 +0000 UTC m=+4.027699062" lastFinishedPulling="2025-02-13 20:15:54.712727173 +0000 UTC m=+21.199278425" observedRunningTime="2025-02-13 20:15:55.399760141 +0000 UTC m=+21.886311415" watchObservedRunningTime="2025-02-13 20:15:55.399815699 +0000 UTC m=+21.886366973" Feb 13 20:15:56.138830 kubelet[1798]: E0213 20:15:56.138757 1798 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:15:56.371636 kubelet[1798]: E0213 20:15:56.371219 1798 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Feb 13 20:15:56.839041 kernel: bpftool[2656]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Feb 13 20:15:57.139471 kubelet[1798]: E0213 20:15:57.139249 1798 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:15:57.163636 systemd-networkd[1367]: vxlan.calico: Link UP Feb 13 20:15:57.163648 systemd-networkd[1367]: vxlan.calico: Gained carrier Feb 13 20:15:57.376455 kubelet[1798]: E0213 20:15:57.376400 1798 dns.go:153] "Nameserver limits 
exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Feb 13 20:15:57.424961 systemd[1]: run-containerd-runc-k8s.io-c24afd7e4f99935fde440dcd06bc05441febcc59e1cfca5423f3f094907cb275-runc.T9HMVN.mount: Deactivated successfully. Feb 13 20:15:58.140431 kubelet[1798]: E0213 20:15:58.140338 1798 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:15:58.550923 systemd-networkd[1367]: vxlan.calico: Gained IPv6LL Feb 13 20:15:59.141015 kubelet[1798]: E0213 20:15:59.140932 1798 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:16:00.141659 kubelet[1798]: E0213 20:16:00.141591 1798 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:16:01.142011 kubelet[1798]: E0213 20:16:01.141856 1798 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:16:02.146533 kubelet[1798]: E0213 20:16:02.146402 1798 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:16:03.149698 kubelet[1798]: E0213 20:16:03.147118 1798 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:16:03.265516 containerd[1470]: time="2025-02-13T20:16:03.265012917Z" level=info msg="StopPodSandbox for \"163eaa512ebd99053eeafac3ac9fb4eb2a5885c79247102d529a32c63e4e6b9d\"" Feb 13 20:16:03.542189 containerd[1470]: 2025-02-13 20:16:03.401 [INFO][2772] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="163eaa512ebd99053eeafac3ac9fb4eb2a5885c79247102d529a32c63e4e6b9d" Feb 13 20:16:03.542189 containerd[1470]: 2025-02-13 20:16:03.402 [INFO][2772] cni-plugin/dataplane_linux.go 559: Deleting 
workload's device in netns. ContainerID="163eaa512ebd99053eeafac3ac9fb4eb2a5885c79247102d529a32c63e4e6b9d" iface="eth0" netns="/var/run/netns/cni-36b02a48-075e-dcfc-e7f4-444a48fba8c3" Feb 13 20:16:03.542189 containerd[1470]: 2025-02-13 20:16:03.403 [INFO][2772] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="163eaa512ebd99053eeafac3ac9fb4eb2a5885c79247102d529a32c63e4e6b9d" iface="eth0" netns="/var/run/netns/cni-36b02a48-075e-dcfc-e7f4-444a48fba8c3" Feb 13 20:16:03.542189 containerd[1470]: 2025-02-13 20:16:03.407 [INFO][2772] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="163eaa512ebd99053eeafac3ac9fb4eb2a5885c79247102d529a32c63e4e6b9d" iface="eth0" netns="/var/run/netns/cni-36b02a48-075e-dcfc-e7f4-444a48fba8c3" Feb 13 20:16:03.542189 containerd[1470]: 2025-02-13 20:16:03.407 [INFO][2772] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="163eaa512ebd99053eeafac3ac9fb4eb2a5885c79247102d529a32c63e4e6b9d" Feb 13 20:16:03.542189 containerd[1470]: 2025-02-13 20:16:03.407 [INFO][2772] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="163eaa512ebd99053eeafac3ac9fb4eb2a5885c79247102d529a32c63e4e6b9d" Feb 13 20:16:03.542189 containerd[1470]: 2025-02-13 20:16:03.507 [INFO][2778] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="163eaa512ebd99053eeafac3ac9fb4eb2a5885c79247102d529a32c63e4e6b9d" HandleID="k8s-pod-network.163eaa512ebd99053eeafac3ac9fb4eb2a5885c79247102d529a32c63e4e6b9d" Workload="147.182.243.214-k8s-csi--node--driver--cn6j8-eth0" Feb 13 20:16:03.542189 containerd[1470]: 2025-02-13 20:16:03.507 [INFO][2778] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 20:16:03.542189 containerd[1470]: 2025-02-13 20:16:03.507 [INFO][2778] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Feb 13 20:16:03.542189 containerd[1470]: 2025-02-13 20:16:03.530 [WARNING][2778] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="163eaa512ebd99053eeafac3ac9fb4eb2a5885c79247102d529a32c63e4e6b9d" HandleID="k8s-pod-network.163eaa512ebd99053eeafac3ac9fb4eb2a5885c79247102d529a32c63e4e6b9d" Workload="147.182.243.214-k8s-csi--node--driver--cn6j8-eth0" Feb 13 20:16:03.542189 containerd[1470]: 2025-02-13 20:16:03.530 [INFO][2778] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="163eaa512ebd99053eeafac3ac9fb4eb2a5885c79247102d529a32c63e4e6b9d" HandleID="k8s-pod-network.163eaa512ebd99053eeafac3ac9fb4eb2a5885c79247102d529a32c63e4e6b9d" Workload="147.182.243.214-k8s-csi--node--driver--cn6j8-eth0" Feb 13 20:16:03.542189 containerd[1470]: 2025-02-13 20:16:03.535 [INFO][2778] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 20:16:03.542189 containerd[1470]: 2025-02-13 20:16:03.539 [INFO][2772] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="163eaa512ebd99053eeafac3ac9fb4eb2a5885c79247102d529a32c63e4e6b9d" Feb 13 20:16:03.545087 containerd[1470]: time="2025-02-13T20:16:03.544123299Z" level=info msg="TearDown network for sandbox \"163eaa512ebd99053eeafac3ac9fb4eb2a5885c79247102d529a32c63e4e6b9d\" successfully" Feb 13 20:16:03.545087 containerd[1470]: time="2025-02-13T20:16:03.544178871Z" level=info msg="StopPodSandbox for \"163eaa512ebd99053eeafac3ac9fb4eb2a5885c79247102d529a32c63e4e6b9d\" returns successfully" Feb 13 20:16:03.548040 containerd[1470]: time="2025-02-13T20:16:03.546287434Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-cn6j8,Uid:b7ff3472-c8b7-4078-b230-5a0559383bf9,Namespace:calico-system,Attempt:1,}" Feb 13 20:16:03.549993 systemd[1]: run-netns-cni\x2d36b02a48\x2d075e\x2ddcfc\x2de7f4\x2d444a48fba8c3.mount: Deactivated successfully. 
Feb 13 20:16:03.839851 update_engine[1448]: I20250213 20:16:03.839039 1448 update_attempter.cc:509] Updating boot flags... Feb 13 20:16:03.842503 systemd-networkd[1367]: caliccd4eb57065: Link UP Feb 13 20:16:03.844466 systemd-networkd[1367]: caliccd4eb57065: Gained carrier Feb 13 20:16:03.873305 containerd[1470]: 2025-02-13 20:16:03.657 [INFO][2786] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {147.182.243.214-k8s-csi--node--driver--cn6j8-eth0 csi-node-driver- calico-system b7ff3472-c8b7-4078-b230-5a0559383bf9 1097 0 2025-02-13 20:15:34 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:84cddb44f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s 147.182.243.214 csi-node-driver-cn6j8 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] caliccd4eb57065 [] []}} ContainerID="8c47cfaa1fcd82673739abc9534ee9f7d2a19feda00458d3312e6248d6c8b396" Namespace="calico-system" Pod="csi-node-driver-cn6j8" WorkloadEndpoint="147.182.243.214-k8s-csi--node--driver--cn6j8-" Feb 13 20:16:03.873305 containerd[1470]: 2025-02-13 20:16:03.658 [INFO][2786] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="8c47cfaa1fcd82673739abc9534ee9f7d2a19feda00458d3312e6248d6c8b396" Namespace="calico-system" Pod="csi-node-driver-cn6j8" WorkloadEndpoint="147.182.243.214-k8s-csi--node--driver--cn6j8-eth0" Feb 13 20:16:03.873305 containerd[1470]: 2025-02-13 20:16:03.748 [INFO][2797] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8c47cfaa1fcd82673739abc9534ee9f7d2a19feda00458d3312e6248d6c8b396" HandleID="k8s-pod-network.8c47cfaa1fcd82673739abc9534ee9f7d2a19feda00458d3312e6248d6c8b396" Workload="147.182.243.214-k8s-csi--node--driver--cn6j8-eth0" Feb 13 20:16:03.873305 
containerd[1470]: 2025-02-13 20:16:03.772 [INFO][2797] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="8c47cfaa1fcd82673739abc9534ee9f7d2a19feda00458d3312e6248d6c8b396" HandleID="k8s-pod-network.8c47cfaa1fcd82673739abc9534ee9f7d2a19feda00458d3312e6248d6c8b396" Workload="147.182.243.214-k8s-csi--node--driver--cn6j8-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000291350), Attrs:map[string]string{"namespace":"calico-system", "node":"147.182.243.214", "pod":"csi-node-driver-cn6j8", "timestamp":"2025-02-13 20:16:03.748677479 +0000 UTC"}, Hostname:"147.182.243.214", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 13 20:16:03.873305 containerd[1470]: 2025-02-13 20:16:03.772 [INFO][2797] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 20:16:03.873305 containerd[1470]: 2025-02-13 20:16:03.772 [INFO][2797] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Feb 13 20:16:03.873305 containerd[1470]: 2025-02-13 20:16:03.772 [INFO][2797] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '147.182.243.214' Feb 13 20:16:03.873305 containerd[1470]: 2025-02-13 20:16:03.777 [INFO][2797] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.8c47cfaa1fcd82673739abc9534ee9f7d2a19feda00458d3312e6248d6c8b396" host="147.182.243.214" Feb 13 20:16:03.873305 containerd[1470]: 2025-02-13 20:16:03.786 [INFO][2797] ipam/ipam.go 372: Looking up existing affinities for host host="147.182.243.214" Feb 13 20:16:03.873305 containerd[1470]: 2025-02-13 20:16:03.796 [INFO][2797] ipam/ipam.go 489: Trying affinity for 192.168.54.192/26 host="147.182.243.214" Feb 13 20:16:03.873305 containerd[1470]: 2025-02-13 20:16:03.800 [INFO][2797] ipam/ipam.go 155: Attempting to load block cidr=192.168.54.192/26 host="147.182.243.214" Feb 13 20:16:03.873305 containerd[1470]: 2025-02-13 20:16:03.805 [INFO][2797] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.54.192/26 host="147.182.243.214" Feb 13 20:16:03.873305 containerd[1470]: 2025-02-13 20:16:03.805 [INFO][2797] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.54.192/26 handle="k8s-pod-network.8c47cfaa1fcd82673739abc9534ee9f7d2a19feda00458d3312e6248d6c8b396" host="147.182.243.214" Feb 13 20:16:03.873305 containerd[1470]: 2025-02-13 20:16:03.809 [INFO][2797] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.8c47cfaa1fcd82673739abc9534ee9f7d2a19feda00458d3312e6248d6c8b396 Feb 13 20:16:03.873305 containerd[1470]: 2025-02-13 20:16:03.818 [INFO][2797] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.54.192/26 handle="k8s-pod-network.8c47cfaa1fcd82673739abc9534ee9f7d2a19feda00458d3312e6248d6c8b396" host="147.182.243.214" Feb 13 20:16:03.873305 containerd[1470]: 2025-02-13 20:16:03.829 [INFO][2797] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.54.193/26] block=192.168.54.192/26 
handle="k8s-pod-network.8c47cfaa1fcd82673739abc9534ee9f7d2a19feda00458d3312e6248d6c8b396" host="147.182.243.214" Feb 13 20:16:03.873305 containerd[1470]: 2025-02-13 20:16:03.829 [INFO][2797] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.54.193/26] handle="k8s-pod-network.8c47cfaa1fcd82673739abc9534ee9f7d2a19feda00458d3312e6248d6c8b396" host="147.182.243.214" Feb 13 20:16:03.873305 containerd[1470]: 2025-02-13 20:16:03.829 [INFO][2797] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 20:16:03.873305 containerd[1470]: 2025-02-13 20:16:03.829 [INFO][2797] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.54.193/26] IPv6=[] ContainerID="8c47cfaa1fcd82673739abc9534ee9f7d2a19feda00458d3312e6248d6c8b396" HandleID="k8s-pod-network.8c47cfaa1fcd82673739abc9534ee9f7d2a19feda00458d3312e6248d6c8b396" Workload="147.182.243.214-k8s-csi--node--driver--cn6j8-eth0" Feb 13 20:16:03.875802 containerd[1470]: 2025-02-13 20:16:03.833 [INFO][2786] cni-plugin/k8s.go 386: Populated endpoint ContainerID="8c47cfaa1fcd82673739abc9534ee9f7d2a19feda00458d3312e6248d6c8b396" Namespace="calico-system" Pod="csi-node-driver-cn6j8" WorkloadEndpoint="147.182.243.214-k8s-csi--node--driver--cn6j8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"147.182.243.214-k8s-csi--node--driver--cn6j8-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"b7ff3472-c8b7-4078-b230-5a0559383bf9", ResourceVersion:"1097", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 15, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"84cddb44f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"147.182.243.214", ContainerID:"", Pod:"csi-node-driver-cn6j8", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.54.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"caliccd4eb57065", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 20:16:03.875802 containerd[1470]: 2025-02-13 20:16:03.833 [INFO][2786] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.54.193/32] ContainerID="8c47cfaa1fcd82673739abc9534ee9f7d2a19feda00458d3312e6248d6c8b396" Namespace="calico-system" Pod="csi-node-driver-cn6j8" WorkloadEndpoint="147.182.243.214-k8s-csi--node--driver--cn6j8-eth0" Feb 13 20:16:03.875802 containerd[1470]: 2025-02-13 20:16:03.833 [INFO][2786] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliccd4eb57065 ContainerID="8c47cfaa1fcd82673739abc9534ee9f7d2a19feda00458d3312e6248d6c8b396" Namespace="calico-system" Pod="csi-node-driver-cn6j8" WorkloadEndpoint="147.182.243.214-k8s-csi--node--driver--cn6j8-eth0" Feb 13 20:16:03.875802 containerd[1470]: 2025-02-13 20:16:03.846 [INFO][2786] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="8c47cfaa1fcd82673739abc9534ee9f7d2a19feda00458d3312e6248d6c8b396" Namespace="calico-system" Pod="csi-node-driver-cn6j8" WorkloadEndpoint="147.182.243.214-k8s-csi--node--driver--cn6j8-eth0" Feb 13 20:16:03.875802 containerd[1470]: 2025-02-13 20:16:03.847 [INFO][2786] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint 
ContainerID="8c47cfaa1fcd82673739abc9534ee9f7d2a19feda00458d3312e6248d6c8b396" Namespace="calico-system" Pod="csi-node-driver-cn6j8" WorkloadEndpoint="147.182.243.214-k8s-csi--node--driver--cn6j8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"147.182.243.214-k8s-csi--node--driver--cn6j8-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"b7ff3472-c8b7-4078-b230-5a0559383bf9", ResourceVersion:"1097", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 15, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"84cddb44f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"147.182.243.214", ContainerID:"8c47cfaa1fcd82673739abc9534ee9f7d2a19feda00458d3312e6248d6c8b396", Pod:"csi-node-driver-cn6j8", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.54.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"caliccd4eb57065", MAC:"3e:24:c8:e1:77:d5", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 20:16:03.875802 containerd[1470]: 2025-02-13 20:16:03.866 [INFO][2786] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="8c47cfaa1fcd82673739abc9534ee9f7d2a19feda00458d3312e6248d6c8b396" Namespace="calico-system" 
Pod="csi-node-driver-cn6j8" WorkloadEndpoint="147.182.243.214-k8s-csi--node--driver--cn6j8-eth0" Feb 13 20:16:03.951107 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (2805) Feb 13 20:16:04.031862 containerd[1470]: time="2025-02-13T20:16:04.030600069Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 20:16:04.031862 containerd[1470]: time="2025-02-13T20:16:04.030688338Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 20:16:04.031862 containerd[1470]: time="2025-02-13T20:16:04.030715204Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:16:04.031862 containerd[1470]: time="2025-02-13T20:16:04.030921190Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:16:04.067312 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 37 scanned by (udev-worker) (2816) Feb 13 20:16:04.149678 kubelet[1798]: E0213 20:16:04.149517 1798 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:16:04.154501 systemd[1]: Started cri-containerd-8c47cfaa1fcd82673739abc9534ee9f7d2a19feda00458d3312e6248d6c8b396.scope - libcontainer container 8c47cfaa1fcd82673739abc9534ee9f7d2a19feda00458d3312e6248d6c8b396. 
Feb 13 20:16:04.280477 containerd[1470]: time="2025-02-13T20:16:04.280428736Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-cn6j8,Uid:b7ff3472-c8b7-4078-b230-5a0559383bf9,Namespace:calico-system,Attempt:1,} returns sandbox id \"8c47cfaa1fcd82673739abc9534ee9f7d2a19feda00458d3312e6248d6c8b396\"" Feb 13 20:16:04.283480 containerd[1470]: time="2025-02-13T20:16:04.283390197Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\"" Feb 13 20:16:05.150329 kubelet[1798]: E0213 20:16:05.150220 1798 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:16:05.657688 systemd-networkd[1367]: caliccd4eb57065: Gained IPv6LL Feb 13 20:16:05.937126 containerd[1470]: time="2025-02-13T20:16:05.936785371Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:16:05.943122 containerd[1470]: time="2025-02-13T20:16:05.942695502Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.1: active requests=0, bytes read=7902632" Feb 13 20:16:05.944469 containerd[1470]: time="2025-02-13T20:16:05.944330493Z" level=info msg="ImageCreate event name:\"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:16:05.949739 containerd[1470]: time="2025-02-13T20:16:05.949666422Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:16:05.952901 containerd[1470]: time="2025-02-13T20:16:05.952459393Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.1\" with image id \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.1\", repo digest 
\"ghcr.io/flatcar/calico/csi@sha256:eaa7e01fb16b603c155a67b81f16992281db7f831684c7b2081d3434587a7ff3\", size \"9395716\" in 1.669003805s" Feb 13 20:16:05.952901 containerd[1470]: time="2025-02-13T20:16:05.952530741Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.1\" returns image reference \"sha256:bda8c42e04758c4f061339e213f50ccdc7502c4176fbf631aa12357e62b63540\"" Feb 13 20:16:05.956969 containerd[1470]: time="2025-02-13T20:16:05.956910361Z" level=info msg="CreateContainer within sandbox \"8c47cfaa1fcd82673739abc9534ee9f7d2a19feda00458d3312e6248d6c8b396\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Feb 13 20:16:05.989271 containerd[1470]: time="2025-02-13T20:16:05.988066446Z" level=info msg="CreateContainer within sandbox \"8c47cfaa1fcd82673739abc9534ee9f7d2a19feda00458d3312e6248d6c8b396\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"97858c3a647f5a387c1d4d963f9b1185023feca1e3ed1707e2bf799b737a60da\"" Feb 13 20:16:05.991069 containerd[1470]: time="2025-02-13T20:16:05.990081479Z" level=info msg="StartContainer for \"97858c3a647f5a387c1d4d963f9b1185023feca1e3ed1707e2bf799b737a60da\"" Feb 13 20:16:06.056563 systemd[1]: Started cri-containerd-97858c3a647f5a387c1d4d963f9b1185023feca1e3ed1707e2bf799b737a60da.scope - libcontainer container 97858c3a647f5a387c1d4d963f9b1185023feca1e3ed1707e2bf799b737a60da. 
Feb 13 20:16:06.123709 containerd[1470]: time="2025-02-13T20:16:06.123556714Z" level=info msg="StartContainer for \"97858c3a647f5a387c1d4d963f9b1185023feca1e3ed1707e2bf799b737a60da\" returns successfully" Feb 13 20:16:06.128789 containerd[1470]: time="2025-02-13T20:16:06.128665567Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\"" Feb 13 20:16:06.151405 kubelet[1798]: E0213 20:16:06.151253 1798 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:16:07.152416 kubelet[1798]: E0213 20:16:07.152296 1798 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:16:07.265119 containerd[1470]: time="2025-02-13T20:16:07.264399651Z" level=info msg="StopPodSandbox for \"bbc7a6142d4a699625dff47564182de19184864b2e609232e57099e1933afc61\"" Feb 13 20:16:07.414050 containerd[1470]: 2025-02-13 20:16:07.363 [INFO][2921] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="bbc7a6142d4a699625dff47564182de19184864b2e609232e57099e1933afc61" Feb 13 20:16:07.414050 containerd[1470]: 2025-02-13 20:16:07.363 [INFO][2921] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="bbc7a6142d4a699625dff47564182de19184864b2e609232e57099e1933afc61" iface="eth0" netns="/var/run/netns/cni-bb4e58be-f6f3-869e-1fc5-305e02c08348" Feb 13 20:16:07.414050 containerd[1470]: 2025-02-13 20:16:07.363 [INFO][2921] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="bbc7a6142d4a699625dff47564182de19184864b2e609232e57099e1933afc61" iface="eth0" netns="/var/run/netns/cni-bb4e58be-f6f3-869e-1fc5-305e02c08348" Feb 13 20:16:07.414050 containerd[1470]: 2025-02-13 20:16:07.364 [INFO][2921] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="bbc7a6142d4a699625dff47564182de19184864b2e609232e57099e1933afc61" iface="eth0" netns="/var/run/netns/cni-bb4e58be-f6f3-869e-1fc5-305e02c08348" Feb 13 20:16:07.414050 containerd[1470]: 2025-02-13 20:16:07.364 [INFO][2921] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="bbc7a6142d4a699625dff47564182de19184864b2e609232e57099e1933afc61" Feb 13 20:16:07.414050 containerd[1470]: 2025-02-13 20:16:07.364 [INFO][2921] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="bbc7a6142d4a699625dff47564182de19184864b2e609232e57099e1933afc61" Feb 13 20:16:07.414050 containerd[1470]: 2025-02-13 20:16:07.393 [INFO][2927] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="bbc7a6142d4a699625dff47564182de19184864b2e609232e57099e1933afc61" HandleID="k8s-pod-network.bbc7a6142d4a699625dff47564182de19184864b2e609232e57099e1933afc61" Workload="147.182.243.214-k8s-nginx--deployment--7fcdb87857--mwpdh-eth0" Feb 13 20:16:07.414050 containerd[1470]: 2025-02-13 20:16:07.394 [INFO][2927] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 20:16:07.414050 containerd[1470]: 2025-02-13 20:16:07.394 [INFO][2927] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 20:16:07.414050 containerd[1470]: 2025-02-13 20:16:07.404 [WARNING][2927] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="bbc7a6142d4a699625dff47564182de19184864b2e609232e57099e1933afc61" HandleID="k8s-pod-network.bbc7a6142d4a699625dff47564182de19184864b2e609232e57099e1933afc61" Workload="147.182.243.214-k8s-nginx--deployment--7fcdb87857--mwpdh-eth0" Feb 13 20:16:07.414050 containerd[1470]: 2025-02-13 20:16:07.404 [INFO][2927] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="bbc7a6142d4a699625dff47564182de19184864b2e609232e57099e1933afc61" HandleID="k8s-pod-network.bbc7a6142d4a699625dff47564182de19184864b2e609232e57099e1933afc61" Workload="147.182.243.214-k8s-nginx--deployment--7fcdb87857--mwpdh-eth0" Feb 13 20:16:07.414050 containerd[1470]: 2025-02-13 20:16:07.409 [INFO][2927] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 20:16:07.414050 containerd[1470]: 2025-02-13 20:16:07.411 [INFO][2921] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="bbc7a6142d4a699625dff47564182de19184864b2e609232e57099e1933afc61" Feb 13 20:16:07.418217 containerd[1470]: time="2025-02-13T20:16:07.416174973Z" level=info msg="TearDown network for sandbox \"bbc7a6142d4a699625dff47564182de19184864b2e609232e57099e1933afc61\" successfully" Feb 13 20:16:07.418217 containerd[1470]: time="2025-02-13T20:16:07.416230164Z" level=info msg="StopPodSandbox for \"bbc7a6142d4a699625dff47564182de19184864b2e609232e57099e1933afc61\" returns successfully" Feb 13 20:16:07.418217 containerd[1470]: time="2025-02-13T20:16:07.417164797Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-mwpdh,Uid:7106fb93-418a-4cbb-8a68-40fd69a99bfe,Namespace:default,Attempt:1,}" Feb 13 20:16:07.418946 systemd[1]: run-netns-cni\x2dbb4e58be\x2df6f3\x2d869e\x2d1fc5\x2d305e02c08348.mount: Deactivated successfully. 
Feb 13 20:16:07.684755 systemd-networkd[1367]: calie06b2d08f01: Link UP Feb 13 20:16:07.690298 systemd-networkd[1367]: calie06b2d08f01: Gained carrier Feb 13 20:16:07.707160 containerd[1470]: 2025-02-13 20:16:07.512 [INFO][2933] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {147.182.243.214-k8s-nginx--deployment--7fcdb87857--mwpdh-eth0 nginx-deployment-7fcdb87857- default 7106fb93-418a-4cbb-8a68-40fd69a99bfe 1115 0 2025-02-13 20:15:51 +0000 UTC map[app:nginx pod-template-hash:7fcdb87857 projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s 147.182.243.214 nginx-deployment-7fcdb87857-mwpdh eth0 default [] [] [kns.default ksa.default.default] calie06b2d08f01 [] []}} ContainerID="5932c335eb04ceea93d924d1ed4c89ec8b6c3be731c1c9765c57dc6353d13734" Namespace="default" Pod="nginx-deployment-7fcdb87857-mwpdh" WorkloadEndpoint="147.182.243.214-k8s-nginx--deployment--7fcdb87857--mwpdh-" Feb 13 20:16:07.707160 containerd[1470]: 2025-02-13 20:16:07.512 [INFO][2933] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="5932c335eb04ceea93d924d1ed4c89ec8b6c3be731c1c9765c57dc6353d13734" Namespace="default" Pod="nginx-deployment-7fcdb87857-mwpdh" WorkloadEndpoint="147.182.243.214-k8s-nginx--deployment--7fcdb87857--mwpdh-eth0" Feb 13 20:16:07.707160 containerd[1470]: 2025-02-13 20:16:07.574 [INFO][2944] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="5932c335eb04ceea93d924d1ed4c89ec8b6c3be731c1c9765c57dc6353d13734" HandleID="k8s-pod-network.5932c335eb04ceea93d924d1ed4c89ec8b6c3be731c1c9765c57dc6353d13734" Workload="147.182.243.214-k8s-nginx--deployment--7fcdb87857--mwpdh-eth0" Feb 13 20:16:07.707160 containerd[1470]: 2025-02-13 20:16:07.599 [INFO][2944] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="5932c335eb04ceea93d924d1ed4c89ec8b6c3be731c1c9765c57dc6353d13734" 
HandleID="k8s-pod-network.5932c335eb04ceea93d924d1ed4c89ec8b6c3be731c1c9765c57dc6353d13734" Workload="147.182.243.214-k8s-nginx--deployment--7fcdb87857--mwpdh-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000051950), Attrs:map[string]string{"namespace":"default", "node":"147.182.243.214", "pod":"nginx-deployment-7fcdb87857-mwpdh", "timestamp":"2025-02-13 20:16:07.574575529 +0000 UTC"}, Hostname:"147.182.243.214", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 13 20:16:07.707160 containerd[1470]: 2025-02-13 20:16:07.599 [INFO][2944] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 20:16:07.707160 containerd[1470]: 2025-02-13 20:16:07.599 [INFO][2944] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 20:16:07.707160 containerd[1470]: 2025-02-13 20:16:07.599 [INFO][2944] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '147.182.243.214' Feb 13 20:16:07.707160 containerd[1470]: 2025-02-13 20:16:07.604 [INFO][2944] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.5932c335eb04ceea93d924d1ed4c89ec8b6c3be731c1c9765c57dc6353d13734" host="147.182.243.214" Feb 13 20:16:07.707160 containerd[1470]: 2025-02-13 20:16:07.614 [INFO][2944] ipam/ipam.go 372: Looking up existing affinities for host host="147.182.243.214" Feb 13 20:16:07.707160 containerd[1470]: 2025-02-13 20:16:07.626 [INFO][2944] ipam/ipam.go 489: Trying affinity for 192.168.54.192/26 host="147.182.243.214" Feb 13 20:16:07.707160 containerd[1470]: 2025-02-13 20:16:07.631 [INFO][2944] ipam/ipam.go 155: Attempting to load block cidr=192.168.54.192/26 host="147.182.243.214" Feb 13 20:16:07.707160 containerd[1470]: 2025-02-13 20:16:07.636 [INFO][2944] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.54.192/26 host="147.182.243.214" 
Feb 13 20:16:07.707160 containerd[1470]: 2025-02-13 20:16:07.636 [INFO][2944] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.54.192/26 handle="k8s-pod-network.5932c335eb04ceea93d924d1ed4c89ec8b6c3be731c1c9765c57dc6353d13734" host="147.182.243.214" Feb 13 20:16:07.707160 containerd[1470]: 2025-02-13 20:16:07.642 [INFO][2944] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.5932c335eb04ceea93d924d1ed4c89ec8b6c3be731c1c9765c57dc6353d13734 Feb 13 20:16:07.707160 containerd[1470]: 2025-02-13 20:16:07.654 [INFO][2944] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.54.192/26 handle="k8s-pod-network.5932c335eb04ceea93d924d1ed4c89ec8b6c3be731c1c9765c57dc6353d13734" host="147.182.243.214" Feb 13 20:16:07.707160 containerd[1470]: 2025-02-13 20:16:07.668 [INFO][2944] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.54.194/26] block=192.168.54.192/26 handle="k8s-pod-network.5932c335eb04ceea93d924d1ed4c89ec8b6c3be731c1c9765c57dc6353d13734" host="147.182.243.214" Feb 13 20:16:07.707160 containerd[1470]: 2025-02-13 20:16:07.668 [INFO][2944] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.54.194/26] handle="k8s-pod-network.5932c335eb04ceea93d924d1ed4c89ec8b6c3be731c1c9765c57dc6353d13734" host="147.182.243.214" Feb 13 20:16:07.707160 containerd[1470]: 2025-02-13 20:16:07.668 [INFO][2944] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Feb 13 20:16:07.707160 containerd[1470]: 2025-02-13 20:16:07.668 [INFO][2944] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.54.194/26] IPv6=[] ContainerID="5932c335eb04ceea93d924d1ed4c89ec8b6c3be731c1c9765c57dc6353d13734" HandleID="k8s-pod-network.5932c335eb04ceea93d924d1ed4c89ec8b6c3be731c1c9765c57dc6353d13734" Workload="147.182.243.214-k8s-nginx--deployment--7fcdb87857--mwpdh-eth0" Feb 13 20:16:07.713022 containerd[1470]: 2025-02-13 20:16:07.676 [INFO][2933] cni-plugin/k8s.go 386: Populated endpoint ContainerID="5932c335eb04ceea93d924d1ed4c89ec8b6c3be731c1c9765c57dc6353d13734" Namespace="default" Pod="nginx-deployment-7fcdb87857-mwpdh" WorkloadEndpoint="147.182.243.214-k8s-nginx--deployment--7fcdb87857--mwpdh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"147.182.243.214-k8s-nginx--deployment--7fcdb87857--mwpdh-eth0", GenerateName:"nginx-deployment-7fcdb87857-", Namespace:"default", SelfLink:"", UID:"7106fb93-418a-4cbb-8a68-40fd69a99bfe", ResourceVersion:"1115", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 15, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"7fcdb87857", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"147.182.243.214", ContainerID:"", Pod:"nginx-deployment-7fcdb87857-mwpdh", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.54.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"calie06b2d08f01", 
MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 20:16:07.713022 containerd[1470]: 2025-02-13 20:16:07.677 [INFO][2933] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.54.194/32] ContainerID="5932c335eb04ceea93d924d1ed4c89ec8b6c3be731c1c9765c57dc6353d13734" Namespace="default" Pod="nginx-deployment-7fcdb87857-mwpdh" WorkloadEndpoint="147.182.243.214-k8s-nginx--deployment--7fcdb87857--mwpdh-eth0" Feb 13 20:16:07.713022 containerd[1470]: 2025-02-13 20:16:07.677 [INFO][2933] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie06b2d08f01 ContainerID="5932c335eb04ceea93d924d1ed4c89ec8b6c3be731c1c9765c57dc6353d13734" Namespace="default" Pod="nginx-deployment-7fcdb87857-mwpdh" WorkloadEndpoint="147.182.243.214-k8s-nginx--deployment--7fcdb87857--mwpdh-eth0" Feb 13 20:16:07.713022 containerd[1470]: 2025-02-13 20:16:07.692 [INFO][2933] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="5932c335eb04ceea93d924d1ed4c89ec8b6c3be731c1c9765c57dc6353d13734" Namespace="default" Pod="nginx-deployment-7fcdb87857-mwpdh" WorkloadEndpoint="147.182.243.214-k8s-nginx--deployment--7fcdb87857--mwpdh-eth0" Feb 13 20:16:07.713022 containerd[1470]: 2025-02-13 20:16:07.693 [INFO][2933] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="5932c335eb04ceea93d924d1ed4c89ec8b6c3be731c1c9765c57dc6353d13734" Namespace="default" Pod="nginx-deployment-7fcdb87857-mwpdh" WorkloadEndpoint="147.182.243.214-k8s-nginx--deployment--7fcdb87857--mwpdh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"147.182.243.214-k8s-nginx--deployment--7fcdb87857--mwpdh-eth0", GenerateName:"nginx-deployment-7fcdb87857-", Namespace:"default", SelfLink:"", UID:"7106fb93-418a-4cbb-8a68-40fd69a99bfe", ResourceVersion:"1115", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 
20, 15, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"7fcdb87857", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"147.182.243.214", ContainerID:"5932c335eb04ceea93d924d1ed4c89ec8b6c3be731c1c9765c57dc6353d13734", Pod:"nginx-deployment-7fcdb87857-mwpdh", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.54.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"calie06b2d08f01", MAC:"06:87:57:98:6a:46", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 20:16:07.713022 containerd[1470]: 2025-02-13 20:16:07.704 [INFO][2933] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="5932c335eb04ceea93d924d1ed4c89ec8b6c3be731c1c9765c57dc6353d13734" Namespace="default" Pod="nginx-deployment-7fcdb87857-mwpdh" WorkloadEndpoint="147.182.243.214-k8s-nginx--deployment--7fcdb87857--mwpdh-eth0" Feb 13 20:16:07.795860 containerd[1470]: time="2025-02-13T20:16:07.795384893Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 20:16:07.798130 containerd[1470]: time="2025-02-13T20:16:07.798025530Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 20:16:07.798130 containerd[1470]: time="2025-02-13T20:16:07.798075429Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:16:07.798371 containerd[1470]: time="2025-02-13T20:16:07.798260521Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:16:07.864943 systemd[1]: Started cri-containerd-5932c335eb04ceea93d924d1ed4c89ec8b6c3be731c1c9765c57dc6353d13734.scope - libcontainer container 5932c335eb04ceea93d924d1ed4c89ec8b6c3be731c1c9765c57dc6353d13734. Feb 13 20:16:07.983918 containerd[1470]: time="2025-02-13T20:16:07.983271207Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-7fcdb87857-mwpdh,Uid:7106fb93-418a-4cbb-8a68-40fd69a99bfe,Namespace:default,Attempt:1,} returns sandbox id \"5932c335eb04ceea93d924d1ed4c89ec8b6c3be731c1c9765c57dc6353d13734\"" Feb 13 20:16:08.036494 containerd[1470]: time="2025-02-13T20:16:08.036418514Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:16:08.038640 containerd[1470]: time="2025-02-13T20:16:08.038541456Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1: active requests=0, bytes read=10501081" Feb 13 20:16:08.043341 containerd[1470]: time="2025-02-13T20:16:08.043207415Z" level=info msg="ImageCreate event name:\"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:16:08.058029 containerd[1470]: time="2025-02-13T20:16:08.056639213Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:16:08.060482 containerd[1470]: time="2025-02-13T20:16:08.060417530Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" with image id 
\"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:a338da9488cbaa83c78457c3d7354d84149969c0480e88dd768e036632ff5b76\", size \"11994117\" in 1.931675506s" Feb 13 20:16:08.060482 containerd[1470]: time="2025-02-13T20:16:08.060481866Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.1\" returns image reference \"sha256:8b7d18f262d5cf6a6343578ad0db68a140c4c9989d9e02c58c27cb5d2c70320f\"" Feb 13 20:16:08.063172 containerd[1470]: time="2025-02-13T20:16:08.062180403Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Feb 13 20:16:08.064490 containerd[1470]: time="2025-02-13T20:16:08.064440696Z" level=info msg="CreateContainer within sandbox \"8c47cfaa1fcd82673739abc9534ee9f7d2a19feda00458d3312e6248d6c8b396\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Feb 13 20:16:08.083307 containerd[1470]: time="2025-02-13T20:16:08.083223103Z" level=info msg="CreateContainer within sandbox \"8c47cfaa1fcd82673739abc9534ee9f7d2a19feda00458d3312e6248d6c8b396\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"52ab3898f629252828430639fea1323d06c3c4019bd4185b6811fccc5e403783\"" Feb 13 20:16:08.084995 containerd[1470]: time="2025-02-13T20:16:08.084659836Z" level=info msg="StartContainer for \"52ab3898f629252828430639fea1323d06c3c4019bd4185b6811fccc5e403783\"" Feb 13 20:16:08.134318 systemd[1]: Started cri-containerd-52ab3898f629252828430639fea1323d06c3c4019bd4185b6811fccc5e403783.scope - libcontainer container 52ab3898f629252828430639fea1323d06c3c4019bd4185b6811fccc5e403783. 
Feb 13 20:16:08.154071 kubelet[1798]: E0213 20:16:08.153191 1798 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:16:08.193164 containerd[1470]: time="2025-02-13T20:16:08.192909278Z" level=info msg="StartContainer for \"52ab3898f629252828430639fea1323d06c3c4019bd4185b6811fccc5e403783\" returns successfully" Feb 13 20:16:08.278326 kubelet[1798]: I0213 20:16:08.277304 1798 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Feb 13 20:16:08.278326 kubelet[1798]: I0213 20:16:08.277398 1798 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Feb 13 20:16:09.106486 systemd-networkd[1367]: calie06b2d08f01: Gained IPv6LL Feb 13 20:16:09.154307 kubelet[1798]: E0213 20:16:09.154189 1798 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:16:10.155942 kubelet[1798]: E0213 20:16:10.155788 1798 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:16:11.143950 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1099646157.mount: Deactivated successfully. 
Feb 13 20:16:11.157184 kubelet[1798]: E0213 20:16:11.157095 1798 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:16:12.158212 kubelet[1798]: E0213 20:16:12.158118 1798 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:16:12.759034 containerd[1470]: time="2025-02-13T20:16:12.756672406Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=73054493" Feb 13 20:16:12.759034 containerd[1470]: time="2025-02-13T20:16:12.756762032Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:16:12.762905 containerd[1470]: time="2025-02-13T20:16:12.762831621Z" level=info msg="ImageCreate event name:\"sha256:fe94eb5f0c9c8d0ca277aa8cd5940f1faf5970175bf373932babc578545deda8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:16:12.765394 containerd[1470]: time="2025-02-13T20:16:12.765321710Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:fe94eb5f0c9c8d0ca277aa8cd5940f1faf5970175bf373932babc578545deda8\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:d9bc3da999da9f147f1277c7b18292486847e8f39f95fcf81d914d0c22815faf\", size \"73054371\" in 4.703053429s" Feb 13 20:16:12.765848 containerd[1470]: time="2025-02-13T20:16:12.765809948Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:fe94eb5f0c9c8d0ca277aa8cd5940f1faf5970175bf373932babc578545deda8\"" Feb 13 20:16:12.768835 containerd[1470]: time="2025-02-13T20:16:12.768630557Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx@sha256:d9bc3da999da9f147f1277c7b18292486847e8f39f95fcf81d914d0c22815faf\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:16:12.774637 containerd[1470]: 
time="2025-02-13T20:16:12.774574642Z" level=info msg="CreateContainer within sandbox \"5932c335eb04ceea93d924d1ed4c89ec8b6c3be731c1c9765c57dc6353d13734\" for container &ContainerMetadata{Name:nginx,Attempt:0,}" Feb 13 20:16:12.801533 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2907787632.mount: Deactivated successfully. Feb 13 20:16:12.805935 containerd[1470]: time="2025-02-13T20:16:12.805856378Z" level=info msg="CreateContainer within sandbox \"5932c335eb04ceea93d924d1ed4c89ec8b6c3be731c1c9765c57dc6353d13734\" for &ContainerMetadata{Name:nginx,Attempt:0,} returns container id \"a1a5f222ed82c02a982fb0d1d482e7d0f8019c12cb5cf06cd04b5cefd1201ca2\"" Feb 13 20:16:12.808634 containerd[1470]: time="2025-02-13T20:16:12.807500517Z" level=info msg="StartContainer for \"a1a5f222ed82c02a982fb0d1d482e7d0f8019c12cb5cf06cd04b5cefd1201ca2\"" Feb 13 20:16:12.853788 systemd[1]: run-containerd-runc-k8s.io-a1a5f222ed82c02a982fb0d1d482e7d0f8019c12cb5cf06cd04b5cefd1201ca2-runc.w6LsXP.mount: Deactivated successfully. Feb 13 20:16:12.867281 systemd[1]: Started cri-containerd-a1a5f222ed82c02a982fb0d1d482e7d0f8019c12cb5cf06cd04b5cefd1201ca2.scope - libcontainer container a1a5f222ed82c02a982fb0d1d482e7d0f8019c12cb5cf06cd04b5cefd1201ca2. 
Feb 13 20:16:12.911106 containerd[1470]: time="2025-02-13T20:16:12.911031355Z" level=info msg="StartContainer for \"a1a5f222ed82c02a982fb0d1d482e7d0f8019c12cb5cf06cd04b5cefd1201ca2\" returns successfully" Feb 13 20:16:13.159368 kubelet[1798]: E0213 20:16:13.159306 1798 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:16:13.453147 kubelet[1798]: I0213 20:16:13.452212 1798 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-cn6j8" podStartSLOduration=35.673420005 podStartE2EDuration="39.45218712s" podCreationTimestamp="2025-02-13 20:15:34 +0000 UTC" firstStartedPulling="2025-02-13 20:16:04.283019071 +0000 UTC m=+30.769570325" lastFinishedPulling="2025-02-13 20:16:08.061786145 +0000 UTC m=+34.548337440" observedRunningTime="2025-02-13 20:16:08.439230989 +0000 UTC m=+34.925782280" watchObservedRunningTime="2025-02-13 20:16:13.45218712 +0000 UTC m=+39.938738398" Feb 13 20:16:14.112180 kubelet[1798]: E0213 20:16:14.112097 1798 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:16:14.160334 kubelet[1798]: E0213 20:16:14.160225 1798 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:16:15.161216 kubelet[1798]: E0213 20:16:15.161124 1798 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:16:16.161910 kubelet[1798]: E0213 20:16:16.161850 1798 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:16:17.163137 kubelet[1798]: E0213 20:16:17.163074 1798 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:16:18.163874 kubelet[1798]: E0213 20:16:18.163805 1798 file_linux.go:61] "Unable to read 
config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:16:19.164283 kubelet[1798]: E0213 20:16:19.164217 1798 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:16:20.164640 kubelet[1798]: E0213 20:16:20.164551 1798 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:16:20.170797 kubelet[1798]: I0213 20:16:20.170688 1798 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nginx-deployment-7fcdb87857-mwpdh" podStartSLOduration=24.388948111 podStartE2EDuration="29.170659371s" podCreationTimestamp="2025-02-13 20:15:51 +0000 UTC" firstStartedPulling="2025-02-13 20:16:07.988424952 +0000 UTC m=+34.474976205" lastFinishedPulling="2025-02-13 20:16:12.770136194 +0000 UTC m=+39.256687465" observedRunningTime="2025-02-13 20:16:13.452967724 +0000 UTC m=+39.939519011" watchObservedRunningTime="2025-02-13 20:16:20.170659371 +0000 UTC m=+46.657210666" Feb 13 20:16:20.182536 systemd[1]: Created slice kubepods-besteffort-pod056af7a0_018a_4d6b_86fd_b73f32228d59.slice - libcontainer container kubepods-besteffort-pod056af7a0_018a_4d6b_86fd_b73f32228d59.slice. 
Feb 13 20:16:20.231230 kubelet[1798]: I0213 20:16:20.231119 1798 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9fq45\" (UniqueName: \"kubernetes.io/projected/056af7a0-018a-4d6b-86fd-b73f32228d59-kube-api-access-9fq45\") pod \"nfs-server-provisioner-0\" (UID: \"056af7a0-018a-4d6b-86fd-b73f32228d59\") " pod="default/nfs-server-provisioner-0" Feb 13 20:16:20.231230 kubelet[1798]: I0213 20:16:20.231202 1798 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/056af7a0-018a-4d6b-86fd-b73f32228d59-data\") pod \"nfs-server-provisioner-0\" (UID: \"056af7a0-018a-4d6b-86fd-b73f32228d59\") " pod="default/nfs-server-provisioner-0" Feb 13 20:16:20.488713 containerd[1470]: time="2025-02-13T20:16:20.488122684Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:056af7a0-018a-4d6b-86fd-b73f32228d59,Namespace:default,Attempt:0,}" Feb 13 20:16:20.771255 systemd-networkd[1367]: cali60e51b789ff: Link UP Feb 13 20:16:20.773198 systemd-networkd[1367]: cali60e51b789ff: Gained carrier Feb 13 20:16:20.801238 containerd[1470]: 2025-02-13 20:16:20.563 [INFO][3157] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {147.182.243.214-k8s-nfs--server--provisioner--0-eth0 nfs-server-provisioner- default 056af7a0-018a-4d6b-86fd-b73f32228d59 1177 0 2025-02-13 20:16:20 +0000 UTC map[app:nfs-server-provisioner apps.kubernetes.io/pod-index:0 chart:nfs-server-provisioner-1.8.0 controller-revision-hash:nfs-server-provisioner-d5cbb7f57 heritage:Helm projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:nfs-server-provisioner release:nfs-server-provisioner statefulset.kubernetes.io/pod-name:nfs-server-provisioner-0] map[] [] [] []} {k8s 147.182.243.214 nfs-server-provisioner-0 eth0 nfs-server-provisioner [] [] 
[kns.default ksa.default.nfs-server-provisioner] cali60e51b789ff [{nfs TCP 2049 0 } {nfs-udp UDP 2049 0 } {nlockmgr TCP 32803 0 } {nlockmgr-udp UDP 32803 0 } {mountd TCP 20048 0 } {mountd-udp UDP 20048 0 } {rquotad TCP 875 0 } {rquotad-udp UDP 875 0 } {rpcbind TCP 111 0 } {rpcbind-udp UDP 111 0 } {statd TCP 662 0 } {statd-udp UDP 662 0 }] []}} ContainerID="1a2f75aa4742eb3d418a1d638b3b76c34cccbae68c7183bb8da0f320717b10df" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="147.182.243.214-k8s-nfs--server--provisioner--0-" Feb 13 20:16:20.801238 containerd[1470]: 2025-02-13 20:16:20.564 [INFO][3157] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="1a2f75aa4742eb3d418a1d638b3b76c34cccbae68c7183bb8da0f320717b10df" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="147.182.243.214-k8s-nfs--server--provisioner--0-eth0" Feb 13 20:16:20.801238 containerd[1470]: 2025-02-13 20:16:20.614 [INFO][3169] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1a2f75aa4742eb3d418a1d638b3b76c34cccbae68c7183bb8da0f320717b10df" HandleID="k8s-pod-network.1a2f75aa4742eb3d418a1d638b3b76c34cccbae68c7183bb8da0f320717b10df" Workload="147.182.243.214-k8s-nfs--server--provisioner--0-eth0" Feb 13 20:16:20.801238 containerd[1470]: 2025-02-13 20:16:20.629 [INFO][3169] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="1a2f75aa4742eb3d418a1d638b3b76c34cccbae68c7183bb8da0f320717b10df" HandleID="k8s-pod-network.1a2f75aa4742eb3d418a1d638b3b76c34cccbae68c7183bb8da0f320717b10df" Workload="147.182.243.214-k8s-nfs--server--provisioner--0-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000290b80), Attrs:map[string]string{"namespace":"default", "node":"147.182.243.214", "pod":"nfs-server-provisioner-0", "timestamp":"2025-02-13 20:16:20.614116276 +0000 UTC"}, Hostname:"147.182.243.214", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, 
HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 13 20:16:20.801238 containerd[1470]: 2025-02-13 20:16:20.629 [INFO][3169] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 20:16:20.801238 containerd[1470]: 2025-02-13 20:16:20.629 [INFO][3169] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 20:16:20.801238 containerd[1470]: 2025-02-13 20:16:20.629 [INFO][3169] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '147.182.243.214' Feb 13 20:16:20.801238 containerd[1470]: 2025-02-13 20:16:20.635 [INFO][3169] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.1a2f75aa4742eb3d418a1d638b3b76c34cccbae68c7183bb8da0f320717b10df" host="147.182.243.214" Feb 13 20:16:20.801238 containerd[1470]: 2025-02-13 20:16:20.644 [INFO][3169] ipam/ipam.go 372: Looking up existing affinities for host host="147.182.243.214" Feb 13 20:16:20.801238 containerd[1470]: 2025-02-13 20:16:20.687 [INFO][3169] ipam/ipam.go 489: Trying affinity for 192.168.54.192/26 host="147.182.243.214" Feb 13 20:16:20.801238 containerd[1470]: 2025-02-13 20:16:20.708 [INFO][3169] ipam/ipam.go 155: Attempting to load block cidr=192.168.54.192/26 host="147.182.243.214" Feb 13 20:16:20.801238 containerd[1470]: 2025-02-13 20:16:20.719 [INFO][3169] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.54.192/26 host="147.182.243.214" Feb 13 20:16:20.801238 containerd[1470]: 2025-02-13 20:16:20.719 [INFO][3169] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.54.192/26 handle="k8s-pod-network.1a2f75aa4742eb3d418a1d638b3b76c34cccbae68c7183bb8da0f320717b10df" host="147.182.243.214" Feb 13 20:16:20.801238 containerd[1470]: 2025-02-13 20:16:20.733 [INFO][3169] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.1a2f75aa4742eb3d418a1d638b3b76c34cccbae68c7183bb8da0f320717b10df Feb 13 20:16:20.801238 containerd[1470]: 
2025-02-13 20:16:20.743 [INFO][3169] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.54.192/26 handle="k8s-pod-network.1a2f75aa4742eb3d418a1d638b3b76c34cccbae68c7183bb8da0f320717b10df" host="147.182.243.214" Feb 13 20:16:20.801238 containerd[1470]: 2025-02-13 20:16:20.762 [INFO][3169] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.54.195/26] block=192.168.54.192/26 handle="k8s-pod-network.1a2f75aa4742eb3d418a1d638b3b76c34cccbae68c7183bb8da0f320717b10df" host="147.182.243.214" Feb 13 20:16:20.801238 containerd[1470]: 2025-02-13 20:16:20.762 [INFO][3169] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.54.195/26] handle="k8s-pod-network.1a2f75aa4742eb3d418a1d638b3b76c34cccbae68c7183bb8da0f320717b10df" host="147.182.243.214" Feb 13 20:16:20.801238 containerd[1470]: 2025-02-13 20:16:20.762 [INFO][3169] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 20:16:20.801238 containerd[1470]: 2025-02-13 20:16:20.763 [INFO][3169] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.54.195/26] IPv6=[] ContainerID="1a2f75aa4742eb3d418a1d638b3b76c34cccbae68c7183bb8da0f320717b10df" HandleID="k8s-pod-network.1a2f75aa4742eb3d418a1d638b3b76c34cccbae68c7183bb8da0f320717b10df" Workload="147.182.243.214-k8s-nfs--server--provisioner--0-eth0" Feb 13 20:16:20.803650 containerd[1470]: 2025-02-13 20:16:20.765 [INFO][3157] cni-plugin/k8s.go 386: Populated endpoint ContainerID="1a2f75aa4742eb3d418a1d638b3b76c34cccbae68c7183bb8da0f320717b10df" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="147.182.243.214-k8s-nfs--server--provisioner--0-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"147.182.243.214-k8s-nfs--server--provisioner--0-eth0", GenerateName:"nfs-server-provisioner-", Namespace:"default", SelfLink:"", UID:"056af7a0-018a-4d6b-86fd-b73f32228d59", ResourceVersion:"1177", Generation:0, 
CreationTimestamp:time.Date(2025, time.February, 13, 20, 16, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nfs-server-provisioner", "apps.kubernetes.io/pod-index":"0", "chart":"nfs-server-provisioner-1.8.0", "controller-revision-hash":"nfs-server-provisioner-d5cbb7f57", "heritage":"Helm", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"nfs-server-provisioner", "release":"nfs-server-provisioner", "statefulset.kubernetes.io/pod-name":"nfs-server-provisioner-0"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"147.182.243.214", ContainerID:"", Pod:"nfs-server-provisioner-0", Endpoint:"eth0", ServiceAccountName:"nfs-server-provisioner", IPNetworks:[]string{"192.168.54.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.nfs-server-provisioner"}, InterfaceName:"cali60e51b789ff", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"nfs", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nfs-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd-udp", Protocol:numorstring.Protocol{Type:1, 
NumVal:0x0, StrVal:"UDP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x296, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x296, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 20:16:20.803650 containerd[1470]: 2025-02-13 20:16:20.765 [INFO][3157] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.54.195/32] ContainerID="1a2f75aa4742eb3d418a1d638b3b76c34cccbae68c7183bb8da0f320717b10df" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="147.182.243.214-k8s-nfs--server--provisioner--0-eth0" Feb 13 20:16:20.803650 containerd[1470]: 2025-02-13 20:16:20.765 [INFO][3157] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali60e51b789ff ContainerID="1a2f75aa4742eb3d418a1d638b3b76c34cccbae68c7183bb8da0f320717b10df" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="147.182.243.214-k8s-nfs--server--provisioner--0-eth0" Feb 13 20:16:20.803650 containerd[1470]: 2025-02-13 20:16:20.773 [INFO][3157] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="1a2f75aa4742eb3d418a1d638b3b76c34cccbae68c7183bb8da0f320717b10df" Namespace="default" Pod="nfs-server-provisioner-0" 
WorkloadEndpoint="147.182.243.214-k8s-nfs--server--provisioner--0-eth0" Feb 13 20:16:20.804190 containerd[1470]: 2025-02-13 20:16:20.775 [INFO][3157] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="1a2f75aa4742eb3d418a1d638b3b76c34cccbae68c7183bb8da0f320717b10df" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="147.182.243.214-k8s-nfs--server--provisioner--0-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"147.182.243.214-k8s-nfs--server--provisioner--0-eth0", GenerateName:"nfs-server-provisioner-", Namespace:"default", SelfLink:"", UID:"056af7a0-018a-4d6b-86fd-b73f32228d59", ResourceVersion:"1177", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 16, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nfs-server-provisioner", "apps.kubernetes.io/pod-index":"0", "chart":"nfs-server-provisioner-1.8.0", "controller-revision-hash":"nfs-server-provisioner-d5cbb7f57", "heritage":"Helm", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"nfs-server-provisioner", "release":"nfs-server-provisioner", "statefulset.kubernetes.io/pod-name":"nfs-server-provisioner-0"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"147.182.243.214", ContainerID:"1a2f75aa4742eb3d418a1d638b3b76c34cccbae68c7183bb8da0f320717b10df", Pod:"nfs-server-provisioner-0", Endpoint:"eth0", ServiceAccountName:"nfs-server-provisioner", IPNetworks:[]string{"192.168.54.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.nfs-server-provisioner"}, 
InterfaceName:"cali60e51b789ff", MAC:"32:f6:68:11:16:70", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"nfs", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nfs-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x296, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x296, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 20:16:20.804190 containerd[1470]: 2025-02-13 20:16:20.798 [INFO][3157] cni-plugin/k8s.go 500: Wrote updated endpoint to 
datastore ContainerID="1a2f75aa4742eb3d418a1d638b3b76c34cccbae68c7183bb8da0f320717b10df" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="147.182.243.214-k8s-nfs--server--provisioner--0-eth0" Feb 13 20:16:20.848842 containerd[1470]: time="2025-02-13T20:16:20.848533328Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 13 20:16:20.848842 containerd[1470]: time="2025-02-13T20:16:20.848642746Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 13 20:16:20.848842 containerd[1470]: time="2025-02-13T20:16:20.848689385Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:16:20.850595 containerd[1470]: time="2025-02-13T20:16:20.849917037Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 13 20:16:20.898477 systemd[1]: Started cri-containerd-1a2f75aa4742eb3d418a1d638b3b76c34cccbae68c7183bb8da0f320717b10df.scope - libcontainer container 1a2f75aa4742eb3d418a1d638b3b76c34cccbae68c7183bb8da0f320717b10df. 
Feb 13 20:16:20.967372 containerd[1470]: time="2025-02-13T20:16:20.967248718Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:056af7a0-018a-4d6b-86fd-b73f32228d59,Namespace:default,Attempt:0,} returns sandbox id \"1a2f75aa4742eb3d418a1d638b3b76c34cccbae68c7183bb8da0f320717b10df\"" Feb 13 20:16:20.971361 containerd[1470]: time="2025-02-13T20:16:20.971034530Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\"" Feb 13 20:16:21.169842 kubelet[1798]: E0213 20:16:21.165889 1798 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:16:22.170605 kubelet[1798]: E0213 20:16:22.170486 1798 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:16:22.803807 systemd-networkd[1367]: cali60e51b789ff: Gained IPv6LL Feb 13 20:16:23.171316 kubelet[1798]: E0213 20:16:23.171200 1798 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:16:23.798282 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4262615315.mount: Deactivated successfully. 
Feb 13 20:16:24.171582 kubelet[1798]: E0213 20:16:24.171424 1798 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:16:25.172110 kubelet[1798]: E0213 20:16:25.172044 1798 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:16:26.173290 kubelet[1798]: E0213 20:16:26.173187 1798 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:16:26.513516 containerd[1470]: time="2025-02-13T20:16:26.513315861Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:16:26.517217 containerd[1470]: time="2025-02-13T20:16:26.516489068Z" level=info msg="stop pulling image registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8: active requests=0, bytes read=91039406" Feb 13 20:16:26.517405 containerd[1470]: time="2025-02-13T20:16:26.517312004Z" level=info msg="ImageCreate event name:\"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:16:26.521040 containerd[1470]: time="2025-02-13T20:16:26.520750518Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Feb 13 20:16:26.523050 containerd[1470]: time="2025-02-13T20:16:26.522953582Z" level=info msg="Pulled image \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" with image id \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\", repo tag \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\", repo digest \"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\", size 
\"91036984\" in 5.55182711s" Feb 13 20:16:26.523899 containerd[1470]: time="2025-02-13T20:16:26.523323005Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" returns image reference \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\"" Feb 13 20:16:26.527029 containerd[1470]: time="2025-02-13T20:16:26.526933739Z" level=info msg="CreateContainer within sandbox \"1a2f75aa4742eb3d418a1d638b3b76c34cccbae68c7183bb8da0f320717b10df\" for container &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,}" Feb 13 20:16:26.550187 containerd[1470]: time="2025-02-13T20:16:26.549865467Z" level=info msg="CreateContainer within sandbox \"1a2f75aa4742eb3d418a1d638b3b76c34cccbae68c7183bb8da0f320717b10df\" for &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,} returns container id \"d72207461b4e4f95f08f191af077fc4b7c719cd20a7558bd764e2bfc6c735aae\"" Feb 13 20:16:26.551156 containerd[1470]: time="2025-02-13T20:16:26.550995996Z" level=info msg="StartContainer for \"d72207461b4e4f95f08f191af077fc4b7c719cd20a7558bd764e2bfc6c735aae\"" Feb 13 20:16:26.612390 systemd[1]: run-containerd-runc-k8s.io-d72207461b4e4f95f08f191af077fc4b7c719cd20a7558bd764e2bfc6c735aae-runc.NREU68.mount: Deactivated successfully. Feb 13 20:16:26.628577 systemd[1]: Started cri-containerd-d72207461b4e4f95f08f191af077fc4b7c719cd20a7558bd764e2bfc6c735aae.scope - libcontainer container d72207461b4e4f95f08f191af077fc4b7c719cd20a7558bd764e2bfc6c735aae. 
Feb 13 20:16:26.670093 containerd[1470]: time="2025-02-13T20:16:26.669921533Z" level=info msg="StartContainer for \"d72207461b4e4f95f08f191af077fc4b7c719cd20a7558bd764e2bfc6c735aae\" returns successfully" Feb 13 20:16:27.174093 kubelet[1798]: E0213 20:16:27.174016 1798 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:16:27.483231 kubelet[1798]: E0213 20:16:27.483024 1798 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" Feb 13 20:16:28.174285 kubelet[1798]: E0213 20:16:28.174198 1798 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:16:29.175155 kubelet[1798]: E0213 20:16:29.175084 1798 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:16:30.176220 kubelet[1798]: E0213 20:16:30.176149 1798 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:16:31.177645 kubelet[1798]: E0213 20:16:31.177481 1798 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:16:32.178693 kubelet[1798]: E0213 20:16:32.178629 1798 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:16:33.179737 kubelet[1798]: E0213 20:16:33.179619 1798 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:16:34.112832 kubelet[1798]: E0213 20:16:34.112767 1798 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:16:34.156173 containerd[1470]: time="2025-02-13T20:16:34.156121853Z" level=info 
msg="StopPodSandbox for \"163eaa512ebd99053eeafac3ac9fb4eb2a5885c79247102d529a32c63e4e6b9d\"" Feb 13 20:16:34.180317 kubelet[1798]: E0213 20:16:34.180251 1798 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:16:34.258962 containerd[1470]: 2025-02-13 20:16:34.207 [WARNING][3368] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="163eaa512ebd99053eeafac3ac9fb4eb2a5885c79247102d529a32c63e4e6b9d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"147.182.243.214-k8s-csi--node--driver--cn6j8-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"b7ff3472-c8b7-4078-b230-5a0559383bf9", ResourceVersion:"1127", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 15, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"84cddb44f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"147.182.243.214", ContainerID:"8c47cfaa1fcd82673739abc9534ee9f7d2a19feda00458d3312e6248d6c8b396", Pod:"csi-node-driver-cn6j8", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.54.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"caliccd4eb57065", MAC:"", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 20:16:34.258962 containerd[1470]: 2025-02-13 20:16:34.207 [INFO][3368] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="163eaa512ebd99053eeafac3ac9fb4eb2a5885c79247102d529a32c63e4e6b9d" Feb 13 20:16:34.258962 containerd[1470]: 2025-02-13 20:16:34.207 [INFO][3368] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="163eaa512ebd99053eeafac3ac9fb4eb2a5885c79247102d529a32c63e4e6b9d" iface="eth0" netns="" Feb 13 20:16:34.258962 containerd[1470]: 2025-02-13 20:16:34.207 [INFO][3368] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="163eaa512ebd99053eeafac3ac9fb4eb2a5885c79247102d529a32c63e4e6b9d" Feb 13 20:16:34.258962 containerd[1470]: 2025-02-13 20:16:34.207 [INFO][3368] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="163eaa512ebd99053eeafac3ac9fb4eb2a5885c79247102d529a32c63e4e6b9d" Feb 13 20:16:34.258962 containerd[1470]: 2025-02-13 20:16:34.242 [INFO][3375] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="163eaa512ebd99053eeafac3ac9fb4eb2a5885c79247102d529a32c63e4e6b9d" HandleID="k8s-pod-network.163eaa512ebd99053eeafac3ac9fb4eb2a5885c79247102d529a32c63e4e6b9d" Workload="147.182.243.214-k8s-csi--node--driver--cn6j8-eth0" Feb 13 20:16:34.258962 containerd[1470]: 2025-02-13 20:16:34.242 [INFO][3375] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 20:16:34.258962 containerd[1470]: 2025-02-13 20:16:34.242 [INFO][3375] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 20:16:34.258962 containerd[1470]: 2025-02-13 20:16:34.251 [WARNING][3375] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="163eaa512ebd99053eeafac3ac9fb4eb2a5885c79247102d529a32c63e4e6b9d" HandleID="k8s-pod-network.163eaa512ebd99053eeafac3ac9fb4eb2a5885c79247102d529a32c63e4e6b9d" Workload="147.182.243.214-k8s-csi--node--driver--cn6j8-eth0" Feb 13 20:16:34.258962 containerd[1470]: 2025-02-13 20:16:34.251 [INFO][3375] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="163eaa512ebd99053eeafac3ac9fb4eb2a5885c79247102d529a32c63e4e6b9d" HandleID="k8s-pod-network.163eaa512ebd99053eeafac3ac9fb4eb2a5885c79247102d529a32c63e4e6b9d" Workload="147.182.243.214-k8s-csi--node--driver--cn6j8-eth0" Feb 13 20:16:34.258962 containerd[1470]: 2025-02-13 20:16:34.254 [INFO][3375] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 20:16:34.258962 containerd[1470]: 2025-02-13 20:16:34.256 [INFO][3368] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="163eaa512ebd99053eeafac3ac9fb4eb2a5885c79247102d529a32c63e4e6b9d" Feb 13 20:16:34.263345 containerd[1470]: time="2025-02-13T20:16:34.259883456Z" level=info msg="TearDown network for sandbox \"163eaa512ebd99053eeafac3ac9fb4eb2a5885c79247102d529a32c63e4e6b9d\" successfully" Feb 13 20:16:34.263345 containerd[1470]: time="2025-02-13T20:16:34.259937812Z" level=info msg="StopPodSandbox for \"163eaa512ebd99053eeafac3ac9fb4eb2a5885c79247102d529a32c63e4e6b9d\" returns successfully" Feb 13 20:16:34.318182 containerd[1470]: time="2025-02-13T20:16:34.318086524Z" level=info msg="RemovePodSandbox for \"163eaa512ebd99053eeafac3ac9fb4eb2a5885c79247102d529a32c63e4e6b9d\"" Feb 13 20:16:34.318453 containerd[1470]: time="2025-02-13T20:16:34.318419255Z" level=info msg="Forcibly stopping sandbox \"163eaa512ebd99053eeafac3ac9fb4eb2a5885c79247102d529a32c63e4e6b9d\"" Feb 13 20:16:34.425472 containerd[1470]: 2025-02-13 20:16:34.376 [WARNING][3395] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="163eaa512ebd99053eeafac3ac9fb4eb2a5885c79247102d529a32c63e4e6b9d" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"147.182.243.214-k8s-csi--node--driver--cn6j8-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"b7ff3472-c8b7-4078-b230-5a0559383bf9", ResourceVersion:"1127", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 15, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"84cddb44f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"147.182.243.214", ContainerID:"8c47cfaa1fcd82673739abc9534ee9f7d2a19feda00458d3312e6248d6c8b396", Pod:"csi-node-driver-cn6j8", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.54.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"caliccd4eb57065", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 20:16:34.425472 containerd[1470]: 2025-02-13 20:16:34.376 [INFO][3395] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="163eaa512ebd99053eeafac3ac9fb4eb2a5885c79247102d529a32c63e4e6b9d" Feb 13 20:16:34.425472 containerd[1470]: 2025-02-13 20:16:34.376 [INFO][3395] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="163eaa512ebd99053eeafac3ac9fb4eb2a5885c79247102d529a32c63e4e6b9d" iface="eth0" netns="" Feb 13 20:16:34.425472 containerd[1470]: 2025-02-13 20:16:34.376 [INFO][3395] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="163eaa512ebd99053eeafac3ac9fb4eb2a5885c79247102d529a32c63e4e6b9d" Feb 13 20:16:34.425472 containerd[1470]: 2025-02-13 20:16:34.376 [INFO][3395] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="163eaa512ebd99053eeafac3ac9fb4eb2a5885c79247102d529a32c63e4e6b9d" Feb 13 20:16:34.425472 containerd[1470]: 2025-02-13 20:16:34.409 [INFO][3403] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="163eaa512ebd99053eeafac3ac9fb4eb2a5885c79247102d529a32c63e4e6b9d" HandleID="k8s-pod-network.163eaa512ebd99053eeafac3ac9fb4eb2a5885c79247102d529a32c63e4e6b9d" Workload="147.182.243.214-k8s-csi--node--driver--cn6j8-eth0" Feb 13 20:16:34.425472 containerd[1470]: 2025-02-13 20:16:34.409 [INFO][3403] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 20:16:34.425472 containerd[1470]: 2025-02-13 20:16:34.409 [INFO][3403] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 20:16:34.425472 containerd[1470]: 2025-02-13 20:16:34.419 [WARNING][3403] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="163eaa512ebd99053eeafac3ac9fb4eb2a5885c79247102d529a32c63e4e6b9d" HandleID="k8s-pod-network.163eaa512ebd99053eeafac3ac9fb4eb2a5885c79247102d529a32c63e4e6b9d" Workload="147.182.243.214-k8s-csi--node--driver--cn6j8-eth0" Feb 13 20:16:34.425472 containerd[1470]: 2025-02-13 20:16:34.419 [INFO][3403] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="163eaa512ebd99053eeafac3ac9fb4eb2a5885c79247102d529a32c63e4e6b9d" HandleID="k8s-pod-network.163eaa512ebd99053eeafac3ac9fb4eb2a5885c79247102d529a32c63e4e6b9d" Workload="147.182.243.214-k8s-csi--node--driver--cn6j8-eth0" Feb 13 20:16:34.425472 containerd[1470]: 2025-02-13 20:16:34.422 [INFO][3403] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 20:16:34.425472 containerd[1470]: 2025-02-13 20:16:34.423 [INFO][3395] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="163eaa512ebd99053eeafac3ac9fb4eb2a5885c79247102d529a32c63e4e6b9d" Feb 13 20:16:34.426427 containerd[1470]: time="2025-02-13T20:16:34.426378895Z" level=info msg="TearDown network for sandbox \"163eaa512ebd99053eeafac3ac9fb4eb2a5885c79247102d529a32c63e4e6b9d\" successfully" Feb 13 20:16:34.467204 containerd[1470]: time="2025-02-13T20:16:34.467124023Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"163eaa512ebd99053eeafac3ac9fb4eb2a5885c79247102d529a32c63e4e6b9d\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 20:16:34.467523 containerd[1470]: time="2025-02-13T20:16:34.467494851Z" level=info msg="RemovePodSandbox \"163eaa512ebd99053eeafac3ac9fb4eb2a5885c79247102d529a32c63e4e6b9d\" returns successfully" Feb 13 20:16:34.468455 containerd[1470]: time="2025-02-13T20:16:34.468395504Z" level=info msg="StopPodSandbox for \"bbc7a6142d4a699625dff47564182de19184864b2e609232e57099e1933afc61\"" Feb 13 20:16:34.570548 containerd[1470]: 2025-02-13 20:16:34.520 [WARNING][3421] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="bbc7a6142d4a699625dff47564182de19184864b2e609232e57099e1933afc61" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"147.182.243.214-k8s-nginx--deployment--7fcdb87857--mwpdh-eth0", GenerateName:"nginx-deployment-7fcdb87857-", Namespace:"default", SelfLink:"", UID:"7106fb93-418a-4cbb-8a68-40fd69a99bfe", ResourceVersion:"1143", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 15, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"7fcdb87857", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"147.182.243.214", ContainerID:"5932c335eb04ceea93d924d1ed4c89ec8b6c3be731c1c9765c57dc6353d13734", Pod:"nginx-deployment-7fcdb87857-mwpdh", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.54.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"calie06b2d08f01", MAC:"", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 20:16:34.570548 containerd[1470]: 2025-02-13 20:16:34.521 [INFO][3421] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="bbc7a6142d4a699625dff47564182de19184864b2e609232e57099e1933afc61" Feb 13 20:16:34.570548 containerd[1470]: 2025-02-13 20:16:34.521 [INFO][3421] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="bbc7a6142d4a699625dff47564182de19184864b2e609232e57099e1933afc61" iface="eth0" netns="" Feb 13 20:16:34.570548 containerd[1470]: 2025-02-13 20:16:34.521 [INFO][3421] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="bbc7a6142d4a699625dff47564182de19184864b2e609232e57099e1933afc61" Feb 13 20:16:34.570548 containerd[1470]: 2025-02-13 20:16:34.521 [INFO][3421] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="bbc7a6142d4a699625dff47564182de19184864b2e609232e57099e1933afc61" Feb 13 20:16:34.570548 containerd[1470]: 2025-02-13 20:16:34.553 [INFO][3428] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="bbc7a6142d4a699625dff47564182de19184864b2e609232e57099e1933afc61" HandleID="k8s-pod-network.bbc7a6142d4a699625dff47564182de19184864b2e609232e57099e1933afc61" Workload="147.182.243.214-k8s-nginx--deployment--7fcdb87857--mwpdh-eth0" Feb 13 20:16:34.570548 containerd[1470]: 2025-02-13 20:16:34.554 [INFO][3428] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 20:16:34.570548 containerd[1470]: 2025-02-13 20:16:34.554 [INFO][3428] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 20:16:34.570548 containerd[1470]: 2025-02-13 20:16:34.563 [WARNING][3428] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="bbc7a6142d4a699625dff47564182de19184864b2e609232e57099e1933afc61" HandleID="k8s-pod-network.bbc7a6142d4a699625dff47564182de19184864b2e609232e57099e1933afc61" Workload="147.182.243.214-k8s-nginx--deployment--7fcdb87857--mwpdh-eth0" Feb 13 20:16:34.570548 containerd[1470]: 2025-02-13 20:16:34.563 [INFO][3428] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="bbc7a6142d4a699625dff47564182de19184864b2e609232e57099e1933afc61" HandleID="k8s-pod-network.bbc7a6142d4a699625dff47564182de19184864b2e609232e57099e1933afc61" Workload="147.182.243.214-k8s-nginx--deployment--7fcdb87857--mwpdh-eth0" Feb 13 20:16:34.570548 containerd[1470]: 2025-02-13 20:16:34.567 [INFO][3428] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 20:16:34.570548 containerd[1470]: 2025-02-13 20:16:34.568 [INFO][3421] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="bbc7a6142d4a699625dff47564182de19184864b2e609232e57099e1933afc61" Feb 13 20:16:34.573257 containerd[1470]: time="2025-02-13T20:16:34.570610505Z" level=info msg="TearDown network for sandbox \"bbc7a6142d4a699625dff47564182de19184864b2e609232e57099e1933afc61\" successfully" Feb 13 20:16:34.573257 containerd[1470]: time="2025-02-13T20:16:34.570644125Z" level=info msg="StopPodSandbox for \"bbc7a6142d4a699625dff47564182de19184864b2e609232e57099e1933afc61\" returns successfully" Feb 13 20:16:34.573257 containerd[1470]: time="2025-02-13T20:16:34.571550598Z" level=info msg="RemovePodSandbox for \"bbc7a6142d4a699625dff47564182de19184864b2e609232e57099e1933afc61\"" Feb 13 20:16:34.573257 containerd[1470]: time="2025-02-13T20:16:34.571602398Z" level=info msg="Forcibly stopping sandbox \"bbc7a6142d4a699625dff47564182de19184864b2e609232e57099e1933afc61\"" Feb 13 20:16:34.664779 containerd[1470]: 2025-02-13 20:16:34.619 [WARNING][3446] cni-plugin/k8s.go 572: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="bbc7a6142d4a699625dff47564182de19184864b2e609232e57099e1933afc61" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"147.182.243.214-k8s-nginx--deployment--7fcdb87857--mwpdh-eth0", GenerateName:"nginx-deployment-7fcdb87857-", Namespace:"default", SelfLink:"", UID:"7106fb93-418a-4cbb-8a68-40fd69a99bfe", ResourceVersion:"1143", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 15, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"7fcdb87857", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"147.182.243.214", ContainerID:"5932c335eb04ceea93d924d1ed4c89ec8b6c3be731c1c9765c57dc6353d13734", Pod:"nginx-deployment-7fcdb87857-mwpdh", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.54.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"calie06b2d08f01", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Feb 13 20:16:34.664779 containerd[1470]: 2025-02-13 20:16:34.619 [INFO][3446] cni-plugin/k8s.go 608: Cleaning up netns ContainerID="bbc7a6142d4a699625dff47564182de19184864b2e609232e57099e1933afc61" Feb 13 20:16:34.664779 containerd[1470]: 2025-02-13 20:16:34.619 [INFO][3446] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="bbc7a6142d4a699625dff47564182de19184864b2e609232e57099e1933afc61" iface="eth0" netns="" Feb 13 20:16:34.664779 containerd[1470]: 2025-02-13 20:16:34.620 [INFO][3446] cni-plugin/k8s.go 615: Releasing IP address(es) ContainerID="bbc7a6142d4a699625dff47564182de19184864b2e609232e57099e1933afc61" Feb 13 20:16:34.664779 containerd[1470]: 2025-02-13 20:16:34.620 [INFO][3446] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="bbc7a6142d4a699625dff47564182de19184864b2e609232e57099e1933afc61" Feb 13 20:16:34.664779 containerd[1470]: 2025-02-13 20:16:34.649 [INFO][3452] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="bbc7a6142d4a699625dff47564182de19184864b2e609232e57099e1933afc61" HandleID="k8s-pod-network.bbc7a6142d4a699625dff47564182de19184864b2e609232e57099e1933afc61" Workload="147.182.243.214-k8s-nginx--deployment--7fcdb87857--mwpdh-eth0" Feb 13 20:16:34.664779 containerd[1470]: 2025-02-13 20:16:34.649 [INFO][3452] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 20:16:34.664779 containerd[1470]: 2025-02-13 20:16:34.649 [INFO][3452] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Feb 13 20:16:34.664779 containerd[1470]: 2025-02-13 20:16:34.658 [WARNING][3452] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="bbc7a6142d4a699625dff47564182de19184864b2e609232e57099e1933afc61" HandleID="k8s-pod-network.bbc7a6142d4a699625dff47564182de19184864b2e609232e57099e1933afc61" Workload="147.182.243.214-k8s-nginx--deployment--7fcdb87857--mwpdh-eth0" Feb 13 20:16:34.664779 containerd[1470]: 2025-02-13 20:16:34.658 [INFO][3452] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="bbc7a6142d4a699625dff47564182de19184864b2e609232e57099e1933afc61" HandleID="k8s-pod-network.bbc7a6142d4a699625dff47564182de19184864b2e609232e57099e1933afc61" Workload="147.182.243.214-k8s-nginx--deployment--7fcdb87857--mwpdh-eth0" Feb 13 20:16:34.664779 containerd[1470]: 2025-02-13 20:16:34.661 [INFO][3452] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 20:16:34.664779 containerd[1470]: 2025-02-13 20:16:34.662 [INFO][3446] cni-plugin/k8s.go 621: Teardown processing complete. ContainerID="bbc7a6142d4a699625dff47564182de19184864b2e609232e57099e1933afc61" Feb 13 20:16:34.665901 containerd[1470]: time="2025-02-13T20:16:34.664752964Z" level=info msg="TearDown network for sandbox \"bbc7a6142d4a699625dff47564182de19184864b2e609232e57099e1933afc61\" successfully" Feb 13 20:16:34.669020 containerd[1470]: time="2025-02-13T20:16:34.668465633Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"bbc7a6142d4a699625dff47564182de19184864b2e609232e57099e1933afc61\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Feb 13 20:16:34.669020 containerd[1470]: time="2025-02-13T20:16:34.668550214Z" level=info msg="RemovePodSandbox \"bbc7a6142d4a699625dff47564182de19184864b2e609232e57099e1933afc61\" returns successfully" Feb 13 20:16:35.181413 kubelet[1798]: E0213 20:16:35.181343 1798 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:16:36.182248 kubelet[1798]: E0213 20:16:36.182175 1798 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:16:36.188905 kubelet[1798]: I0213 20:16:36.188777 1798 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nfs-server-provisioner-0" podStartSLOduration=10.633916785 podStartE2EDuration="16.188748717s" podCreationTimestamp="2025-02-13 20:16:20 +0000 UTC" firstStartedPulling="2025-02-13 20:16:20.969881764 +0000 UTC m=+47.456433041" lastFinishedPulling="2025-02-13 20:16:26.524713718 +0000 UTC m=+53.011264973" observedRunningTime="2025-02-13 20:16:27.5261959 +0000 UTC m=+54.012747177" watchObservedRunningTime="2025-02-13 20:16:36.188748717 +0000 UTC m=+62.675300153" Feb 13 20:16:36.196703 systemd[1]: Created slice kubepods-besteffort-pod376f2864_2581_4523_8090_5d417fc0ca47.slice - libcontainer container kubepods-besteffort-pod376f2864_2581_4523_8090_5d417fc0ca47.slice. 
Feb 13 20:16:36.252029 kubelet[1798]: I0213 20:16:36.251805 1798 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-bbe87818-6a60-40da-9325-59cb629025cc\" (UniqueName: \"kubernetes.io/nfs/376f2864-2581-4523-8090-5d417fc0ca47-pvc-bbe87818-6a60-40da-9325-59cb629025cc\") pod \"test-pod-1\" (UID: \"376f2864-2581-4523-8090-5d417fc0ca47\") " pod="default/test-pod-1" Feb 13 20:16:36.252029 kubelet[1798]: I0213 20:16:36.251893 1798 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zt84j\" (UniqueName: \"kubernetes.io/projected/376f2864-2581-4523-8090-5d417fc0ca47-kube-api-access-zt84j\") pod \"test-pod-1\" (UID: \"376f2864-2581-4523-8090-5d417fc0ca47\") " pod="default/test-pod-1" Feb 13 20:16:36.399228 kernel: FS-Cache: Loaded Feb 13 20:16:36.492363 kernel: RPC: Registered named UNIX socket transport module. Feb 13 20:16:36.492522 kernel: RPC: Registered udp transport module. Feb 13 20:16:36.492551 kernel: RPC: Registered tcp transport module. Feb 13 20:16:36.493211 kernel: RPC: Registered tcp-with-tls transport module. Feb 13 20:16:36.494209 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module. 
Feb 13 20:16:36.918086 kernel: NFS: Registering the id_resolver key type Feb 13 20:16:36.920159 kernel: Key type id_resolver registered Feb 13 20:16:36.922443 kernel: Key type id_legacy registered Feb 13 20:16:36.963531 nfsidmap[3473]: nss_getpwnam: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain '3.1-9-0c9fce155b' Feb 13 20:16:36.970477 nfsidmap[3475]: nss_name_to_gid: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain '3.1-9-0c9fce155b' Feb 13 20:16:37.101416 containerd[1470]: time="2025-02-13T20:16:37.101345177Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:376f2864-2581-4523-8090-5d417fc0ca47,Namespace:default,Attempt:0,}" Feb 13 20:16:37.184122 kubelet[1798]: E0213 20:16:37.182620 1798 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 13 20:16:37.273058 systemd-networkd[1367]: cali5ec59c6bf6e: Link UP Feb 13 20:16:37.273347 systemd-networkd[1367]: cali5ec59c6bf6e: Gained carrier Feb 13 20:16:37.288182 containerd[1470]: 2025-02-13 20:16:37.156 [INFO][3478] cni-plugin/plugin.go 325: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {147.182.243.214-k8s-test--pod--1-eth0 default 376f2864-2581-4523-8090-5d417fc0ca47 1254 0 2025-02-13 20:16:20 +0000 UTC map[projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s 147.182.243.214 test-pod-1 eth0 default [] [] [kns.default ksa.default.default] cali5ec59c6bf6e [] []}} ContainerID="b560921719907c6463601487055bfbd251c788c043756f4eaaf4b91e9a92bd10" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="147.182.243.214-k8s-test--pod--1-" Feb 13 20:16:37.288182 containerd[1470]: 2025-02-13 20:16:37.156 [INFO][3478] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="b560921719907c6463601487055bfbd251c788c043756f4eaaf4b91e9a92bd10" 
Namespace="default" Pod="test-pod-1" WorkloadEndpoint="147.182.243.214-k8s-test--pod--1-eth0" Feb 13 20:16:37.288182 containerd[1470]: 2025-02-13 20:16:37.205 [INFO][3488] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b560921719907c6463601487055bfbd251c788c043756f4eaaf4b91e9a92bd10" HandleID="k8s-pod-network.b560921719907c6463601487055bfbd251c788c043756f4eaaf4b91e9a92bd10" Workload="147.182.243.214-k8s-test--pod--1-eth0" Feb 13 20:16:37.288182 containerd[1470]: 2025-02-13 20:16:37.219 [INFO][3488] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="b560921719907c6463601487055bfbd251c788c043756f4eaaf4b91e9a92bd10" HandleID="k8s-pod-network.b560921719907c6463601487055bfbd251c788c043756f4eaaf4b91e9a92bd10" Workload="147.182.243.214-k8s-test--pod--1-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00030fd30), Attrs:map[string]string{"namespace":"default", "node":"147.182.243.214", "pod":"test-pod-1", "timestamp":"2025-02-13 20:16:37.205341132 +0000 UTC"}, Hostname:"147.182.243.214", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Feb 13 20:16:37.288182 containerd[1470]: 2025-02-13 20:16:37.219 [INFO][3488] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Feb 13 20:16:37.288182 containerd[1470]: 2025-02-13 20:16:37.219 [INFO][3488] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Feb 13 20:16:37.288182 containerd[1470]: 2025-02-13 20:16:37.219 [INFO][3488] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '147.182.243.214' Feb 13 20:16:37.288182 containerd[1470]: 2025-02-13 20:16:37.222 [INFO][3488] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.b560921719907c6463601487055bfbd251c788c043756f4eaaf4b91e9a92bd10" host="147.182.243.214" Feb 13 20:16:37.288182 containerd[1470]: 2025-02-13 20:16:37.228 [INFO][3488] ipam/ipam.go 372: Looking up existing affinities for host host="147.182.243.214" Feb 13 20:16:37.288182 containerd[1470]: 2025-02-13 20:16:37.236 [INFO][3488] ipam/ipam.go 489: Trying affinity for 192.168.54.192/26 host="147.182.243.214" Feb 13 20:16:37.288182 containerd[1470]: 2025-02-13 20:16:37.239 [INFO][3488] ipam/ipam.go 155: Attempting to load block cidr=192.168.54.192/26 host="147.182.243.214" Feb 13 20:16:37.288182 containerd[1470]: 2025-02-13 20:16:37.242 [INFO][3488] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.54.192/26 host="147.182.243.214" Feb 13 20:16:37.288182 containerd[1470]: 2025-02-13 20:16:37.242 [INFO][3488] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.54.192/26 handle="k8s-pod-network.b560921719907c6463601487055bfbd251c788c043756f4eaaf4b91e9a92bd10" host="147.182.243.214" Feb 13 20:16:37.288182 containerd[1470]: 2025-02-13 20:16:37.245 [INFO][3488] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.b560921719907c6463601487055bfbd251c788c043756f4eaaf4b91e9a92bd10 Feb 13 20:16:37.288182 containerd[1470]: 2025-02-13 20:16:37.252 [INFO][3488] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.54.192/26 handle="k8s-pod-network.b560921719907c6463601487055bfbd251c788c043756f4eaaf4b91e9a92bd10" host="147.182.243.214" Feb 13 20:16:37.288182 containerd[1470]: 2025-02-13 20:16:37.264 [INFO][3488] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.54.196/26] block=192.168.54.192/26 
handle="k8s-pod-network.b560921719907c6463601487055bfbd251c788c043756f4eaaf4b91e9a92bd10" host="147.182.243.214" Feb 13 20:16:37.288182 containerd[1470]: 2025-02-13 20:16:37.265 [INFO][3488] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.54.196/26] handle="k8s-pod-network.b560921719907c6463601487055bfbd251c788c043756f4eaaf4b91e9a92bd10" host="147.182.243.214" Feb 13 20:16:37.288182 containerd[1470]: 2025-02-13 20:16:37.265 [INFO][3488] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Feb 13 20:16:37.288182 containerd[1470]: 2025-02-13 20:16:37.265 [INFO][3488] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.54.196/26] IPv6=[] ContainerID="b560921719907c6463601487055bfbd251c788c043756f4eaaf4b91e9a92bd10" HandleID="k8s-pod-network.b560921719907c6463601487055bfbd251c788c043756f4eaaf4b91e9a92bd10" Workload="147.182.243.214-k8s-test--pod--1-eth0" Feb 13 20:16:37.288182 containerd[1470]: 2025-02-13 20:16:37.268 [INFO][3478] cni-plugin/k8s.go 386: Populated endpoint ContainerID="b560921719907c6463601487055bfbd251c788c043756f4eaaf4b91e9a92bd10" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="147.182.243.214-k8s-test--pod--1-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"147.182.243.214-k8s-test--pod--1-eth0", GenerateName:"", Namespace:"default", SelfLink:"", UID:"376f2864-2581-4523-8090-5d417fc0ca47", ResourceVersion:"1254", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 16, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, 
Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"147.182.243.214", ContainerID:"", Pod:"test-pod-1", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.54.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali5ec59c6bf6e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Feb 13 20:16:37.290475 containerd[1470]: 2025-02-13 20:16:37.268 [INFO][3478] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.54.196/32] ContainerID="b560921719907c6463601487055bfbd251c788c043756f4eaaf4b91e9a92bd10" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="147.182.243.214-k8s-test--pod--1-eth0"
Feb 13 20:16:37.290475 containerd[1470]: 2025-02-13 20:16:37.268 [INFO][3478] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5ec59c6bf6e ContainerID="b560921719907c6463601487055bfbd251c788c043756f4eaaf4b91e9a92bd10" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="147.182.243.214-k8s-test--pod--1-eth0"
Feb 13 20:16:37.290475 containerd[1470]: 2025-02-13 20:16:37.272 [INFO][3478] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b560921719907c6463601487055bfbd251c788c043756f4eaaf4b91e9a92bd10" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="147.182.243.214-k8s-test--pod--1-eth0"
Feb 13 20:16:37.290475 containerd[1470]: 2025-02-13 20:16:37.273 [INFO][3478] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="b560921719907c6463601487055bfbd251c788c043756f4eaaf4b91e9a92bd10" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="147.182.243.214-k8s-test--pod--1-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"147.182.243.214-k8s-test--pod--1-eth0", GenerateName:"", Namespace:"default", SelfLink:"", UID:"376f2864-2581-4523-8090-5d417fc0ca47", ResourceVersion:"1254", Generation:0, CreationTimestamp:time.Date(2025, time.February, 13, 20, 16, 20, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"147.182.243.214", ContainerID:"b560921719907c6463601487055bfbd251c788c043756f4eaaf4b91e9a92bd10", Pod:"test-pod-1", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.54.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali5ec59c6bf6e", MAC:"9a:54:72:f3:96:e6", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Feb 13 20:16:37.290475 containerd[1470]: 2025-02-13 20:16:37.285 [INFO][3478] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="b560921719907c6463601487055bfbd251c788c043756f4eaaf4b91e9a92bd10" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="147.182.243.214-k8s-test--pod--1-eth0"
Feb 13 20:16:37.328310 containerd[1470]: time="2025-02-13T20:16:37.328146246Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 13 20:16:37.328310 containerd[1470]: time="2025-02-13T20:16:37.328222456Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 13 20:16:37.328310 containerd[1470]: time="2025-02-13T20:16:37.328238527Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 20:16:37.329092 containerd[1470]: time="2025-02-13T20:16:37.328341499Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 13 20:16:37.360431 systemd[1]: Started cri-containerd-b560921719907c6463601487055bfbd251c788c043756f4eaaf4b91e9a92bd10.scope - libcontainer container b560921719907c6463601487055bfbd251c788c043756f4eaaf4b91e9a92bd10.
Feb 13 20:16:37.438920 containerd[1470]: time="2025-02-13T20:16:37.438529583Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:376f2864-2581-4523-8090-5d417fc0ca47,Namespace:default,Attempt:0,} returns sandbox id \"b560921719907c6463601487055bfbd251c788c043756f4eaaf4b91e9a92bd10\""
Feb 13 20:16:37.445051 containerd[1470]: time="2025-02-13T20:16:37.443319814Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\""
Feb 13 20:16:37.879171 containerd[1470]: time="2025-02-13T20:16:37.879102223Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Feb 13 20:16:37.880210 containerd[1470]: time="2025-02-13T20:16:37.880092521Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=61"
Feb 13 20:16:37.883599 containerd[1470]: time="2025-02-13T20:16:37.883531076Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:fe94eb5f0c9c8d0ca277aa8cd5940f1faf5970175bf373932babc578545deda8\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:d9bc3da999da9f147f1277c7b18292486847e8f39f95fcf81d914d0c22815faf\", size \"73054371\" in 440.149891ms"
Feb 13 20:16:37.883599 containerd[1470]: time="2025-02-13T20:16:37.883589255Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:fe94eb5f0c9c8d0ca277aa8cd5940f1faf5970175bf373932babc578545deda8\""
Feb 13 20:16:37.887032 containerd[1470]: time="2025-02-13T20:16:37.886915270Z" level=info msg="CreateContainer within sandbox \"b560921719907c6463601487055bfbd251c788c043756f4eaaf4b91e9a92bd10\" for container &ContainerMetadata{Name:test,Attempt:0,}"
Feb 13 20:16:37.905444 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount500693070.mount: Deactivated successfully.
Feb 13 20:16:37.911782 containerd[1470]: time="2025-02-13T20:16:37.911710605Z" level=info msg="CreateContainer within sandbox \"b560921719907c6463601487055bfbd251c788c043756f4eaaf4b91e9a92bd10\" for &ContainerMetadata{Name:test,Attempt:0,} returns container id \"e3fc6dd61bddfcf265993f1e772c11c6ea9c91e0ae801475db5c7f0ec2fb4be1\""
Feb 13 20:16:37.914046 containerd[1470]: time="2025-02-13T20:16:37.913066323Z" level=info msg="StartContainer for \"e3fc6dd61bddfcf265993f1e772c11c6ea9c91e0ae801475db5c7f0ec2fb4be1\""
Feb 13 20:16:37.968360 systemd[1]: Started cri-containerd-e3fc6dd61bddfcf265993f1e772c11c6ea9c91e0ae801475db5c7f0ec2fb4be1.scope - libcontainer container e3fc6dd61bddfcf265993f1e772c11c6ea9c91e0ae801475db5c7f0ec2fb4be1.
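The entries above trace one complete CRI cycle for `test-pod-1`: RunPodSandbox returns the sandbox ID `b5609217…`, the nginx image is pulled, CreateContainer returns the container ID `e3fc6dd6…`, and StartContainer launches it inside a systemd scope. Both IDs are 64-hex-digit strings, which makes them easy to pull out of the journal when cross-referencing entries; a minimal sketch (the `line` variable is a stand-in for real `journalctl -u containerd` output):

```shell
# Extract the 64-hex-char CRI sandbox/container IDs from a containerd log line.
# The sample line is copied from the journal above; in practice the input would
# be piped in, e.g. from `journalctl -u containerd`.
line='msg="StartContainer for \"e3fc6dd61bddfcf265993f1e772c11c6ea9c91e0ae801475db5c7f0ec2fb4be1\""'
echo "$line" | grep -oE '[0-9a-f]{64}'
# → e3fc6dd61bddfcf265993f1e772c11c6ea9c91e0ae801475db5c7f0ec2fb4be1
```

The same pattern distinguishes sandbox-scoped entries (`b5609217…`) from container-scoped ones (`e3fc6dd6…`) when both appear in the log.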
Feb 13 20:16:38.008440 containerd[1470]: time="2025-02-13T20:16:38.008353640Z" level=info msg="StartContainer for \"e3fc6dd61bddfcf265993f1e772c11c6ea9c91e0ae801475db5c7f0ec2fb4be1\" returns successfully"
Feb 13 20:16:38.183922 kubelet[1798]: E0213 20:16:38.183723 1798 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 20:16:38.354451 systemd-networkd[1367]: cali5ec59c6bf6e: Gained IPv6LL
Feb 13 20:16:39.184156 kubelet[1798]: E0213 20:16:39.183960 1798 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 20:16:40.185096 kubelet[1798]: E0213 20:16:40.185012 1798 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 20:16:41.185545 kubelet[1798]: E0213 20:16:41.185416 1798 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 20:16:42.185895 kubelet[1798]: E0213 20:16:42.185814 1798 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 20:16:43.187040 kubelet[1798]: E0213 20:16:43.186955 1798 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 13 20:16:44.187559 kubelet[1798]: E0213 20:16:44.187476 1798 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
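The recurring `file_linux.go:61` errors come from the kubelet's file-based (static pod) config source: its `staticPodPath` points at `/etc/kubernetes/manifests`, that directory does not exist, and the check is retried about once a second, so the line repeats until the path appears. If no static pods are intended the error is harmless noise; creating the directory, even empty, silences it. A hedged sketch, demonstrated under a throwaway prefix since the real path is root-owned on the node:

```shell
# The kubelet re-reads staticPodPath on every sync loop and logs
# "Unable to read config path" while the directory is missing.
# On the node itself this would simply be:
#   sudo mkdir -p /etc/kubernetes/manifests
# Shown here against a scratch prefix so it runs unprivileged.
prefix=$(mktemp -d)
mkdir -p "$prefix/etc/kubernetes/manifests"
test -d "$prefix/etc/kubernetes/manifests" && echo "static pod path present"
# → static pod path present
```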