May 17 00:23:38.011492 kernel: Linux version 6.6.90-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT_DYNAMIC Fri May 16 22:44:56 -00 2025
May 17 00:23:38.011538 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=6b60288baeea1613a76a6f06a8f0e8edc178eae4857ce00eac42d48e92ed015e
May 17 00:23:38.011556 kernel: BIOS-provided physical RAM map:
May 17 00:23:38.011567 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
May 17 00:23:38.011576 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
May 17 00:23:38.011587 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
May 17 00:23:38.011601 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdafff] usable
May 17 00:23:38.011612 kernel: BIOS-e820: [mem 0x000000007ffdb000-0x000000007fffffff] reserved
May 17 00:23:38.011618 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
May 17 00:23:38.011629 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
May 17 00:23:38.011636 kernel: NX (Execute Disable) protection: active
May 17 00:23:38.011642 kernel: APIC: Static calls initialized
May 17 00:23:38.011674 kernel: SMBIOS 2.8 present.
May 17 00:23:38.011686 kernel: DMI: DigitalOcean Droplet/Droplet, BIOS 20171212 12/12/2017
May 17 00:23:38.011694 kernel: Hypervisor detected: KVM
May 17 00:23:38.011705 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
May 17 00:23:38.011718 kernel: kvm-clock: using sched offset of 3288461854 cycles
May 17 00:23:38.011726 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
May 17 00:23:38.011734 kernel: tsc: Detected 1995.312 MHz processor
May 17 00:23:38.011741 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
May 17 00:23:38.011749 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
May 17 00:23:38.011757 kernel: last_pfn = 0x7ffdb max_arch_pfn = 0x400000000
May 17 00:23:38.011764 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
May 17 00:23:38.011771 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
May 17 00:23:38.011782 kernel: ACPI: Early table checksum verification disabled
May 17 00:23:38.011789 kernel: ACPI: RSDP 0x00000000000F5950 000014 (v00 BOCHS )
May 17 00:23:38.011796 kernel: ACPI: RSDT 0x000000007FFE1986 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 17 00:23:38.011803 kernel: ACPI: FACP 0x000000007FFE176A 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 17 00:23:38.011810 kernel: ACPI: DSDT 0x000000007FFE0040 00172A (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 17 00:23:38.011817 kernel: ACPI: FACS 0x000000007FFE0000 000040
May 17 00:23:38.011824 kernel: ACPI: APIC 0x000000007FFE17DE 000080 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 17 00:23:38.011831 kernel: ACPI: HPET 0x000000007FFE185E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 17 00:23:38.011838 kernel: ACPI: SRAT 0x000000007FFE1896 0000C8 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 17 00:23:38.011848 kernel: ACPI: WAET 0x000000007FFE195E 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
May 17 00:23:38.011855 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe176a-0x7ffe17dd]
May 17 00:23:38.011862 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffe0040-0x7ffe1769]
May 17 00:23:38.011869 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffe0000-0x7ffe003f]
May 17 00:23:38.011920 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe17de-0x7ffe185d]
May 17 00:23:38.011927 kernel: ACPI: Reserving HPET table memory at [mem 0x7ffe185e-0x7ffe1895]
May 17 00:23:38.011934 kernel: ACPI: Reserving SRAT table memory at [mem 0x7ffe1896-0x7ffe195d]
May 17 00:23:38.011949 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe195e-0x7ffe1985]
May 17 00:23:38.011956 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
May 17 00:23:38.011964 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
May 17 00:23:38.011971 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff]
May 17 00:23:38.011978 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff]
May 17 00:23:38.011990 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0x7ffdafff] -> [mem 0x00000000-0x7ffdafff]
May 17 00:23:38.011997 kernel: NODE_DATA(0) allocated [mem 0x7ffd5000-0x7ffdafff]
May 17 00:23:38.012008 kernel: Zone ranges:
May 17 00:23:38.012016 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
May 17 00:23:38.012023 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffdafff]
May 17 00:23:38.012031 kernel: Normal empty
May 17 00:23:38.012038 kernel: Movable zone start for each node
May 17 00:23:38.012045 kernel: Early memory node ranges
May 17 00:23:38.012052 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
May 17 00:23:38.012060 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdafff]
May 17 00:23:38.012067 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffdafff]
May 17 00:23:38.012077 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
May 17 00:23:38.012084 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
May 17 00:23:38.012095 kernel: On node 0, zone DMA32: 37 pages in unavailable ranges
May 17 00:23:38.012102 kernel: ACPI: PM-Timer IO Port: 0x608
May 17 00:23:38.012110 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
May 17 00:23:38.012117 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
May 17 00:23:38.012125 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
May 17 00:23:38.012133 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
May 17 00:23:38.012140 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
May 17 00:23:38.012150 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
May 17 00:23:38.012157 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
May 17 00:23:38.012164 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
May 17 00:23:38.012172 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
May 17 00:23:38.012180 kernel: TSC deadline timer available
May 17 00:23:38.012187 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
May 17 00:23:38.012194 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
May 17 00:23:38.012201 kernel: [mem 0x80000000-0xfeffbfff] available for PCI devices
May 17 00:23:38.012212 kernel: Booting paravirtualized kernel on KVM
May 17 00:23:38.012220 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
May 17 00:23:38.012231 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
May 17 00:23:38.012238 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u1048576
May 17 00:23:38.012246 kernel: pcpu-alloc: s197032 r8192 d32344 u1048576 alloc=1*2097152
May 17 00:23:38.012255 kernel: pcpu-alloc: [0] 0 1
May 17 00:23:38.012262 kernel: kvm-guest: PV spinlocks disabled, no host support
May 17 00:23:38.012271 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=6b60288baeea1613a76a6f06a8f0e8edc178eae4857ce00eac42d48e92ed015e
May 17 00:23:38.012280 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
May 17 00:23:38.012290 kernel: random: crng init done
May 17 00:23:38.012298 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
May 17 00:23:38.012305 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
May 17 00:23:38.012313 kernel: Fallback order for Node 0: 0
May 17 00:23:38.012320 kernel: Built 1 zonelists, mobility grouping on. Total pages: 515803
May 17 00:23:38.012327 kernel: Policy zone: DMA32
May 17 00:23:38.012335 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
May 17 00:23:38.012342 kernel: Memory: 1971204K/2096612K available (12288K kernel code, 2295K rwdata, 22740K rodata, 42872K init, 2320K bss, 125148K reserved, 0K cma-reserved)
May 17 00:23:38.012350 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
May 17 00:23:38.012360 kernel: Kernel/User page tables isolation: enabled
May 17 00:23:38.012368 kernel: ftrace: allocating 37948 entries in 149 pages
May 17 00:23:38.012375 kernel: ftrace: allocated 149 pages with 4 groups
May 17 00:23:38.012383 kernel: Dynamic Preempt: voluntary
May 17 00:23:38.012390 kernel: rcu: Preemptible hierarchical RCU implementation.
May 17 00:23:38.012399 kernel: rcu: RCU event tracing is enabled.
May 17 00:23:38.012407 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
May 17 00:23:38.012414 kernel: Trampoline variant of Tasks RCU enabled.
May 17 00:23:38.012421 kernel: Rude variant of Tasks RCU enabled.
May 17 00:23:38.012429 kernel: Tracing variant of Tasks RCU enabled.
May 17 00:23:38.012440 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
May 17 00:23:38.012447 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
May 17 00:23:38.012455 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
May 17 00:23:38.012462 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
May 17 00:23:38.012473 kernel: Console: colour VGA+ 80x25
May 17 00:23:38.012480 kernel: printk: console [tty0] enabled
May 17 00:23:38.012487 kernel: printk: console [ttyS0] enabled
May 17 00:23:38.012495 kernel: ACPI: Core revision 20230628
May 17 00:23:38.012502 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
May 17 00:23:38.012513 kernel: APIC: Switch to symmetric I/O mode setup
May 17 00:23:38.012520 kernel: x2apic enabled
May 17 00:23:38.012528 kernel: APIC: Switched APIC routing to: physical x2apic
May 17 00:23:38.012535 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
May 17 00:23:38.012542 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x3985c314e25, max_idle_ns: 881590612270 ns
May 17 00:23:38.012550 kernel: Calibrating delay loop (skipped) preset value.. 3990.62 BogoMIPS (lpj=1995312)
May 17 00:23:38.012557 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
May 17 00:23:38.012565 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
May 17 00:23:38.012584 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
May 17 00:23:38.012592 kernel: Spectre V2 : Mitigation: Retpolines
May 17 00:23:38.012600 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
May 17 00:23:38.012611 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls
May 17 00:23:38.012619 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
May 17 00:23:38.012627 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
May 17 00:23:38.012635 kernel: MDS: Mitigation: Clear CPU buffers
May 17 00:23:38.012643 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
May 17 00:23:38.012655 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
May 17 00:23:38.012667 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
May 17 00:23:38.012675 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
May 17 00:23:38.012683 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
May 17 00:23:38.012691 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
May 17 00:23:38.012699 kernel: Freeing SMP alternatives memory: 32K
May 17 00:23:38.012707 kernel: pid_max: default: 32768 minimum: 301
May 17 00:23:38.012715 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
May 17 00:23:38.012724 kernel: landlock: Up and running.
May 17 00:23:38.012736 kernel: SELinux: Initializing.
May 17 00:23:38.012744 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
May 17 00:23:38.012754 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
May 17 00:23:38.012763 kernel: smpboot: CPU0: Intel DO-Regular (family: 0x6, model: 0x4f, stepping: 0x1)
May 17 00:23:38.012771 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
May 17 00:23:38.012779 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
May 17 00:23:38.012788 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
May 17 00:23:38.012796 kernel: Performance Events: unsupported p6 CPU model 79 no PMU driver, software events only.
May 17 00:23:38.012804 kernel: signal: max sigframe size: 1776
May 17 00:23:38.012815 kernel: rcu: Hierarchical SRCU implementation.
May 17 00:23:38.012823 kernel: rcu: Max phase no-delay instances is 400.
May 17 00:23:38.012832 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
May 17 00:23:38.012840 kernel: smp: Bringing up secondary CPUs ...
May 17 00:23:38.012848 kernel: smpboot: x86: Booting SMP configuration:
May 17 00:23:38.012856 kernel: .... node #0, CPUs: #1
May 17 00:23:38.012864 kernel: smp: Brought up 1 node, 2 CPUs
May 17 00:23:38.014917 kernel: smpboot: Max logical packages: 1
May 17 00:23:38.014983 kernel: smpboot: Total of 2 processors activated (7981.24 BogoMIPS)
May 17 00:23:38.014999 kernel: devtmpfs: initialized
May 17 00:23:38.015008 kernel: x86/mm: Memory block size: 128MB
May 17 00:23:38.015017 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
May 17 00:23:38.015026 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
May 17 00:23:38.015034 kernel: pinctrl core: initialized pinctrl subsystem
May 17 00:23:38.015043 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
May 17 00:23:38.015052 kernel: audit: initializing netlink subsys (disabled)
May 17 00:23:38.015061 kernel: audit: type=2000 audit(1747441417.448:1): state=initialized audit_enabled=0 res=1
May 17 00:23:38.015070 kernel: thermal_sys: Registered thermal governor 'step_wise'
May 17 00:23:38.015082 kernel: thermal_sys: Registered thermal governor 'user_space'
May 17 00:23:38.015090 kernel: cpuidle: using governor menu
May 17 00:23:38.015099 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
May 17 00:23:38.015107 kernel: dca service started, version 1.12.1
May 17 00:23:38.015115 kernel: PCI: Using configuration type 1 for base access
May 17 00:23:38.015123 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
May 17 00:23:38.015132 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
May 17 00:23:38.015140 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
May 17 00:23:38.015148 kernel: ACPI: Added _OSI(Module Device)
May 17 00:23:38.015161 kernel: ACPI: Added _OSI(Processor Device)
May 17 00:23:38.015168 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
May 17 00:23:38.015177 kernel: ACPI: Added _OSI(Processor Aggregator Device)
May 17 00:23:38.015185 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
May 17 00:23:38.015204 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
May 17 00:23:38.015212 kernel: ACPI: Interpreter enabled
May 17 00:23:38.015220 kernel: ACPI: PM: (supports S0 S5)
May 17 00:23:38.015228 kernel: ACPI: Using IOAPIC for interrupt routing
May 17 00:23:38.015237 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
May 17 00:23:38.015248 kernel: PCI: Using E820 reservations for host bridge windows
May 17 00:23:38.015256 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
May 17 00:23:38.015265 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
May 17 00:23:38.015574 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
May 17 00:23:38.015733 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI]
May 17 00:23:38.015965 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
May 17 00:23:38.015981 kernel: acpiphp: Slot [3] registered
May 17 00:23:38.015997 kernel: acpiphp: Slot [4] registered
May 17 00:23:38.016006 kernel: acpiphp: Slot [5] registered
May 17 00:23:38.016014 kernel: acpiphp: Slot [6] registered
May 17 00:23:38.016023 kernel: acpiphp: Slot [7] registered
May 17 00:23:38.016031 kernel: acpiphp: Slot [8] registered
May 17 00:23:38.016043 kernel: acpiphp: Slot [9] registered
May 17 00:23:38.016061 kernel: acpiphp: Slot [10] registered
May 17 00:23:38.016073 kernel: acpiphp: Slot [11] registered
May 17 00:23:38.016085 kernel: acpiphp: Slot [12] registered
May 17 00:23:38.016097 kernel: acpiphp: Slot [13] registered
May 17 00:23:38.016113 kernel: acpiphp: Slot [14] registered
May 17 00:23:38.016126 kernel: acpiphp: Slot [15] registered
May 17 00:23:38.016139 kernel: acpiphp: Slot [16] registered
May 17 00:23:38.016153 kernel: acpiphp: Slot [17] registered
May 17 00:23:38.016167 kernel: acpiphp: Slot [18] registered
May 17 00:23:38.016176 kernel: acpiphp: Slot [19] registered
May 17 00:23:38.016184 kernel: acpiphp: Slot [20] registered
May 17 00:23:38.016191 kernel: acpiphp: Slot [21] registered
May 17 00:23:38.016200 kernel: acpiphp: Slot [22] registered
May 17 00:23:38.016211 kernel: acpiphp: Slot [23] registered
May 17 00:23:38.016219 kernel: acpiphp: Slot [24] registered
May 17 00:23:38.016228 kernel: acpiphp: Slot [25] registered
May 17 00:23:38.016237 kernel: acpiphp: Slot [26] registered
May 17 00:23:38.016245 kernel: acpiphp: Slot [27] registered
May 17 00:23:38.016253 kernel: acpiphp: Slot [28] registered
May 17 00:23:38.016261 kernel: acpiphp: Slot [29] registered
May 17 00:23:38.016270 kernel: acpiphp: Slot [30] registered
May 17 00:23:38.016278 kernel: acpiphp: Slot [31] registered
May 17 00:23:38.016286 kernel: PCI host bridge to bus 0000:00
May 17 00:23:38.016479 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
May 17 00:23:38.016602 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
May 17 00:23:38.016691 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
May 17 00:23:38.016775 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window]
May 17 00:23:38.016857 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x17fffffff window]
May 17 00:23:38.018405 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
May 17 00:23:38.018573 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
May 17 00:23:38.018691 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100
May 17 00:23:38.018839 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180
May 17 00:23:38.019349 kernel: pci 0000:00:01.1: reg 0x20: [io 0xc1e0-0xc1ef]
May 17 00:23:38.019458 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7]
May 17 00:23:38.019555 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io 0x03f6]
May 17 00:23:38.019810 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177]
May 17 00:23:38.019946 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io 0x0376]
May 17 00:23:38.020156 kernel: pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300
May 17 00:23:38.020264 kernel: pci 0000:00:01.2: reg 0x20: [io 0xc180-0xc19f]
May 17 00:23:38.020397 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000
May 17 00:23:38.020493 kernel: pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI
May 17 00:23:38.020588 kernel: pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB
May 17 00:23:38.020709 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000
May 17 00:23:38.020824 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfe000000-0xfe7fffff pref]
May 17 00:23:38.021000 kernel: pci 0000:00:02.0: reg 0x18: [mem 0xfe800000-0xfe803fff 64bit pref]
May 17 00:23:38.021094 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfebf0000-0xfebf0fff]
May 17 00:23:38.021208 kernel: pci 0000:00:02.0: reg 0x30: [mem 0xfebe0000-0xfebeffff pref]
May 17 00:23:38.021330 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
May 17 00:23:38.021446 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000
May 17 00:23:38.021548 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc1a0-0xc1bf]
May 17 00:23:38.021682 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebf1000-0xfebf1fff]
May 17 00:23:38.021823 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe804000-0xfe807fff 64bit pref]
May 17 00:23:38.022011 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
May 17 00:23:38.022116 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc1c0-0xc1df]
May 17 00:23:38.022211 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebf2000-0xfebf2fff]
May 17 00:23:38.022307 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe808000-0xfe80bfff 64bit pref]
May 17 00:23:38.022436 kernel: pci 0000:00:05.0: [1af4:1004] type 00 class 0x010000
May 17 00:23:38.022533 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc100-0xc13f]
May 17 00:23:38.022630 kernel: pci 0000:00:05.0: reg 0x14: [mem 0xfebf3000-0xfebf3fff]
May 17 00:23:38.022723 kernel: pci 0000:00:05.0: reg 0x20: [mem 0xfe80c000-0xfe80ffff 64bit pref]
May 17 00:23:38.022859 kernel: pci 0000:00:06.0: [1af4:1001] type 00 class 0x010000
May 17 00:23:38.023086 kernel: pci 0000:00:06.0: reg 0x10: [io 0xc000-0xc07f]
May 17 00:23:38.023192 kernel: pci 0000:00:06.0: reg 0x14: [mem 0xfebf4000-0xfebf4fff]
May 17 00:23:38.023286 kernel: pci 0000:00:06.0: reg 0x20: [mem 0xfe810000-0xfe813fff 64bit pref]
May 17 00:23:38.023396 kernel: pci 0000:00:07.0: [1af4:1001] type 00 class 0x010000
May 17 00:23:38.023491 kernel: pci 0000:00:07.0: reg 0x10: [io 0xc080-0xc0ff]
May 17 00:23:38.023585 kernel: pci 0000:00:07.0: reg 0x14: [mem 0xfebf5000-0xfebf5fff]
May 17 00:23:38.023696 kernel: pci 0000:00:07.0: reg 0x20: [mem 0xfe814000-0xfe817fff 64bit pref]
May 17 00:23:38.023800 kernel: pci 0000:00:08.0: [1af4:1002] type 00 class 0x00ff00
May 17 00:23:38.023922 kernel: pci 0000:00:08.0: reg 0x10: [io 0xc140-0xc17f]
May 17 00:23:38.024052 kernel: pci 0000:00:08.0: reg 0x20: [mem 0xfe818000-0xfe81bfff 64bit pref]
May 17 00:23:38.024094 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
May 17 00:23:38.024136 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
May 17 00:23:38.024151 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
May 17 00:23:38.024164 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
May 17 00:23:38.024176 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
May 17 00:23:38.024189 kernel: iommu: Default domain type: Translated
May 17 00:23:38.024210 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
May 17 00:23:38.024223 kernel: PCI: Using ACPI for IRQ routing
May 17 00:23:38.024237 kernel: PCI: pci_cache_line_size set to 64 bytes
May 17 00:23:38.024248 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
May 17 00:23:38.024261 kernel: e820: reserve RAM buffer [mem 0x7ffdb000-0x7fffffff]
May 17 00:23:38.024425 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
May 17 00:23:38.024543 kernel: pci 0000:00:02.0: vgaarb: bridge control possible
May 17 00:23:38.024638 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
May 17 00:23:38.024653 kernel: vgaarb: loaded
May 17 00:23:38.024662 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
May 17 00:23:38.024670 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
May 17 00:23:38.024678 kernel: clocksource: Switched to clocksource kvm-clock
May 17 00:23:38.024687 kernel: VFS: Disk quotas dquot_6.6.0
May 17 00:23:38.024695 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
May 17 00:23:38.024704 kernel: pnp: PnP ACPI init
May 17 00:23:38.024712 kernel: pnp: PnP ACPI: found 4 devices
May 17 00:23:38.024721 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
May 17 00:23:38.024732 kernel: NET: Registered PF_INET protocol family
May 17 00:23:38.024746 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
May 17 00:23:38.024760 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
May 17 00:23:38.024773 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
May 17 00:23:38.024785 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
May 17 00:23:38.024801 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
May 17 00:23:38.024814 kernel: TCP: Hash tables configured (established 16384 bind 16384)
May 17 00:23:38.024825 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
May 17 00:23:38.024839 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
May 17 00:23:38.024855 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
May 17 00:23:38.024864 kernel: NET: Registered PF_XDP protocol family
May 17 00:23:38.025056 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
May 17 00:23:38.025190 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
May 17 00:23:38.025275 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
May 17 00:23:38.025375 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window]
May 17 00:23:38.025484 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x17fffffff window]
May 17 00:23:38.025598 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
May 17 00:23:38.025761 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
May 17 00:23:38.025784 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
May 17 00:23:38.025903 kernel: pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x7b0 took 32975 usecs
May 17 00:23:38.025915 kernel: PCI: CLS 0 bytes, default 64
May 17 00:23:38.025924 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
May 17 00:23:38.025933 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x3985c314e25, max_idle_ns: 881590612270 ns
May 17 00:23:38.025941 kernel: Initialise system trusted keyrings
May 17 00:23:38.025950 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
May 17 00:23:38.025959 kernel: Key type asymmetric registered
May 17 00:23:38.025973 kernel: Asymmetric key parser 'x509' registered
May 17 00:23:38.025981 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
May 17 00:23:38.025990 kernel: io scheduler mq-deadline registered
May 17 00:23:38.025999 kernel: io scheduler kyber registered
May 17 00:23:38.026007 kernel: io scheduler bfq registered
May 17 00:23:38.026019 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
May 17 00:23:38.026036 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10
May 17 00:23:38.026048 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
May 17 00:23:38.026060 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
May 17 00:23:38.026078 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
May 17 00:23:38.026092 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
May 17 00:23:38.026101 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
May 17 00:23:38.026109 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
May 17 00:23:38.026118 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
May 17 00:23:38.026241 kernel: rtc_cmos 00:03: RTC can wake from S4
May 17 00:23:38.026254 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
May 17 00:23:38.026372 kernel: rtc_cmos 00:03: registered as rtc0
May 17 00:23:38.026466 kernel: rtc_cmos 00:03: setting system clock to 2025-05-17T00:23:37 UTC (1747441417)
May 17 00:23:38.026551 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram
May 17 00:23:38.026564 kernel: intel_pstate: CPU model not supported
May 17 00:23:38.026577 kernel: NET: Registered PF_INET6 protocol family
May 17 00:23:38.026591 kernel: Segment Routing with IPv6
May 17 00:23:38.026605 kernel: In-situ OAM (IOAM) with IPv6
May 17 00:23:38.026619 kernel: NET: Registered PF_PACKET protocol family
May 17 00:23:38.026628 kernel: Key type dns_resolver registered
May 17 00:23:38.026643 kernel: IPI shorthand broadcast: enabled
May 17 00:23:38.026657 kernel: sched_clock: Marking stable (1036004277, 149088864)->(1311334008, -126240867)
May 17 00:23:38.026666 kernel: registered taskstats version 1
May 17 00:23:38.026679 kernel: Loading compiled-in X.509 certificates
May 17 00:23:38.026693 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.90-flatcar: 85b8d1234ceca483cb3defc2030d93f7792663c9'
May 17 00:23:38.026707 kernel: Key type .fscrypt registered
May 17 00:23:38.026716 kernel: Key type fscrypt-provisioning registered
May 17 00:23:38.026724 kernel: ima: No TPM chip found, activating TPM-bypass!
May 17 00:23:38.026732 kernel: ima: Allocated hash algorithm: sha1
May 17 00:23:38.026744 kernel: ima: No architecture policies found
May 17 00:23:38.026752 kernel: clk: Disabling unused clocks
May 17 00:23:38.026760 kernel: Freeing unused kernel image (initmem) memory: 42872K
May 17 00:23:38.026768 kernel: Write protecting the kernel read-only data: 36864k
May 17 00:23:38.026777 kernel: Freeing unused kernel image (rodata/data gap) memory: 1836K
May 17 00:23:38.026804 kernel: Run /init as init process
May 17 00:23:38.026815 kernel: with arguments:
May 17 00:23:38.026824 kernel: /init
May 17 00:23:38.026832 kernel: with environment:
May 17 00:23:38.026844 kernel: HOME=/
May 17 00:23:38.026852 kernel: TERM=linux
May 17 00:23:38.026866 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
May 17 00:23:38.026908 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
May 17 00:23:38.026930 systemd[1]: Detected virtualization kvm.
May 17 00:23:38.026942 systemd[1]: Detected architecture x86-64.
May 17 00:23:38.026959 systemd[1]: Running in initrd.
May 17 00:23:38.026973 systemd[1]: No hostname configured, using default hostname.
May 17 00:23:38.026991 systemd[1]: Hostname set to .
May 17 00:23:38.027006 systemd[1]: Initializing machine ID from VM UUID.
May 17 00:23:38.027020 systemd[1]: Queued start job for default target initrd.target.
May 17 00:23:38.027034 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 17 00:23:38.027049 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 17 00:23:38.027064 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
May 17 00:23:38.027079 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
May 17 00:23:38.027088 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
May 17 00:23:38.027100 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
May 17 00:23:38.027111 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
May 17 00:23:38.027120 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
May 17 00:23:38.027130 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 17 00:23:38.027139 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
May 17 00:23:38.027147 systemd[1]: Reached target paths.target - Path Units.
May 17 00:23:38.027158 systemd[1]: Reached target slices.target - Slice Units.
May 17 00:23:38.027178 systemd[1]: Reached target swap.target - Swaps.
May 17 00:23:38.027193 systemd[1]: Reached target timers.target - Timer Units.
May 17 00:23:38.027213 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
May 17 00:23:38.027229 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
May 17 00:23:38.027244 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
May 17 00:23:38.027257 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
May 17 00:23:38.027266 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
May 17 00:23:38.027276 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
May 17 00:23:38.027285 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
May 17 00:23:38.027297 systemd[1]: Reached target sockets.target - Socket Units.
May 17 00:23:38.027312 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
May 17 00:23:38.027327 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
May 17 00:23:38.027341 systemd[1]: Finished network-cleanup.service - Network Cleanup.
May 17 00:23:38.027359 systemd[1]: Starting systemd-fsck-usr.service...
May 17 00:23:38.027373 systemd[1]: Starting systemd-journald.service - Journal Service...
May 17 00:23:38.027388 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
May 17 00:23:38.027422 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 17 00:23:38.027437 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
May 17 00:23:38.027451 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
May 17 00:23:38.027466 systemd[1]: Finished systemd-fsck-usr.service.
May 17 00:23:38.027533 systemd-journald[182]: Collecting audit messages is disabled.
May 17 00:23:38.027569 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
May 17 00:23:38.027589 systemd-journald[182]: Journal started
May 17 00:23:38.027623 systemd-journald[182]: Runtime Journal (/run/log/journal/4b538efe88c94cef9169ac80708b9721) is 4.9M, max 39.3M, 34.4M free.
May 17 00:23:38.027473 systemd-modules-load[183]: Inserted module 'overlay'
May 17 00:23:38.071206 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
May 17 00:23:38.071253 kernel: Bridge firewalling registered
May 17 00:23:38.069489 systemd-modules-load[183]: Inserted module 'br_netfilter'
May 17 00:23:38.080916 systemd[1]: Started systemd-journald.service - Journal Service.
May 17 00:23:38.081037 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
May 17 00:23:38.082857 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 17 00:23:38.089084 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
May 17 00:23:38.096293 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 17 00:23:38.101183 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
May 17 00:23:38.104367 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
May 17 00:23:38.109592 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
May 17 00:23:38.138066 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
May 17 00:23:38.140341 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 17 00:23:38.146978 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
May 17 00:23:38.156275 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
May 17 00:23:38.157244 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 17 00:23:38.167490 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
May 17 00:23:38.191905 dracut-cmdline[218]: dracut-dracut-053
May 17 00:23:38.193261 dracut-cmdline[218]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=6b60288baeea1613a76a6f06a8f0e8edc178eae4857ce00eac42d48e92ed015e
May 17 00:23:38.207956 systemd-resolved[217]: Positive Trust Anchors:
May 17 00:23:38.208826 systemd-resolved[217]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
May 17 00:23:38.208867 systemd-resolved[217]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
May 17 00:23:38.216000 systemd-resolved[217]: Defaulting to hostname 'linux'.
May 17 00:23:38.218937 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
May 17 00:23:38.219689 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
May 17 00:23:38.295933 kernel: SCSI subsystem initialized
May 17 00:23:38.308926 kernel: Loading iSCSI transport class v2.0-870.
May 17 00:23:38.322930 kernel: iscsi: registered transport (tcp)
May 17 00:23:38.350936 kernel: iscsi: registered transport (qla4xxx)
May 17 00:23:38.351054 kernel: QLogic iSCSI HBA Driver
May 17 00:23:38.408290 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
May 17 00:23:38.417236 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
May 17 00:23:38.450163 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
May 17 00:23:38.450265 kernel: device-mapper: uevent: version 1.0.3
May 17 00:23:38.450280 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
May 17 00:23:38.501951 kernel: raid6: avx2x4 gen() 24452 MB/s
May 17 00:23:38.518936 kernel: raid6: avx2x2 gen() 25138 MB/s
May 17 00:23:38.536231 kernel: raid6: avx2x1 gen() 16618 MB/s
May 17 00:23:38.536336 kernel: raid6: using algorithm avx2x2 gen() 25138 MB/s
May 17 00:23:38.555951 kernel: raid6: .... xor() 17542 MB/s, rmw enabled
May 17 00:23:38.556070 kernel: raid6: using avx2x2 recovery algorithm
May 17 00:23:38.581948 kernel: xor: automatically using best checksumming function avx
May 17 00:23:38.759975 kernel: Btrfs loaded, zoned=no, fsverity=no
May 17 00:23:38.777783 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
May 17 00:23:38.785359 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 17 00:23:38.813569 systemd-udevd[401]: Using default interface naming scheme 'v255'.
May 17 00:23:38.819719 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 17 00:23:38.831130 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
May 17 00:23:38.852313 dracut-pre-trigger[407]: rd.md=0: removing MD RAID activation
May 17 00:23:38.898522 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
May 17 00:23:38.915268 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
May 17 00:23:38.984073 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
May 17 00:23:38.993178 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
May 17 00:23:39.022985 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
May 17 00:23:39.024810 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
May 17 00:23:39.026147 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
May 17 00:23:39.026806 systemd[1]: Reached target remote-fs.target - Remote File Systems.
May 17 00:23:39.039345 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
May 17 00:23:39.061323 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
May 17 00:23:39.103013 kernel: virtio_blk virtio4: 1/0/0 default/read/poll queues
May 17 00:23:39.109908 kernel: scsi host0: Virtio SCSI HBA
May 17 00:23:39.125013 kernel: libata version 3.00 loaded.
May 17 00:23:39.139998 kernel: virtio_blk virtio4: [vda] 125829120 512-byte logical blocks (64.4 GB/60.0 GiB)
May 17 00:23:39.148016 kernel: ata_piix 0000:00:01.1: version 2.13
May 17 00:23:39.154912 kernel: ACPI: bus type USB registered
May 17 00:23:39.156903 kernel: scsi host1: ata_piix
May 17 00:23:39.162564 kernel: scsi host2: ata_piix
May 17 00:23:39.163046 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc1e0 irq 14
May 17 00:23:39.163073 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc1e8 irq 15
May 17 00:23:39.165928 kernel: cryptd: max_cpu_qlen set to 1000
May 17 00:23:39.168543 kernel: usbcore: registered new interface driver usbfs
May 17 00:23:39.168624 kernel: usbcore: registered new interface driver hub
May 17 00:23:39.176913 kernel: usbcore: registered new device driver usb
May 17 00:23:39.177022 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
May 17 00:23:39.177046 kernel: GPT:9289727 != 125829119
May 17 00:23:39.177064 kernel: GPT:Alternate GPT header not at the end of the disk.
May 17 00:23:39.177098 kernel: GPT:9289727 != 125829119
May 17 00:23:39.177116 kernel: GPT: Use GNU Parted to correct GPT errors.
May 17 00:23:39.177133 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 17 00:23:39.178120 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
May 17 00:23:39.178612 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 17 00:23:39.185401 kernel: virtio_blk virtio5: 1/0/0 default/read/poll queues
May 17 00:23:39.185691 kernel: virtio_blk virtio5: [vdb] 932 512-byte logical blocks (477 kB/466 KiB)
May 17 00:23:39.185237 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 17 00:23:39.185836 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 17 00:23:39.186089 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 17 00:23:39.187211 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
May 17 00:23:39.207373 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 17 00:23:39.271156 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 17 00:23:39.281205 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
May 17 00:23:39.304483 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 17 00:23:39.356068 kernel: AVX2 version of gcm_enc/dec engaged.
May 17 00:23:39.362916 kernel: AES CTR mode by8 optimization enabled
May 17 00:23:39.387981 kernel: BTRFS: device fsid 7f88d479-6686-439c-8052-b96f0a9d77bc devid 1 transid 38 /dev/vda3 scanned by (udev-worker) (447)
May 17 00:23:39.392911 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (456)
May 17 00:23:39.399349 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
May 17 00:23:39.424981 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132.
May 17 00:23:39.425826 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
May 17 00:23:39.434039 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
May 17 00:23:39.439811 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
May 17 00:23:39.449210 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
May 17 00:23:39.456989 kernel: uhci_hcd 0000:00:01.2: UHCI Host Controller
May 17 00:23:39.460260 kernel: uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1
May 17 00:23:39.460690 kernel: uhci_hcd 0000:00:01.2: detected 2 ports
May 17 00:23:39.460870 kernel: uhci_hcd 0000:00:01.2: irq 11, io port 0x0000c180
May 17 00:23:39.462459 kernel: hub 1-0:1.0: USB hub found
May 17 00:23:39.464093 kernel: hub 1-0:1.0: 2 ports detected
May 17 00:23:39.465845 disk-uuid[550]: Primary Header is updated.
May 17 00:23:39.465845 disk-uuid[550]: Secondary Entries is updated.
May 17 00:23:39.465845 disk-uuid[550]: Secondary Header is updated.
May 17 00:23:39.470908 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 17 00:23:39.477923 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 17 00:23:40.480984 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
May 17 00:23:40.482162 disk-uuid[551]: The operation has completed successfully.
May 17 00:23:40.537296 systemd[1]: disk-uuid.service: Deactivated successfully.
May 17 00:23:40.537425 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
May 17 00:23:40.542345 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
May 17 00:23:40.564770 sh[562]: Success
May 17 00:23:40.582255 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
May 17 00:23:40.659832 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
May 17 00:23:40.670310 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
May 17 00:23:40.674309 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
May 17 00:23:40.709929 kernel: BTRFS info (device dm-0): first mount of filesystem 7f88d479-6686-439c-8052-b96f0a9d77bc
May 17 00:23:40.710031 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
May 17 00:23:40.710062 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
May 17 00:23:40.711934 kernel: BTRFS info (device dm-0): disabling log replay at mount time
May 17 00:23:40.713110 kernel: BTRFS info (device dm-0): using free space tree
May 17 00:23:40.722339 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
May 17 00:23:40.723838 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
May 17 00:23:40.731167 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
May 17 00:23:40.733671 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
May 17 00:23:40.747257 kernel: BTRFS info (device vda6): first mount of filesystem a013fe34-315a-4c90-9ca1-aace1df6c4ac
May 17 00:23:40.750115 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
May 17 00:23:40.750193 kernel: BTRFS info (device vda6): using free space tree
May 17 00:23:40.754910 kernel: BTRFS info (device vda6): auto enabling async discard
May 17 00:23:40.768645 systemd[1]: mnt-oem.mount: Deactivated successfully.
May 17 00:23:40.771785 kernel: BTRFS info (device vda6): last unmount of filesystem a013fe34-315a-4c90-9ca1-aace1df6c4ac
May 17 00:23:40.780630 systemd[1]: Finished ignition-setup.service - Ignition (setup).
May 17 00:23:40.789235 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
May 17 00:23:40.885068 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
May 17 00:23:40.909679 systemd[1]: Starting systemd-networkd.service - Network Configuration...
May 17 00:23:40.944971 systemd-networkd[748]: lo: Link UP
May 17 00:23:40.944982 systemd-networkd[748]: lo: Gained carrier
May 17 00:23:40.949785 systemd-networkd[748]: Enumeration completed
May 17 00:23:40.949990 systemd[1]: Started systemd-networkd.service - Network Configuration.
May 17 00:23:40.950352 systemd-networkd[748]: eth0: found matching network '/usr/lib/systemd/network/yy-digitalocean.network', based on potentially unpredictable interface name.
May 17 00:23:40.950356 systemd-networkd[748]: eth0: Configuring with /usr/lib/systemd/network/yy-digitalocean.network.
May 17 00:23:40.956263 ignition[654]: Ignition 2.19.0
May 17 00:23:40.951267 systemd-networkd[748]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 17 00:23:40.956269 ignition[654]: Stage: fetch-offline
May 17 00:23:40.951271 systemd-networkd[748]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network.
May 17 00:23:40.956311 ignition[654]: no configs at "/usr/lib/ignition/base.d"
May 17 00:23:40.952208 systemd-networkd[748]: eth0: Link UP
May 17 00:23:40.956322 ignition[654]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
May 17 00:23:40.952213 systemd-networkd[748]: eth0: Gained carrier
May 17 00:23:40.956438 ignition[654]: parsed url from cmdline: ""
May 17 00:23:40.952223 systemd-networkd[748]: eth0: found matching network '/usr/lib/systemd/network/yy-digitalocean.network', based on potentially unpredictable interface name.
May 17 00:23:40.956443 ignition[654]: no config URL provided
May 17 00:23:40.952759 systemd[1]: Reached target network.target - Network.
May 17 00:23:40.956449 ignition[654]: reading system config file "/usr/lib/ignition/user.ign"
May 17 00:23:40.955376 systemd-networkd[748]: eth1: Link UP
May 17 00:23:40.956460 ignition[654]: no config at "/usr/lib/ignition/user.ign"
May 17 00:23:40.955381 systemd-networkd[748]: eth1: Gained carrier
May 17 00:23:40.956467 ignition[654]: failed to fetch config: resource requires networking
May 17 00:23:40.955394 systemd-networkd[748]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
May 17 00:23:40.956680 ignition[654]: Ignition finished successfully
May 17 00:23:40.958665 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
May 17 00:23:40.967200 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
May 17 00:23:40.971378 systemd-networkd[748]: eth0: DHCPv4 address 143.198.108.0/20, gateway 143.198.96.1 acquired from 169.254.169.253
May 17 00:23:40.976998 systemd-networkd[748]: eth1: DHCPv4 address 10.124.0.30/20 acquired from 169.254.169.253
May 17 00:23:40.997544 ignition[756]: Ignition 2.19.0
May 17 00:23:40.997558 ignition[756]: Stage: fetch
May 17 00:23:40.997777 ignition[756]: no configs at "/usr/lib/ignition/base.d"
May 17 00:23:40.997789 ignition[756]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
May 17 00:23:40.999011 ignition[756]: parsed url from cmdline: ""
May 17 00:23:40.999017 ignition[756]: no config URL provided
May 17 00:23:40.999028 ignition[756]: reading system config file "/usr/lib/ignition/user.ign"
May 17 00:23:40.999048 ignition[756]: no config at "/usr/lib/ignition/user.ign"
May 17 00:23:40.999074 ignition[756]: GET http://169.254.169.254/metadata/v1/user-data: attempt #1
May 17 00:23:41.031120 ignition[756]: GET result: OK
May 17 00:23:41.031261 ignition[756]: parsing config with SHA512: c47d09ea77ea4542181b889b03458634e9fa34cefb6695d217b3afc8463e599833d1199d062c79ef2f132ed1c132e6dfb1e15d8409c89afbdf0714f5d7add37e
May 17 00:23:41.036188 unknown[756]: fetched base config from "system"
May 17 00:23:41.036209 unknown[756]: fetched base config from "system"
May 17 00:23:41.036680 ignition[756]: fetch: fetch complete
May 17 00:23:41.036218 unknown[756]: fetched user config from "digitalocean"
May 17 00:23:41.036688 ignition[756]: fetch: fetch passed
May 17 00:23:41.039066 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
May 17 00:23:41.036764 ignition[756]: Ignition finished successfully
May 17 00:23:41.054131 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
May 17 00:23:41.078747 ignition[764]: Ignition 2.19.0
May 17 00:23:41.078766 ignition[764]: Stage: kargs
May 17 00:23:41.081107 ignition[764]: no configs at "/usr/lib/ignition/base.d"
May 17 00:23:41.081139 ignition[764]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
May 17 00:23:41.082070 ignition[764]: kargs: kargs passed
May 17 00:23:41.082133 ignition[764]: Ignition finished successfully
May 17 00:23:41.085543 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
May 17 00:23:41.094369 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
May 17 00:23:41.112930 ignition[771]: Ignition 2.19.0
May 17 00:23:41.113920 ignition[771]: Stage: disks
May 17 00:23:41.114195 ignition[771]: no configs at "/usr/lib/ignition/base.d"
May 17 00:23:41.114209 ignition[771]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
May 17 00:23:41.115312 ignition[771]: disks: disks passed
May 17 00:23:41.118037 systemd[1]: Finished ignition-disks.service - Ignition (disks).
May 17 00:23:41.115407 ignition[771]: Ignition finished successfully
May 17 00:23:41.123836 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
May 17 00:23:41.125105 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
May 17 00:23:41.126439 systemd[1]: Reached target local-fs.target - Local File Systems.
May 17 00:23:41.127719 systemd[1]: Reached target sysinit.target - System Initialization.
May 17 00:23:41.128783 systemd[1]: Reached target basic.target - Basic System.
May 17 00:23:41.137151 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
May 17 00:23:41.155757 systemd-fsck[779]: ROOT: clean, 14/553520 files, 52654/553472 blocks
May 17 00:23:41.159189 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
May 17 00:23:41.166157 systemd[1]: Mounting sysroot.mount - /sysroot...
May 17 00:23:41.285902 kernel: EXT4-fs (vda9): mounted filesystem 278698a4-82b6-49b4-b6df-f7999ed4e35e r/w with ordered data mode. Quota mode: none.
May 17 00:23:41.286506 systemd[1]: Mounted sysroot.mount - /sysroot.
May 17 00:23:41.287795 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
May 17 00:23:41.297071 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
May 17 00:23:41.300733 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
May 17 00:23:41.305142 systemd[1]: Starting flatcar-digitalocean-network.service - Flatcar DigitalOcean Network Agent...
May 17 00:23:41.317493 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent...
May 17 00:23:41.327537 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (787)
May 17 00:23:41.327583 kernel: BTRFS info (device vda6): first mount of filesystem a013fe34-315a-4c90-9ca1-aace1df6c4ac
May 17 00:23:41.327595 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
May 17 00:23:41.327606 kernel: BTRFS info (device vda6): using free space tree
May 17 00:23:41.327618 kernel: BTRFS info (device vda6): auto enabling async discard
May 17 00:23:41.318457 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
May 17 00:23:41.318498 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
May 17 00:23:41.330712 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
May 17 00:23:41.345500 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
May 17 00:23:41.350271 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
May 17 00:23:41.420452 coreos-metadata[789]: May 17 00:23:41.419 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1
May 17 00:23:41.422386 coreos-metadata[790]: May 17 00:23:41.420 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1
May 17 00:23:41.427190 initrd-setup-root[817]: cut: /sysroot/etc/passwd: No such file or directory
May 17 00:23:41.433105 initrd-setup-root[824]: cut: /sysroot/etc/group: No such file or directory
May 17 00:23:41.436017 coreos-metadata[789]: May 17 00:23:41.434 INFO Fetch successful
May 17 00:23:41.438130 coreos-metadata[790]: May 17 00:23:41.434 INFO Fetch successful
May 17 00:23:41.441227 coreos-metadata[790]: May 17 00:23:41.441 INFO wrote hostname ci-4081.3.3-n-d1569f5c4a to /sysroot/etc/hostname
May 17 00:23:41.443243 systemd[1]: flatcar-digitalocean-network.service: Deactivated successfully.
May 17 00:23:41.443380 systemd[1]: Finished flatcar-digitalocean-network.service - Flatcar DigitalOcean Network Agent.
May 17 00:23:41.444431 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
May 17 00:23:41.449456 initrd-setup-root[832]: cut: /sysroot/etc/shadow: No such file or directory
May 17 00:23:41.454801 initrd-setup-root[840]: cut: /sysroot/etc/gshadow: No such file or directory
May 17 00:23:41.573492 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
May 17 00:23:41.581181 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
May 17 00:23:41.586049 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
May 17 00:23:41.596943 kernel: BTRFS info (device vda6): last unmount of filesystem a013fe34-315a-4c90-9ca1-aace1df6c4ac
May 17 00:23:41.628512 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
May 17 00:23:41.633451 ignition[908]: INFO : Ignition 2.19.0
May 17 00:23:41.633451 ignition[908]: INFO : Stage: mount
May 17 00:23:41.635325 ignition[908]: INFO : no configs at "/usr/lib/ignition/base.d"
May 17 00:23:41.635325 ignition[908]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
May 17 00:23:41.635325 ignition[908]: INFO : mount: mount passed
May 17 00:23:41.635325 ignition[908]: INFO : Ignition finished successfully
May 17 00:23:41.635928 systemd[1]: Finished ignition-mount.service - Ignition (mount).
May 17 00:23:41.649320 systemd[1]: Starting ignition-files.service - Ignition (files)...
May 17 00:23:41.706694 systemd[1]: sysroot-oem.mount: Deactivated successfully.
May 17 00:23:41.715269 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
May 17 00:23:41.724933 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/vda6 scanned by mount (920)
May 17 00:23:41.728703 kernel: BTRFS info (device vda6): first mount of filesystem a013fe34-315a-4c90-9ca1-aace1df6c4ac
May 17 00:23:41.728788 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm
May 17 00:23:41.728818 kernel: BTRFS info (device vda6): using free space tree
May 17 00:23:41.734923 kernel: BTRFS info (device vda6): auto enabling async discard
May 17 00:23:41.735855 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
May 17 00:23:41.773220 ignition[937]: INFO : Ignition 2.19.0
May 17 00:23:41.773220 ignition[937]: INFO : Stage: files
May 17 00:23:41.774735 ignition[937]: INFO : no configs at "/usr/lib/ignition/base.d"
May 17 00:23:41.774735 ignition[937]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
May 17 00:23:41.776669 ignition[937]: DEBUG : files: compiled without relabeling support, skipping
May 17 00:23:41.777728 ignition[937]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
May 17 00:23:41.777728 ignition[937]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
May 17 00:23:41.781594 ignition[937]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
May 17 00:23:41.782531 ignition[937]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
May 17 00:23:41.783968 ignition[937]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
May 17 00:23:41.783725 unknown[937]: wrote ssh authorized keys file for user: core
May 17 00:23:41.786865 ignition[937]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1"
May 17 00:23:41.787974 ignition[937]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1"
May 17 00:23:41.787974 ignition[937]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
May 17 00:23:41.787974 ignition[937]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
May 17 00:23:41.787974 ignition[937]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/etc/flatcar/update.conf"
May 17 00:23:41.792265 ignition[937]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/etc/flatcar/update.conf"
May 17 00:23:41.792265 ignition[937]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
May 17 00:23:41.792265 ignition[937]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
May 17 00:23:41.792265 ignition[937]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
May 17 00:23:41.792265 ignition[937]: INFO : files: createFilesystemsFiles: createFiles: op(7): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-x86-64.raw: attempt #1
May 17 00:23:42.000204 systemd-networkd[748]: eth0: Gained IPv6LL
May 17 00:23:42.477598 ignition[937]: INFO : files: createFilesystemsFiles: createFiles: op(7): GET result: OK
May 17 00:23:42.577122 systemd-networkd[748]: eth1: Gained IPv6LL
May 17 00:23:42.791945 ignition[937]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
May 17 00:23:42.791945 ignition[937]: INFO : files: op(8): [started] processing unit "containerd.service"
May 17 00:23:42.795256 ignition[937]: INFO : files: op(8): op(9): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
May 17 00:23:42.795256 ignition[937]: INFO : files: op(8): op(9): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
May 17 00:23:42.795256 ignition[937]: INFO : files: op(8): [finished] processing unit "containerd.service"
May 17 00:23:42.795256 ignition[937]: INFO : files: createResultFile: createFiles: op(a): [started] writing file "/sysroot/etc/.ignition-result.json"
May 17 00:23:42.795256 ignition[937]: INFO : files: createResultFile: createFiles: op(a): [finished] writing file "/sysroot/etc/.ignition-result.json"
May 17 00:23:42.795256 ignition[937]: INFO : files: files passed
May 17 00:23:42.795256 ignition[937]: INFO : Ignition finished successfully
May 17 00:23:42.795487 systemd[1]: Finished ignition-files.service - Ignition (files).
May 17 00:23:42.804263 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
May 17 00:23:42.811139 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
May 17 00:23:42.813467 systemd[1]: ignition-quench.service: Deactivated successfully.
May 17 00:23:42.813590 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
May 17 00:23:42.829513 initrd-setup-root-after-ignition[965]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
May 17 00:23:42.829513 initrd-setup-root-after-ignition[965]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
May 17 00:23:42.833478 initrd-setup-root-after-ignition[969]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
May 17 00:23:42.836411 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
May 17 00:23:42.837989 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
May 17 00:23:42.844190 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
May 17 00:23:42.894955 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
May 17 00:23:42.895138 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
May 17 00:23:42.896673 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
May 17 00:23:42.897656 systemd[1]: Reached target initrd.target - Initrd Default Target.
May 17 00:23:42.898824 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
May 17 00:23:42.905227 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
May 17 00:23:42.929555 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
May 17 00:23:42.935299 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
May 17 00:23:42.961044 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
May 17 00:23:42.962816 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
May 17 00:23:42.964552 systemd[1]: Stopped target timers.target - Timer Units.
May 17 00:23:42.965832 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
May 17 00:23:42.966071 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
May 17 00:23:42.967941 systemd[1]: Stopped target initrd.target - Initrd Default Target.
May 17 00:23:42.968533 systemd[1]: Stopped target basic.target - Basic System.
May 17 00:23:42.969626 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
May 17 00:23:42.970629 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
May 17 00:23:42.972077 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
May 17 00:23:42.973339 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
May 17 00:23:42.974624 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
May 17 00:23:42.976118 systemd[1]: Stopped target sysinit.target - System Initialization.
May 17 00:23:42.977572 systemd[1]: Stopped target local-fs.target - Local File Systems.
May 17 00:23:42.978976 systemd[1]: Stopped target swap.target - Swaps.
May 17 00:23:42.980138 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
May 17 00:23:42.980360 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
May 17 00:23:42.981928 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
May 17 00:23:42.983602 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 17 00:23:42.984990 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
May 17 00:23:42.985158 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 17 00:23:42.986680 systemd[1]: dracut-initqueue.service: Deactivated successfully.
May 17 00:23:42.986910 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
May 17 00:23:42.988681 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
May 17 00:23:42.988920 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
May 17 00:23:42.990288 systemd[1]: ignition-files.service: Deactivated successfully.
May 17 00:23:42.990465 systemd[1]: Stopped ignition-files.service - Ignition (files).
May 17 00:23:42.991482 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
May 17 00:23:42.991734 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent.
May 17 00:23:43.006019 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
May 17 00:23:43.006683 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
May 17 00:23:43.006982 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
May 17 00:23:43.011227 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
May 17 00:23:43.012542 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
May 17 00:23:43.012785 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
May 17 00:23:43.014094 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
May 17 00:23:43.014210 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
May 17 00:23:43.021667 systemd[1]: initrd-cleanup.service: Deactivated successfully.
May 17 00:23:43.021786 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
May 17 00:23:43.032909 ignition[989]: INFO : Ignition 2.19.0
May 17 00:23:43.032909 ignition[989]: INFO : Stage: umount
May 17 00:23:43.032909 ignition[989]: INFO : no configs at "/usr/lib/ignition/base.d"
May 17 00:23:43.032909 ignition[989]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean"
May 17 00:23:43.038845 ignition[989]: INFO : umount: umount passed
May 17 00:23:43.038845 ignition[989]: INFO : Ignition finished successfully
May 17 00:23:43.041480 systemd[1]: ignition-mount.service: Deactivated successfully.
May 17 00:23:43.041619 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
May 17 00:23:43.044799 systemd[1]: ignition-disks.service: Deactivated successfully.
May 17 00:23:43.044869 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
May 17 00:23:43.046858 systemd[1]: ignition-kargs.service: Deactivated successfully.
May 17 00:23:43.046953 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
May 17 00:23:43.048050 systemd[1]: ignition-fetch.service: Deactivated successfully.
May 17 00:23:43.048117 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
May 17 00:23:43.049519 systemd[1]: Stopped target network.target - Network.
May 17 00:23:43.050820 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
May 17 00:23:43.064048 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
May 17 00:23:43.071225 systemd[1]: Stopped target paths.target - Path Units.
May 17 00:23:43.071946 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
May 17 00:23:43.078048 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 17 00:23:43.089157 systemd[1]: Stopped target slices.target - Slice Units.
May 17 00:23:43.089700 systemd[1]: Stopped target sockets.target - Socket Units.
May 17 00:23:43.095842 systemd[1]: iscsid.socket: Deactivated successfully.
May 17 00:23:43.095975 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
May 17 00:23:43.096913 systemd[1]: iscsiuio.socket: Deactivated successfully.
May 17 00:23:43.096981 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
May 17 00:23:43.098150 systemd[1]: ignition-setup.service: Deactivated successfully.
May 17 00:23:43.098247 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
May 17 00:23:43.099465 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
May 17 00:23:43.099553 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
May 17 00:23:43.120610 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
May 17 00:23:43.135160 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
May 17 00:23:43.139602 systemd[1]: sysroot-boot.mount: Deactivated successfully.
May 17 00:23:43.141262 systemd-networkd[748]: eth0: DHCPv6 lease lost
May 17 00:23:43.141685 systemd[1]: sysroot-boot.service: Deactivated successfully.
May 17 00:23:43.141805 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
May 17 00:23:43.143053 systemd-networkd[748]: eth1: DHCPv6 lease lost
May 17 00:23:43.144510 systemd[1]: systemd-networkd.service: Deactivated successfully.
May 17 00:23:43.144636 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
May 17 00:23:43.147202 systemd[1]: systemd-networkd.socket: Deactivated successfully.
May 17 00:23:43.147291 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
May 17 00:23:43.150583 systemd[1]: initrd-setup-root.service: Deactivated successfully.
May 17 00:23:43.150695 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
May 17 00:23:43.157127 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
May 17 00:23:43.158194 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
May 17 00:23:43.158296 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
May 17 00:23:43.160864 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 17 00:23:43.166398 systemd[1]: systemd-resolved.service: Deactivated successfully.
May 17 00:23:43.166533 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
May 17 00:23:43.175700 systemd[1]: systemd-udevd.service: Deactivated successfully.
May 17 00:23:43.176729 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 17 00:23:43.185186 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
May 17 00:23:43.185365 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
May 17 00:23:43.186721 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
May 17 00:23:43.186827 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
May 17 00:23:43.188105 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
May 17 00:23:43.188197 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
May 17 00:23:43.190171 systemd[1]: dracut-cmdline.service: Deactivated successfully.
May 17 00:23:43.190261 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
May 17 00:23:43.191561 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
May 17 00:23:43.191681 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
May 17 00:23:43.199376 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
May 17 00:23:43.202351 systemd[1]: systemd-sysctl.service: Deactivated successfully.
May 17 00:23:43.202496 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
May 17 00:23:43.204025 systemd[1]: systemd-modules-load.service: Deactivated successfully.
May 17 00:23:43.204141 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
May 17 00:23:43.206143 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
May 17 00:23:43.206238 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 17 00:23:43.207554 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
May 17 00:23:43.207624 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
May 17 00:23:43.209078 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 17 00:23:43.209177 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 17 00:23:43.217579 systemd[1]: network-cleanup.service: Deactivated successfully.
May 17 00:23:43.217711 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
May 17 00:23:43.221513 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
May 17 00:23:43.221698 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
May 17 00:23:43.224319 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
May 17 00:23:43.234280 systemd[1]: Starting initrd-switch-root.service - Switch Root...
May 17 00:23:43.245189 systemd[1]: Switching root.
May 17 00:23:43.320720 systemd-journald[182]: Journal stopped
May 17 00:23:44.647415 systemd-journald[182]: Received SIGTERM from PID 1 (systemd).
May 17 00:23:44.647502 kernel: SELinux: policy capability network_peer_controls=1
May 17 00:23:44.647519 kernel: SELinux: policy capability open_perms=1
May 17 00:23:44.647535 kernel: SELinux: policy capability extended_socket_class=1
May 17 00:23:44.647551 kernel: SELinux: policy capability always_check_network=0
May 17 00:23:44.647562 kernel: SELinux: policy capability cgroup_seclabel=1
May 17 00:23:44.647579 kernel: SELinux: policy capability nnp_nosuid_transition=1
May 17 00:23:44.647591 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
May 17 00:23:44.647601 kernel: SELinux: policy capability ioctl_skip_cloexec=0
May 17 00:23:44.647634 kernel: audit: type=1403 audit(1747441423.546:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
May 17 00:23:44.647653 systemd[1]: Successfully loaded SELinux policy in 46.958ms.
May 17 00:23:44.647675 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 12.631ms.
May 17 00:23:44.647689 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
May 17 00:23:44.647703 systemd[1]: Detected virtualization kvm.
May 17 00:23:44.647716 systemd[1]: Detected architecture x86-64.
May 17 00:23:44.647730 systemd[1]: Detected first boot.
May 17 00:23:44.647742 systemd[1]: Hostname set to .
May 17 00:23:44.647754 systemd[1]: Initializing machine ID from VM UUID.
May 17 00:23:44.647772 zram_generator::config[1048]: No configuration found.
May 17 00:23:44.647792 systemd[1]: Populated /etc with preset unit settings.
May 17 00:23:44.647810 systemd[1]: Queued start job for default target multi-user.target.
May 17 00:23:44.647830 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
May 17 00:23:44.647847 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
May 17 00:23:44.647863 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
May 17 00:23:44.647899 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
May 17 00:23:44.647913 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
May 17 00:23:44.647925 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
May 17 00:23:44.647937 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
May 17 00:23:44.647949 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
May 17 00:23:44.647967 systemd[1]: Created slice user.slice - User and Session Slice.
May 17 00:23:44.647979 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
May 17 00:23:44.647995 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 17 00:23:44.648010 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
May 17 00:23:44.648029 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
May 17 00:23:44.648049 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
May 17 00:23:44.648073 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
May 17 00:23:44.648092 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
May 17 00:23:44.648112 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
May 17 00:23:44.648132 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
May 17 00:23:44.648146 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
May 17 00:23:44.648162 systemd[1]: Reached target remote-fs.target - Remote File Systems.
May 17 00:23:44.648175 systemd[1]: Reached target slices.target - Slice Units.
May 17 00:23:44.648186 systemd[1]: Reached target swap.target - Swaps.
May 17 00:23:44.648198 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
May 17 00:23:44.648211 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
May 17 00:23:44.648223 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
May 17 00:23:44.648235 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
May 17 00:23:44.648250 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
May 17 00:23:44.648262 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
May 17 00:23:44.648274 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
May 17 00:23:44.648285 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
May 17 00:23:44.648297 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
May 17 00:23:44.648309 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
May 17 00:23:44.648322 systemd[1]: Mounting media.mount - External Media Directory...
May 17 00:23:44.648334 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 17 00:23:44.648346 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
May 17 00:23:44.648360 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
May 17 00:23:44.648373 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
May 17 00:23:44.648385 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
May 17 00:23:44.648396 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 17 00:23:44.648408 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
May 17 00:23:44.648419 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
May 17 00:23:44.648450 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 17 00:23:44.648461 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
May 17 00:23:44.648473 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 17 00:23:44.648488 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
May 17 00:23:44.648501 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 17 00:23:44.648513 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
May 17 00:23:44.648525 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
May 17 00:23:44.648538 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.)
May 17 00:23:44.648550 systemd[1]: Starting systemd-journald.service - Journal Service...
May 17 00:23:44.648561 kernel: loop: module loaded
May 17 00:23:44.648573 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
May 17 00:23:44.648587 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
May 17 00:23:44.648599 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
May 17 00:23:44.648610 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
May 17 00:23:44.648623 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 17 00:23:44.648634 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
May 17 00:23:44.648646 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
May 17 00:23:44.648657 systemd[1]: Mounted media.mount - External Media Directory.
May 17 00:23:44.648669 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
May 17 00:23:44.648683 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
May 17 00:23:44.648696 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
May 17 00:23:44.648707 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
May 17 00:23:44.648719 systemd[1]: modprobe@configfs.service: Deactivated successfully.
May 17 00:23:44.648767 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
May 17 00:23:44.648787 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 17 00:23:44.648803 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 17 00:23:44.648822 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 17 00:23:44.648848 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 17 00:23:44.648866 kernel: fuse: init (API version 7.39)
May 17 00:23:44.654974 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 17 00:23:44.655019 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 17 00:23:44.655033 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
May 17 00:23:44.655047 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
May 17 00:23:44.655069 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
May 17 00:23:44.655084 systemd[1]: Reached target network-pre.target - Preparation for Network.
May 17 00:23:44.655097 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
May 17 00:23:44.655160 systemd-journald[1135]: Collecting audit messages is disabled.
May 17 00:23:44.655211 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
May 17 00:23:44.655231 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
May 17 00:23:44.655244 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 17 00:23:44.655258 systemd-journald[1135]: Journal started
May 17 00:23:44.655282 systemd-journald[1135]: Runtime Journal (/run/log/journal/4b538efe88c94cef9169ac80708b9721) is 4.9M, max 39.3M, 34.4M free.
May 17 00:23:44.660503 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
May 17 00:23:44.670915 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
May 17 00:23:44.682818 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
May 17 00:23:44.696673 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
May 17 00:23:44.696810 kernel: ACPI: bus type drm_connector registered
May 17 00:23:44.713927 systemd[1]: Started systemd-journald.service - Journal Service.
May 17 00:23:44.717450 systemd[1]: modprobe@drm.service: Deactivated successfully.
May 17 00:23:44.717690 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
May 17 00:23:44.719448 systemd[1]: modprobe@fuse.service: Deactivated successfully.
May 17 00:23:44.722154 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
May 17 00:23:44.724200 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
May 17 00:23:44.775559 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
May 17 00:23:44.789402 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
May 17 00:23:44.799024 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
May 17 00:23:44.807408 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
May 17 00:23:44.809575 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
May 17 00:23:44.814703 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
May 17 00:23:44.823527 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
May 17 00:23:44.835921 systemd-journald[1135]: Time spent on flushing to /var/log/journal/4b538efe88c94cef9169ac80708b9721 is 37.484ms for 960 entries.
May 17 00:23:44.835921 systemd-journald[1135]: System Journal (/var/log/journal/4b538efe88c94cef9169ac80708b9721) is 8.0M, max 195.6M, 187.6M free.
May 17 00:23:44.888008 systemd-journald[1135]: Received client request to flush runtime journal.
May 17 00:23:44.849189 systemd-tmpfiles[1166]: ACLs are not supported, ignoring.
May 17 00:23:44.849204 systemd-tmpfiles[1166]: ACLs are not supported, ignoring.
May 17 00:23:44.863544 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
May 17 00:23:44.867678 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
May 17 00:23:44.881401 systemd[1]: Starting systemd-sysusers.service - Create System Users...
May 17 00:23:44.886484 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
May 17 00:23:44.895012 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
May 17 00:23:44.930089 systemd[1]: Finished systemd-sysusers.service - Create System Users.
May 17 00:23:44.945183 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
May 17 00:23:44.950332 udevadm[1208]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
May 17 00:23:44.984022 systemd-tmpfiles[1214]: ACLs are not supported, ignoring.
May 17 00:23:44.984438 systemd-tmpfiles[1214]: ACLs are not supported, ignoring.
May 17 00:23:44.992933 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
May 17 00:23:45.492366 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
May 17 00:23:45.509480 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
May 17 00:23:45.545786 systemd-udevd[1220]: Using default interface naming scheme 'v255'.
May 17 00:23:45.573617 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
May 17 00:23:45.581965 systemd[1]: Starting systemd-networkd.service - Network Configuration...
May 17 00:23:45.615424 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
May 17 00:23:45.694941 systemd[1]: Found device dev-ttyS0.device - /dev/ttyS0.
May 17 00:23:45.710104 systemd[1]: Started systemd-userdbd.service - User Database Manager.
May 17 00:23:45.774236 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 17 00:23:45.774434 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 17 00:23:45.783174 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 17 00:23:45.796716 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 17 00:23:45.803145 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 17 00:23:45.804067 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
May 17 00:23:45.804148 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
May 17 00:23:45.804219 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 17 00:23:45.821607 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 17 00:23:45.821958 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 17 00:23:45.827656 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 17 00:23:45.828106 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 17 00:23:45.842439 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 17 00:23:45.846061 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 17 00:23:45.847816 systemd-networkd[1224]: lo: Link UP
May 17 00:23:45.847833 systemd-networkd[1224]: lo: Gained carrier
May 17 00:23:45.848356 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 17 00:23:45.853073 systemd-networkd[1224]: Enumeration completed
May 17 00:23:45.853763 systemd-networkd[1224]: eth0: Configuring with /run/systemd/network/10-de:e0:ae:e9:21:a5.network.
May 17 00:23:45.854117 systemd[1]: Started systemd-networkd.service - Network Configuration.
May 17 00:23:45.854776 systemd-networkd[1224]: eth1: Configuring with /run/systemd/network/10-ce:45:a8:4d:d4:a2.network.
May 17 00:23:45.856040 systemd-networkd[1224]: eth0: Link UP
May 17 00:23:45.856050 systemd-networkd[1224]: eth0: Gained carrier
May 17 00:23:45.859806 systemd-networkd[1224]: eth1: Link UP
May 17 00:23:45.859994 systemd-networkd[1224]: eth1: Gained carrier
May 17 00:23:45.862915 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (1222)
May 17 00:23:45.896199 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
May 17 00:23:45.898492 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
May 17 00:23:45.915915 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2
May 17 00:23:45.920962 kernel: ACPI: button: Power Button [PWRF]
May 17 00:23:45.980060 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0
May 17 00:23:46.019947 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3
May 17 00:23:46.033260 kernel: [drm] pci: virtio-vga detected at 0000:00:02.0
May 17 00:23:46.033395 kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console
May 17 00:23:46.047631 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 17 00:23:46.048568 kernel: Console: switching to colour dummy device 80x25
May 17 00:23:46.052279 kernel: [drm] features: -virgl +edid -resource_blob -host_visible
May 17 00:23:46.052390 kernel: [drm] features: -context_init
May 17 00:23:46.053320 kernel: mousedev: PS/2 mouse device common for all mice
May 17 00:23:46.067923 kernel: [drm] number of scanouts: 1
May 17 00:23:46.077132 kernel: [drm] number of cap sets: 0
May 17 00:23:46.079512 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:02.0 on minor 0
May 17 00:23:46.093249 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 17 00:23:46.093540 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 17 00:23:46.111639 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device
May 17 00:23:46.111797 kernel: Console: switching to colour frame buffer device 128x48
May 17 00:23:46.111979 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 17 00:23:46.121816 kernel: virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device
May 17 00:23:46.131372 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
May 17 00:23:46.155252 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
May 17 00:23:46.155555 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
May 17 00:23:46.210578 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
May 17 00:23:46.303978 kernel: EDAC MC: Ver: 3.0.0
May 17 00:23:46.304063 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
May 17 00:23:46.347713 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
May 17 00:23:46.356350 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
May 17 00:23:46.374968 lvm[1287]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
May 17 00:23:46.404407 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
May 17 00:23:46.405617 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
May 17 00:23:46.422310 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
May 17 00:23:46.428745 lvm[1290]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
May 17 00:23:46.463397 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
May 17 00:23:46.464914 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
May 17 00:23:46.472071 systemd[1]: Mounting media-configdrive.mount - /media/configdrive...
May 17 00:23:46.474648 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
May 17 00:23:46.475211 systemd[1]: Reached target machines.target - Containers.
May 17 00:23:46.477445 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
May 17 00:23:46.495134 kernel: ISO 9660 Extensions: RRIP_1991A
May 17 00:23:46.494777 systemd[1]: Mounted media-configdrive.mount - /media/configdrive.
May 17 00:23:46.499067 systemd[1]: Reached target local-fs.target - Local File Systems.
May 17 00:23:46.501557 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
May 17 00:23:46.509403 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
May 17 00:23:46.512085 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
May 17 00:23:46.512968 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 17 00:23:46.535272 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
May 17 00:23:46.550232 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
May 17 00:23:46.553332 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
May 17 00:23:46.564383 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
May 17 00:23:46.580323 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
May 17 00:23:46.581671 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
May 17 00:23:46.594921 kernel: loop0: detected capacity change from 0 to 221472
May 17 00:23:46.617355 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
May 17 00:23:46.643149 kernel: loop1: detected capacity change from 0 to 8
May 17 00:23:46.668034 kernel: loop2: detected capacity change from 0 to 142488
May 17 00:23:46.718969 kernel: loop3: detected capacity change from 0 to 140768
May 17 00:23:46.779016 kernel: loop4: detected capacity change from 0 to 221472
May 17 00:23:46.811081 kernel: loop5: detected capacity change from 0 to 8
May 17 00:23:46.817084 kernel: loop6: detected capacity change from 0 to 142488
May 17 00:23:46.839464 kernel: loop7: detected capacity change from 0 to 140768
May 17 00:23:46.855273 (sd-merge)[1315]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-digitalocean'.
May 17 00:23:46.856826 (sd-merge)[1315]: Merged extensions into '/usr'.
May 17 00:23:46.863986 systemd[1]: Reloading requested from client PID 1304 ('systemd-sysext') (unit systemd-sysext.service)...
May 17 00:23:46.864482 systemd[1]: Reloading...
May 17 00:23:46.989591 zram_generator::config[1343]: No configuration found.
May 17 00:23:47.194209 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 17 00:23:47.196549 ldconfig[1301]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
May 17 00:23:47.264482 systemd[1]: Reloading finished in 399 ms.
May 17 00:23:47.283499 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
May 17 00:23:47.287429 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
May 17 00:23:47.300341 systemd[1]: Starting ensure-sysext.service...
May 17 00:23:47.313091 systemd-networkd[1224]: eth1: Gained IPv6LL
May 17 00:23:47.316346 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
May 17 00:23:47.326085 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
May 17 00:23:47.337294 systemd[1]: Reloading requested from client PID 1394 ('systemctl') (unit ensure-sysext.service)...
May 17 00:23:47.337322 systemd[1]: Reloading...
May 17 00:23:47.363691 systemd-tmpfiles[1395]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
May 17 00:23:47.364096 systemd-tmpfiles[1395]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
May 17 00:23:47.365808 systemd-tmpfiles[1395]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
May 17 00:23:47.366256 systemd-tmpfiles[1395]: ACLs are not supported, ignoring.
May 17 00:23:47.366409 systemd-tmpfiles[1395]: ACLs are not supported, ignoring.
May 17 00:23:47.371954 systemd-tmpfiles[1395]: Detected autofs mount point /boot during canonicalization of boot.
May 17 00:23:47.372161 systemd-tmpfiles[1395]: Skipping /boot
May 17 00:23:47.390814 systemd-tmpfiles[1395]: Detected autofs mount point /boot during canonicalization of boot.
May 17 00:23:47.391149 systemd-tmpfiles[1395]: Skipping /boot
May 17 00:23:47.436630 zram_generator::config[1426]: No configuration found.
May 17 00:23:47.441186 systemd-networkd[1224]: eth0: Gained IPv6LL
May 17 00:23:47.601483 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 17 00:23:47.679318 systemd[1]: Reloading finished in 341 ms.
May 17 00:23:47.701624 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
May 17 00:23:47.730372 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
May 17 00:23:47.740191 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
May 17 00:23:47.751360 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
May 17 00:23:47.769449 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
May 17 00:23:47.781248 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
May 17 00:23:47.811561 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 17 00:23:47.812896 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 17 00:23:47.817010 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 17 00:23:47.830164 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 17 00:23:47.843381 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 17 00:23:47.848113 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 17 00:23:47.848351 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 17 00:23:47.863730 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
May 17 00:23:47.883519 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 17 00:23:47.883792 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 17 00:23:47.890782 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 17 00:23:47.894492 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 17 00:23:47.894808 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 17 00:23:47.926316 systemd[1]: Starting systemd-update-done.service - Update is Completed...
May 17 00:23:47.926986 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 17 00:23:47.929232 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
May 17 00:23:47.934080 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
May 17 00:23:47.935465 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 17 00:23:47.935720 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 17 00:23:47.940258 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 17 00:23:47.940527 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 17 00:23:47.963826 systemd-resolved[1484]: Positive Trust Anchors:
May 17 00:23:47.964268 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 17 00:23:47.964559 systemd-resolved[1484]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
May 17 00:23:47.964568 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
May 17 00:23:47.964603 systemd-resolved[1484]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
May 17 00:23:47.967166 augenrules[1511]: No rules
May 17 00:23:47.971570 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
May 17 00:23:47.973451 systemd-resolved[1484]: Using system hostname 'ci-4081.3.3-n-d1569f5c4a'.
May 17 00:23:47.991212 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
May 17 00:23:48.003598 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
May 17 00:23:48.014524 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
May 17 00:23:48.015513 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
May 17 00:23:48.015750 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
May 17 00:23:48.015860 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 17 00:23:48.021432 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
May 17 00:23:48.026612 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
May 17 00:23:48.029635 systemd[1]: Finished systemd-update-done.service - Update is Completed.
May 17 00:23:48.031454 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 17 00:23:48.031827 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
May 17 00:23:48.034624 systemd[1]: modprobe@drm.service: Deactivated successfully.
May 17 00:23:48.035260 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
May 17 00:23:48.037865 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 17 00:23:48.039103 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
May 17 00:23:48.042591 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 17 00:23:48.042973 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
May 17 00:23:48.051557 systemd[1]: Finished ensure-sysext.service.
May 17 00:23:48.061635 systemd[1]: Reached target network.target - Network.
May 17 00:23:48.064322 systemd[1]: Reached target network-online.target - Network is Online.
May 17 00:23:48.064978 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
May 17 00:23:48.067208 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 17 00:23:48.067338 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
May 17 00:23:48.082215 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
May 17 00:23:48.144094 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
May 17 00:23:48.145261 systemd[1]: Reached target sysinit.target - System Initialization.
May 17 00:23:48.146956 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
May 17 00:23:48.148772 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
May 17 00:23:48.149361 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
May 17 00:23:48.149842 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
May 17 00:23:49.266790 systemd-timesyncd[1538]: Contacted time server 129.250.35.251:123 (0.flatcar.pool.ntp.org).
May 17 00:23:49.266866 systemd-timesyncd[1538]: Initial clock synchronization to Sat 2025-05-17 00:23:49.266607 UTC.
May 17 00:23:49.266892 systemd[1]: Reached target paths.target - Path Units.
May 17 00:23:49.266937 systemd-resolved[1484]: Clock change detected. Flushing caches.
May 17 00:23:49.268048 systemd[1]: Reached target time-set.target - System Time Set.
May 17 00:23:49.268727 systemd[1]: Started logrotate.timer - Daily rotation of log files.
May 17 00:23:49.269803 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
May 17 00:23:49.271781 systemd[1]: Reached target timers.target - Timer Units.
May 17 00:23:49.273576 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
May 17 00:23:49.278765 systemd[1]: Starting docker.socket - Docker Socket for the API...
May 17 00:23:49.283941 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
May 17 00:23:49.287665 systemd[1]: Listening on docker.socket - Docker Socket for the API.
May 17 00:23:49.290413 systemd[1]: Reached target sockets.target - Socket Units.
May 17 00:23:49.291659 systemd[1]: Reached target basic.target - Basic System.
May 17 00:23:49.293592 systemd[1]: System is tainted: cgroupsv1
May 17 00:23:49.293665 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
May 17 00:23:49.293693 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
May 17 00:23:49.301362 systemd[1]: Starting containerd.service - containerd container runtime...
May 17 00:23:49.307470 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
May 17 00:23:49.320644 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
May 17 00:23:49.327391 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
May 17 00:23:49.339446 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
May 17 00:23:49.341957 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
May 17 00:23:49.356238 jq[1546]: false
May 17 00:23:49.358202 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
May 17 00:23:49.362643 coreos-metadata[1544]: May 17 00:23:49.358 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1
May 17 00:23:49.368258 dbus-daemon[1545]: [system] SELinux support is enabled
May 17 00:23:49.373987 coreos-metadata[1544]: May 17 00:23:49.371 INFO Fetch successful
May 17 00:23:49.374357 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
May 17 00:23:49.386710 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
May 17 00:23:49.392058 extend-filesystems[1549]: Found loop4
May 17 00:23:49.392058 extend-filesystems[1549]: Found loop5
May 17 00:23:49.400573 extend-filesystems[1549]: Found loop6
May 17 00:23:49.400573 extend-filesystems[1549]: Found loop7
May 17 00:23:49.400573 extend-filesystems[1549]: Found vda
May 17 00:23:49.400573 extend-filesystems[1549]: Found vda1
May 17 00:23:49.400573 extend-filesystems[1549]: Found vda2
May 17 00:23:49.400573 extend-filesystems[1549]: Found vda3
May 17 00:23:49.400573 extend-filesystems[1549]: Found usr
May 17 00:23:49.400573 extend-filesystems[1549]: Found vda4
May 17 00:23:49.400573 extend-filesystems[1549]: Found vda6
May 17 00:23:49.400573 extend-filesystems[1549]: Found vda7
May 17 00:23:49.400573 extend-filesystems[1549]: Found vda9
May 17 00:23:49.400573 extend-filesystems[1549]: Checking size of /dev/vda9
May 17 00:23:49.435749 extend-filesystems[1549]: Resized partition /dev/vda9
May 17 00:23:49.407475 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
May 17 00:23:49.429458 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
May 17 00:23:49.446346 extend-filesystems[1566]: resize2fs 1.47.1 (20-May-2024)
May 17 00:23:49.446813 systemd[1]: Starting systemd-logind.service - User Login Management...
May 17 00:23:49.450386 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
May 17 00:23:49.457055 systemd[1]: Starting update-engine.service - Update Engine...
May 17 00:23:49.470233 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 15121403 blocks
May 17 00:23:49.466993 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
May 17 00:23:49.468097 systemd[1]: Started dbus.service - D-Bus System Message Bus.
May 17 00:23:49.505728 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (1225)
May 17 00:23:49.508753 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
May 17 00:23:49.516164 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
May 17 00:23:49.535882 systemd[1]: motdgen.service: Deactivated successfully.
May 17 00:23:49.536269 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
May 17 00:23:49.537165 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
May 17 00:23:49.542073 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
May 17 00:23:49.551891 jq[1573]: true
May 17 00:23:49.580394 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
May 17 00:23:49.580436 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
May 17 00:23:49.586415 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
May 17 00:23:49.586522 systemd[1]: user-configdrive.service - Load cloud-config from /media/configdrive was skipped because of an unmet condition check (ConditionKernelCommandLine=!flatcar.oem.id=digitalocean).
May 17 00:23:49.586556 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
May 17 00:23:49.619074 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
May 17 00:23:49.619854 (ntainerd)[1589]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
May 17 00:23:49.627508 jq[1588]: true
May 17 00:23:49.645143 update_engine[1572]: I20250517 00:23:49.634626  1572 main.cc:92] Flatcar Update Engine starting
May 17 00:23:49.674656 kernel: EXT4-fs (vda9): resized filesystem to 15121403
May 17 00:23:49.674894 update_engine[1572]: I20250517 00:23:49.653617  1572 update_check_scheduler.cc:74] Next update check in 5m13s
May 17 00:23:49.654875 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
May 17 00:23:49.677944 extend-filesystems[1566]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
May 17 00:23:49.677944 extend-filesystems[1566]: old_desc_blocks = 1, new_desc_blocks = 8
May 17 00:23:49.677944 extend-filesystems[1566]: The filesystem on /dev/vda9 is now 15121403 (4k) blocks long.
May 17 00:23:49.706360 extend-filesystems[1549]: Resized filesystem in /dev/vda9
May 17 00:23:49.706360 extend-filesystems[1549]: Found vdb
May 17 00:23:49.683903 systemd[1]: extend-filesystems.service: Deactivated successfully.
May 17 00:23:49.684362 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
May 17 00:23:49.707143 systemd[1]: Started update-engine.service - Update Engine.
May 17 00:23:49.713693 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
May 17 00:23:49.716178 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
May 17 00:23:49.723481 systemd[1]: Started locksmithd.service - Cluster reboot manager.
May 17 00:23:49.774628 systemd-logind[1567]: New seat seat0.
May 17 00:23:49.777483 systemd-logind[1567]: Watching system buttons on /dev/input/event1 (Power Button)
May 17 00:23:49.777516 systemd-logind[1567]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
May 17 00:23:49.778655 systemd[1]: Started systemd-logind.service - User Login Management.
May 17 00:23:49.833622 bash[1631]: Updated "/home/core/.ssh/authorized_keys"
May 17 00:23:49.839026 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
May 17 00:23:49.865705 systemd[1]: Starting sshkeys.service...
May 17 00:23:49.929135 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
May 17 00:23:49.943644 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
May 17 00:23:50.066550 coreos-metadata[1637]: May 17 00:23:50.066 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1
May 17 00:23:50.093675 coreos-metadata[1637]: May 17 00:23:50.091 INFO Fetch successful
May 17 00:23:50.117803 unknown[1637]: wrote ssh authorized keys file for user: core
May 17 00:23:50.164430 locksmithd[1613]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
May 17 00:23:50.197347 update-ssh-keys[1650]: Updated "/home/core/.ssh/authorized_keys"
May 17 00:23:50.186620 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
May 17 00:23:50.199993 systemd[1]: Finished sshkeys.service.
May 17 00:23:50.238110 containerd[1589]: time="2025-05-17T00:23:50.237957955Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21
May 17 00:23:50.317898 sshd_keygen[1598]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
May 17 00:23:50.350235 containerd[1589]: time="2025-05-17T00:23:50.350041364Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..."
type=io.containerd.snapshotter.v1
May 17 00:23:50.353781 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
May 17 00:23:50.358953 containerd[1589]: time="2025-05-17T00:23:50.358882145Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.90-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
May 17 00:23:50.358953 containerd[1589]: time="2025-05-17T00:23:50.358948177Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
May 17 00:23:50.359102 containerd[1589]: time="2025-05-17T00:23:50.358980936Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
May 17 00:23:50.360011 containerd[1589]: time="2025-05-17T00:23:50.359968520Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
May 17 00:23:50.360136 containerd[1589]: time="2025-05-17T00:23:50.360119543Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
May 17 00:23:50.360331 containerd[1589]: time="2025-05-17T00:23:50.360303219Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
May 17 00:23:50.360470 containerd[1589]: time="2025-05-17T00:23:50.360455435Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
May 17 00:23:50.360811 containerd[1589]: time="2025-05-17T00:23:50.360789231Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..."
error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
May 17 00:23:50.360878 containerd[1589]: time="2025-05-17T00:23:50.360866359Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
May 17 00:23:50.360925 containerd[1589]: time="2025-05-17T00:23:50.360913582Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
May 17 00:23:50.361159 containerd[1589]: time="2025-05-17T00:23:50.360954950Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
May 17 00:23:50.361159 containerd[1589]: time="2025-05-17T00:23:50.361039327Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
May 17 00:23:50.361557 containerd[1589]: time="2025-05-17T00:23:50.361532492Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
May 17 00:23:50.362242 containerd[1589]: time="2025-05-17T00:23:50.361841530Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
May 17 00:23:50.362242 containerd[1589]: time="2025-05-17T00:23:50.361864413Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
May 17 00:23:50.362242 containerd[1589]: time="2025-05-17T00:23:50.361965188Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..."
type=io.containerd.metadata.v1
May 17 00:23:50.362242 containerd[1589]: time="2025-05-17T00:23:50.362021948Z" level=info msg="metadata content store policy set" policy=shared
May 17 00:23:50.369727 systemd[1]: Starting issuegen.service - Generate /run/issue...
May 17 00:23:50.373827 containerd[1589]: time="2025-05-17T00:23:50.373583385Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
May 17 00:23:50.373827 containerd[1589]: time="2025-05-17T00:23:50.373720985Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
May 17 00:23:50.376212 containerd[1589]: time="2025-05-17T00:23:50.373749924Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
May 17 00:23:50.376369 containerd[1589]: time="2025-05-17T00:23:50.376231829Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
May 17 00:23:50.376369 containerd[1589]: time="2025-05-17T00:23:50.376277762Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
May 17 00:23:50.376606 containerd[1589]: time="2025-05-17T00:23:50.376575883Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
May 17 00:23:50.380225 containerd[1589]: time="2025-05-17T00:23:50.378809422Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
May 17 00:23:50.380225 containerd[1589]: time="2025-05-17T00:23:50.379118528Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
May 17 00:23:50.380225 containerd[1589]: time="2025-05-17T00:23:50.379152954Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..."
type=io.containerd.sandbox.store.v1
May 17 00:23:50.380225 containerd[1589]: time="2025-05-17T00:23:50.379179358Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
May 17 00:23:50.380225 containerd[1589]: time="2025-05-17T00:23:50.379243816Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
May 17 00:23:50.380225 containerd[1589]: time="2025-05-17T00:23:50.379279825Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
May 17 00:23:50.380225 containerd[1589]: time="2025-05-17T00:23:50.379306390Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
May 17 00:23:50.380225 containerd[1589]: time="2025-05-17T00:23:50.379330021Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
May 17 00:23:50.380225 containerd[1589]: time="2025-05-17T00:23:50.379354062Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
May 17 00:23:50.380225 containerd[1589]: time="2025-05-17T00:23:50.379377338Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
May 17 00:23:50.380225 containerd[1589]: time="2025-05-17T00:23:50.379397389Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
May 17 00:23:50.380225 containerd[1589]: time="2025-05-17T00:23:50.379416196Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
May 17 00:23:50.380225 containerd[1589]: time="2025-05-17T00:23:50.379448861Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..."
type=io.containerd.grpc.v1
May 17 00:23:50.380225 containerd[1589]: time="2025-05-17T00:23:50.379472039Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
May 17 00:23:50.380736 containerd[1589]: time="2025-05-17T00:23:50.379491621Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
May 17 00:23:50.380736 containerd[1589]: time="2025-05-17T00:23:50.379529141Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
May 17 00:23:50.380736 containerd[1589]: time="2025-05-17T00:23:50.379552220Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
May 17 00:23:50.380736 containerd[1589]: time="2025-05-17T00:23:50.379574988Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
May 17 00:23:50.380736 containerd[1589]: time="2025-05-17T00:23:50.379593220Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
May 17 00:23:50.380736 containerd[1589]: time="2025-05-17T00:23:50.379613193Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
May 17 00:23:50.380736 containerd[1589]: time="2025-05-17T00:23:50.379634252Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
May 17 00:23:50.380736 containerd[1589]: time="2025-05-17T00:23:50.379656792Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
May 17 00:23:50.380736 containerd[1589]: time="2025-05-17T00:23:50.379678997Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
May 17 00:23:50.380736 containerd[1589]: time="2025-05-17T00:23:50.379699576Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..."
type=io.containerd.grpc.v1 May 17 00:23:50.380736 containerd[1589]: time="2025-05-17T00:23:50.379763856Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 May 17 00:23:50.380736 containerd[1589]: time="2025-05-17T00:23:50.379802992Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 May 17 00:23:50.380736 containerd[1589]: time="2025-05-17T00:23:50.379840141Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 May 17 00:23:50.380736 containerd[1589]: time="2025-05-17T00:23:50.379860740Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 May 17 00:23:50.380736 containerd[1589]: time="2025-05-17T00:23:50.379879537Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 May 17 00:23:50.381258 containerd[1589]: time="2025-05-17T00:23:50.379964923Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 May 17 00:23:50.381258 containerd[1589]: time="2025-05-17T00:23:50.379998818Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 May 17 00:23:50.381258 containerd[1589]: time="2025-05-17T00:23:50.380016902Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 May 17 00:23:50.381258 containerd[1589]: time="2025-05-17T00:23:50.380035077Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 May 17 00:23:50.381258 containerd[1589]: time="2025-05-17T00:23:50.380051035Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." 
type=io.containerd.grpc.v1 May 17 00:23:50.381258 containerd[1589]: time="2025-05-17T00:23:50.380069424Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 May 17 00:23:50.381258 containerd[1589]: time="2025-05-17T00:23:50.380095152Z" level=info msg="NRI interface is disabled by configuration." May 17 00:23:50.381258 containerd[1589]: time="2025-05-17T00:23:50.380117897Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 May 17 00:23:50.382178 containerd[1589]: time="2025-05-17T00:23:50.382060857Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] 
Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" May 17 00:23:50.383453 containerd[1589]: time="2025-05-17T00:23:50.383314699Z" level=info msg="Connect containerd service" May 17 00:23:50.383453 containerd[1589]: time="2025-05-17T00:23:50.383434343Z" level=info msg="using legacy CRI server" May 17 00:23:50.383540 containerd[1589]: time="2025-05-17T00:23:50.383454190Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" May 17 00:23:50.383757 containerd[1589]: time="2025-05-17T00:23:50.383711336Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" May 17 00:23:50.384878 containerd[1589]: time="2025-05-17T00:23:50.384798023Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: 
failed to load cni config" May 17 00:23:50.388226 containerd[1589]: time="2025-05-17T00:23:50.386655013Z" level=info msg="Start subscribing containerd event" May 17 00:23:50.388226 containerd[1589]: time="2025-05-17T00:23:50.386962418Z" level=info msg="Start recovering state" May 17 00:23:50.388226 containerd[1589]: time="2025-05-17T00:23:50.387083174Z" level=info msg="Start event monitor" May 17 00:23:50.388226 containerd[1589]: time="2025-05-17T00:23:50.387103938Z" level=info msg="Start snapshots syncer" May 17 00:23:50.388226 containerd[1589]: time="2025-05-17T00:23:50.387116278Z" level=info msg="Start cni network conf syncer for default" May 17 00:23:50.388226 containerd[1589]: time="2025-05-17T00:23:50.387130294Z" level=info msg="Start streaming server" May 17 00:23:50.396863 containerd[1589]: time="2025-05-17T00:23:50.392348049Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc May 17 00:23:50.396863 containerd[1589]: time="2025-05-17T00:23:50.392458949Z" level=info msg=serving... address=/run/containerd/containerd.sock May 17 00:23:50.392807 systemd[1]: Started containerd.service - containerd container runtime. May 17 00:23:50.397652 systemd[1]: issuegen.service: Deactivated successfully. May 17 00:23:50.399423 systemd[1]: Finished issuegen.service - Generate /run/issue. May 17 00:23:50.403385 containerd[1589]: time="2025-05-17T00:23:50.403325748Z" level=info msg="containerd successfully booted in 0.167335s" May 17 00:23:50.411830 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... May 17 00:23:50.446888 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. May 17 00:23:50.456102 systemd[1]: Started getty@tty1.service - Getty on tty1. May 17 00:23:50.472719 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. May 17 00:23:50.476120 systemd[1]: Reached target getty.target - Login Prompts. 
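The `cni config load failed: no network config found in /etc/cni/net.d` error above is expected on a node whose pod network plugin has not been installed yet; the CRI plugin logs it once at init and retries when a config appears. For illustration, a minimal sketch of a conflist that would satisfy the loader — the file name, network name, and subnet here are invented, and the standard `bridge`/`host-local` plugin binaries are assumed to exist under `/opt/cni/bin` (the `NetworkPluginBinDir` shown in the CRI config dump above):

```json
{
  "cniVersion": "0.4.0",
  "name": "example-net",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "cni0",
      "isGateway": true,
      "ipMasq": true,
      "ipam": {
        "type": "host-local",
        "subnet": "10.244.0.0/24",
        "routes": [ { "dst": "0.0.0.0/0" } ]
      }
    }
  ]
}
```

Dropped into `/etc/cni/net.d/` (for example as `10-example.conflist`), the "cni network conf syncer" started a few entries later in this log would pick it up without restarting containerd.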
May 17 00:23:51.387558 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 17 00:23:51.391095 systemd[1]: Reached target multi-user.target - Multi-User System. May 17 00:23:51.394157 systemd[1]: Startup finished in 7.009s (kernel) + 6.776s (userspace) = 13.785s. May 17 00:23:51.403044 (kubelet)[1690]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 17 00:23:52.131369 kubelet[1690]: E0517 00:23:52.131281 1690 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 17 00:23:52.135136 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 17 00:23:52.136337 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 17 00:23:52.793560 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. May 17 00:23:52.803669 systemd[1]: Started sshd@0-143.198.108.0:22-139.178.68.195:43898.service - OpenSSH per-connection server daemon (139.178.68.195:43898). May 17 00:23:52.890273 sshd[1703]: Accepted publickey for core from 139.178.68.195 port 43898 ssh2: RSA SHA256:TM7Vm5JNsRT9OkRUxlGPKsAsv9oxy8GzboZ61mm4KqQ May 17 00:23:52.893518 sshd[1703]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:23:52.907317 systemd[1]: Created slice user-500.slice - User Slice of UID 500. May 17 00:23:52.913729 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... May 17 00:23:52.917743 systemd-logind[1567]: New session 1 of user core. May 17 00:23:52.941175 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. 
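The kubelet crash above (`status=1/FAILURE`) is the normal pre-join state: `/var/lib/kubelet/config.yaml` is written by `kubeadm init`/`kubeadm join` and does not exist on first boot. For reference, a hand-written minimal sketch of such a file using public `KubeletConfiguration` v1beta1 fields — the values are illustrative, not recovered from this host:

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
# cgroupfs matches the "CgroupDriver":"cgroupfs" visible in the container
# manager dump later in this log; kubeadm's own default is systemd.
cgroupDriver: cgroupfs
authentication:
  anonymous:
    enabled: false
  webhook:
    enabled: true
authorization:
  mode: Webhook
staticPodPath: /etc/kubernetes/manifests
```

Until some such file exists at that path, systemd will keep recording the same `Failed with result 'exit-code'` for kubelet.service on every start attempt.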
May 17 00:23:52.949612 systemd[1]: Starting user@500.service - User Manager for UID 500... May 17 00:23:52.968370 (systemd)[1709]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) May 17 00:23:53.094535 systemd[1709]: Queued start job for default target default.target. May 17 00:23:53.095562 systemd[1709]: Created slice app.slice - User Application Slice. May 17 00:23:53.095719 systemd[1709]: Reached target paths.target - Paths. May 17 00:23:53.095735 systemd[1709]: Reached target timers.target - Timers. May 17 00:23:53.111498 systemd[1709]: Starting dbus.socket - D-Bus User Message Bus Socket... May 17 00:23:53.121591 systemd[1709]: Listening on dbus.socket - D-Bus User Message Bus Socket. May 17 00:23:53.121683 systemd[1709]: Reached target sockets.target - Sockets. May 17 00:23:53.121698 systemd[1709]: Reached target basic.target - Basic System. May 17 00:23:53.121774 systemd[1709]: Reached target default.target - Main User Target. May 17 00:23:53.121832 systemd[1709]: Startup finished in 143ms. May 17 00:23:53.122305 systemd[1]: Started user@500.service - User Manager for UID 500. May 17 00:23:53.139132 systemd[1]: Started session-1.scope - Session 1 of User core. May 17 00:23:53.209788 systemd[1]: Started sshd@1-143.198.108.0:22-139.178.68.195:43904.service - OpenSSH per-connection server daemon (139.178.68.195:43904). May 17 00:23:53.268506 sshd[1721]: Accepted publickey for core from 139.178.68.195 port 43904 ssh2: RSA SHA256:TM7Vm5JNsRT9OkRUxlGPKsAsv9oxy8GzboZ61mm4KqQ May 17 00:23:53.270962 sshd[1721]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:23:53.278464 systemd-logind[1567]: New session 2 of user core. May 17 00:23:53.287763 systemd[1]: Started session-2.scope - Session 2 of User core. 
May 17 00:23:53.355392 sshd[1721]: pam_unix(sshd:session): session closed for user core May 17 00:23:53.364679 systemd[1]: Started sshd@2-143.198.108.0:22-139.178.68.195:43920.service - OpenSSH per-connection server daemon (139.178.68.195:43920). May 17 00:23:53.366409 systemd[1]: sshd@1-143.198.108.0:22-139.178.68.195:43904.service: Deactivated successfully. May 17 00:23:53.369902 systemd[1]: session-2.scope: Deactivated successfully. May 17 00:23:53.372600 systemd-logind[1567]: Session 2 logged out. Waiting for processes to exit. May 17 00:23:53.374145 systemd-logind[1567]: Removed session 2. May 17 00:23:53.416878 sshd[1727]: Accepted publickey for core from 139.178.68.195 port 43920 ssh2: RSA SHA256:TM7Vm5JNsRT9OkRUxlGPKsAsv9oxy8GzboZ61mm4KqQ May 17 00:23:53.419051 sshd[1727]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:23:53.427627 systemd-logind[1567]: New session 3 of user core. May 17 00:23:53.437929 systemd[1]: Started session-3.scope - Session 3 of User core. May 17 00:23:53.500363 sshd[1727]: pam_unix(sshd:session): session closed for user core May 17 00:23:53.505902 systemd[1]: sshd@2-143.198.108.0:22-139.178.68.195:43920.service: Deactivated successfully. May 17 00:23:53.509722 systemd-logind[1567]: Session 3 logged out. Waiting for processes to exit. May 17 00:23:53.526237 systemd[1]: Started sshd@3-143.198.108.0:22-139.178.68.195:48810.service - OpenSSH per-connection server daemon (139.178.68.195:48810). May 17 00:23:53.526977 systemd[1]: session-3.scope: Deactivated successfully. May 17 00:23:53.528966 systemd-logind[1567]: Removed session 3. May 17 00:23:53.579475 sshd[1737]: Accepted publickey for core from 139.178.68.195 port 48810 ssh2: RSA SHA256:TM7Vm5JNsRT9OkRUxlGPKsAsv9oxy8GzboZ61mm4KqQ May 17 00:23:53.581517 sshd[1737]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:23:53.588547 systemd-logind[1567]: New session 4 of user core. 
May 17 00:23:53.594706 systemd[1]: Started session-4.scope - Session 4 of User core. May 17 00:23:53.664557 sshd[1737]: pam_unix(sshd:session): session closed for user core May 17 00:23:53.676577 systemd[1]: Started sshd@4-143.198.108.0:22-139.178.68.195:48820.service - OpenSSH per-connection server daemon (139.178.68.195:48820). May 17 00:23:53.677467 systemd[1]: sshd@3-143.198.108.0:22-139.178.68.195:48810.service: Deactivated successfully. May 17 00:23:53.685053 systemd[1]: session-4.scope: Deactivated successfully. May 17 00:23:53.689935 systemd-logind[1567]: Session 4 logged out. Waiting for processes to exit. May 17 00:23:53.692328 systemd-logind[1567]: Removed session 4. May 17 00:23:53.730475 sshd[1742]: Accepted publickey for core from 139.178.68.195 port 48820 ssh2: RSA SHA256:TM7Vm5JNsRT9OkRUxlGPKsAsv9oxy8GzboZ61mm4KqQ May 17 00:23:53.732600 sshd[1742]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:23:53.740030 systemd-logind[1567]: New session 5 of user core. May 17 00:23:53.750771 systemd[1]: Started session-5.scope - Session 5 of User core. May 17 00:23:53.831737 sudo[1749]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 May 17 00:23:53.832366 sudo[1749]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 17 00:23:53.846364 sudo[1749]: pam_unix(sudo:session): session closed for user root May 17 00:23:53.851237 sshd[1742]: pam_unix(sshd:session): session closed for user core May 17 00:23:53.864003 systemd[1]: Started sshd@5-143.198.108.0:22-139.178.68.195:48832.service - OpenSSH per-connection server daemon (139.178.68.195:48832). May 17 00:23:53.865041 systemd[1]: sshd@4-143.198.108.0:22-139.178.68.195:48820.service: Deactivated successfully. May 17 00:23:53.868046 systemd[1]: session-5.scope: Deactivated successfully. May 17 00:23:53.871021 systemd-logind[1567]: Session 5 logged out. Waiting for processes to exit. 
May 17 00:23:53.874086 systemd-logind[1567]: Removed session 5. May 17 00:23:53.918640 sshd[1752]: Accepted publickey for core from 139.178.68.195 port 48832 ssh2: RSA SHA256:TM7Vm5JNsRT9OkRUxlGPKsAsv9oxy8GzboZ61mm4KqQ May 17 00:23:53.920002 sshd[1752]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:23:53.927538 systemd-logind[1567]: New session 6 of user core. May 17 00:23:53.934805 systemd[1]: Started session-6.scope - Session 6 of User core. May 17 00:23:54.041366 sudo[1759]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules May 17 00:23:54.041754 sudo[1759]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 17 00:23:54.047635 sudo[1759]: pam_unix(sudo:session): session closed for user root May 17 00:23:54.055402 sudo[1758]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules May 17 00:23:54.055733 sudo[1758]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 17 00:23:54.078741 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... May 17 00:23:54.081525 auditctl[1762]: No rules May 17 00:23:54.083199 systemd[1]: audit-rules.service: Deactivated successfully. May 17 00:23:54.083560 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. May 17 00:23:54.087879 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... May 17 00:23:54.136286 augenrules[1781]: No rules May 17 00:23:54.138808 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. May 17 00:23:54.142034 sudo[1758]: pam_unix(sudo:session): session closed for user root May 17 00:23:54.147964 sshd[1752]: pam_unix(sshd:session): session closed for user core May 17 00:23:54.156679 systemd[1]: Started sshd@6-143.198.108.0:22-139.178.68.195:48836.service - OpenSSH per-connection server daemon (139.178.68.195:48836). 
May 17 00:23:54.159441 systemd[1]: sshd@5-143.198.108.0:22-139.178.68.195:48832.service: Deactivated successfully. May 17 00:23:54.166429 systemd[1]: session-6.scope: Deactivated successfully. May 17 00:23:54.169542 systemd-logind[1567]: Session 6 logged out. Waiting for processes to exit. May 17 00:23:54.171580 systemd-logind[1567]: Removed session 6. May 17 00:23:54.211777 sshd[1787]: Accepted publickey for core from 139.178.68.195 port 48836 ssh2: RSA SHA256:TM7Vm5JNsRT9OkRUxlGPKsAsv9oxy8GzboZ61mm4KqQ May 17 00:23:54.214250 sshd[1787]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 17 00:23:54.221986 systemd-logind[1567]: New session 7 of user core. May 17 00:23:54.227711 systemd[1]: Started session-7.scope - Session 7 of User core. May 17 00:23:54.290319 sudo[1794]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh May 17 00:23:54.290796 sudo[1794]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 17 00:23:55.061915 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 17 00:23:55.071740 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 17 00:23:55.125453 systemd[1]: Reloading requested from client PID 1828 ('systemctl') (unit session-7.scope)... May 17 00:23:55.125655 systemd[1]: Reloading... May 17 00:23:55.307302 zram_generator::config[1863]: No configuration found. May 17 00:23:55.548365 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 17 00:23:55.632821 kernel: hrtimer: interrupt took 2465327 ns May 17 00:23:55.934519 systemd[1]: Reloading finished in 808 ms. May 17 00:23:55.989068 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM May 17 00:23:55.989389 systemd[1]: kubelet.service: Failed with result 'signal'. 
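The daemon reload above also surfaces a unit-hygiene warning: `docker.socket` still points its `ListenStream=` at the legacy `/var/run/` tree, which systemd transparently rewrites to `/run/docker.sock`. The permanent fix it asks for is a one-line change in the unit, sketched here as a drop-in (the drop-in path is illustrative):

```ini
# /etc/systemd/system/docker.socket.d/10-run-path.conf (illustrative path)
[Socket]
# ListenStream= is a list option: the empty assignment clears the inherited
# /var/run/docker.sock entry before the corrected path is added.
ListenStream=
ListenStream=/run/docker.sock
```

After `systemctl daemon-reload`, the "references a path below legacy directory /var/run/" message no longer appears.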
May 17 00:23:55.990156 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 17 00:23:56.000152 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 17 00:23:56.197506 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 17 00:23:56.214961 (kubelet)[1929]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 17 00:23:56.283515 kubelet[1929]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 17 00:23:56.283515 kubelet[1929]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. May 17 00:23:56.283515 kubelet[1929]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
May 17 00:23:56.284340 kubelet[1929]: I0517 00:23:56.283597 1929 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 17 00:23:56.579481 kubelet[1929]: I0517 00:23:56.578783 1929 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" May 17 00:23:56.579481 kubelet[1929]: I0517 00:23:56.579275 1929 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 17 00:23:56.579686 kubelet[1929]: I0517 00:23:56.579621 1929 server.go:934] "Client rotation is on, will bootstrap in background" May 17 00:23:56.612157 kubelet[1929]: I0517 00:23:56.611322 1929 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 17 00:23:56.622129 kubelet[1929]: E0517 00:23:56.622087 1929 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" May 17 00:23:56.622386 kubelet[1929]: I0517 00:23:56.622373 1929 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." May 17 00:23:56.629128 kubelet[1929]: I0517 00:23:56.629085 1929 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" May 17 00:23:56.630529 kubelet[1929]: I0517 00:23:56.630494 1929 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" May 17 00:23:56.630936 kubelet[1929]: I0517 00:23:56.630889 1929 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 17 00:23:56.631253 kubelet[1929]: I0517 00:23:56.631012 1929 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"143.198.108.0","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPo
licyOptions":null,"CgroupVersion":1} May 17 00:23:56.631521 kubelet[1929]: I0517 00:23:56.631508 1929 topology_manager.go:138] "Creating topology manager with none policy" May 17 00:23:56.631587 kubelet[1929]: I0517 00:23:56.631580 1929 container_manager_linux.go:300] "Creating device plugin manager" May 17 00:23:56.631783 kubelet[1929]: I0517 00:23:56.631768 1929 state_mem.go:36] "Initialized new in-memory state store" May 17 00:23:56.636235 kubelet[1929]: I0517 00:23:56.636160 1929 kubelet.go:408] "Attempting to sync node with API server" May 17 00:23:56.636798 kubelet[1929]: I0517 00:23:56.636401 1929 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" May 17 00:23:56.636798 kubelet[1929]: I0517 00:23:56.636460 1929 kubelet.go:314] "Adding apiserver pod source" May 17 00:23:56.636798 kubelet[1929]: I0517 00:23:56.636498 1929 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 17 00:23:56.639593 kubelet[1929]: E0517 00:23:56.639499 1929 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:23:56.640605 kubelet[1929]: E0517 00:23:56.640572 1929 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:23:56.641696 kubelet[1929]: I0517 00:23:56.640598 1929 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" May 17 00:23:56.641696 kubelet[1929]: I0517 00:23:56.641449 1929 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 17 00:23:56.643123 kubelet[1929]: W0517 00:23:56.642361 1929 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
May 17 00:23:56.645268 kubelet[1929]: I0517 00:23:56.645021 1929 server.go:1274] "Started kubelet" May 17 00:23:56.645482 kubelet[1929]: I0517 00:23:56.645400 1929 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 May 17 00:23:56.646767 kubelet[1929]: I0517 00:23:56.646739 1929 server.go:449] "Adding debug handlers to kubelet server" May 17 00:23:56.656305 kubelet[1929]: I0517 00:23:56.654588 1929 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 17 00:23:56.656305 kubelet[1929]: I0517 00:23:56.654861 1929 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 17 00:23:56.656305 kubelet[1929]: I0517 00:23:56.654876 1929 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 17 00:23:56.662117 kubelet[1929]: I0517 00:23:56.662076 1929 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 17 00:23:56.678689 kubelet[1929]: I0517 00:23:56.677486 1929 volume_manager.go:289] "Starting Kubelet Volume Manager" May 17 00:23:56.678689 kubelet[1929]: E0517 00:23:56.677718 1929 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"143.198.108.0\" not found" May 17 00:23:56.679404 kubelet[1929]: I0517 00:23:56.679376 1929 factory.go:221] Registration of the systemd container factory successfully May 17 00:23:56.679589 kubelet[1929]: I0517 00:23:56.679530 1929 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 17 00:23:56.683247 kubelet[1929]: I0517 00:23:56.682106 1929 desired_state_of_world_populator.go:147] "Desired state populator starts to run" May 17 00:23:56.683247 kubelet[1929]: I0517 00:23:56.682248 1929 reconciler.go:26] "Reconciler: start to sync state" May 
17 00:23:56.685864 kubelet[1929]: E0517 00:23:56.682592 1929 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{143.198.108.0.184028ba7105b04d default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:143.198.108.0,UID:143.198.108.0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:143.198.108.0,},FirstTimestamp:2025-05-17 00:23:56.644978765 +0000 UTC m=+0.423730675,LastTimestamp:2025-05-17 00:23:56.644978765 +0000 UTC m=+0.423730675,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:143.198.108.0,}" May 17 00:23:56.688243 kubelet[1929]: E0517 00:23:56.688197 1929 kubelet.go:1478] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 17 00:23:56.690100 kubelet[1929]: I0517 00:23:56.690046 1929 factory.go:221] Registration of the containerd container factory successfully May 17 00:23:56.731546 kubelet[1929]: E0517 00:23:56.731340 1929 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{143.198.108.0.184028ba719cc96d default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:143.198.108.0,UID:143.198.108.0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:CgroupV1,Message:Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.,Source:EventSource{Component:kubelet,Host:143.198.108.0,},FirstTimestamp:2025-05-17 00:23:56.654881133 +0000 UTC m=+0.433632989,LastTimestamp:2025-05-17 00:23:56.654881133 +0000 UTC m=+0.433632989,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:143.198.108.0,}" May 17 00:23:56.732549 kubelet[1929]: W0517 00:23:56.732168 1929 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes "143.198.108.0" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope May 17 00:23:56.732549 kubelet[1929]: E0517 00:23:56.732243 1929 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes \"143.198.108.0\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" May 17 00:23:56.732549 kubelet[1929]: W0517 00:23:56.732313 1929 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: 
csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope May 17 00:23:56.732549 kubelet[1929]: E0517 00:23:56.732329 1929 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" May 17 00:23:56.732549 kubelet[1929]: E0517 00:23:56.732428 1929 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"143.198.108.0\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="200ms" May 17 00:23:56.733136 kubelet[1929]: W0517 00:23:56.732944 1929 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope May 17 00:23:56.733136 kubelet[1929]: E0517 00:23:56.732974 1929 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" May 17 00:23:56.734228 kubelet[1929]: I0517 00:23:56.734124 1929 cpu_manager.go:214] "Starting CPU manager" policy="none" May 17 00:23:56.734228 kubelet[1929]: I0517 00:23:56.734149 1929 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" May 17 00:23:56.734228 kubelet[1929]: I0517 00:23:56.734174 1929 state_mem.go:36] "Initialized new in-memory state store" May 17 00:23:56.735472 kubelet[1929]: E0517 00:23:56.734996 1929 event.go:359] "Server rejected event (will not retry!)" 
err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{143.198.108.0.184028ba7398a357 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:143.198.108.0,UID:143.198.108.0,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:143.198.108.0,},FirstTimestamp:2025-05-17 00:23:56.688163671 +0000 UTC m=+0.466915526,LastTimestamp:2025-05-17 00:23:56.688163671 +0000 UTC m=+0.466915526,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:143.198.108.0,}" May 17 00:23:56.740456 kubelet[1929]: I0517 00:23:56.740414 1929 policy_none.go:49] "None policy: Start" May 17 00:23:56.743111 kubelet[1929]: I0517 00:23:56.743046 1929 memory_manager.go:170] "Starting memorymanager" policy="None" May 17 00:23:56.743292 kubelet[1929]: I0517 00:23:56.743131 1929 state_mem.go:35] "Initializing new in-memory state store" May 17 00:23:56.752943 kubelet[1929]: I0517 00:23:56.752620 1929 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 17 00:23:56.753794 kubelet[1929]: I0517 00:23:56.753481 1929 eviction_manager.go:189] "Eviction manager: starting control loop" May 17 00:23:56.753794 kubelet[1929]: I0517 00:23:56.753503 1929 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 17 00:23:56.755541 kubelet[1929]: I0517 00:23:56.755510 1929 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 17 00:23:56.768217 kubelet[1929]: E0517 00:23:56.766543 1929 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"143.198.108.0\" not found" May 17 
00:23:56.785231 kubelet[1929]: I0517 00:23:56.785154 1929 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 17 00:23:56.787299 kubelet[1929]: I0517 00:23:56.787061 1929 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" May 17 00:23:56.787299 kubelet[1929]: I0517 00:23:56.787092 1929 status_manager.go:217] "Starting to sync pod status with apiserver" May 17 00:23:56.787299 kubelet[1929]: I0517 00:23:56.787120 1929 kubelet.go:2321] "Starting kubelet main sync loop" May 17 00:23:56.787549 kubelet[1929]: E0517 00:23:56.787532 1929 kubelet.go:2345] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful" May 17 00:23:56.867028 kubelet[1929]: I0517 00:23:56.864974 1929 kubelet_node_status.go:72] "Attempting to register node" node="143.198.108.0" May 17 00:23:56.882989 kubelet[1929]: I0517 00:23:56.882906 1929 kubelet_node_status.go:75] "Successfully registered node" node="143.198.108.0" May 17 00:23:56.882989 kubelet[1929]: E0517 00:23:56.882960 1929 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"143.198.108.0\": node \"143.198.108.0\" not found" May 17 00:23:56.975402 kubelet[1929]: E0517 00:23:56.975327 1929 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"143.198.108.0\" not found" May 17 00:23:57.076493 kubelet[1929]: E0517 00:23:57.076414 1929 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"143.198.108.0\" not found" May 17 00:23:57.176838 kubelet[1929]: E0517 00:23:57.176650 1929 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"143.198.108.0\" not found" May 17 00:23:57.277536 kubelet[1929]: E0517 00:23:57.277464 1929 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"143.198.108.0\" not found" May 17 00:23:57.378342 kubelet[1929]: E0517 00:23:57.378288 1929 kubelet_node_status.go:453] "Error 
getting the current node from lister" err="node \"143.198.108.0\" not found" May 17 00:23:57.479403 kubelet[1929]: E0517 00:23:57.479235 1929 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"143.198.108.0\" not found" May 17 00:23:57.580225 kubelet[1929]: E0517 00:23:57.580108 1929 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"143.198.108.0\" not found" May 17 00:23:57.623791 kubelet[1929]: I0517 00:23:57.623688 1929 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" May 17 00:23:57.624127 kubelet[1929]: W0517 00:23:57.624085 1929 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.RuntimeClass ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received May 17 00:23:57.641533 kubelet[1929]: E0517 00:23:57.641462 1929 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:23:57.681106 kubelet[1929]: E0517 00:23:57.681039 1929 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"143.198.108.0\" not found" May 17 00:23:57.753025 sudo[1794]: pam_unix(sudo:session): session closed for user root May 17 00:23:57.756893 sshd[1787]: pam_unix(sshd:session): session closed for user core May 17 00:23:57.764739 systemd[1]: sshd@6-143.198.108.0:22-139.178.68.195:48836.service: Deactivated successfully. May 17 00:23:57.768208 systemd[1]: session-7.scope: Deactivated successfully. May 17 00:23:57.769811 systemd-logind[1567]: Session 7 logged out. Waiting for processes to exit. May 17 00:23:57.772025 systemd-logind[1567]: Removed session 7. 
May 17 00:23:57.781756 kubelet[1929]: E0517 00:23:57.781688 1929 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"143.198.108.0\" not found" May 17 00:23:57.883037 kubelet[1929]: E0517 00:23:57.882959 1929 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"143.198.108.0\" not found" May 17 00:23:57.984386 kubelet[1929]: I0517 00:23:57.984349 1929 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24" May 17 00:23:57.985003 containerd[1589]: time="2025-05-17T00:23:57.984925497Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." May 17 00:23:57.986300 kubelet[1929]: I0517 00:23:57.985846 1929 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24" May 17 00:23:58.642373 kubelet[1929]: I0517 00:23:58.642303 1929 apiserver.go:52] "Watching apiserver" May 17 00:23:58.643019 kubelet[1929]: E0517 00:23:58.642303 1929 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:23:58.675721 kubelet[1929]: E0517 00:23:58.675644 1929 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4bvml" podUID="77e2009c-40a3-47c4-b9d0-5b99ba0c6d66" May 17 00:23:58.683124 kubelet[1929]: I0517 00:23:58.683078 1929 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" May 17 00:23:58.731971 kubelet[1929]: I0517 00:23:58.731610 1929 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b6c77cdd-f59f-4e30-9656-982c2b3ec05e-lib-modules\") pod \"calico-node-wfrz6\" (UID: 
\"b6c77cdd-f59f-4e30-9656-982c2b3ec05e\") " pod="calico-system/calico-node-wfrz6" May 17 00:23:58.731971 kubelet[1929]: I0517 00:23:58.731676 1929 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/b6c77cdd-f59f-4e30-9656-982c2b3ec05e-node-certs\") pod \"calico-node-wfrz6\" (UID: \"b6c77cdd-f59f-4e30-9656-982c2b3ec05e\") " pod="calico-system/calico-node-wfrz6" May 17 00:23:58.731971 kubelet[1929]: I0517 00:23:58.731712 1929 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/b6c77cdd-f59f-4e30-9656-982c2b3ec05e-policysync\") pod \"calico-node-wfrz6\" (UID: \"b6c77cdd-f59f-4e30-9656-982c2b3ec05e\") " pod="calico-system/calico-node-wfrz6" May 17 00:23:58.731971 kubelet[1929]: I0517 00:23:58.731739 1929 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/b6c77cdd-f59f-4e30-9656-982c2b3ec05e-var-run-calico\") pod \"calico-node-wfrz6\" (UID: \"b6c77cdd-f59f-4e30-9656-982c2b3ec05e\") " pod="calico-system/calico-node-wfrz6" May 17 00:23:58.731971 kubelet[1929]: I0517 00:23:58.731764 1929 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/77e2009c-40a3-47c4-b9d0-5b99ba0c6d66-registration-dir\") pod \"csi-node-driver-4bvml\" (UID: \"77e2009c-40a3-47c4-b9d0-5b99ba0c6d66\") " pod="calico-system/csi-node-driver-4bvml" May 17 00:23:58.732333 kubelet[1929]: I0517 00:23:58.731798 1929 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ld2jf\" (UniqueName: \"kubernetes.io/projected/77e2009c-40a3-47c4-b9d0-5b99ba0c6d66-kube-api-access-ld2jf\") pod \"csi-node-driver-4bvml\" (UID: \"77e2009c-40a3-47c4-b9d0-5b99ba0c6d66\") " 
pod="calico-system/csi-node-driver-4bvml" May 17 00:23:58.732333 kubelet[1929]: I0517 00:23:58.731825 1929 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/257e7cbc-12fc-4960-997c-b581102a376c-kube-proxy\") pod \"kube-proxy-rhlbb\" (UID: \"257e7cbc-12fc-4960-997c-b581102a376c\") " pod="kube-system/kube-proxy-rhlbb" May 17 00:23:58.732333 kubelet[1929]: I0517 00:23:58.731849 1929 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/b6c77cdd-f59f-4e30-9656-982c2b3ec05e-cni-bin-dir\") pod \"calico-node-wfrz6\" (UID: \"b6c77cdd-f59f-4e30-9656-982c2b3ec05e\") " pod="calico-system/calico-node-wfrz6" May 17 00:23:58.732333 kubelet[1929]: I0517 00:23:58.731892 1929 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/b6c77cdd-f59f-4e30-9656-982c2b3ec05e-cni-net-dir\") pod \"calico-node-wfrz6\" (UID: \"b6c77cdd-f59f-4e30-9656-982c2b3ec05e\") " pod="calico-system/calico-node-wfrz6" May 17 00:23:58.732333 kubelet[1929]: I0517 00:23:58.731919 1929 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/b6c77cdd-f59f-4e30-9656-982c2b3ec05e-flexvol-driver-host\") pod \"calico-node-wfrz6\" (UID: \"b6c77cdd-f59f-4e30-9656-982c2b3ec05e\") " pod="calico-system/calico-node-wfrz6" May 17 00:23:58.732970 kubelet[1929]: I0517 00:23:58.732596 1929 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/b6c77cdd-f59f-4e30-9656-982c2b3ec05e-var-lib-calico\") pod \"calico-node-wfrz6\" (UID: \"b6c77cdd-f59f-4e30-9656-982c2b3ec05e\") " pod="calico-system/calico-node-wfrz6" May 17 00:23:58.732970 
kubelet[1929]: I0517 00:23:58.732646 1929 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/77e2009c-40a3-47c4-b9d0-5b99ba0c6d66-socket-dir\") pod \"csi-node-driver-4bvml\" (UID: \"77e2009c-40a3-47c4-b9d0-5b99ba0c6d66\") " pod="calico-system/csi-node-driver-4bvml" May 17 00:23:58.732970 kubelet[1929]: I0517 00:23:58.732673 1929 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v75wx\" (UniqueName: \"kubernetes.io/projected/257e7cbc-12fc-4960-997c-b581102a376c-kube-api-access-v75wx\") pod \"kube-proxy-rhlbb\" (UID: \"257e7cbc-12fc-4960-997c-b581102a376c\") " pod="kube-system/kube-proxy-rhlbb" May 17 00:23:58.732970 kubelet[1929]: I0517 00:23:58.732697 1929 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b6c77cdd-f59f-4e30-9656-982c2b3ec05e-tigera-ca-bundle\") pod \"calico-node-wfrz6\" (UID: \"b6c77cdd-f59f-4e30-9656-982c2b3ec05e\") " pod="calico-system/calico-node-wfrz6" May 17 00:23:58.732970 kubelet[1929]: I0517 00:23:58.732726 1929 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/b6c77cdd-f59f-4e30-9656-982c2b3ec05e-xtables-lock\") pod \"calico-node-wfrz6\" (UID: \"b6c77cdd-f59f-4e30-9656-982c2b3ec05e\") " pod="calico-system/calico-node-wfrz6" May 17 00:23:58.733175 kubelet[1929]: I0517 00:23:58.732750 1929 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/77e2009c-40a3-47c4-b9d0-5b99ba0c6d66-kubelet-dir\") pod \"csi-node-driver-4bvml\" (UID: \"77e2009c-40a3-47c4-b9d0-5b99ba0c6d66\") " pod="calico-system/csi-node-driver-4bvml" May 17 00:23:58.733175 kubelet[1929]: I0517 00:23:58.732773 1929 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/77e2009c-40a3-47c4-b9d0-5b99ba0c6d66-varrun\") pod \"csi-node-driver-4bvml\" (UID: \"77e2009c-40a3-47c4-b9d0-5b99ba0c6d66\") " pod="calico-system/csi-node-driver-4bvml" May 17 00:23:58.733175 kubelet[1929]: I0517 00:23:58.732797 1929 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/257e7cbc-12fc-4960-997c-b581102a376c-xtables-lock\") pod \"kube-proxy-rhlbb\" (UID: \"257e7cbc-12fc-4960-997c-b581102a376c\") " pod="kube-system/kube-proxy-rhlbb" May 17 00:23:58.733175 kubelet[1929]: I0517 00:23:58.732823 1929 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/257e7cbc-12fc-4960-997c-b581102a376c-lib-modules\") pod \"kube-proxy-rhlbb\" (UID: \"257e7cbc-12fc-4960-997c-b581102a376c\") " pod="kube-system/kube-proxy-rhlbb" May 17 00:23:58.733175 kubelet[1929]: I0517 00:23:58.732850 1929 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/b6c77cdd-f59f-4e30-9656-982c2b3ec05e-cni-log-dir\") pod \"calico-node-wfrz6\" (UID: \"b6c77cdd-f59f-4e30-9656-982c2b3ec05e\") " pod="calico-system/calico-node-wfrz6" May 17 00:23:58.733593 kubelet[1929]: I0517 00:23:58.732894 1929 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8tfhz\" (UniqueName: \"kubernetes.io/projected/b6c77cdd-f59f-4e30-9656-982c2b3ec05e-kube-api-access-8tfhz\") pod \"calico-node-wfrz6\" (UID: \"b6c77cdd-f59f-4e30-9656-982c2b3ec05e\") " pod="calico-system/calico-node-wfrz6" May 17 00:23:58.843517 kubelet[1929]: E0517 00:23:58.843420 1929 driver-call.go:262] Failed to unmarshal output for command: init, output: "", 
error: unexpected end of JSON input May 17 00:23:58.843517 kubelet[1929]: W0517 00:23:58.843474 1929 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:23:58.843517 kubelet[1929]: E0517 00:23:58.843501 1929 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:23:58.876752 kubelet[1929]: E0517 00:23:58.876709 1929 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:23:58.876752 kubelet[1929]: W0517 00:23:58.876747 1929 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:23:58.877077 kubelet[1929]: E0517 00:23:58.876782 1929 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:23:58.885235 kubelet[1929]: E0517 00:23:58.884903 1929 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:23:58.885235 kubelet[1929]: W0517 00:23:58.884928 1929 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:23:58.885235 kubelet[1929]: E0517 00:23:58.884956 1929 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:23:58.893680 kubelet[1929]: E0517 00:23:58.892208 1929 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:23:58.893680 kubelet[1929]: W0517 00:23:58.892239 1929 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:23:58.893680 kubelet[1929]: E0517 00:23:58.892276 1929 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:23:58.980794 kubelet[1929]: E0517 00:23:58.980598 1929 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 17 00:23:58.981844 containerd[1589]: time="2025-05-17T00:23:58.981383415Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-rhlbb,Uid:257e7cbc-12fc-4960-997c-b581102a376c,Namespace:kube-system,Attempt:0,}" May 17 00:23:58.982292 containerd[1589]: time="2025-05-17T00:23:58.982262593Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-wfrz6,Uid:b6c77cdd-f59f-4e30-9656-982c2b3ec05e,Namespace:calico-system,Attempt:0,}" May 17 00:23:59.642645 kubelet[1929]: E0517 00:23:59.642595 1929 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:23:59.704240 containerd[1589]: time="2025-05-17T00:23:59.703393489Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 17 00:23:59.706248 containerd[1589]: time="2025-05-17T00:23:59.705917265Z" level=info msg="ImageUpdate event 
name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 17 00:23:59.708119 containerd[1589]: time="2025-05-17T00:23:59.708036725Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" May 17 00:23:59.708706 containerd[1589]: time="2025-05-17T00:23:59.708648436Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" May 17 00:23:59.714653 containerd[1589]: time="2025-05-17T00:23:59.714592749Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 17 00:23:59.717505 containerd[1589]: time="2025-05-17T00:23:59.717371872Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 17 00:23:59.720057 containerd[1589]: time="2025-05-17T00:23:59.719383775Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 737.875071ms" May 17 00:23:59.721112 containerd[1589]: time="2025-05-17T00:23:59.721065218Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 738.627455ms" May 17 
00:23:59.849014 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4185256060.mount: Deactivated successfully. May 17 00:23:59.899289 containerd[1589]: time="2025-05-17T00:23:59.896534029Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:23:59.899289 containerd[1589]: time="2025-05-17T00:23:59.896619127Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:23:59.899289 containerd[1589]: time="2025-05-17T00:23:59.896648978Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:23:59.899289 containerd[1589]: time="2025-05-17T00:23:59.896820458Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:23:59.901514 containerd[1589]: time="2025-05-17T00:23:59.901361186Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:23:59.901514 containerd[1589]: time="2025-05-17T00:23:59.901461968Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:23:59.901694 containerd[1589]: time="2025-05-17T00:23:59.901487198Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:23:59.901694 containerd[1589]: time="2025-05-17T00:23:59.901628930Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:24:00.072517 containerd[1589]: time="2025-05-17T00:24:00.072353678Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-rhlbb,Uid:257e7cbc-12fc-4960-997c-b581102a376c,Namespace:kube-system,Attempt:0,} returns sandbox id \"79cb95c1c519d4c34f90b4a979d1b46e967cb3f9b5dc4ae79b9e4ba2197395bb\"" May 17 00:24:00.075384 kubelet[1929]: E0517 00:24:00.075349 1929 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 17 00:24:00.080214 containerd[1589]: time="2025-05-17T00:24:00.080129620Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.9\"" May 17 00:24:00.085816 containerd[1589]: time="2025-05-17T00:24:00.085763517Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-wfrz6,Uid:b6c77cdd-f59f-4e30-9656-982c2b3ec05e,Namespace:calico-system,Attempt:0,} returns sandbox id \"23b4663775fa6e8aaca590e0b6ece1dd9e49d957570fa425faa93893e9243905\"" May 17 00:24:00.644143 kubelet[1929]: E0517 00:24:00.644047 1929 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:24:00.787613 kubelet[1929]: E0517 00:24:00.787534 1929 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4bvml" podUID="77e2009c-40a3-47c4-b9d0-5b99ba0c6d66" May 17 00:24:01.415357 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2701077721.mount: Deactivated successfully. 
May 17 00:24:01.645023 kubelet[1929]: E0517 00:24:01.644964 1929 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:24:02.072387 containerd[1589]: time="2025-05-17T00:24:02.072304028Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:24:02.073684 containerd[1589]: time="2025-05-17T00:24:02.073591938Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.9: active requests=0, bytes read=30355623" May 17 00:24:02.075228 containerd[1589]: time="2025-05-17T00:24:02.075104711Z" level=info msg="ImageCreate event name:\"sha256:11a47a71ed3ecf643e15a11990daed3b656279449ba9344db0b54652c4723578\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:24:02.077827 containerd[1589]: time="2025-05-17T00:24:02.077670622Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:fdf026cf2434537e499e9c739d189ca8fc57101d929ac5ccd8e24f979a9738c1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:24:02.079208 containerd[1589]: time="2025-05-17T00:24:02.079032954Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.9\" with image id \"sha256:11a47a71ed3ecf643e15a11990daed3b656279449ba9344db0b54652c4723578\", repo tag \"registry.k8s.io/kube-proxy:v1.31.9\", repo digest \"registry.k8s.io/kube-proxy@sha256:fdf026cf2434537e499e9c739d189ca8fc57101d929ac5ccd8e24f979a9738c1\", size \"30354642\" in 1.998342693s" May 17 00:24:02.079208 containerd[1589]: time="2025-05-17T00:24:02.079083863Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.9\" returns image reference \"sha256:11a47a71ed3ecf643e15a11990daed3b656279449ba9344db0b54652c4723578\"" May 17 00:24:02.082064 containerd[1589]: time="2025-05-17T00:24:02.081791009Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.0\"" May 17 00:24:02.083321 
containerd[1589]: time="2025-05-17T00:24:02.083264026Z" level=info msg="CreateContainer within sandbox \"79cb95c1c519d4c34f90b4a979d1b46e967cb3f9b5dc4ae79b9e4ba2197395bb\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" May 17 00:24:02.111448 containerd[1589]: time="2025-05-17T00:24:02.111319708Z" level=info msg="CreateContainer within sandbox \"79cb95c1c519d4c34f90b4a979d1b46e967cb3f9b5dc4ae79b9e4ba2197395bb\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"2850a6a2268fdfff969b7132ac651fad526ff6da504144ac9201d658f7dbbbad\"" May 17 00:24:02.113150 containerd[1589]: time="2025-05-17T00:24:02.113006380Z" level=info msg="StartContainer for \"2850a6a2268fdfff969b7132ac651fad526ff6da504144ac9201d658f7dbbbad\"" May 17 00:24:02.217293 containerd[1589]: time="2025-05-17T00:24:02.217223087Z" level=info msg="StartContainer for \"2850a6a2268fdfff969b7132ac651fad526ff6da504144ac9201d658f7dbbbad\" returns successfully" May 17 00:24:02.647764 kubelet[1929]: E0517 00:24:02.645443 1929 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:24:02.788621 kubelet[1929]: E0517 00:24:02.787591 1929 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4bvml" podUID="77e2009c-40a3-47c4-b9d0-5b99ba0c6d66" May 17 00:24:02.815549 kubelet[1929]: E0517 00:24:02.815177 1929 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 17 00:24:02.849869 kubelet[1929]: I0517 00:24:02.849790 1929 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-rhlbb" podStartSLOduration=3.848007406 
podStartE2EDuration="5.849765639s" podCreationTimestamp="2025-05-17 00:23:57 +0000 UTC" firstStartedPulling="2025-05-17 00:24:00.078999174 +0000 UTC m=+3.857751018" lastFinishedPulling="2025-05-17 00:24:02.080757404 +0000 UTC m=+5.859509251" observedRunningTime="2025-05-17 00:24:02.849332426 +0000 UTC m=+6.628084304" watchObservedRunningTime="2025-05-17 00:24:02.849765639 +0000 UTC m=+6.628517509" May 17 00:24:02.908932 kubelet[1929]: E0517 00:24:02.908706 1929 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:24:02.908932 kubelet[1929]: W0517 00:24:02.908772 1929 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:24:02.908932 kubelet[1929]: E0517 00:24:02.908839 1929 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:24:02.909564 kubelet[1929]: E0517 00:24:02.909238 1929 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:24:02.909564 kubelet[1929]: W0517 00:24:02.909259 1929 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:24:02.909564 kubelet[1929]: E0517 00:24:02.909297 1929 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:24:02.909904 kubelet[1929]: E0517 00:24:02.909887 1929 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:24:02.909936 kubelet[1929]: W0517 00:24:02.909919 1929 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:24:02.909936 kubelet[1929]: E0517 00:24:02.909933 1929 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:24:02.910162 kubelet[1929]: E0517 00:24:02.910135 1929 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:24:02.910162 kubelet[1929]: W0517 00:24:02.910160 1929 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:24:02.910243 kubelet[1929]: E0517 00:24:02.910169 1929 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:24:02.910389 kubelet[1929]: E0517 00:24:02.910377 1929 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:24:02.910389 kubelet[1929]: W0517 00:24:02.910389 1929 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:24:02.910389 kubelet[1929]: E0517 00:24:02.910397 1929 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:24:02.910573 kubelet[1929]: E0517 00:24:02.910562 1929 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:24:02.910573 kubelet[1929]: W0517 00:24:02.910572 1929 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:24:02.910629 kubelet[1929]: E0517 00:24:02.910589 1929 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:24:02.910848 kubelet[1929]: E0517 00:24:02.910829 1929 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:24:02.910887 kubelet[1929]: W0517 00:24:02.910850 1929 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:24:02.910887 kubelet[1929]: E0517 00:24:02.910864 1929 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:24:02.911150 kubelet[1929]: E0517 00:24:02.911131 1929 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:24:02.911150 kubelet[1929]: W0517 00:24:02.911151 1929 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:24:02.911280 kubelet[1929]: E0517 00:24:02.911165 1929 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:24:02.911455 kubelet[1929]: E0517 00:24:02.911437 1929 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:24:02.911455 kubelet[1929]: W0517 00:24:02.911454 1929 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:24:02.911539 kubelet[1929]: E0517 00:24:02.911468 1929 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:24:02.911659 kubelet[1929]: E0517 00:24:02.911647 1929 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:24:02.911659 kubelet[1929]: W0517 00:24:02.911659 1929 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:24:02.911708 kubelet[1929]: E0517 00:24:02.911668 1929 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:24:02.912000 kubelet[1929]: E0517 00:24:02.911881 1929 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:24:02.912000 kubelet[1929]: W0517 00:24:02.911898 1929 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:24:02.912000 kubelet[1929]: E0517 00:24:02.911908 1929 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:24:02.912243 kubelet[1929]: E0517 00:24:02.912218 1929 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:24:02.912243 kubelet[1929]: W0517 00:24:02.912233 1929 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:24:02.912243 kubelet[1929]: E0517 00:24:02.912243 1929 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:24:02.912474 kubelet[1929]: E0517 00:24:02.912453 1929 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:24:02.912474 kubelet[1929]: W0517 00:24:02.912472 1929 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:24:02.912540 kubelet[1929]: E0517 00:24:02.912484 1929 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:24:02.912664 kubelet[1929]: E0517 00:24:02.912648 1929 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:24:02.912664 kubelet[1929]: W0517 00:24:02.912659 1929 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:24:02.912713 kubelet[1929]: E0517 00:24:02.912666 1929 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:24:02.912816 kubelet[1929]: E0517 00:24:02.912801 1929 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:24:02.912816 kubelet[1929]: W0517 00:24:02.912811 1929 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:24:02.912882 kubelet[1929]: E0517 00:24:02.912819 1929 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:24:02.913003 kubelet[1929]: E0517 00:24:02.912984 1929 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:24:02.913003 kubelet[1929]: W0517 00:24:02.912999 1929 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:24:02.913046 kubelet[1929]: E0517 00:24:02.913009 1929 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:24:02.913352 kubelet[1929]: E0517 00:24:02.913317 1929 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:24:02.913395 kubelet[1929]: W0517 00:24:02.913353 1929 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:24:02.913395 kubelet[1929]: E0517 00:24:02.913364 1929 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:24:02.913542 kubelet[1929]: E0517 00:24:02.913528 1929 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:24:02.913542 kubelet[1929]: W0517 00:24:02.913539 1929 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:24:02.913613 kubelet[1929]: E0517 00:24:02.913547 1929 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:24:02.913703 kubelet[1929]: E0517 00:24:02.913692 1929 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:24:02.913703 kubelet[1929]: W0517 00:24:02.913703 1929 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:24:02.913761 kubelet[1929]: E0517 00:24:02.913710 1929 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:24:02.913878 kubelet[1929]: E0517 00:24:02.913867 1929 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:24:02.913878 kubelet[1929]: W0517 00:24:02.913878 1929 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:24:02.913923 kubelet[1929]: E0517 00:24:02.913885 1929 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:24:02.970045 kubelet[1929]: E0517 00:24:02.969989 1929 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:24:02.970045 kubelet[1929]: W0517 00:24:02.970024 1929 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:24:02.970045 kubelet[1929]: E0517 00:24:02.970050 1929 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:24:02.970403 kubelet[1929]: E0517 00:24:02.970378 1929 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:24:02.970403 kubelet[1929]: W0517 00:24:02.970392 1929 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:24:02.970468 kubelet[1929]: E0517 00:24:02.970432 1929 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:24:02.970736 kubelet[1929]: E0517 00:24:02.970705 1929 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:24:02.970736 kubelet[1929]: W0517 00:24:02.970729 1929 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:24:02.970847 kubelet[1929]: E0517 00:24:02.970752 1929 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:24:02.970982 kubelet[1929]: E0517 00:24:02.970967 1929 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:24:02.971015 kubelet[1929]: W0517 00:24:02.970984 1929 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:24:02.971038 kubelet[1929]: E0517 00:24:02.971012 1929 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:24:02.971300 kubelet[1929]: E0517 00:24:02.971284 1929 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:24:02.971300 kubelet[1929]: W0517 00:24:02.971298 1929 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:24:02.971497 kubelet[1929]: E0517 00:24:02.971319 1929 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:24:02.971626 kubelet[1929]: E0517 00:24:02.971606 1929 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:24:02.971626 kubelet[1929]: W0517 00:24:02.971624 1929 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:24:02.971725 kubelet[1929]: E0517 00:24:02.971655 1929 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:24:02.972374 kubelet[1929]: E0517 00:24:02.972293 1929 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:24:02.972374 kubelet[1929]: W0517 00:24:02.972305 1929 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:24:02.972374 kubelet[1929]: E0517 00:24:02.972323 1929 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:24:02.972591 kubelet[1929]: E0517 00:24:02.972497 1929 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:24:02.972591 kubelet[1929]: W0517 00:24:02.972506 1929 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:24:02.972591 kubelet[1929]: E0517 00:24:02.972522 1929 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:24:02.972674 kubelet[1929]: E0517 00:24:02.972667 1929 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:24:02.972713 kubelet[1929]: W0517 00:24:02.972674 1929 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:24:02.972713 kubelet[1929]: E0517 00:24:02.972683 1929 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:24:02.972866 kubelet[1929]: E0517 00:24:02.972854 1929 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:24:02.972866 kubelet[1929]: W0517 00:24:02.972865 1929 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:24:02.972917 kubelet[1929]: E0517 00:24:02.972890 1929 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" May 17 00:24:02.973290 kubelet[1929]: E0517 00:24:02.973263 1929 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:24:02.973290 kubelet[1929]: W0517 00:24:02.973280 1929 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:24:02.973290 kubelet[1929]: E0517 00:24:02.973300 1929 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:24:02.973541 kubelet[1929]: E0517 00:24:02.973518 1929 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input May 17 00:24:02.973541 kubelet[1929]: W0517 00:24:02.973536 1929 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" May 17 00:24:02.973618 kubelet[1929]: E0517 00:24:02.973550 1929 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" May 17 00:24:03.423593 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount239343261.mount: Deactivated successfully. 
May 17 00:24:03.540692 containerd[1589]: time="2025-05-17T00:24:03.539740820Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:24:03.541206 containerd[1589]: time="2025-05-17T00:24:03.541145676Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.0: active requests=0, bytes read=5934460" May 17 00:24:03.541332 containerd[1589]: time="2025-05-17T00:24:03.541313359Z" level=info msg="ImageCreate event name:\"sha256:c53606cea03e59dcbfa981dc43a55dff05952895f72576b8389fa00be09ab676\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:24:03.543532 containerd[1589]: time="2025-05-17T00:24:03.543481898Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:ce76dd87f11d3fd0054c35ad2e0e9f833748d007f77a9bfe859d0ddcb66fcb2c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:24:03.544837 containerd[1589]: time="2025-05-17T00:24:03.544682247Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.0\" with image id \"sha256:c53606cea03e59dcbfa981dc43a55dff05952895f72576b8389fa00be09ab676\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:ce76dd87f11d3fd0054c35ad2e0e9f833748d007f77a9bfe859d0ddcb66fcb2c\", size \"5934282\" in 1.462835326s" May 17 00:24:03.544837 containerd[1589]: time="2025-05-17T00:24:03.544724455Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.0\" returns image reference \"sha256:c53606cea03e59dcbfa981dc43a55dff05952895f72576b8389fa00be09ab676\"" May 17 00:24:03.547374 containerd[1589]: time="2025-05-17T00:24:03.547336169Z" level=info msg="CreateContainer within sandbox \"23b4663775fa6e8aaca590e0b6ece1dd9e49d957570fa425faa93893e9243905\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" May 17 
00:24:03.567580 containerd[1589]: time="2025-05-17T00:24:03.567531835Z" level=info msg="CreateContainer within sandbox \"23b4663775fa6e8aaca590e0b6ece1dd9e49d957570fa425faa93893e9243905\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"f2c4cd6af2614fef595984c18ebdaa3fa4723030edc4e939323f8f7c6c9b9718\"" May 17 00:24:03.568662 containerd[1589]: time="2025-05-17T00:24:03.568476452Z" level=info msg="StartContainer for \"f2c4cd6af2614fef595984c18ebdaa3fa4723030edc4e939323f8f7c6c9b9718\"" May 17 00:24:03.646273 kubelet[1929]: E0517 00:24:03.646227 1929 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:24:03.659846 containerd[1589]: time="2025-05-17T00:24:03.659678408Z" level=info msg="StartContainer for \"f2c4cd6af2614fef595984c18ebdaa3fa4723030edc4e939323f8f7c6c9b9718\" returns successfully" May 17 00:24:03.835280 kubelet[1929]: E0517 00:24:03.819227 1929 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.3 67.207.67.2 67.207.67.3" May 17 00:24:03.851421 containerd[1589]: time="2025-05-17T00:24:03.851045409Z" level=info msg="shim disconnected" id=f2c4cd6af2614fef595984c18ebdaa3fa4723030edc4e939323f8f7c6c9b9718 namespace=k8s.io May 17 00:24:03.851421 containerd[1589]: time="2025-05-17T00:24:03.851201827Z" level=warning msg="cleaning up after shim disconnected" id=f2c4cd6af2614fef595984c18ebdaa3fa4723030edc4e939323f8f7c6c9b9718 namespace=k8s.io May 17 00:24:03.851421 containerd[1589]: time="2025-05-17T00:24:03.851225319Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 17 00:24:04.364476 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f2c4cd6af2614fef595984c18ebdaa3fa4723030edc4e939323f8f7c6c9b9718-rootfs.mount: Deactivated successfully. 
May 17 00:24:04.647055 kubelet[1929]: E0517 00:24:04.646892 1929 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:24:04.788343 kubelet[1929]: E0517 00:24:04.787911 1929 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4bvml" podUID="77e2009c-40a3-47c4-b9d0-5b99ba0c6d66" May 17 00:24:04.824048 containerd[1589]: time="2025-05-17T00:24:04.823372049Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.0\"" May 17 00:24:05.132630 systemd-resolved[1484]: Using degraded feature set UDP instead of UDP+EDNS0 for DNS server 67.207.67.2. May 17 00:24:05.647396 kubelet[1929]: E0517 00:24:05.647313 1929 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:24:06.647555 kubelet[1929]: E0517 00:24:06.647493 1929 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:24:06.788594 kubelet[1929]: E0517 00:24:06.788541 1929 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4bvml" podUID="77e2009c-40a3-47c4-b9d0-5b99ba0c6d66" May 17 00:24:07.648667 kubelet[1929]: E0517 00:24:07.648621 1929 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:24:08.182807 containerd[1589]: time="2025-05-17T00:24:08.182726462Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.0: active requests=0, bytes read=70300568" May 17 00:24:08.184002 containerd[1589]: 
time="2025-05-17T00:24:08.183948422Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:24:08.186294 containerd[1589]: time="2025-05-17T00:24:08.186218320Z" level=info msg="ImageCreate event name:\"sha256:15f996c472622f23047ea38b2d72940e8c34d0996b8a2e12a1f255c1d7083185\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:24:08.187062 containerd[1589]: time="2025-05-17T00:24:08.187027806Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.0\" with image id \"sha256:15f996c472622f23047ea38b2d72940e8c34d0996b8a2e12a1f255c1d7083185\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:3dd06656abdc03fbd51782d5f6fe4d70e6825a1c0c5bce2a165bbd2ff9e0f7df\", size \"71793271\" in 3.363601542s" May 17 00:24:08.187325 containerd[1589]: time="2025-05-17T00:24:08.187175040Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.0\" returns image reference \"sha256:15f996c472622f23047ea38b2d72940e8c34d0996b8a2e12a1f255c1d7083185\"" May 17 00:24:08.187692 containerd[1589]: time="2025-05-17T00:24:08.187653849Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:3dd06656abdc03fbd51782d5f6fe4d70e6825a1c0c5bce2a165bbd2ff9e0f7df\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:24:08.190942 containerd[1589]: time="2025-05-17T00:24:08.190879658Z" level=info msg="CreateContainer within sandbox \"23b4663775fa6e8aaca590e0b6ece1dd9e49d957570fa425faa93893e9243905\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" May 17 00:24:08.223017 containerd[1589]: time="2025-05-17T00:24:08.222338831Z" level=info msg="CreateContainer within sandbox \"23b4663775fa6e8aaca590e0b6ece1dd9e49d957570fa425faa93893e9243905\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"89de75c44bfac4f6de02a9258eff4d1b0f2360ad9bad88411fda2f90bf1cfeec\"" May 17 
00:24:08.225275 containerd[1589]: time="2025-05-17T00:24:08.224243219Z" level=info msg="StartContainer for \"89de75c44bfac4f6de02a9258eff4d1b0f2360ad9bad88411fda2f90bf1cfeec\"" May 17 00:24:08.273932 systemd[1]: run-containerd-runc-k8s.io-89de75c44bfac4f6de02a9258eff4d1b0f2360ad9bad88411fda2f90bf1cfeec-runc.gfm3qq.mount: Deactivated successfully. May 17 00:24:08.327269 containerd[1589]: time="2025-05-17T00:24:08.325575149Z" level=info msg="StartContainer for \"89de75c44bfac4f6de02a9258eff4d1b0f2360ad9bad88411fda2f90bf1cfeec\" returns successfully" May 17 00:24:08.650485 kubelet[1929]: E0517 00:24:08.649923 1929 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:24:08.788839 kubelet[1929]: E0517 00:24:08.788274 1929 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-4bvml" podUID="77e2009c-40a3-47c4-b9d0-5b99ba0c6d66" May 17 00:24:09.213231 containerd[1589]: time="2025-05-17T00:24:09.212885222Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 17 00:24:09.241240 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-89de75c44bfac4f6de02a9258eff4d1b0f2360ad9bad88411fda2f90bf1cfeec-rootfs.mount: Deactivated successfully. 
May 17 00:24:09.248776 kubelet[1929]: I0517 00:24:09.248430 1929 kubelet_node_status.go:488] "Fast updating node status as it just became ready" May 17 00:24:09.311696 containerd[1589]: time="2025-05-17T00:24:09.311567302Z" level=info msg="shim disconnected" id=89de75c44bfac4f6de02a9258eff4d1b0f2360ad9bad88411fda2f90bf1cfeec namespace=k8s.io May 17 00:24:09.311696 containerd[1589]: time="2025-05-17T00:24:09.311648854Z" level=warning msg="cleaning up after shim disconnected" id=89de75c44bfac4f6de02a9258eff4d1b0f2360ad9bad88411fda2f90bf1cfeec namespace=k8s.io May 17 00:24:09.311696 containerd[1589]: time="2025-05-17T00:24:09.311667367Z" level=info msg="cleaning up dead shim" namespace=k8s.io May 17 00:24:09.650326 kubelet[1929]: E0517 00:24:09.650119 1929 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:24:09.842054 containerd[1589]: time="2025-05-17T00:24:09.841977116Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.0\"" May 17 00:24:09.843992 systemd-resolved[1484]: Using degraded feature set TCP instead of UDP for DNS server 67.207.67.2. 
May 17 00:24:10.650815 kubelet[1929]: E0517 00:24:10.650702 1929 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:24:10.792032 containerd[1589]: time="2025-05-17T00:24:10.791925494Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-4bvml,Uid:77e2009c-40a3-47c4-b9d0-5b99ba0c6d66,Namespace:calico-system,Attempt:0,}" May 17 00:24:10.887710 containerd[1589]: time="2025-05-17T00:24:10.887636450Z" level=error msg="Failed to destroy network for sandbox \"473dc42565f5ad9a66bafff8418b952fc311e9b3beec64611ccad5374c4d44ce\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:24:10.890667 containerd[1589]: time="2025-05-17T00:24:10.890580080Z" level=error msg="encountered an error cleaning up failed sandbox \"473dc42565f5ad9a66bafff8418b952fc311e9b3beec64611ccad5374c4d44ce\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:24:10.890981 containerd[1589]: time="2025-05-17T00:24:10.890883027Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-4bvml,Uid:77e2009c-40a3-47c4-b9d0-5b99ba0c6d66,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"473dc42565f5ad9a66bafff8418b952fc311e9b3beec64611ccad5374c4d44ce\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:24:10.891872 kubelet[1929]: E0517 00:24:10.891336 1929 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"473dc42565f5ad9a66bafff8418b952fc311e9b3beec64611ccad5374c4d44ce\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" May 17 00:24:10.891872 kubelet[1929]: E0517 00:24:10.891437 1929 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"473dc42565f5ad9a66bafff8418b952fc311e9b3beec64611ccad5374c4d44ce\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-4bvml" May 17 00:24:10.891872 kubelet[1929]: E0517 00:24:10.891467 1929 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"473dc42565f5ad9a66bafff8418b952fc311e9b3beec64611ccad5374c4d44ce\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-4bvml" May 17 00:24:10.891565 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-473dc42565f5ad9a66bafff8418b952fc311e9b3beec64611ccad5374c4d44ce-shm.mount: Deactivated successfully. 
May 17 00:24:10.893753 kubelet[1929]: E0517 00:24:10.891540 1929 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-4bvml_calico-system(77e2009c-40a3-47c4-b9d0-5b99ba0c6d66)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-4bvml_calico-system(77e2009c-40a3-47c4-b9d0-5b99ba0c6d66)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"473dc42565f5ad9a66bafff8418b952fc311e9b3beec64611ccad5374c4d44ce\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-4bvml" podUID="77e2009c-40a3-47c4-b9d0-5b99ba0c6d66"
May 17 00:24:11.126142 kubelet[1929]: I0517 00:24:11.125954 1929 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4bjp6\" (UniqueName: \"kubernetes.io/projected/e5d5f111-1f86-4cc1-9a80-2568f9fbd404-kube-api-access-4bjp6\") pod \"nginx-deployment-8587fbcb89-pkppr\" (UID: \"e5d5f111-1f86-4cc1-9a80-2568f9fbd404\") " pod="default/nginx-deployment-8587fbcb89-pkppr"
May 17 00:24:11.330814 containerd[1589]: time="2025-05-17T00:24:11.330273829Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-pkppr,Uid:e5d5f111-1f86-4cc1-9a80-2568f9fbd404,Namespace:default,Attempt:0,}"
May 17 00:24:11.447727 containerd[1589]: time="2025-05-17T00:24:11.447579534Z" level=error msg="Failed to destroy network for sandbox \"09ee446213192da61eb70b9b2149157118ce234d68e1fdcad3a733eb41505a13\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 17 00:24:11.448765 containerd[1589]: time="2025-05-17T00:24:11.447939851Z" level=error msg="encountered an error cleaning up failed sandbox \"09ee446213192da61eb70b9b2149157118ce234d68e1fdcad3a733eb41505a13\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 17 00:24:11.448765 containerd[1589]: time="2025-05-17T00:24:11.447998392Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-pkppr,Uid:e5d5f111-1f86-4cc1-9a80-2568f9fbd404,Namespace:default,Attempt:0,} failed, error" error="failed to setup network for sandbox \"09ee446213192da61eb70b9b2149157118ce234d68e1fdcad3a733eb41505a13\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 17 00:24:11.449007 kubelet[1929]: E0517 00:24:11.448404 1929 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"09ee446213192da61eb70b9b2149157118ce234d68e1fdcad3a733eb41505a13\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 17 00:24:11.449007 kubelet[1929]: E0517 00:24:11.448471 1929 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"09ee446213192da61eb70b9b2149157118ce234d68e1fdcad3a733eb41505a13\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-8587fbcb89-pkppr"
May 17 00:24:11.449007 kubelet[1929]: E0517 00:24:11.448492 1929 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"09ee446213192da61eb70b9b2149157118ce234d68e1fdcad3a733eb41505a13\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-8587fbcb89-pkppr"
May 17 00:24:11.449106 kubelet[1929]: E0517 00:24:11.448533 1929 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-8587fbcb89-pkppr_default(e5d5f111-1f86-4cc1-9a80-2568f9fbd404)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-8587fbcb89-pkppr_default(e5d5f111-1f86-4cc1-9a80-2568f9fbd404)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"09ee446213192da61eb70b9b2149157118ce234d68e1fdcad3a733eb41505a13\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-8587fbcb89-pkppr" podUID="e5d5f111-1f86-4cc1-9a80-2568f9fbd404"
May 17 00:24:11.651814 kubelet[1929]: E0517 00:24:11.651751 1929 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 17 00:24:11.807093 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-09ee446213192da61eb70b9b2149157118ce234d68e1fdcad3a733eb41505a13-shm.mount: Deactivated successfully.
May 17 00:24:11.846629 kubelet[1929]: I0517 00:24:11.845891 1929 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="09ee446213192da61eb70b9b2149157118ce234d68e1fdcad3a733eb41505a13"
May 17 00:24:11.847112 containerd[1589]: time="2025-05-17T00:24:11.847068156Z" level=info msg="StopPodSandbox for \"09ee446213192da61eb70b9b2149157118ce234d68e1fdcad3a733eb41505a13\""
May 17 00:24:11.848258 containerd[1589]: time="2025-05-17T00:24:11.847327668Z" level=info msg="Ensure that sandbox 09ee446213192da61eb70b9b2149157118ce234d68e1fdcad3a733eb41505a13 in task-service has been cleanup successfully"
May 17 00:24:11.851786 kubelet[1929]: I0517 00:24:11.851756 1929 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="473dc42565f5ad9a66bafff8418b952fc311e9b3beec64611ccad5374c4d44ce"
May 17 00:24:11.853682 containerd[1589]: time="2025-05-17T00:24:11.853642199Z" level=info msg="StopPodSandbox for \"473dc42565f5ad9a66bafff8418b952fc311e9b3beec64611ccad5374c4d44ce\""
May 17 00:24:11.854017 containerd[1589]: time="2025-05-17T00:24:11.853994558Z" level=info msg="Ensure that sandbox 473dc42565f5ad9a66bafff8418b952fc311e9b3beec64611ccad5374c4d44ce in task-service has been cleanup successfully"
May 17 00:24:11.903158 containerd[1589]: time="2025-05-17T00:24:11.902971040Z" level=error msg="StopPodSandbox for \"473dc42565f5ad9a66bafff8418b952fc311e9b3beec64611ccad5374c4d44ce\" failed" error="failed to destroy network for sandbox \"473dc42565f5ad9a66bafff8418b952fc311e9b3beec64611ccad5374c4d44ce\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 17 00:24:11.903358 kubelet[1929]: E0517 00:24:11.903304 1929 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"473dc42565f5ad9a66bafff8418b952fc311e9b3beec64611ccad5374c4d44ce\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="473dc42565f5ad9a66bafff8418b952fc311e9b3beec64611ccad5374c4d44ce"
May 17 00:24:11.903403 kubelet[1929]: E0517 00:24:11.903363 1929 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"473dc42565f5ad9a66bafff8418b952fc311e9b3beec64611ccad5374c4d44ce"}
May 17 00:24:11.903438 kubelet[1929]: E0517 00:24:11.903423 1929 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"77e2009c-40a3-47c4-b9d0-5b99ba0c6d66\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"473dc42565f5ad9a66bafff8418b952fc311e9b3beec64611ccad5374c4d44ce\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
May 17 00:24:11.903506 kubelet[1929]: E0517 00:24:11.903449 1929 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"77e2009c-40a3-47c4-b9d0-5b99ba0c6d66\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"473dc42565f5ad9a66bafff8418b952fc311e9b3beec64611ccad5374c4d44ce\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-4bvml" podUID="77e2009c-40a3-47c4-b9d0-5b99ba0c6d66"
May 17 00:24:11.914983 containerd[1589]: time="2025-05-17T00:24:11.914527394Z" level=error msg="StopPodSandbox for \"09ee446213192da61eb70b9b2149157118ce234d68e1fdcad3a733eb41505a13\" failed" error="failed to destroy network for sandbox \"09ee446213192da61eb70b9b2149157118ce234d68e1fdcad3a733eb41505a13\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
May 17 00:24:11.915150 kubelet[1929]: E0517 00:24:11.914791 1929 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"09ee446213192da61eb70b9b2149157118ce234d68e1fdcad3a733eb41505a13\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="09ee446213192da61eb70b9b2149157118ce234d68e1fdcad3a733eb41505a13"
May 17 00:24:11.915150 kubelet[1929]: E0517 00:24:11.914847 1929 kuberuntime_manager.go:1479] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"09ee446213192da61eb70b9b2149157118ce234d68e1fdcad3a733eb41505a13"}
May 17 00:24:11.915150 kubelet[1929]: E0517 00:24:11.914900 1929 kuberuntime_manager.go:1079] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"e5d5f111-1f86-4cc1-9a80-2568f9fbd404\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"09ee446213192da61eb70b9b2149157118ce234d68e1fdcad3a733eb41505a13\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\""
May 17 00:24:11.915150 kubelet[1929]: E0517 00:24:11.914925 1929 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"e5d5f111-1f86-4cc1-9a80-2568f9fbd404\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"09ee446213192da61eb70b9b2149157118ce234d68e1fdcad3a733eb41505a13\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-8587fbcb89-pkppr" podUID="e5d5f111-1f86-4cc1-9a80-2568f9fbd404"
May 17 00:24:12.652220 kubelet[1929]: E0517 00:24:12.652034 1929 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 17 00:24:13.652247 kubelet[1929]: E0517 00:24:13.652172 1929 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 17 00:24:14.652990 kubelet[1929]: E0517 00:24:14.652929 1929 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 17 00:24:15.653277 kubelet[1929]: E0517 00:24:15.653208 1929 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 17 00:24:16.321743 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2798107444.mount: Deactivated successfully.
May 17 00:24:16.364711 containerd[1589]: time="2025-05-17T00:24:16.364647537Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 17 00:24:16.366577 containerd[1589]: time="2025-05-17T00:24:16.366479801Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.0: active requests=0, bytes read=156396372"
May 17 00:24:16.369213 containerd[1589]: time="2025-05-17T00:24:16.367998980Z" level=info msg="ImageCreate event name:\"sha256:d12dae9bc0999225efe30fd5618bcf2195709d54ed2840234f5006aab5f7d721\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 17 00:24:16.370353 containerd[1589]: time="2025-05-17T00:24:16.370308885Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:7cb61ea47ca0a8e6d0526a42da4f1e399b37ccd13339d0776d272465cb7ee012\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
May 17 00:24:16.371370 containerd[1589]: time="2025-05-17T00:24:16.371324399Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.0\" with image id \"sha256:d12dae9bc0999225efe30fd5618bcf2195709d54ed2840234f5006aab5f7d721\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/node@sha256:7cb61ea47ca0a8e6d0526a42da4f1e399b37ccd13339d0776d272465cb7ee012\", size \"156396234\" in 6.529297266s"
May 17 00:24:16.371459 containerd[1589]: time="2025-05-17T00:24:16.371382614Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.0\" returns image reference \"sha256:d12dae9bc0999225efe30fd5618bcf2195709d54ed2840234f5006aab5f7d721\""
May 17 00:24:16.405800 containerd[1589]: time="2025-05-17T00:24:16.405755860Z" level=info msg="CreateContainer within sandbox \"23b4663775fa6e8aaca590e0b6ece1dd9e49d957570fa425faa93893e9243905\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}"
May 17 00:24:16.428269 containerd[1589]: time="2025-05-17T00:24:16.428211972Z" level=info msg="CreateContainer within sandbox \"23b4663775fa6e8aaca590e0b6ece1dd9e49d957570fa425faa93893e9243905\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"94594aa3058527eaa6e38e557272918bac1a80b23742fe8f422f35d816d584f6\""
May 17 00:24:16.430730 containerd[1589]: time="2025-05-17T00:24:16.429046477Z" level=info msg="StartContainer for \"94594aa3058527eaa6e38e557272918bac1a80b23742fe8f422f35d816d584f6\""
May 17 00:24:16.555527 containerd[1589]: time="2025-05-17T00:24:16.555474830Z" level=info msg="StartContainer for \"94594aa3058527eaa6e38e557272918bac1a80b23742fe8f422f35d816d584f6\" returns successfully"
May 17 00:24:16.636930 kubelet[1929]: E0517 00:24:16.636777 1929 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 17 00:24:16.654163 kubelet[1929]: E0517 00:24:16.654087 1929 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 17 00:24:16.681006 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information.
May 17 00:24:16.681204 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved.
May 17 00:24:16.900734 kubelet[1929]: I0517 00:24:16.900561 1929 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-wfrz6" podStartSLOduration=3.61637281 podStartE2EDuration="19.900538693s" podCreationTimestamp="2025-05-17 00:23:57 +0000 UTC" firstStartedPulling="2025-05-17 00:24:00.088420402 +0000 UTC m=+3.867172269" lastFinishedPulling="2025-05-17 00:24:16.372586292 +0000 UTC m=+20.151338152" observedRunningTime="2025-05-17 00:24:16.900272983 +0000 UTC m=+20.679024857" watchObservedRunningTime="2025-05-17 00:24:16.900538693 +0000 UTC m=+20.679290564"
May 17 00:24:17.655083 kubelet[1929]: E0517 00:24:17.654995 1929 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 17 00:24:17.869362 kubelet[1929]: I0517 00:24:17.869131 1929 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
May 17 00:24:18.574264 kernel: bpftool[2682]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set
May 17 00:24:18.656160 kubelet[1929]: E0517 00:24:18.656106 1929 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 17 00:24:18.935538 systemd-networkd[1224]: vxlan.calico: Link UP
May 17 00:24:18.935550 systemd-networkd[1224]: vxlan.calico: Gained carrier
May 17 00:24:19.656729 kubelet[1929]: E0517 00:24:19.656639 1929 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 17 00:24:20.657662 kubelet[1929]: E0517 00:24:20.657582 1929 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 17 00:24:20.748626 systemd-networkd[1224]: vxlan.calico: Gained IPv6LL
May 17 00:24:20.837914 kubelet[1929]: I0517 00:24:20.837852 1929 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
May 17 00:24:21.658745 kubelet[1929]: E0517 00:24:21.658677 1929 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 17 00:24:22.659306 kubelet[1929]: E0517 00:24:22.659228 1929 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 17 00:24:23.660272 kubelet[1929]: E0517 00:24:23.660208 1929 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 17 00:24:24.661253 kubelet[1929]: E0517 00:24:24.661163 1929 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 17 00:24:24.789639 containerd[1589]: time="2025-05-17T00:24:24.789587137Z" level=info msg="StopPodSandbox for \"09ee446213192da61eb70b9b2149157118ce234d68e1fdcad3a733eb41505a13\""
May 17 00:24:24.984596 containerd[1589]: 2025-05-17 00:24:24.889 [INFO][2815] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="09ee446213192da61eb70b9b2149157118ce234d68e1fdcad3a733eb41505a13"
May 17 00:24:24.984596 containerd[1589]: 2025-05-17 00:24:24.889 [INFO][2815] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="09ee446213192da61eb70b9b2149157118ce234d68e1fdcad3a733eb41505a13" iface="eth0" netns="/var/run/netns/cni-7a73e8af-3b73-67b1-cf6f-c1de78c4c59f"
May 17 00:24:24.984596 containerd[1589]: 2025-05-17 00:24:24.890 [INFO][2815] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="09ee446213192da61eb70b9b2149157118ce234d68e1fdcad3a733eb41505a13" iface="eth0" netns="/var/run/netns/cni-7a73e8af-3b73-67b1-cf6f-c1de78c4c59f"
May 17 00:24:24.984596 containerd[1589]: 2025-05-17 00:24:24.890 [INFO][2815] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="09ee446213192da61eb70b9b2149157118ce234d68e1fdcad3a733eb41505a13" iface="eth0" netns="/var/run/netns/cni-7a73e8af-3b73-67b1-cf6f-c1de78c4c59f"
May 17 00:24:24.984596 containerd[1589]: 2025-05-17 00:24:24.890 [INFO][2815] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="09ee446213192da61eb70b9b2149157118ce234d68e1fdcad3a733eb41505a13"
May 17 00:24:24.984596 containerd[1589]: 2025-05-17 00:24:24.890 [INFO][2815] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="09ee446213192da61eb70b9b2149157118ce234d68e1fdcad3a733eb41505a13"
May 17 00:24:24.984596 containerd[1589]: 2025-05-17 00:24:24.920 [INFO][2822] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="09ee446213192da61eb70b9b2149157118ce234d68e1fdcad3a733eb41505a13" HandleID="k8s-pod-network.09ee446213192da61eb70b9b2149157118ce234d68e1fdcad3a733eb41505a13" Workload="143.198.108.0-k8s-nginx--deployment--8587fbcb89--pkppr-eth0"
May 17 00:24:24.984596 containerd[1589]: 2025-05-17 00:24:24.920 [INFO][2822] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
May 17 00:24:24.984596 containerd[1589]: 2025-05-17 00:24:24.920 [INFO][2822] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
May 17 00:24:24.984596 containerd[1589]: 2025-05-17 00:24:24.964 [WARNING][2822] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. Ignoring ContainerID="09ee446213192da61eb70b9b2149157118ce234d68e1fdcad3a733eb41505a13" HandleID="k8s-pod-network.09ee446213192da61eb70b9b2149157118ce234d68e1fdcad3a733eb41505a13" Workload="143.198.108.0-k8s-nginx--deployment--8587fbcb89--pkppr-eth0"
May 17 00:24:24.984596 containerd[1589]: 2025-05-17 00:24:24.964 [INFO][2822] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="09ee446213192da61eb70b9b2149157118ce234d68e1fdcad3a733eb41505a13" HandleID="k8s-pod-network.09ee446213192da61eb70b9b2149157118ce234d68e1fdcad3a733eb41505a13" Workload="143.198.108.0-k8s-nginx--deployment--8587fbcb89--pkppr-eth0"
May 17 00:24:24.984596 containerd[1589]: 2025-05-17 00:24:24.979 [INFO][2822] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
May 17 00:24:24.984596 containerd[1589]: 2025-05-17 00:24:24.982 [INFO][2815] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="09ee446213192da61eb70b9b2149157118ce234d68e1fdcad3a733eb41505a13"
May 17 00:24:24.988143 containerd[1589]: time="2025-05-17T00:24:24.984600064Z" level=info msg="TearDown network for sandbox \"09ee446213192da61eb70b9b2149157118ce234d68e1fdcad3a733eb41505a13\" successfully"
May 17 00:24:24.988143 containerd[1589]: time="2025-05-17T00:24:24.984631261Z" level=info msg="StopPodSandbox for \"09ee446213192da61eb70b9b2149157118ce234d68e1fdcad3a733eb41505a13\" returns successfully"
May 17 00:24:24.988143 containerd[1589]: time="2025-05-17T00:24:24.987298819Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-pkppr,Uid:e5d5f111-1f86-4cc1-9a80-2568f9fbd404,Namespace:default,Attempt:1,}"
May 17 00:24:24.989809 systemd[1]: run-netns-cni\x2d7a73e8af\x2d3b73\x2d67b1\x2dcf6f\x2dc1de78c4c59f.mount: Deactivated successfully.
May 17 00:24:25.155056 systemd-networkd[1224]: caliecb575ea1fe: Link UP
May 17 00:24:25.157722 systemd-networkd[1224]: caliecb575ea1fe: Gained carrier
May 17 00:24:25.179432 containerd[1589]: 2025-05-17 00:24:25.054 [INFO][2831] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {143.198.108.0-k8s-nginx--deployment--8587fbcb89--pkppr-eth0 nginx-deployment-8587fbcb89- default e5d5f111-1f86-4cc1-9a80-2568f9fbd404 1300 0 2025-05-17 00:24:11 +0000 UTC map[app:nginx pod-template-hash:8587fbcb89 projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s 143.198.108.0 nginx-deployment-8587fbcb89-pkppr eth0 default [] [] [kns.default ksa.default.default] caliecb575ea1fe [] [] }} ContainerID="d4f13d490ce78b3e663f9a8108592a0dfc948993524433513d8cf31e8aa12a44" Namespace="default" Pod="nginx-deployment-8587fbcb89-pkppr" WorkloadEndpoint="143.198.108.0-k8s-nginx--deployment--8587fbcb89--pkppr-"
May 17 00:24:25.179432 containerd[1589]: 2025-05-17 00:24:25.054 [INFO][2831] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="d4f13d490ce78b3e663f9a8108592a0dfc948993524433513d8cf31e8aa12a44" Namespace="default" Pod="nginx-deployment-8587fbcb89-pkppr" WorkloadEndpoint="143.198.108.0-k8s-nginx--deployment--8587fbcb89--pkppr-eth0"
May 17 00:24:25.179432 containerd[1589]: 2025-05-17 00:24:25.095 [INFO][2842] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d4f13d490ce78b3e663f9a8108592a0dfc948993524433513d8cf31e8aa12a44" HandleID="k8s-pod-network.d4f13d490ce78b3e663f9a8108592a0dfc948993524433513d8cf31e8aa12a44" Workload="143.198.108.0-k8s-nginx--deployment--8587fbcb89--pkppr-eth0"
May 17 00:24:25.179432 containerd[1589]: 2025-05-17 00:24:25.095 [INFO][2842] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="d4f13d490ce78b3e663f9a8108592a0dfc948993524433513d8cf31e8aa12a44" HandleID="k8s-pod-network.d4f13d490ce78b3e663f9a8108592a0dfc948993524433513d8cf31e8aa12a44" Workload="143.198.108.0-k8s-nginx--deployment--8587fbcb89--pkppr-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000333ab0), Attrs:map[string]string{"namespace":"default", "node":"143.198.108.0", "pod":"nginx-deployment-8587fbcb89-pkppr", "timestamp":"2025-05-17 00:24:25.095055704 +0000 UTC"}, Hostname:"143.198.108.0", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
May 17 00:24:25.179432 containerd[1589]: 2025-05-17 00:24:25.095 [INFO][2842] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
May 17 00:24:25.179432 containerd[1589]: 2025-05-17 00:24:25.095 [INFO][2842] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
May 17 00:24:25.179432 containerd[1589]: 2025-05-17 00:24:25.095 [INFO][2842] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '143.198.108.0'
May 17 00:24:25.179432 containerd[1589]: 2025-05-17 00:24:25.106 [INFO][2842] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.d4f13d490ce78b3e663f9a8108592a0dfc948993524433513d8cf31e8aa12a44" host="143.198.108.0"
May 17 00:24:25.179432 containerd[1589]: 2025-05-17 00:24:25.114 [INFO][2842] ipam/ipam.go 394: Looking up existing affinities for host host="143.198.108.0"
May 17 00:24:25.179432 containerd[1589]: 2025-05-17 00:24:25.122 [INFO][2842] ipam/ipam.go 511: Trying affinity for 192.168.120.192/26 host="143.198.108.0"
May 17 00:24:25.179432 containerd[1589]: 2025-05-17 00:24:25.125 [INFO][2842] ipam/ipam.go 158: Attempting to load block cidr=192.168.120.192/26 host="143.198.108.0"
May 17 00:24:25.179432 containerd[1589]: 2025-05-17 00:24:25.128 [INFO][2842] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.120.192/26 host="143.198.108.0"
May 17 00:24:25.179432 containerd[1589]: 2025-05-17 00:24:25.128 [INFO][2842] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.120.192/26 handle="k8s-pod-network.d4f13d490ce78b3e663f9a8108592a0dfc948993524433513d8cf31e8aa12a44" host="143.198.108.0"
May 17 00:24:25.179432 containerd[1589]: 2025-05-17 00:24:25.130 [INFO][2842] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.d4f13d490ce78b3e663f9a8108592a0dfc948993524433513d8cf31e8aa12a44
May 17 00:24:25.179432 containerd[1589]: 2025-05-17 00:24:25.136 [INFO][2842] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.120.192/26 handle="k8s-pod-network.d4f13d490ce78b3e663f9a8108592a0dfc948993524433513d8cf31e8aa12a44" host="143.198.108.0"
May 17 00:24:25.179432 containerd[1589]: 2025-05-17 00:24:25.147 [INFO][2842] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.120.193/26] block=192.168.120.192/26 handle="k8s-pod-network.d4f13d490ce78b3e663f9a8108592a0dfc948993524433513d8cf31e8aa12a44" host="143.198.108.0"
May 17 00:24:25.179432 containerd[1589]: 2025-05-17 00:24:25.148 [INFO][2842] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.120.193/26] handle="k8s-pod-network.d4f13d490ce78b3e663f9a8108592a0dfc948993524433513d8cf31e8aa12a44" host="143.198.108.0"
May 17 00:24:25.179432 containerd[1589]: 2025-05-17 00:24:25.148 [INFO][2842] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
May 17 00:24:25.179432 containerd[1589]: 2025-05-17 00:24:25.148 [INFO][2842] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.120.193/26] IPv6=[] ContainerID="d4f13d490ce78b3e663f9a8108592a0dfc948993524433513d8cf31e8aa12a44" HandleID="k8s-pod-network.d4f13d490ce78b3e663f9a8108592a0dfc948993524433513d8cf31e8aa12a44" Workload="143.198.108.0-k8s-nginx--deployment--8587fbcb89--pkppr-eth0"
May 17 00:24:25.180125 containerd[1589]: 2025-05-17 00:24:25.150 [INFO][2831] cni-plugin/k8s.go 418: Populated endpoint ContainerID="d4f13d490ce78b3e663f9a8108592a0dfc948993524433513d8cf31e8aa12a44" Namespace="default" Pod="nginx-deployment-8587fbcb89-pkppr" WorkloadEndpoint="143.198.108.0-k8s-nginx--deployment--8587fbcb89--pkppr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"143.198.108.0-k8s-nginx--deployment--8587fbcb89--pkppr-eth0", GenerateName:"nginx-deployment-8587fbcb89-", Namespace:"default", SelfLink:"", UID:"e5d5f111-1f86-4cc1-9a80-2568f9fbd404", ResourceVersion:"1300", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 24, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"8587fbcb89", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"143.198.108.0", ContainerID:"", Pod:"nginx-deployment-8587fbcb89-pkppr", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.120.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"caliecb575ea1fe", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
May 17 00:24:25.180125 containerd[1589]: 2025-05-17 00:24:25.150 [INFO][2831] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.120.193/32] ContainerID="d4f13d490ce78b3e663f9a8108592a0dfc948993524433513d8cf31e8aa12a44" Namespace="default" Pod="nginx-deployment-8587fbcb89-pkppr" WorkloadEndpoint="143.198.108.0-k8s-nginx--deployment--8587fbcb89--pkppr-eth0"
May 17 00:24:25.180125 containerd[1589]: 2025-05-17 00:24:25.150 [INFO][2831] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliecb575ea1fe ContainerID="d4f13d490ce78b3e663f9a8108592a0dfc948993524433513d8cf31e8aa12a44" Namespace="default" Pod="nginx-deployment-8587fbcb89-pkppr" WorkloadEndpoint="143.198.108.0-k8s-nginx--deployment--8587fbcb89--pkppr-eth0"
May 17 00:24:25.180125 containerd[1589]: 2025-05-17 00:24:25.158 [INFO][2831] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d4f13d490ce78b3e663f9a8108592a0dfc948993524433513d8cf31e8aa12a44" Namespace="default" Pod="nginx-deployment-8587fbcb89-pkppr" WorkloadEndpoint="143.198.108.0-k8s-nginx--deployment--8587fbcb89--pkppr-eth0"
May 17 00:24:25.180125 containerd[1589]: 2025-05-17 00:24:25.159 [INFO][2831] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="d4f13d490ce78b3e663f9a8108592a0dfc948993524433513d8cf31e8aa12a44" Namespace="default" Pod="nginx-deployment-8587fbcb89-pkppr" WorkloadEndpoint="143.198.108.0-k8s-nginx--deployment--8587fbcb89--pkppr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"143.198.108.0-k8s-nginx--deployment--8587fbcb89--pkppr-eth0", GenerateName:"nginx-deployment-8587fbcb89-", Namespace:"default", SelfLink:"", UID:"e5d5f111-1f86-4cc1-9a80-2568f9fbd404", ResourceVersion:"1300", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 24, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"8587fbcb89", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"143.198.108.0", ContainerID:"d4f13d490ce78b3e663f9a8108592a0dfc948993524433513d8cf31e8aa12a44", Pod:"nginx-deployment-8587fbcb89-pkppr", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.120.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"caliecb575ea1fe", MAC:"ce:1a:14:42:5c:18", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
May 17 00:24:25.180125 containerd[1589]: 2025-05-17 00:24:25.173 [INFO][2831] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="d4f13d490ce78b3e663f9a8108592a0dfc948993524433513d8cf31e8aa12a44" Namespace="default" Pod="nginx-deployment-8587fbcb89-pkppr" WorkloadEndpoint="143.198.108.0-k8s-nginx--deployment--8587fbcb89--pkppr-eth0"
May 17 00:24:25.212223 containerd[1589]: time="2025-05-17T00:24:25.212085432Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
May 17 00:24:25.212223 containerd[1589]: time="2025-05-17T00:24:25.212152221Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
May 17 00:24:25.212223 containerd[1589]: time="2025-05-17T00:24:25.212168864Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 17 00:24:25.212556 containerd[1589]: time="2025-05-17T00:24:25.212321261Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
May 17 00:24:25.243722 systemd[1]: run-containerd-runc-k8s.io-d4f13d490ce78b3e663f9a8108592a0dfc948993524433513d8cf31e8aa12a44-runc.1WqKAz.mount: Deactivated successfully.
May 17 00:24:25.303573 containerd[1589]: time="2025-05-17T00:24:25.303407358Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8587fbcb89-pkppr,Uid:e5d5f111-1f86-4cc1-9a80-2568f9fbd404,Namespace:default,Attempt:1,} returns sandbox id \"d4f13d490ce78b3e663f9a8108592a0dfc948993524433513d8cf31e8aa12a44\""
May 17 00:24:25.306499 containerd[1589]: time="2025-05-17T00:24:25.306365118Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\""
May 17 00:24:25.662482 kubelet[1929]: E0517 00:24:25.662408 1929 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 17 00:24:26.663372 kubelet[1929]: E0517 00:24:26.663287 1929 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
May 17 00:24:26.791216 containerd[1589]: time="2025-05-17T00:24:26.789726694Z" level=info msg="StopPodSandbox for \"473dc42565f5ad9a66bafff8418b952fc311e9b3beec64611ccad5374c4d44ce\""
May 17 00:24:27.006242 containerd[1589]: 2025-05-17 00:24:26.918 [INFO][2918] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="473dc42565f5ad9a66bafff8418b952fc311e9b3beec64611ccad5374c4d44ce"
May 17 00:24:27.006242 containerd[1589]: 2025-05-17 00:24:26.919 [INFO][2918] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="473dc42565f5ad9a66bafff8418b952fc311e9b3beec64611ccad5374c4d44ce" iface="eth0" netns="/var/run/netns/cni-901a992d-337a-d71d-fed6-fbb9098ce460"
May 17 00:24:27.006242 containerd[1589]: 2025-05-17 00:24:26.919 [INFO][2918] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="473dc42565f5ad9a66bafff8418b952fc311e9b3beec64611ccad5374c4d44ce" iface="eth0" netns="/var/run/netns/cni-901a992d-337a-d71d-fed6-fbb9098ce460"
May 17 00:24:27.006242 containerd[1589]: 2025-05-17 00:24:26.919 [INFO][2918] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="473dc42565f5ad9a66bafff8418b952fc311e9b3beec64611ccad5374c4d44ce" iface="eth0" netns="/var/run/netns/cni-901a992d-337a-d71d-fed6-fbb9098ce460"
May 17 00:24:27.006242 containerd[1589]: 2025-05-17 00:24:26.919 [INFO][2918] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="473dc42565f5ad9a66bafff8418b952fc311e9b3beec64611ccad5374c4d44ce"
May 17 00:24:27.006242 containerd[1589]: 2025-05-17 00:24:26.919 [INFO][2918] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="473dc42565f5ad9a66bafff8418b952fc311e9b3beec64611ccad5374c4d44ce"
May 17 00:24:27.006242 containerd[1589]: 2025-05-17 00:24:26.976 [INFO][2926] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="473dc42565f5ad9a66bafff8418b952fc311e9b3beec64611ccad5374c4d44ce" HandleID="k8s-pod-network.473dc42565f5ad9a66bafff8418b952fc311e9b3beec64611ccad5374c4d44ce" Workload="143.198.108.0-k8s-csi--node--driver--4bvml-eth0"
May 17 00:24:27.006242 containerd[1589]: 2025-05-17 00:24:26.978 [INFO][2926] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
May 17 00:24:27.006242 containerd[1589]: 2025-05-17 00:24:26.978 [INFO][2926] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
May 17 00:24:27.006242 containerd[1589]: 2025-05-17 00:24:26.992 [WARNING][2926] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist.
Ignoring ContainerID="473dc42565f5ad9a66bafff8418b952fc311e9b3beec64611ccad5374c4d44ce" HandleID="k8s-pod-network.473dc42565f5ad9a66bafff8418b952fc311e9b3beec64611ccad5374c4d44ce" Workload="143.198.108.0-k8s-csi--node--driver--4bvml-eth0" May 17 00:24:27.006242 containerd[1589]: 2025-05-17 00:24:26.992 [INFO][2926] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="473dc42565f5ad9a66bafff8418b952fc311e9b3beec64611ccad5374c4d44ce" HandleID="k8s-pod-network.473dc42565f5ad9a66bafff8418b952fc311e9b3beec64611ccad5374c4d44ce" Workload="143.198.108.0-k8s-csi--node--driver--4bvml-eth0" May 17 00:24:27.006242 containerd[1589]: 2025-05-17 00:24:26.997 [INFO][2926] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:24:27.006242 containerd[1589]: 2025-05-17 00:24:26.999 [INFO][2918] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="473dc42565f5ad9a66bafff8418b952fc311e9b3beec64611ccad5374c4d44ce" May 17 00:24:27.006981 containerd[1589]: time="2025-05-17T00:24:27.006915923Z" level=info msg="TearDown network for sandbox \"473dc42565f5ad9a66bafff8418b952fc311e9b3beec64611ccad5374c4d44ce\" successfully" May 17 00:24:27.007097 containerd[1589]: time="2025-05-17T00:24:27.006974060Z" level=info msg="StopPodSandbox for \"473dc42565f5ad9a66bafff8418b952fc311e9b3beec64611ccad5374c4d44ce\" returns successfully" May 17 00:24:27.008780 systemd[1]: run-netns-cni\x2d901a992d\x2d337a\x2dd71d\x2dfed6\x2dfbb9098ce460.mount: Deactivated successfully. 
May 17 00:24:27.011733 containerd[1589]: time="2025-05-17T00:24:27.011166223Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-4bvml,Uid:77e2009c-40a3-47c4-b9d0-5b99ba0c6d66,Namespace:calico-system,Attempt:1,}" May 17 00:24:27.085145 systemd-networkd[1224]: caliecb575ea1fe: Gained IPv6LL May 17 00:24:27.267532 systemd-networkd[1224]: calif3984c050ff: Link UP May 17 00:24:27.268822 systemd-networkd[1224]: calif3984c050ff: Gained carrier May 17 00:24:27.288927 containerd[1589]: 2025-05-17 00:24:27.114 [INFO][2933] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {143.198.108.0-k8s-csi--node--driver--4bvml-eth0 csi-node-driver- calico-system 77e2009c-40a3-47c4-b9d0-5b99ba0c6d66 1311 0 2025-05-17 00:23:57 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:68bf44dd5 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s 143.198.108.0 csi-node-driver-4bvml eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calif3984c050ff [] [] }} ContainerID="57f11f5551ec2e241b549d0c5c0bff70acb384f848befe742416c87c52fb7a40" Namespace="calico-system" Pod="csi-node-driver-4bvml" WorkloadEndpoint="143.198.108.0-k8s-csi--node--driver--4bvml-" May 17 00:24:27.288927 containerd[1589]: 2025-05-17 00:24:27.115 [INFO][2933] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="57f11f5551ec2e241b549d0c5c0bff70acb384f848befe742416c87c52fb7a40" Namespace="calico-system" Pod="csi-node-driver-4bvml" WorkloadEndpoint="143.198.108.0-k8s-csi--node--driver--4bvml-eth0" May 17 00:24:27.288927 containerd[1589]: 2025-05-17 00:24:27.191 [INFO][2945] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="57f11f5551ec2e241b549d0c5c0bff70acb384f848befe742416c87c52fb7a40" 
HandleID="k8s-pod-network.57f11f5551ec2e241b549d0c5c0bff70acb384f848befe742416c87c52fb7a40" Workload="143.198.108.0-k8s-csi--node--driver--4bvml-eth0" May 17 00:24:27.288927 containerd[1589]: 2025-05-17 00:24:27.191 [INFO][2945] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="57f11f5551ec2e241b549d0c5c0bff70acb384f848befe742416c87c52fb7a40" HandleID="k8s-pod-network.57f11f5551ec2e241b549d0c5c0bff70acb384f848befe742416c87c52fb7a40" Workload="143.198.108.0-k8s-csi--node--driver--4bvml-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00004f780), Attrs:map[string]string{"namespace":"calico-system", "node":"143.198.108.0", "pod":"csi-node-driver-4bvml", "timestamp":"2025-05-17 00:24:27.19143851 +0000 UTC"}, Hostname:"143.198.108.0", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 17 00:24:27.288927 containerd[1589]: 2025-05-17 00:24:27.191 [INFO][2945] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:24:27.288927 containerd[1589]: 2025-05-17 00:24:27.191 [INFO][2945] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 17 00:24:27.288927 containerd[1589]: 2025-05-17 00:24:27.191 [INFO][2945] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '143.198.108.0' May 17 00:24:27.288927 containerd[1589]: 2025-05-17 00:24:27.205 [INFO][2945] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.57f11f5551ec2e241b549d0c5c0bff70acb384f848befe742416c87c52fb7a40" host="143.198.108.0" May 17 00:24:27.288927 containerd[1589]: 2025-05-17 00:24:27.212 [INFO][2945] ipam/ipam.go 394: Looking up existing affinities for host host="143.198.108.0" May 17 00:24:27.288927 containerd[1589]: 2025-05-17 00:24:27.220 [INFO][2945] ipam/ipam.go 511: Trying affinity for 192.168.120.192/26 host="143.198.108.0" May 17 00:24:27.288927 containerd[1589]: 2025-05-17 00:24:27.223 [INFO][2945] ipam/ipam.go 158: Attempting to load block cidr=192.168.120.192/26 host="143.198.108.0" May 17 00:24:27.288927 containerd[1589]: 2025-05-17 00:24:27.228 [INFO][2945] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.120.192/26 host="143.198.108.0" May 17 00:24:27.288927 containerd[1589]: 2025-05-17 00:24:27.228 [INFO][2945] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.120.192/26 handle="k8s-pod-network.57f11f5551ec2e241b549d0c5c0bff70acb384f848befe742416c87c52fb7a40" host="143.198.108.0" May 17 00:24:27.288927 containerd[1589]: 2025-05-17 00:24:27.231 [INFO][2945] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.57f11f5551ec2e241b549d0c5c0bff70acb384f848befe742416c87c52fb7a40 May 17 00:24:27.288927 containerd[1589]: 2025-05-17 00:24:27.241 [INFO][2945] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.120.192/26 handle="k8s-pod-network.57f11f5551ec2e241b549d0c5c0bff70acb384f848befe742416c87c52fb7a40" host="143.198.108.0" May 17 00:24:27.288927 containerd[1589]: 2025-05-17 00:24:27.253 [INFO][2945] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.120.194/26] block=192.168.120.192/26 
handle="k8s-pod-network.57f11f5551ec2e241b549d0c5c0bff70acb384f848befe742416c87c52fb7a40" host="143.198.108.0" May 17 00:24:27.288927 containerd[1589]: 2025-05-17 00:24:27.253 [INFO][2945] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.120.194/26] handle="k8s-pod-network.57f11f5551ec2e241b549d0c5c0bff70acb384f848befe742416c87c52fb7a40" host="143.198.108.0" May 17 00:24:27.288927 containerd[1589]: 2025-05-17 00:24:27.253 [INFO][2945] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:24:27.288927 containerd[1589]: 2025-05-17 00:24:27.253 [INFO][2945] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.120.194/26] IPv6=[] ContainerID="57f11f5551ec2e241b549d0c5c0bff70acb384f848befe742416c87c52fb7a40" HandleID="k8s-pod-network.57f11f5551ec2e241b549d0c5c0bff70acb384f848befe742416c87c52fb7a40" Workload="143.198.108.0-k8s-csi--node--driver--4bvml-eth0" May 17 00:24:27.289663 containerd[1589]: 2025-05-17 00:24:27.255 [INFO][2933] cni-plugin/k8s.go 418: Populated endpoint ContainerID="57f11f5551ec2e241b549d0c5c0bff70acb384f848befe742416c87c52fb7a40" Namespace="calico-system" Pod="csi-node-driver-4bvml" WorkloadEndpoint="143.198.108.0-k8s-csi--node--driver--4bvml-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"143.198.108.0-k8s-csi--node--driver--4bvml-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"77e2009c-40a3-47c4-b9d0-5b99ba0c6d66", ResourceVersion:"1311", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 23, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"68bf44dd5", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", 
"projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"143.198.108.0", ContainerID:"", Pod:"csi-node-driver-4bvml", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.120.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calif3984c050ff", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:24:27.289663 containerd[1589]: 2025-05-17 00:24:27.255 [INFO][2933] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.120.194/32] ContainerID="57f11f5551ec2e241b549d0c5c0bff70acb384f848befe742416c87c52fb7a40" Namespace="calico-system" Pod="csi-node-driver-4bvml" WorkloadEndpoint="143.198.108.0-k8s-csi--node--driver--4bvml-eth0" May 17 00:24:27.289663 containerd[1589]: 2025-05-17 00:24:27.255 [INFO][2933] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif3984c050ff ContainerID="57f11f5551ec2e241b549d0c5c0bff70acb384f848befe742416c87c52fb7a40" Namespace="calico-system" Pod="csi-node-driver-4bvml" WorkloadEndpoint="143.198.108.0-k8s-csi--node--driver--4bvml-eth0" May 17 00:24:27.289663 containerd[1589]: 2025-05-17 00:24:27.269 [INFO][2933] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="57f11f5551ec2e241b549d0c5c0bff70acb384f848befe742416c87c52fb7a40" Namespace="calico-system" Pod="csi-node-driver-4bvml" WorkloadEndpoint="143.198.108.0-k8s-csi--node--driver--4bvml-eth0" May 17 00:24:27.289663 containerd[1589]: 2025-05-17 00:24:27.271 [INFO][2933] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="57f11f5551ec2e241b549d0c5c0bff70acb384f848befe742416c87c52fb7a40" Namespace="calico-system" Pod="csi-node-driver-4bvml" WorkloadEndpoint="143.198.108.0-k8s-csi--node--driver--4bvml-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"143.198.108.0-k8s-csi--node--driver--4bvml-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"77e2009c-40a3-47c4-b9d0-5b99ba0c6d66", ResourceVersion:"1311", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 23, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"68bf44dd5", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"143.198.108.0", ContainerID:"57f11f5551ec2e241b549d0c5c0bff70acb384f848befe742416c87c52fb7a40", Pod:"csi-node-driver-4bvml", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.120.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calif3984c050ff", MAC:"42:58:35:88:19:d7", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:24:27.289663 containerd[1589]: 2025-05-17 00:24:27.284 [INFO][2933] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="57f11f5551ec2e241b549d0c5c0bff70acb384f848befe742416c87c52fb7a40" 
Namespace="calico-system" Pod="csi-node-driver-4bvml" WorkloadEndpoint="143.198.108.0-k8s-csi--node--driver--4bvml-eth0" May 17 00:24:27.329889 containerd[1589]: time="2025-05-17T00:24:27.328676692Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:24:27.330197 containerd[1589]: time="2025-05-17T00:24:27.330151970Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:24:27.330302 containerd[1589]: time="2025-05-17T00:24:27.330280434Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:24:27.331035 containerd[1589]: time="2025-05-17T00:24:27.330976825Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:24:27.401761 containerd[1589]: time="2025-05-17T00:24:27.401711929Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-4bvml,Uid:77e2009c-40a3-47c4-b9d0-5b99ba0c6d66,Namespace:calico-system,Attempt:1,} returns sandbox id \"57f11f5551ec2e241b549d0c5c0bff70acb384f848befe742416c87c52fb7a40\"" May 17 00:24:27.663964 kubelet[1929]: E0517 00:24:27.663909 1929 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:24:28.257832 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4286471549.mount: Deactivated successfully. 
May 17 00:24:28.622323 systemd-networkd[1224]: calif3984c050ff: Gained IPv6LL May 17 00:24:28.665232 kubelet[1929]: E0517 00:24:28.665069 1929 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:24:29.666212 kubelet[1929]: E0517 00:24:29.665565 1929 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:24:29.748198 containerd[1589]: time="2025-05-17T00:24:29.748092697Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:24:29.749540 containerd[1589]: time="2025-05-17T00:24:29.749336048Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=73306220" May 17 00:24:29.750243 containerd[1589]: time="2025-05-17T00:24:29.750161762Z" level=info msg="ImageCreate event name:\"sha256:7e2dd24abce21cd256091445aca4b7eb00774264c2b0a8714701dd7091509efa\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:24:29.753538 containerd[1589]: time="2025-05-17T00:24:29.753454773Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx@sha256:beabce8f1782671ba500ddff99dd260fbf9c5ec85fb9c3162e35a3c40bafd023\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:24:29.754978 containerd[1589]: time="2025-05-17T00:24:29.754519946Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:7e2dd24abce21cd256091445aca4b7eb00774264c2b0a8714701dd7091509efa\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:beabce8f1782671ba500ddff99dd260fbf9c5ec85fb9c3162e35a3c40bafd023\", size \"73306098\" in 4.448119001s" May 17 00:24:29.754978 containerd[1589]: time="2025-05-17T00:24:29.754590602Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference 
\"sha256:7e2dd24abce21cd256091445aca4b7eb00774264c2b0a8714701dd7091509efa\"" May 17 00:24:29.756872 containerd[1589]: time="2025-05-17T00:24:29.756843100Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.0\"" May 17 00:24:29.757836 containerd[1589]: time="2025-05-17T00:24:29.757797134Z" level=info msg="CreateContainer within sandbox \"d4f13d490ce78b3e663f9a8108592a0dfc948993524433513d8cf31e8aa12a44\" for container &ContainerMetadata{Name:nginx,Attempt:0,}" May 17 00:24:29.774600 containerd[1589]: time="2025-05-17T00:24:29.774523381Z" level=info msg="CreateContainer within sandbox \"d4f13d490ce78b3e663f9a8108592a0dfc948993524433513d8cf31e8aa12a44\" for &ContainerMetadata{Name:nginx,Attempt:0,} returns container id \"3ccf503b9b823f180c550cb403e645f6eeca43a25185713a794e3015fba131c9\"" May 17 00:24:29.776385 containerd[1589]: time="2025-05-17T00:24:29.775579909Z" level=info msg="StartContainer for \"3ccf503b9b823f180c550cb403e645f6eeca43a25185713a794e3015fba131c9\"" May 17 00:24:29.858421 containerd[1589]: time="2025-05-17T00:24:29.858357891Z" level=info msg="StartContainer for \"3ccf503b9b823f180c550cb403e645f6eeca43a25185713a794e3015fba131c9\" returns successfully" May 17 00:24:30.666252 kubelet[1929]: E0517 00:24:30.666160 1929 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:24:31.332359 containerd[1589]: time="2025-05-17T00:24:31.332295836Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:24:31.334549 containerd[1589]: time="2025-05-17T00:24:31.334478478Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.0: active requests=0, bytes read=8758390" May 17 00:24:31.336752 containerd[1589]: time="2025-05-17T00:24:31.335457192Z" level=info msg="ImageCreate event name:\"sha256:d5b08093b7928c0ac1122e59edf69b2e58c6d10ecc8b9e5cffeb809a956dc48e\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:24:31.337637 containerd[1589]: time="2025-05-17T00:24:31.337564037Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:27883a4104876fe239311dd93ce6efd0c4a87de7163d57a4c8d96bd65a287ffd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:24:31.338355 containerd[1589]: time="2025-05-17T00:24:31.338317908Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.0\" with image id \"sha256:d5b08093b7928c0ac1122e59edf69b2e58c6d10ecc8b9e5cffeb809a956dc48e\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:27883a4104876fe239311dd93ce6efd0c4a87de7163d57a4c8d96bd65a287ffd\", size \"10251093\" in 1.581313881s" May 17 00:24:31.338414 containerd[1589]: time="2025-05-17T00:24:31.338365936Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.0\" returns image reference \"sha256:d5b08093b7928c0ac1122e59edf69b2e58c6d10ecc8b9e5cffeb809a956dc48e\"" May 17 00:24:31.341856 containerd[1589]: time="2025-05-17T00:24:31.341708480Z" level=info msg="CreateContainer within sandbox \"57f11f5551ec2e241b549d0c5c0bff70acb384f848befe742416c87c52fb7a40\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" May 17 00:24:31.357725 containerd[1589]: time="2025-05-17T00:24:31.357581578Z" level=info msg="CreateContainer within sandbox \"57f11f5551ec2e241b549d0c5c0bff70acb384f848befe742416c87c52fb7a40\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"cacdfb4984944719ff9391ff3ba46bc3d7d3cf688f46bb29db251cd50b73fd8e\"" May 17 00:24:31.359229 containerd[1589]: time="2025-05-17T00:24:31.358507000Z" level=info msg="StartContainer for \"cacdfb4984944719ff9391ff3ba46bc3d7d3cf688f46bb29db251cd50b73fd8e\"" May 17 00:24:31.433698 containerd[1589]: time="2025-05-17T00:24:31.433648578Z" level=info msg="StartContainer for \"cacdfb4984944719ff9391ff3ba46bc3d7d3cf688f46bb29db251cd50b73fd8e\" returns 
successfully" May 17 00:24:31.435605 containerd[1589]: time="2025-05-17T00:24:31.435570017Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.0\"" May 17 00:24:31.666712 kubelet[1929]: E0517 00:24:31.666666 1929 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:24:32.667908 kubelet[1929]: E0517 00:24:32.667598 1929 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:24:33.037454 containerd[1589]: time="2025-05-17T00:24:33.036617674Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:24:33.037454 containerd[1589]: time="2025-05-17T00:24:33.037290219Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.0: active requests=0, bytes read=14705639" May 17 00:24:33.040222 containerd[1589]: time="2025-05-17T00:24:33.039452465Z" level=info msg="ImageCreate event name:\"sha256:45c8692ffc029387ee93ba83da8ad26da9749cf2ba6ed03981f8f9933ed5a5b0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:24:33.041605 containerd[1589]: time="2025-05-17T00:24:33.041542401Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:dca5c16181edde2e860463615523ce457cd9dcfca85b7cfdcd6f3ea7de6f2ac8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:24:33.042507 containerd[1589]: time="2025-05-17T00:24:33.042355204Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.0\" with image id \"sha256:45c8692ffc029387ee93ba83da8ad26da9749cf2ba6ed03981f8f9933ed5a5b0\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.0\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:dca5c16181edde2e860463615523ce457cd9dcfca85b7cfdcd6f3ea7de6f2ac8\", size 
\"16198294\" in 1.60643694s" May 17 00:24:33.042507 containerd[1589]: time="2025-05-17T00:24:33.042394831Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.0\" returns image reference \"sha256:45c8692ffc029387ee93ba83da8ad26da9749cf2ba6ed03981f8f9933ed5a5b0\"" May 17 00:24:33.046420 containerd[1589]: time="2025-05-17T00:24:33.046336480Z" level=info msg="CreateContainer within sandbox \"57f11f5551ec2e241b549d0c5c0bff70acb384f848befe742416c87c52fb7a40\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" May 17 00:24:33.069061 containerd[1589]: time="2025-05-17T00:24:33.068965058Z" level=info msg="CreateContainer within sandbox \"57f11f5551ec2e241b549d0c5c0bff70acb384f848befe742416c87c52fb7a40\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"96e39c9fd4e7471bad80eabda4d6f29ce563a9fc671677669354d0bdf49821c5\"" May 17 00:24:33.070108 containerd[1589]: time="2025-05-17T00:24:33.070032496Z" level=info msg="StartContainer for \"96e39c9fd4e7471bad80eabda4d6f29ce563a9fc671677669354d0bdf49821c5\"" May 17 00:24:33.107871 systemd[1]: run-containerd-runc-k8s.io-96e39c9fd4e7471bad80eabda4d6f29ce563a9fc671677669354d0bdf49821c5-runc.hHOqEt.mount: Deactivated successfully. 
May 17 00:24:33.144998 containerd[1589]: time="2025-05-17T00:24:33.144796011Z" level=info msg="StartContainer for \"96e39c9fd4e7471bad80eabda4d6f29ce563a9fc671677669354d0bdf49821c5\" returns successfully" May 17 00:24:33.668106 kubelet[1929]: E0517 00:24:33.668042 1929 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:24:33.781367 kubelet[1929]: I0517 00:24:33.781318 1929 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 May 17 00:24:33.781367 kubelet[1929]: I0517 00:24:33.781364 1929 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock May 17 00:24:33.974143 kubelet[1929]: I0517 00:24:33.973816 1929 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nginx-deployment-8587fbcb89-pkppr" podStartSLOduration=18.523299705 podStartE2EDuration="22.973795937s" podCreationTimestamp="2025-05-17 00:24:11 +0000 UTC" firstStartedPulling="2025-05-17 00:24:25.305518737 +0000 UTC m=+29.084270577" lastFinishedPulling="2025-05-17 00:24:29.756014952 +0000 UTC m=+33.534766809" observedRunningTime="2025-05-17 00:24:29.936878243 +0000 UTC m=+33.715630115" watchObservedRunningTime="2025-05-17 00:24:33.973795937 +0000 UTC m=+37.752547793" May 17 00:24:34.612083 update_engine[1572]: I20250517 00:24:34.611536 1572 update_attempter.cc:509] Updating boot flags... 
May 17 00:24:34.650853 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (3175) May 17 00:24:34.669249 kubelet[1929]: E0517 00:24:34.669016 1929 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:24:34.717953 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (3174) May 17 00:24:34.766670 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 38 scanned by (udev-worker) (3174) May 17 00:24:35.669693 kubelet[1929]: E0517 00:24:35.669609 1929 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:24:36.636875 kubelet[1929]: E0517 00:24:36.636799 1929 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:24:36.670869 kubelet[1929]: E0517 00:24:36.670797 1929 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:24:37.671572 kubelet[1929]: E0517 00:24:37.671503 1929 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:24:38.609366 kubelet[1929]: I0517 00:24:38.609281 1929 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-4bvml" podStartSLOduration=35.969940279 podStartE2EDuration="41.609257054s" podCreationTimestamp="2025-05-17 00:23:57 +0000 UTC" firstStartedPulling="2025-05-17 00:24:27.404774816 +0000 UTC m=+31.183526657" lastFinishedPulling="2025-05-17 00:24:33.04409159 +0000 UTC m=+36.822843432" observedRunningTime="2025-05-17 00:24:33.974352206 +0000 UTC m=+37.753104066" watchObservedRunningTime="2025-05-17 00:24:38.609257054 +0000 UTC m=+42.388008934" May 17 00:24:38.671816 kubelet[1929]: E0517 00:24:38.671682 1929 file_linux.go:61] "Unable to read config 
path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:24:38.737390 kubelet[1929]: I0517 00:24:38.737316 1929 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/50922a82-5301-466f-8509-786ef52f9aa9-data\") pod \"nfs-server-provisioner-0\" (UID: \"50922a82-5301-466f-8509-786ef52f9aa9\") " pod="default/nfs-server-provisioner-0" May 17 00:24:38.737390 kubelet[1929]: I0517 00:24:38.737388 1929 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zjfsc\" (UniqueName: \"kubernetes.io/projected/50922a82-5301-466f-8509-786ef52f9aa9-kube-api-access-zjfsc\") pod \"nfs-server-provisioner-0\" (UID: \"50922a82-5301-466f-8509-786ef52f9aa9\") " pod="default/nfs-server-provisioner-0" May 17 00:24:38.913621 containerd[1589]: time="2025-05-17T00:24:38.913517452Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:50922a82-5301-466f-8509-786ef52f9aa9,Namespace:default,Attempt:0,}" May 17 00:24:39.106840 systemd-networkd[1224]: cali60e51b789ff: Link UP May 17 00:24:39.108860 systemd-networkd[1224]: cali60e51b789ff: Gained carrier May 17 00:24:39.125791 containerd[1589]: 2025-05-17 00:24:38.988 [INFO][3191] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {143.198.108.0-k8s-nfs--server--provisioner--0-eth0 nfs-server-provisioner- default 50922a82-5301-466f-8509-786ef52f9aa9 1385 0 2025-05-17 00:24:38 +0000 UTC map[app:nfs-server-provisioner apps.kubernetes.io/pod-index:0 chart:nfs-server-provisioner-1.8.0 controller-revision-hash:nfs-server-provisioner-d5cbb7f57 heritage:Helm projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:nfs-server-provisioner release:nfs-server-provisioner statefulset.kubernetes.io/pod-name:nfs-server-provisioner-0] map[] [] [] []} {k8s 
143.198.108.0 nfs-server-provisioner-0 eth0 nfs-server-provisioner [] [] [kns.default ksa.default.nfs-server-provisioner] cali60e51b789ff [{nfs TCP 2049 0 } {nfs-udp UDP 2049 0 } {nlockmgr TCP 32803 0 } {nlockmgr-udp UDP 32803 0 } {mountd TCP 20048 0 } {mountd-udp UDP 20048 0 } {rquotad TCP 875 0 } {rquotad-udp UDP 875 0 } {rpcbind TCP 111 0 } {rpcbind-udp UDP 111 0 } {statd TCP 662 0 } {statd-udp UDP 662 0 }] [] }} ContainerID="7a896f0afa22687dc5d7c5c0370f90fd0f96e433b67fcc6e2e551698a3c0c0a4" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="143.198.108.0-k8s-nfs--server--provisioner--0-" May 17 00:24:39.125791 containerd[1589]: 2025-05-17 00:24:38.988 [INFO][3191] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="7a896f0afa22687dc5d7c5c0370f90fd0f96e433b67fcc6e2e551698a3c0c0a4" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="143.198.108.0-k8s-nfs--server--provisioner--0-eth0" May 17 00:24:39.125791 containerd[1589]: 2025-05-17 00:24:39.032 [INFO][3202] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7a896f0afa22687dc5d7c5c0370f90fd0f96e433b67fcc6e2e551698a3c0c0a4" HandleID="k8s-pod-network.7a896f0afa22687dc5d7c5c0370f90fd0f96e433b67fcc6e2e551698a3c0c0a4" Workload="143.198.108.0-k8s-nfs--server--provisioner--0-eth0" May 17 00:24:39.125791 containerd[1589]: 2025-05-17 00:24:39.033 [INFO][3202] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="7a896f0afa22687dc5d7c5c0370f90fd0f96e433b67fcc6e2e551698a3c0c0a4" HandleID="k8s-pod-network.7a896f0afa22687dc5d7c5c0370f90fd0f96e433b67fcc6e2e551698a3c0c0a4" Workload="143.198.108.0-k8s-nfs--server--provisioner--0-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002ad4d0), Attrs:map[string]string{"namespace":"default", "node":"143.198.108.0", "pod":"nfs-server-provisioner-0", "timestamp":"2025-05-17 00:24:39.032960897 +0000 UTC"}, Hostname:"143.198.108.0", IPv4Pools:[]net.IPNet{}, 
IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 17 00:24:39.125791 containerd[1589]: 2025-05-17 00:24:39.033 [INFO][3202] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:24:39.125791 containerd[1589]: 2025-05-17 00:24:39.033 [INFO][3202] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:24:39.125791 containerd[1589]: 2025-05-17 00:24:39.033 [INFO][3202] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '143.198.108.0' May 17 00:24:39.125791 containerd[1589]: 2025-05-17 00:24:39.051 [INFO][3202] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.7a896f0afa22687dc5d7c5c0370f90fd0f96e433b67fcc6e2e551698a3c0c0a4" host="143.198.108.0" May 17 00:24:39.125791 containerd[1589]: 2025-05-17 00:24:39.063 [INFO][3202] ipam/ipam.go 394: Looking up existing affinities for host host="143.198.108.0" May 17 00:24:39.125791 containerd[1589]: 2025-05-17 00:24:39.072 [INFO][3202] ipam/ipam.go 511: Trying affinity for 192.168.120.192/26 host="143.198.108.0" May 17 00:24:39.125791 containerd[1589]: 2025-05-17 00:24:39.076 [INFO][3202] ipam/ipam.go 158: Attempting to load block cidr=192.168.120.192/26 host="143.198.108.0" May 17 00:24:39.125791 containerd[1589]: 2025-05-17 00:24:39.080 [INFO][3202] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.120.192/26 host="143.198.108.0" May 17 00:24:39.125791 containerd[1589]: 2025-05-17 00:24:39.080 [INFO][3202] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.120.192/26 handle="k8s-pod-network.7a896f0afa22687dc5d7c5c0370f90fd0f96e433b67fcc6e2e551698a3c0c0a4" host="143.198.108.0" May 17 00:24:39.125791 containerd[1589]: 2025-05-17 00:24:39.082 [INFO][3202] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.7a896f0afa22687dc5d7c5c0370f90fd0f96e433b67fcc6e2e551698a3c0c0a4 May 
17 00:24:39.125791 containerd[1589]: 2025-05-17 00:24:39.089 [INFO][3202] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.120.192/26 handle="k8s-pod-network.7a896f0afa22687dc5d7c5c0370f90fd0f96e433b67fcc6e2e551698a3c0c0a4" host="143.198.108.0" May 17 00:24:39.125791 containerd[1589]: 2025-05-17 00:24:39.098 [INFO][3202] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.120.195/26] block=192.168.120.192/26 handle="k8s-pod-network.7a896f0afa22687dc5d7c5c0370f90fd0f96e433b67fcc6e2e551698a3c0c0a4" host="143.198.108.0" May 17 00:24:39.125791 containerd[1589]: 2025-05-17 00:24:39.099 [INFO][3202] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.120.195/26] handle="k8s-pod-network.7a896f0afa22687dc5d7c5c0370f90fd0f96e433b67fcc6e2e551698a3c0c0a4" host="143.198.108.0" May 17 00:24:39.125791 containerd[1589]: 2025-05-17 00:24:39.099 [INFO][3202] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:24:39.125791 containerd[1589]: 2025-05-17 00:24:39.099 [INFO][3202] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.120.195/26] IPv6=[] ContainerID="7a896f0afa22687dc5d7c5c0370f90fd0f96e433b67fcc6e2e551698a3c0c0a4" HandleID="k8s-pod-network.7a896f0afa22687dc5d7c5c0370f90fd0f96e433b67fcc6e2e551698a3c0c0a4" Workload="143.198.108.0-k8s-nfs--server--provisioner--0-eth0" May 17 00:24:39.127799 containerd[1589]: 2025-05-17 00:24:39.101 [INFO][3191] cni-plugin/k8s.go 418: Populated endpoint ContainerID="7a896f0afa22687dc5d7c5c0370f90fd0f96e433b67fcc6e2e551698a3c0c0a4" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="143.198.108.0-k8s-nfs--server--provisioner--0-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"143.198.108.0-k8s-nfs--server--provisioner--0-eth0", GenerateName:"nfs-server-provisioner-", Namespace:"default", SelfLink:"", UID:"50922a82-5301-466f-8509-786ef52f9aa9", 
ResourceVersion:"1385", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 24, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nfs-server-provisioner", "apps.kubernetes.io/pod-index":"0", "chart":"nfs-server-provisioner-1.8.0", "controller-revision-hash":"nfs-server-provisioner-d5cbb7f57", "heritage":"Helm", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"nfs-server-provisioner", "release":"nfs-server-provisioner", "statefulset.kubernetes.io/pod-name":"nfs-server-provisioner-0"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"143.198.108.0", ContainerID:"", Pod:"nfs-server-provisioner-0", Endpoint:"eth0", ServiceAccountName:"nfs-server-provisioner", IPNetworks:[]string{"192.168.120.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.nfs-server-provisioner"}, InterfaceName:"cali60e51b789ff", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"nfs", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nfs-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd-udp", 
Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x296, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x296, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:24:39.127799 containerd[1589]: 2025-05-17 00:24:39.101 [INFO][3191] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.120.195/32] ContainerID="7a896f0afa22687dc5d7c5c0370f90fd0f96e433b67fcc6e2e551698a3c0c0a4" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="143.198.108.0-k8s-nfs--server--provisioner--0-eth0" May 17 00:24:39.127799 containerd[1589]: 2025-05-17 00:24:39.101 [INFO][3191] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali60e51b789ff ContainerID="7a896f0afa22687dc5d7c5c0370f90fd0f96e433b67fcc6e2e551698a3c0c0a4" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="143.198.108.0-k8s-nfs--server--provisioner--0-eth0" May 17 00:24:39.127799 containerd[1589]: 2025-05-17 00:24:39.108 [INFO][3191] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="7a896f0afa22687dc5d7c5c0370f90fd0f96e433b67fcc6e2e551698a3c0c0a4" Namespace="default" 
Pod="nfs-server-provisioner-0" WorkloadEndpoint="143.198.108.0-k8s-nfs--server--provisioner--0-eth0" May 17 00:24:39.128068 containerd[1589]: 2025-05-17 00:24:39.109 [INFO][3191] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="7a896f0afa22687dc5d7c5c0370f90fd0f96e433b67fcc6e2e551698a3c0c0a4" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="143.198.108.0-k8s-nfs--server--provisioner--0-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"143.198.108.0-k8s-nfs--server--provisioner--0-eth0", GenerateName:"nfs-server-provisioner-", Namespace:"default", SelfLink:"", UID:"50922a82-5301-466f-8509-786ef52f9aa9", ResourceVersion:"1385", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 24, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nfs-server-provisioner", "apps.kubernetes.io/pod-index":"0", "chart":"nfs-server-provisioner-1.8.0", "controller-revision-hash":"nfs-server-provisioner-d5cbb7f57", "heritage":"Helm", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"nfs-server-provisioner", "release":"nfs-server-provisioner", "statefulset.kubernetes.io/pod-name":"nfs-server-provisioner-0"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"143.198.108.0", ContainerID:"7a896f0afa22687dc5d7c5c0370f90fd0f96e433b67fcc6e2e551698a3c0c0a4", Pod:"nfs-server-provisioner-0", Endpoint:"eth0", ServiceAccountName:"nfs-server-provisioner", IPNetworks:[]string{"192.168.120.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.nfs-server-provisioner"}, 
InterfaceName:"cali60e51b789ff", MAC:"3e:76:de:5b:61:61", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"nfs", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nfs-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x296, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x296, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:24:39.128068 containerd[1589]: 2025-05-17 00:24:39.122 [INFO][3191] cni-plugin/k8s.go 
532: Wrote updated endpoint to datastore ContainerID="7a896f0afa22687dc5d7c5c0370f90fd0f96e433b67fcc6e2e551698a3c0c0a4" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="143.198.108.0-k8s-nfs--server--provisioner--0-eth0" May 17 00:24:39.151881 containerd[1589]: time="2025-05-17T00:24:39.150318457Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:24:39.151881 containerd[1589]: time="2025-05-17T00:24:39.150406138Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:24:39.151881 containerd[1589]: time="2025-05-17T00:24:39.150446285Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:24:39.151881 containerd[1589]: time="2025-05-17T00:24:39.150884868Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:24:39.241633 containerd[1589]: time="2025-05-17T00:24:39.241519401Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:50922a82-5301-466f-8509-786ef52f9aa9,Namespace:default,Attempt:0,} returns sandbox id \"7a896f0afa22687dc5d7c5c0370f90fd0f96e433b67fcc6e2e551698a3c0c0a4\"" May 17 00:24:39.244807 containerd[1589]: time="2025-05-17T00:24:39.244440634Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\"" May 17 00:24:39.672732 kubelet[1929]: E0517 00:24:39.672670 1929 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:24:40.332465 systemd-networkd[1224]: cali60e51b789ff: Gained IPv6LL May 17 00:24:40.673392 kubelet[1929]: E0517 00:24:40.673348 1929 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:24:41.590027 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1813678198.mount: Deactivated successfully. 
May 17 00:24:41.675135 kubelet[1929]: E0517 00:24:41.675074 1929 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:24:42.676300 kubelet[1929]: E0517 00:24:42.676248 1929 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:24:43.678390 kubelet[1929]: E0517 00:24:43.678297 1929 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:24:44.134329 containerd[1589]: time="2025-05-17T00:24:44.132447267Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:24:44.134329 containerd[1589]: time="2025-05-17T00:24:44.133797812Z" level=info msg="stop pulling image registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8: active requests=0, bytes read=91039406" May 17 00:24:44.136338 containerd[1589]: time="2025-05-17T00:24:44.136276091Z" level=info msg="ImageCreate event name:\"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:24:44.142852 containerd[1589]: time="2025-05-17T00:24:44.142792695Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:24:44.143610 containerd[1589]: time="2025-05-17T00:24:44.143565900Z" level=info msg="Pulled image \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" with image id \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\", repo tag \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\", repo digest \"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\", size 
\"91036984\" in 4.899071556s" May 17 00:24:44.143833 containerd[1589]: time="2025-05-17T00:24:44.143812038Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" returns image reference \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\"" May 17 00:24:44.168931 containerd[1589]: time="2025-05-17T00:24:44.168863360Z" level=info msg="CreateContainer within sandbox \"7a896f0afa22687dc5d7c5c0370f90fd0f96e433b67fcc6e2e551698a3c0c0a4\" for container &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,}" May 17 00:24:44.192686 containerd[1589]: time="2025-05-17T00:24:44.192631992Z" level=info msg="CreateContainer within sandbox \"7a896f0afa22687dc5d7c5c0370f90fd0f96e433b67fcc6e2e551698a3c0c0a4\" for &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,} returns container id \"075691b0090bc4cca2a99c40a94549a83dc6e254b7bf68a5f0a8574cdca846e0\"" May 17 00:24:44.194494 containerd[1589]: time="2025-05-17T00:24:44.193742129Z" level=info msg="StartContainer for \"075691b0090bc4cca2a99c40a94549a83dc6e254b7bf68a5f0a8574cdca846e0\"" May 17 00:24:44.242128 systemd[1]: run-containerd-runc-k8s.io-075691b0090bc4cca2a99c40a94549a83dc6e254b7bf68a5f0a8574cdca846e0-runc.W8V556.mount: Deactivated successfully. 
May 17 00:24:44.287133 containerd[1589]: time="2025-05-17T00:24:44.286984167Z" level=info msg="StartContainer for \"075691b0090bc4cca2a99c40a94549a83dc6e254b7bf68a5f0a8574cdca846e0\" returns successfully" May 17 00:24:44.679388 kubelet[1929]: E0517 00:24:44.679296 1929 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:24:45.080389 kubelet[1929]: I0517 00:24:45.080056 1929 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nfs-server-provisioner-0" podStartSLOduration=2.170580964 podStartE2EDuration="7.075666497s" podCreationTimestamp="2025-05-17 00:24:38 +0000 UTC" firstStartedPulling="2025-05-17 00:24:39.243470451 +0000 UTC m=+43.022222307" lastFinishedPulling="2025-05-17 00:24:44.148555973 +0000 UTC m=+47.927307840" observedRunningTime="2025-05-17 00:24:45.067470704 +0000 UTC m=+48.846222579" watchObservedRunningTime="2025-05-17 00:24:45.075666497 +0000 UTC m=+48.854418367" May 17 00:24:45.685328 kubelet[1929]: E0517 00:24:45.680390 1929 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:24:46.686444 kubelet[1929]: E0517 00:24:46.686389 1929 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:24:47.687558 kubelet[1929]: E0517 00:24:47.687460 1929 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:24:48.688050 kubelet[1929]: E0517 00:24:48.687967 1929 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:24:49.688940 kubelet[1929]: E0517 00:24:49.688854 1929 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:24:50.689870 kubelet[1929]: E0517 00:24:50.689799 1929 file_linux.go:61] "Unable to read 
config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:24:50.867647 systemd[1]: run-containerd-runc-k8s.io-94594aa3058527eaa6e38e557272918bac1a80b23742fe8f422f35d816d584f6-runc.UIW8xg.mount: Deactivated successfully. May 17 00:24:51.690965 kubelet[1929]: E0517 00:24:51.690891 1929 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:24:52.691509 kubelet[1929]: E0517 00:24:52.691441 1929 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:24:53.692700 kubelet[1929]: E0517 00:24:53.692599 1929 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:24:54.239970 kubelet[1929]: I0517 00:24:54.239901 1929 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ngcn7\" (UniqueName: \"kubernetes.io/projected/8b8ecfd6-1c9b-49c3-954f-d32b82f67dd0-kube-api-access-ngcn7\") pod \"test-pod-1\" (UID: \"8b8ecfd6-1c9b-49c3-954f-d32b82f67dd0\") " pod="default/test-pod-1" May 17 00:24:54.239970 kubelet[1929]: I0517 00:24:54.239964 1929 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-1889f2af-26f1-45b6-bd5a-31ee5e2a1a93\" (UniqueName: \"kubernetes.io/nfs/8b8ecfd6-1c9b-49c3-954f-d32b82f67dd0-pvc-1889f2af-26f1-45b6-bd5a-31ee5e2a1a93\") pod \"test-pod-1\" (UID: \"8b8ecfd6-1c9b-49c3-954f-d32b82f67dd0\") " pod="default/test-pod-1" May 17 00:24:54.388293 kernel: FS-Cache: Loaded May 17 00:24:54.470598 kernel: RPC: Registered named UNIX socket transport module. May 17 00:24:54.470724 kernel: RPC: Registered udp transport module. May 17 00:24:54.470744 kernel: RPC: Registered tcp transport module. May 17 00:24:54.470760 kernel: RPC: Registered tcp-with-tls transport module. 
May 17 00:24:54.470777 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module. May 17 00:24:54.693520 kubelet[1929]: E0517 00:24:54.693430 1929 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:24:54.756415 kernel: NFS: Registering the id_resolver key type May 17 00:24:54.756566 kernel: Key type id_resolver registered May 17 00:24:54.758305 kernel: Key type id_legacy registered May 17 00:24:54.865087 nfsidmap[3411]: nss_getpwnam: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain '3.3-n-d1569f5c4a' May 17 00:24:54.871386 nfsidmap[3412]: nss_name_to_gid: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain '3.3-n-d1569f5c4a' May 17 00:24:55.003917 containerd[1589]: time="2025-05-17T00:24:55.003108017Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:8b8ecfd6-1c9b-49c3-954f-d32b82f67dd0,Namespace:default,Attempt:0,}" May 17 00:24:55.173063 systemd-networkd[1224]: cali5ec59c6bf6e: Link UP May 17 00:24:55.175086 systemd-networkd[1224]: cali5ec59c6bf6e: Gained carrier May 17 00:24:55.196661 containerd[1589]: 2025-05-17 00:24:55.075 [INFO][3413] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {143.198.108.0-k8s-test--pod--1-eth0 default 8b8ecfd6-1c9b-49c3-954f-d32b82f67dd0 1458 0 2025-05-17 00:24:40 +0000 UTC map[projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s 143.198.108.0 test-pod-1 eth0 default [] [] [kns.default ksa.default.default] cali5ec59c6bf6e [] [] }} ContainerID="0bc3f937406e592d5e4393fff3fced4668aa17ef4e328a7a1d008cbf3a956dc2" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="143.198.108.0-k8s-test--pod--1-" May 17 00:24:55.196661 containerd[1589]: 2025-05-17 00:24:55.075 [INFO][3413] cni-plugin/k8s.go 74: Extracted identifiers for 
CmdAddK8s ContainerID="0bc3f937406e592d5e4393fff3fced4668aa17ef4e328a7a1d008cbf3a956dc2" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="143.198.108.0-k8s-test--pod--1-eth0" May 17 00:24:55.196661 containerd[1589]: 2025-05-17 00:24:55.110 [INFO][3426] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="0bc3f937406e592d5e4393fff3fced4668aa17ef4e328a7a1d008cbf3a956dc2" HandleID="k8s-pod-network.0bc3f937406e592d5e4393fff3fced4668aa17ef4e328a7a1d008cbf3a956dc2" Workload="143.198.108.0-k8s-test--pod--1-eth0" May 17 00:24:55.196661 containerd[1589]: 2025-05-17 00:24:55.110 [INFO][3426] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="0bc3f937406e592d5e4393fff3fced4668aa17ef4e328a7a1d008cbf3a956dc2" HandleID="k8s-pod-network.0bc3f937406e592d5e4393fff3fced4668aa17ef4e328a7a1d008cbf3a956dc2" Workload="143.198.108.0-k8s-test--pod--1-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000233020), Attrs:map[string]string{"namespace":"default", "node":"143.198.108.0", "pod":"test-pod-1", "timestamp":"2025-05-17 00:24:55.110588363 +0000 UTC"}, Hostname:"143.198.108.0", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} May 17 00:24:55.196661 containerd[1589]: 2025-05-17 00:24:55.110 [INFO][3426] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:24:55.196661 containerd[1589]: 2025-05-17 00:24:55.110 [INFO][3426] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
May 17 00:24:55.196661 containerd[1589]: 2025-05-17 00:24:55.110 [INFO][3426] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '143.198.108.0' May 17 00:24:55.196661 containerd[1589]: 2025-05-17 00:24:55.120 [INFO][3426] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.0bc3f937406e592d5e4393fff3fced4668aa17ef4e328a7a1d008cbf3a956dc2" host="143.198.108.0" May 17 00:24:55.196661 containerd[1589]: 2025-05-17 00:24:55.130 [INFO][3426] ipam/ipam.go 394: Looking up existing affinities for host host="143.198.108.0" May 17 00:24:55.196661 containerd[1589]: 2025-05-17 00:24:55.140 [INFO][3426] ipam/ipam.go 511: Trying affinity for 192.168.120.192/26 host="143.198.108.0" May 17 00:24:55.196661 containerd[1589]: 2025-05-17 00:24:55.144 [INFO][3426] ipam/ipam.go 158: Attempting to load block cidr=192.168.120.192/26 host="143.198.108.0" May 17 00:24:55.196661 containerd[1589]: 2025-05-17 00:24:55.149 [INFO][3426] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.120.192/26 host="143.198.108.0" May 17 00:24:55.196661 containerd[1589]: 2025-05-17 00:24:55.149 [INFO][3426] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.120.192/26 handle="k8s-pod-network.0bc3f937406e592d5e4393fff3fced4668aa17ef4e328a7a1d008cbf3a956dc2" host="143.198.108.0" May 17 00:24:55.196661 containerd[1589]: 2025-05-17 00:24:55.151 [INFO][3426] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.0bc3f937406e592d5e4393fff3fced4668aa17ef4e328a7a1d008cbf3a956dc2 May 17 00:24:55.196661 containerd[1589]: 2025-05-17 00:24:55.157 [INFO][3426] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.120.192/26 handle="k8s-pod-network.0bc3f937406e592d5e4393fff3fced4668aa17ef4e328a7a1d008cbf3a956dc2" host="143.198.108.0" May 17 00:24:55.196661 containerd[1589]: 2025-05-17 00:24:55.167 [INFO][3426] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.120.196/26] block=192.168.120.192/26 
handle="k8s-pod-network.0bc3f937406e592d5e4393fff3fced4668aa17ef4e328a7a1d008cbf3a956dc2" host="143.198.108.0" May 17 00:24:55.196661 containerd[1589]: 2025-05-17 00:24:55.167 [INFO][3426] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.120.196/26] handle="k8s-pod-network.0bc3f937406e592d5e4393fff3fced4668aa17ef4e328a7a1d008cbf3a956dc2" host="143.198.108.0" May 17 00:24:55.196661 containerd[1589]: 2025-05-17 00:24:55.167 [INFO][3426] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:24:55.196661 containerd[1589]: 2025-05-17 00:24:55.167 [INFO][3426] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.120.196/26] IPv6=[] ContainerID="0bc3f937406e592d5e4393fff3fced4668aa17ef4e328a7a1d008cbf3a956dc2" HandleID="k8s-pod-network.0bc3f937406e592d5e4393fff3fced4668aa17ef4e328a7a1d008cbf3a956dc2" Workload="143.198.108.0-k8s-test--pod--1-eth0" May 17 00:24:55.196661 containerd[1589]: 2025-05-17 00:24:55.169 [INFO][3413] cni-plugin/k8s.go 418: Populated endpoint ContainerID="0bc3f937406e592d5e4393fff3fced4668aa17ef4e328a7a1d008cbf3a956dc2" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="143.198.108.0-k8s-test--pod--1-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"143.198.108.0-k8s-test--pod--1-eth0", GenerateName:"", Namespace:"default", SelfLink:"", UID:"8b8ecfd6-1c9b-49c3-954f-d32b82f67dd0", ResourceVersion:"1458", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 24, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", 
Node:"143.198.108.0", ContainerID:"", Pod:"test-pod-1", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.120.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali5ec59c6bf6e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:24:55.197547 containerd[1589]: 2025-05-17 00:24:55.169 [INFO][3413] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.120.196/32] ContainerID="0bc3f937406e592d5e4393fff3fced4668aa17ef4e328a7a1d008cbf3a956dc2" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="143.198.108.0-k8s-test--pod--1-eth0" May 17 00:24:55.197547 containerd[1589]: 2025-05-17 00:24:55.169 [INFO][3413] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5ec59c6bf6e ContainerID="0bc3f937406e592d5e4393fff3fced4668aa17ef4e328a7a1d008cbf3a956dc2" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="143.198.108.0-k8s-test--pod--1-eth0" May 17 00:24:55.197547 containerd[1589]: 2025-05-17 00:24:55.174 [INFO][3413] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="0bc3f937406e592d5e4393fff3fced4668aa17ef4e328a7a1d008cbf3a956dc2" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="143.198.108.0-k8s-test--pod--1-eth0" May 17 00:24:55.197547 containerd[1589]: 2025-05-17 00:24:55.176 [INFO][3413] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="0bc3f937406e592d5e4393fff3fced4668aa17ef4e328a7a1d008cbf3a956dc2" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="143.198.108.0-k8s-test--pod--1-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"143.198.108.0-k8s-test--pod--1-eth0", GenerateName:"", Namespace:"default", SelfLink:"", UID:"8b8ecfd6-1c9b-49c3-954f-d32b82f67dd0", 
ResourceVersion:"1458", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 24, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"143.198.108.0", ContainerID:"0bc3f937406e592d5e4393fff3fced4668aa17ef4e328a7a1d008cbf3a956dc2", Pod:"test-pod-1", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.120.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali5ec59c6bf6e", MAC:"4a:65:c6:86:e0:c5", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:24:55.197547 containerd[1589]: 2025-05-17 00:24:55.192 [INFO][3413] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="0bc3f937406e592d5e4393fff3fced4668aa17ef4e328a7a1d008cbf3a956dc2" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="143.198.108.0-k8s-test--pod--1-eth0" May 17 00:24:55.239141 containerd[1589]: time="2025-05-17T00:24:55.238968704Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:24:55.239141 containerd[1589]: time="2025-05-17T00:24:55.239100224Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:24:55.240320 containerd[1589]: time="2025-05-17T00:24:55.240044767Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:24:55.240320 containerd[1589]: time="2025-05-17T00:24:55.240254475Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:24:55.336615 containerd[1589]: time="2025-05-17T00:24:55.336515691Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:8b8ecfd6-1c9b-49c3-954f-d32b82f67dd0,Namespace:default,Attempt:0,} returns sandbox id \"0bc3f937406e592d5e4393fff3fced4668aa17ef4e328a7a1d008cbf3a956dc2\"" May 17 00:24:55.338772 containerd[1589]: time="2025-05-17T00:24:55.338668777Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" May 17 00:24:55.660845 containerd[1589]: time="2025-05-17T00:24:55.659270360Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 17 00:24:55.660845 containerd[1589]: time="2025-05-17T00:24:55.660174806Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=61" May 17 00:24:55.664082 containerd[1589]: time="2025-05-17T00:24:55.664004899Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:7e2dd24abce21cd256091445aca4b7eb00774264c2b0a8714701dd7091509efa\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:beabce8f1782671ba500ddff99dd260fbf9c5ec85fb9c3162e35a3c40bafd023\", size \"73306098\" in 325.069127ms" May 17 00:24:55.664082 containerd[1589]: time="2025-05-17T00:24:55.664058485Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:7e2dd24abce21cd256091445aca4b7eb00774264c2b0a8714701dd7091509efa\"" May 17 00:24:55.666764 containerd[1589]: time="2025-05-17T00:24:55.666710056Z" level=info msg="CreateContainer within sandbox \"0bc3f937406e592d5e4393fff3fced4668aa17ef4e328a7a1d008cbf3a956dc2\" for container 
&ContainerMetadata{Name:test,Attempt:0,}" May 17 00:24:55.695411 kubelet[1929]: E0517 00:24:55.693738 1929 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:24:55.700401 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount440038706.mount: Deactivated successfully. May 17 00:24:55.706995 containerd[1589]: time="2025-05-17T00:24:55.706909327Z" level=info msg="CreateContainer within sandbox \"0bc3f937406e592d5e4393fff3fced4668aa17ef4e328a7a1d008cbf3a956dc2\" for &ContainerMetadata{Name:test,Attempt:0,} returns container id \"06bb0b5688af21f59148ce8a585b748e3925dcef3dc69284ec5a379e05b8e7a6\"" May 17 00:24:55.713302 containerd[1589]: time="2025-05-17T00:24:55.712147229Z" level=info msg="StartContainer for \"06bb0b5688af21f59148ce8a585b748e3925dcef3dc69284ec5a379e05b8e7a6\"" May 17 00:24:55.778908 containerd[1589]: time="2025-05-17T00:24:55.778694562Z" level=info msg="StartContainer for \"06bb0b5688af21f59148ce8a585b748e3925dcef3dc69284ec5a379e05b8e7a6\" returns successfully" May 17 00:24:56.637295 kubelet[1929]: E0517 00:24:56.637231 1929 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:24:56.691227 containerd[1589]: time="2025-05-17T00:24:56.691112269Z" level=info msg="StopPodSandbox for \"473dc42565f5ad9a66bafff8418b952fc311e9b3beec64611ccad5374c4d44ce\"" May 17 00:24:56.694452 kubelet[1929]: E0517 00:24:56.694362 1929 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:24:56.818981 containerd[1589]: 2025-05-17 00:24:56.765 [WARNING][3544] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="473dc42565f5ad9a66bafff8418b952fc311e9b3beec64611ccad5374c4d44ce" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"143.198.108.0-k8s-csi--node--driver--4bvml-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"77e2009c-40a3-47c4-b9d0-5b99ba0c6d66", ResourceVersion:"1352", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 23, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"68bf44dd5", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"143.198.108.0", ContainerID:"57f11f5551ec2e241b549d0c5c0bff70acb384f848befe742416c87c52fb7a40", Pod:"csi-node-driver-4bvml", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.120.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calif3984c050ff", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:24:56.818981 containerd[1589]: 2025-05-17 00:24:56.765 [INFO][3544] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="473dc42565f5ad9a66bafff8418b952fc311e9b3beec64611ccad5374c4d44ce" May 17 00:24:56.818981 containerd[1589]: 2025-05-17 00:24:56.765 [INFO][3544] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, 
ignoring. ContainerID="473dc42565f5ad9a66bafff8418b952fc311e9b3beec64611ccad5374c4d44ce" iface="eth0" netns="" May 17 00:24:56.818981 containerd[1589]: 2025-05-17 00:24:56.765 [INFO][3544] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="473dc42565f5ad9a66bafff8418b952fc311e9b3beec64611ccad5374c4d44ce" May 17 00:24:56.818981 containerd[1589]: 2025-05-17 00:24:56.765 [INFO][3544] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="473dc42565f5ad9a66bafff8418b952fc311e9b3beec64611ccad5374c4d44ce" May 17 00:24:56.818981 containerd[1589]: 2025-05-17 00:24:56.801 [INFO][3551] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="473dc42565f5ad9a66bafff8418b952fc311e9b3beec64611ccad5374c4d44ce" HandleID="k8s-pod-network.473dc42565f5ad9a66bafff8418b952fc311e9b3beec64611ccad5374c4d44ce" Workload="143.198.108.0-k8s-csi--node--driver--4bvml-eth0" May 17 00:24:56.818981 containerd[1589]: 2025-05-17 00:24:56.801 [INFO][3551] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:24:56.818981 containerd[1589]: 2025-05-17 00:24:56.801 [INFO][3551] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:24:56.818981 containerd[1589]: 2025-05-17 00:24:56.813 [WARNING][3551] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="473dc42565f5ad9a66bafff8418b952fc311e9b3beec64611ccad5374c4d44ce" HandleID="k8s-pod-network.473dc42565f5ad9a66bafff8418b952fc311e9b3beec64611ccad5374c4d44ce" Workload="143.198.108.0-k8s-csi--node--driver--4bvml-eth0" May 17 00:24:56.818981 containerd[1589]: 2025-05-17 00:24:56.813 [INFO][3551] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="473dc42565f5ad9a66bafff8418b952fc311e9b3beec64611ccad5374c4d44ce" HandleID="k8s-pod-network.473dc42565f5ad9a66bafff8418b952fc311e9b3beec64611ccad5374c4d44ce" Workload="143.198.108.0-k8s-csi--node--driver--4bvml-eth0" May 17 00:24:56.818981 containerd[1589]: 2025-05-17 00:24:56.815 [INFO][3551] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:24:56.818981 containerd[1589]: 2025-05-17 00:24:56.817 [INFO][3544] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="473dc42565f5ad9a66bafff8418b952fc311e9b3beec64611ccad5374c4d44ce" May 17 00:24:56.818981 containerd[1589]: time="2025-05-17T00:24:56.818877408Z" level=info msg="TearDown network for sandbox \"473dc42565f5ad9a66bafff8418b952fc311e9b3beec64611ccad5374c4d44ce\" successfully" May 17 00:24:56.818981 containerd[1589]: time="2025-05-17T00:24:56.818911613Z" level=info msg="StopPodSandbox for \"473dc42565f5ad9a66bafff8418b952fc311e9b3beec64611ccad5374c4d44ce\" returns successfully" May 17 00:24:56.824689 containerd[1589]: time="2025-05-17T00:24:56.824267730Z" level=info msg="RemovePodSandbox for \"473dc42565f5ad9a66bafff8418b952fc311e9b3beec64611ccad5374c4d44ce\"" May 17 00:24:56.824689 containerd[1589]: time="2025-05-17T00:24:56.824330962Z" level=info msg="Forcibly stopping sandbox \"473dc42565f5ad9a66bafff8418b952fc311e9b3beec64611ccad5374c4d44ce\"" May 17 00:24:56.928254 containerd[1589]: 2025-05-17 00:24:56.871 [WARNING][3567] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="473dc42565f5ad9a66bafff8418b952fc311e9b3beec64611ccad5374c4d44ce" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"143.198.108.0-k8s-csi--node--driver--4bvml-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"77e2009c-40a3-47c4-b9d0-5b99ba0c6d66", ResourceVersion:"1352", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 23, 57, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"68bf44dd5", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"143.198.108.0", ContainerID:"57f11f5551ec2e241b549d0c5c0bff70acb384f848befe742416c87c52fb7a40", Pod:"csi-node-driver-4bvml", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.120.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calif3984c050ff", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:24:56.928254 containerd[1589]: 2025-05-17 00:24:56.872 [INFO][3567] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="473dc42565f5ad9a66bafff8418b952fc311e9b3beec64611ccad5374c4d44ce" May 17 00:24:56.928254 containerd[1589]: 2025-05-17 00:24:56.872 [INFO][3567] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, 
ignoring. ContainerID="473dc42565f5ad9a66bafff8418b952fc311e9b3beec64611ccad5374c4d44ce" iface="eth0" netns="" May 17 00:24:56.928254 containerd[1589]: 2025-05-17 00:24:56.872 [INFO][3567] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="473dc42565f5ad9a66bafff8418b952fc311e9b3beec64611ccad5374c4d44ce" May 17 00:24:56.928254 containerd[1589]: 2025-05-17 00:24:56.872 [INFO][3567] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="473dc42565f5ad9a66bafff8418b952fc311e9b3beec64611ccad5374c4d44ce" May 17 00:24:56.928254 containerd[1589]: 2025-05-17 00:24:56.902 [INFO][3575] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="473dc42565f5ad9a66bafff8418b952fc311e9b3beec64611ccad5374c4d44ce" HandleID="k8s-pod-network.473dc42565f5ad9a66bafff8418b952fc311e9b3beec64611ccad5374c4d44ce" Workload="143.198.108.0-k8s-csi--node--driver--4bvml-eth0" May 17 00:24:56.928254 containerd[1589]: 2025-05-17 00:24:56.902 [INFO][3575] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:24:56.928254 containerd[1589]: 2025-05-17 00:24:56.903 [INFO][3575] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:24:56.928254 containerd[1589]: 2025-05-17 00:24:56.914 [WARNING][3575] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="473dc42565f5ad9a66bafff8418b952fc311e9b3beec64611ccad5374c4d44ce" HandleID="k8s-pod-network.473dc42565f5ad9a66bafff8418b952fc311e9b3beec64611ccad5374c4d44ce" Workload="143.198.108.0-k8s-csi--node--driver--4bvml-eth0" May 17 00:24:56.928254 containerd[1589]: 2025-05-17 00:24:56.914 [INFO][3575] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="473dc42565f5ad9a66bafff8418b952fc311e9b3beec64611ccad5374c4d44ce" HandleID="k8s-pod-network.473dc42565f5ad9a66bafff8418b952fc311e9b3beec64611ccad5374c4d44ce" Workload="143.198.108.0-k8s-csi--node--driver--4bvml-eth0" May 17 00:24:56.928254 containerd[1589]: 2025-05-17 00:24:56.918 [INFO][3575] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:24:56.928254 containerd[1589]: 2025-05-17 00:24:56.923 [INFO][3567] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="473dc42565f5ad9a66bafff8418b952fc311e9b3beec64611ccad5374c4d44ce" May 17 00:24:56.928254 containerd[1589]: time="2025-05-17T00:24:56.927865039Z" level=info msg="TearDown network for sandbox \"473dc42565f5ad9a66bafff8418b952fc311e9b3beec64611ccad5374c4d44ce\" successfully" May 17 00:24:56.950755 containerd[1589]: time="2025-05-17T00:24:56.950479326Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"473dc42565f5ad9a66bafff8418b952fc311e9b3beec64611ccad5374c4d44ce\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
May 17 00:24:56.950755 containerd[1589]: time="2025-05-17T00:24:56.950591008Z" level=info msg="RemovePodSandbox \"473dc42565f5ad9a66bafff8418b952fc311e9b3beec64611ccad5374c4d44ce\" returns successfully" May 17 00:24:56.952027 containerd[1589]: time="2025-05-17T00:24:56.951988224Z" level=info msg="StopPodSandbox for \"09ee446213192da61eb70b9b2149157118ce234d68e1fdcad3a733eb41505a13\"" May 17 00:24:57.049113 containerd[1589]: 2025-05-17 00:24:57.008 [WARNING][3590] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="09ee446213192da61eb70b9b2149157118ce234d68e1fdcad3a733eb41505a13" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"143.198.108.0-k8s-nginx--deployment--8587fbcb89--pkppr-eth0", GenerateName:"nginx-deployment-8587fbcb89-", Namespace:"default", SelfLink:"", UID:"e5d5f111-1f86-4cc1-9a80-2568f9fbd404", ResourceVersion:"1333", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 24, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"8587fbcb89", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"143.198.108.0", ContainerID:"d4f13d490ce78b3e663f9a8108592a0dfc948993524433513d8cf31e8aa12a44", Pod:"nginx-deployment-8587fbcb89-pkppr", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.120.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"caliecb575ea1fe", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), 
AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:24:57.049113 containerd[1589]: 2025-05-17 00:24:57.009 [INFO][3590] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="09ee446213192da61eb70b9b2149157118ce234d68e1fdcad3a733eb41505a13" May 17 00:24:57.049113 containerd[1589]: 2025-05-17 00:24:57.009 [INFO][3590] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="09ee446213192da61eb70b9b2149157118ce234d68e1fdcad3a733eb41505a13" iface="eth0" netns="" May 17 00:24:57.049113 containerd[1589]: 2025-05-17 00:24:57.009 [INFO][3590] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="09ee446213192da61eb70b9b2149157118ce234d68e1fdcad3a733eb41505a13" May 17 00:24:57.049113 containerd[1589]: 2025-05-17 00:24:57.009 [INFO][3590] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="09ee446213192da61eb70b9b2149157118ce234d68e1fdcad3a733eb41505a13" May 17 00:24:57.049113 containerd[1589]: 2025-05-17 00:24:57.034 [INFO][3598] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="09ee446213192da61eb70b9b2149157118ce234d68e1fdcad3a733eb41505a13" HandleID="k8s-pod-network.09ee446213192da61eb70b9b2149157118ce234d68e1fdcad3a733eb41505a13" Workload="143.198.108.0-k8s-nginx--deployment--8587fbcb89--pkppr-eth0" May 17 00:24:57.049113 containerd[1589]: 2025-05-17 00:24:57.034 [INFO][3598] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:24:57.049113 containerd[1589]: 2025-05-17 00:24:57.034 [INFO][3598] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:24:57.049113 containerd[1589]: 2025-05-17 00:24:57.042 [WARNING][3598] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="09ee446213192da61eb70b9b2149157118ce234d68e1fdcad3a733eb41505a13" HandleID="k8s-pod-network.09ee446213192da61eb70b9b2149157118ce234d68e1fdcad3a733eb41505a13" Workload="143.198.108.0-k8s-nginx--deployment--8587fbcb89--pkppr-eth0" May 17 00:24:57.049113 containerd[1589]: 2025-05-17 00:24:57.042 [INFO][3598] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="09ee446213192da61eb70b9b2149157118ce234d68e1fdcad3a733eb41505a13" HandleID="k8s-pod-network.09ee446213192da61eb70b9b2149157118ce234d68e1fdcad3a733eb41505a13" Workload="143.198.108.0-k8s-nginx--deployment--8587fbcb89--pkppr-eth0" May 17 00:24:57.049113 containerd[1589]: 2025-05-17 00:24:57.045 [INFO][3598] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:24:57.049113 containerd[1589]: 2025-05-17 00:24:57.047 [INFO][3590] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="09ee446213192da61eb70b9b2149157118ce234d68e1fdcad3a733eb41505a13" May 17 00:24:57.050136 containerd[1589]: time="2025-05-17T00:24:57.049765043Z" level=info msg="TearDown network for sandbox \"09ee446213192da61eb70b9b2149157118ce234d68e1fdcad3a733eb41505a13\" successfully" May 17 00:24:57.050136 containerd[1589]: time="2025-05-17T00:24:57.049803272Z" level=info msg="StopPodSandbox for \"09ee446213192da61eb70b9b2149157118ce234d68e1fdcad3a733eb41505a13\" returns successfully" May 17 00:24:57.050628 containerd[1589]: time="2025-05-17T00:24:57.050487626Z" level=info msg="RemovePodSandbox for \"09ee446213192da61eb70b9b2149157118ce234d68e1fdcad3a733eb41505a13\"" May 17 00:24:57.050628 containerd[1589]: time="2025-05-17T00:24:57.050526543Z" level=info msg="Forcibly stopping sandbox \"09ee446213192da61eb70b9b2149157118ce234d68e1fdcad3a733eb41505a13\"" May 17 00:24:57.102268 systemd-networkd[1224]: cali5ec59c6bf6e: Gained IPv6LL May 17 00:24:57.154960 containerd[1589]: 2025-05-17 00:24:57.104 [WARNING][3612] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint 
ContainerID, don't delete WEP. ContainerID="09ee446213192da61eb70b9b2149157118ce234d68e1fdcad3a733eb41505a13" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"143.198.108.0-k8s-nginx--deployment--8587fbcb89--pkppr-eth0", GenerateName:"nginx-deployment-8587fbcb89-", Namespace:"default", SelfLink:"", UID:"e5d5f111-1f86-4cc1-9a80-2568f9fbd404", ResourceVersion:"1333", Generation:0, CreationTimestamp:time.Date(2025, time.May, 17, 0, 24, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"8587fbcb89", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"143.198.108.0", ContainerID:"d4f13d490ce78b3e663f9a8108592a0dfc948993524433513d8cf31e8aa12a44", Pod:"nginx-deployment-8587fbcb89-pkppr", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.120.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"caliecb575ea1fe", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} May 17 00:24:57.154960 containerd[1589]: 2025-05-17 00:24:57.105 [INFO][3612] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="09ee446213192da61eb70b9b2149157118ce234d68e1fdcad3a733eb41505a13" May 17 00:24:57.154960 containerd[1589]: 2025-05-17 00:24:57.105 [INFO][3612] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="09ee446213192da61eb70b9b2149157118ce234d68e1fdcad3a733eb41505a13" iface="eth0" netns="" May 17 00:24:57.154960 containerd[1589]: 2025-05-17 00:24:57.105 [INFO][3612] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="09ee446213192da61eb70b9b2149157118ce234d68e1fdcad3a733eb41505a13" May 17 00:24:57.154960 containerd[1589]: 2025-05-17 00:24:57.105 [INFO][3612] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="09ee446213192da61eb70b9b2149157118ce234d68e1fdcad3a733eb41505a13" May 17 00:24:57.154960 containerd[1589]: 2025-05-17 00:24:57.138 [INFO][3619] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="09ee446213192da61eb70b9b2149157118ce234d68e1fdcad3a733eb41505a13" HandleID="k8s-pod-network.09ee446213192da61eb70b9b2149157118ce234d68e1fdcad3a733eb41505a13" Workload="143.198.108.0-k8s-nginx--deployment--8587fbcb89--pkppr-eth0" May 17 00:24:57.154960 containerd[1589]: 2025-05-17 00:24:57.139 [INFO][3619] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. May 17 00:24:57.154960 containerd[1589]: 2025-05-17 00:24:57.139 [INFO][3619] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. May 17 00:24:57.154960 containerd[1589]: 2025-05-17 00:24:57.148 [WARNING][3619] ipam/ipam_plugin.go 429: Asked to release address but it doesn't exist. 
Ignoring ContainerID="09ee446213192da61eb70b9b2149157118ce234d68e1fdcad3a733eb41505a13" HandleID="k8s-pod-network.09ee446213192da61eb70b9b2149157118ce234d68e1fdcad3a733eb41505a13" Workload="143.198.108.0-k8s-nginx--deployment--8587fbcb89--pkppr-eth0" May 17 00:24:57.154960 containerd[1589]: 2025-05-17 00:24:57.148 [INFO][3619] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="09ee446213192da61eb70b9b2149157118ce234d68e1fdcad3a733eb41505a13" HandleID="k8s-pod-network.09ee446213192da61eb70b9b2149157118ce234d68e1fdcad3a733eb41505a13" Workload="143.198.108.0-k8s-nginx--deployment--8587fbcb89--pkppr-eth0" May 17 00:24:57.154960 containerd[1589]: 2025-05-17 00:24:57.151 [INFO][3619] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. May 17 00:24:57.154960 containerd[1589]: 2025-05-17 00:24:57.153 [INFO][3612] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="09ee446213192da61eb70b9b2149157118ce234d68e1fdcad3a733eb41505a13" May 17 00:24:57.156162 containerd[1589]: time="2025-05-17T00:24:57.155010273Z" level=info msg="TearDown network for sandbox \"09ee446213192da61eb70b9b2149157118ce234d68e1fdcad3a733eb41505a13\" successfully" May 17 00:24:57.186843 containerd[1589]: time="2025-05-17T00:24:57.186651662Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"09ee446213192da61eb70b9b2149157118ce234d68e1fdcad3a733eb41505a13\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
May 17 00:24:57.186843 containerd[1589]: time="2025-05-17T00:24:57.186754499Z" level=info msg="RemovePodSandbox \"09ee446213192da61eb70b9b2149157118ce234d68e1fdcad3a733eb41505a13\" returns successfully" May 17 00:24:57.695340 kubelet[1929]: E0517 00:24:57.695266 1929 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:24:58.696525 kubelet[1929]: E0517 00:24:58.696462 1929 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:24:59.706823 kubelet[1929]: E0517 00:24:59.706759 1929 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:25:00.708047 kubelet[1929]: E0517 00:25:00.707940 1929 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:25:01.708903 kubelet[1929]: E0517 00:25:01.708773 1929 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:25:02.709511 kubelet[1929]: E0517 00:25:02.709420 1929 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" May 17 00:25:03.710232 kubelet[1929]: E0517 00:25:03.710141 1929 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"