Mar 17 17:48:58.099817 kernel: Linux version 6.6.83-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p1) 13.3.1 20240614, GNU ld (Gentoo 2.42 p6) 2.42.0) #1 SMP PREEMPT_DYNAMIC Mon Mar 17 16:07:40 -00 2025
Mar 17 17:48:58.099866 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=d4b838cd9a6f58e8c4a6b615c32b0b28ee0df1660e34033a8fbd0429c6de5fd0
Mar 17 17:48:58.099888 kernel: BIOS-provided physical RAM map:
Mar 17 17:48:58.099900 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Mar 17 17:48:58.099912 kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Mar 17 17:48:58.099924 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Mar 17 17:48:58.099939 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdafff] usable
Mar 17 17:48:58.099954 kernel: BIOS-e820: [mem 0x000000007ffdb000-0x000000007fffffff] reserved
Mar 17 17:48:58.099966 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Mar 17 17:48:58.099987 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Mar 17 17:48:58.099999 kernel: NX (Execute Disable) protection: active
Mar 17 17:48:58.100041 kernel: APIC: Static calls initialized
Mar 17 17:48:58.100053 kernel: SMBIOS 2.8 present.
Mar 17 17:48:58.100067 kernel: DMI: DigitalOcean Droplet/Droplet, BIOS 20171212 12/12/2017
Mar 17 17:48:58.100082 kernel: Hypervisor detected: KVM
Mar 17 17:48:58.100103 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Mar 17 17:48:58.100121 kernel: kvm-clock: using sched offset of 3710356923 cycles
Mar 17 17:48:58.100136 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Mar 17 17:48:58.100150 kernel: tsc: Detected 2494.146 MHz processor
Mar 17 17:48:58.100166 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Mar 17 17:48:58.100181 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Mar 17 17:48:58.100195 kernel: last_pfn = 0x7ffdb max_arch_pfn = 0x400000000
Mar 17 17:48:58.100210 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Mar 17 17:48:58.100225 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Mar 17 17:48:58.100247 kernel: ACPI: Early table checksum verification disabled
Mar 17 17:48:58.100261 kernel: ACPI: RSDP 0x00000000000F5950 000014 (v00 BOCHS )
Mar 17 17:48:58.100274 kernel: ACPI: RSDT 0x000000007FFE1986 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 17 17:48:58.100289 kernel: ACPI: FACP 0x000000007FFE176A 000074 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 17 17:48:58.100304 kernel: ACPI: DSDT 0x000000007FFE0040 00172A (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 17 17:48:58.100318 kernel: ACPI: FACS 0x000000007FFE0000 000040
Mar 17 17:48:58.100333 kernel: ACPI: APIC 0x000000007FFE17DE 000080 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 17 17:48:58.100347 kernel: ACPI: HPET 0x000000007FFE185E 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 17 17:48:58.100361 kernel: ACPI: SRAT 0x000000007FFE1896 0000C8 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 17 17:48:58.100378 kernel: ACPI: WAET 0x000000007FFE195E 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Mar 17 17:48:58.100389 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe176a-0x7ffe17dd]
Mar 17 17:48:58.100400 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffe0040-0x7ffe1769]
Mar 17 17:48:58.100412 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffe0000-0x7ffe003f]
Mar 17 17:48:58.100423 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe17de-0x7ffe185d]
Mar 17 17:48:58.100435 kernel: ACPI: Reserving HPET table memory at [mem 0x7ffe185e-0x7ffe1895]
Mar 17 17:48:58.100446 kernel: ACPI: Reserving SRAT table memory at [mem 0x7ffe1896-0x7ffe195d]
Mar 17 17:48:58.100470 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe195e-0x7ffe1985]
Mar 17 17:48:58.100483 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Mar 17 17:48:58.100503 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Mar 17 17:48:58.100516 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff]
Mar 17 17:48:58.100532 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0x7fffffff]
Mar 17 17:48:58.100546 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0x7ffdafff] -> [mem 0x00000000-0x7ffdafff]
Mar 17 17:48:58.100560 kernel: NODE_DATA(0) allocated [mem 0x7ffd5000-0x7ffdafff]
Mar 17 17:48:58.100582 kernel: Zone ranges:
Mar 17 17:48:58.100610 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Mar 17 17:48:58.100626 kernel: DMA32 [mem 0x0000000001000000-0x000000007ffdafff]
Mar 17 17:48:58.100641 kernel: Normal empty
Mar 17 17:48:58.100656 kernel: Movable zone start for each node
Mar 17 17:48:58.100672 kernel: Early memory node ranges
Mar 17 17:48:58.100688 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Mar 17 17:48:58.100704 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdafff]
Mar 17 17:48:58.100720 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000007ffdafff]
Mar 17 17:48:58.100741 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Mar 17 17:48:58.100756 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Mar 17 17:48:58.100775 kernel: On node 0, zone DMA32: 37 pages in unavailable ranges
Mar 17 17:48:58.100791 kernel: ACPI: PM-Timer IO Port: 0x608
Mar 17 17:48:58.100806 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Mar 17 17:48:58.100820 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Mar 17 17:48:58.100834 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Mar 17 17:48:58.100848 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Mar 17 17:48:58.100863 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Mar 17 17:48:58.100882 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Mar 17 17:48:58.100898 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Mar 17 17:48:58.100913 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Mar 17 17:48:58.100930 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Mar 17 17:48:58.100946 kernel: TSC deadline timer available
Mar 17 17:48:58.100962 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Mar 17 17:48:58.100978 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Mar 17 17:48:58.100993 kernel: [mem 0x80000000-0xfeffbfff] available for PCI devices
Mar 17 17:48:58.104088 kernel: Booting paravirtualized kernel on KVM
Mar 17 17:48:58.104145 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Mar 17 17:48:58.104171 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Mar 17 17:48:58.104185 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u1048576
Mar 17 17:48:58.104199 kernel: pcpu-alloc: s197032 r8192 d32344 u1048576 alloc=1*2097152
Mar 17 17:48:58.104211 kernel: pcpu-alloc: [0] 0 1
Mar 17 17:48:58.104227 kernel: kvm-guest: PV spinlocks disabled, no host support
Mar 17 17:48:58.104242 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=d4b838cd9a6f58e8c4a6b615c32b0b28ee0df1660e34033a8fbd0429c6de5fd0
Mar 17 17:48:58.104256 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Mar 17 17:48:58.104273 kernel: random: crng init done
Mar 17 17:48:58.104286 kernel: Dentry cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Mar 17 17:48:58.104302 kernel: Inode-cache hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Mar 17 17:48:58.104315 kernel: Fallback order for Node 0: 0
Mar 17 17:48:58.104327 kernel: Built 1 zonelists, mobility grouping on. Total pages: 515803
Mar 17 17:48:58.104341 kernel: Policy zone: DMA32
Mar 17 17:48:58.104353 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Mar 17 17:48:58.104366 kernel: Memory: 1971204K/2096612K available (12288K kernel code, 2303K rwdata, 22744K rodata, 42992K init, 2196K bss, 125148K reserved, 0K cma-reserved)
Mar 17 17:48:58.104378 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Mar 17 17:48:58.104396 kernel: Kernel/User page tables isolation: enabled
Mar 17 17:48:58.104410 kernel: ftrace: allocating 37938 entries in 149 pages
Mar 17 17:48:58.104422 kernel: ftrace: allocated 149 pages with 4 groups
Mar 17 17:48:58.104435 kernel: Dynamic Preempt: voluntary
Mar 17 17:48:58.104448 kernel: rcu: Preemptible hierarchical RCU implementation.
Mar 17 17:48:58.104463 kernel: rcu: RCU event tracing is enabled.
Mar 17 17:48:58.104477 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Mar 17 17:48:58.104490 kernel: Trampoline variant of Tasks RCU enabled.
Mar 17 17:48:58.104504 kernel: Rude variant of Tasks RCU enabled.
Mar 17 17:48:58.104527 kernel: Tracing variant of Tasks RCU enabled.
Mar 17 17:48:58.104543 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Mar 17 17:48:58.104556 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Mar 17 17:48:58.104569 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Mar 17 17:48:58.104591 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Mar 17 17:48:58.104603 kernel: Console: colour VGA+ 80x25
Mar 17 17:48:58.104613 kernel: printk: console [tty0] enabled
Mar 17 17:48:58.104621 kernel: printk: console [ttyS0] enabled
Mar 17 17:48:58.104630 kernel: ACPI: Core revision 20230628
Mar 17 17:48:58.104640 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Mar 17 17:48:58.104654 kernel: APIC: Switch to symmetric I/O mode setup
Mar 17 17:48:58.104663 kernel: x2apic enabled
Mar 17 17:48:58.104677 kernel: APIC: Switched APIC routing to: physical x2apic
Mar 17 17:48:58.104690 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Mar 17 17:48:58.104703 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x23f39fcb9af, max_idle_ns: 440795211412 ns
Mar 17 17:48:58.104715 kernel: Calibrating delay loop (skipped) preset value.. 4988.29 BogoMIPS (lpj=2494146)
Mar 17 17:48:58.104727 kernel: Last level iTLB entries: 4KB 0, 2MB 0, 4MB 0
Mar 17 17:48:58.104739 kernel: Last level dTLB entries: 4KB 0, 2MB 0, 4MB 0, 1GB 0
Mar 17 17:48:58.104773 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Mar 17 17:48:58.104787 kernel: Spectre V2 : Mitigation: Retpolines
Mar 17 17:48:58.104801 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Mar 17 17:48:58.104823 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Mar 17 17:48:58.104838 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls
Mar 17 17:48:58.104854 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Mar 17 17:48:58.104864 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Mar 17 17:48:58.104874 kernel: MDS: Mitigation: Clear CPU buffers
Mar 17 17:48:58.104883 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Mar 17 17:48:58.104902 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Mar 17 17:48:58.104911 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Mar 17 17:48:58.104921 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Mar 17 17:48:58.104931 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Mar 17 17:48:58.104940 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
Mar 17 17:48:58.104949 kernel: Freeing SMP alternatives memory: 32K
Mar 17 17:48:58.104958 kernel: pid_max: default: 32768 minimum: 301
Mar 17 17:48:58.104968 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Mar 17 17:48:58.104981 kernel: landlock: Up and running.
Mar 17 17:48:58.104990 kernel: SELinux: Initializing.
Mar 17 17:48:58.104999 kernel: Mount-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Mar 17 17:48:58.105023 kernel: Mountpoint-cache hash table entries: 4096 (order: 3, 32768 bytes, linear)
Mar 17 17:48:58.105033 kernel: smpboot: CPU0: Intel DO-Regular (family: 0x6, model: 0x4f, stepping: 0x1)
Mar 17 17:48:58.105045 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Mar 17 17:48:58.105059 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Mar 17 17:48:58.105073 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Mar 17 17:48:58.105093 kernel: Performance Events: unsupported p6 CPU model 79 no PMU driver, software events only.
Mar 17 17:48:58.105106 kernel: signal: max sigframe size: 1776
Mar 17 17:48:58.105119 kernel: rcu: Hierarchical SRCU implementation.
Mar 17 17:48:58.105133 kernel: rcu: Max phase no-delay instances is 400.
Mar 17 17:48:58.105148 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Mar 17 17:48:58.105160 kernel: smp: Bringing up secondary CPUs ...
Mar 17 17:48:58.105174 kernel: smpboot: x86: Booting SMP configuration:
Mar 17 17:48:58.105188 kernel: .... node #0, CPUs: #1
Mar 17 17:48:58.105206 kernel: smp: Brought up 1 node, 2 CPUs
Mar 17 17:48:58.105220 kernel: smpboot: Max logical packages: 1
Mar 17 17:48:58.105240 kernel: smpboot: Total of 2 processors activated (9976.58 BogoMIPS)
Mar 17 17:48:58.105253 kernel: devtmpfs: initialized
Mar 17 17:48:58.105266 kernel: x86/mm: Memory block size: 128MB
Mar 17 17:48:58.105281 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Mar 17 17:48:58.105293 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Mar 17 17:48:58.105306 kernel: pinctrl core: initialized pinctrl subsystem
Mar 17 17:48:58.105319 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Mar 17 17:48:58.105334 kernel: audit: initializing netlink subsys (disabled)
Mar 17 17:48:58.105346 kernel: audit: type=2000 audit(1742233736.939:1): state=initialized audit_enabled=0 res=1
Mar 17 17:48:58.105365 kernel: thermal_sys: Registered thermal governor 'step_wise'
Mar 17 17:48:58.105379 kernel: thermal_sys: Registered thermal governor 'user_space'
Mar 17 17:48:58.105392 kernel: cpuidle: using governor menu
Mar 17 17:48:58.105405 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Mar 17 17:48:58.105418 kernel: dca service started, version 1.12.1
Mar 17 17:48:58.105430 kernel: PCI: Using configuration type 1 for base access
Mar 17 17:48:58.105448 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Mar 17 17:48:58.105460 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Mar 17 17:48:58.105473 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Mar 17 17:48:58.105491 kernel: ACPI: Added _OSI(Module Device)
Mar 17 17:48:58.105504 kernel: ACPI: Added _OSI(Processor Device)
Mar 17 17:48:58.105517 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Mar 17 17:48:58.105530 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Mar 17 17:48:58.105544 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Mar 17 17:48:58.105557 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC
Mar 17 17:48:58.105570 kernel: ACPI: Interpreter enabled
Mar 17 17:48:58.105587 kernel: ACPI: PM: (supports S0 S5)
Mar 17 17:48:58.105599 kernel: ACPI: Using IOAPIC for interrupt routing
Mar 17 17:48:58.105618 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Mar 17 17:48:58.105631 kernel: PCI: Using E820 reservations for host bridge windows
Mar 17 17:48:58.105645 kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Mar 17 17:48:58.105657 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Mar 17 17:48:58.108202 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Mar 17 17:48:58.108528 kernel: acpi PNP0A03:00: _OSC: not requesting OS control; OS requires [ExtendedConfig ASPM ClockPM MSI]
Mar 17 17:48:58.108698 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended configuration space under this bridge
Mar 17 17:48:58.108733 kernel: acpiphp: Slot [3] registered
Mar 17 17:48:58.108749 kernel: acpiphp: Slot [4] registered
Mar 17 17:48:58.108764 kernel: acpiphp: Slot [5] registered
Mar 17 17:48:58.108779 kernel: acpiphp: Slot [6] registered
Mar 17 17:48:58.108794 kernel: acpiphp: Slot [7] registered
Mar 17 17:48:58.108813 kernel: acpiphp: Slot [8] registered
Mar 17 17:48:58.108829 kernel: acpiphp: Slot [9] registered
Mar 17 17:48:58.108845 kernel: acpiphp: Slot [10] registered
Mar 17 17:48:58.108860 kernel: acpiphp: Slot [11] registered
Mar 17 17:48:58.108882 kernel: acpiphp: Slot [12] registered
Mar 17 17:48:58.108898 kernel: acpiphp: Slot [13] registered
Mar 17 17:48:58.108913 kernel: acpiphp: Slot [14] registered
Mar 17 17:48:58.108928 kernel: acpiphp: Slot [15] registered
Mar 17 17:48:58.108944 kernel: acpiphp: Slot [16] registered
Mar 17 17:48:58.108960 kernel: acpiphp: Slot [17] registered
Mar 17 17:48:58.108975 kernel: acpiphp: Slot [18] registered
Mar 17 17:48:58.108991 kernel: acpiphp: Slot [19] registered
Mar 17 17:48:58.109024 kernel: acpiphp: Slot [20] registered
Mar 17 17:48:58.109041 kernel: acpiphp: Slot [21] registered
Mar 17 17:48:58.109063 kernel: acpiphp: Slot [22] registered
Mar 17 17:48:58.109100 kernel: acpiphp: Slot [23] registered
Mar 17 17:48:58.109116 kernel: acpiphp: Slot [24] registered
Mar 17 17:48:58.109132 kernel: acpiphp: Slot [25] registered
Mar 17 17:48:58.109150 kernel: acpiphp: Slot [26] registered
Mar 17 17:48:58.109165 kernel: acpiphp: Slot [27] registered
Mar 17 17:48:58.109180 kernel: acpiphp: Slot [28] registered
Mar 17 17:48:58.109196 kernel: acpiphp: Slot [29] registered
Mar 17 17:48:58.109216 kernel: acpiphp: Slot [30] registered
Mar 17 17:48:58.109236 kernel: acpiphp: Slot [31] registered
Mar 17 17:48:58.109252 kernel: PCI host bridge to bus 0000:00
Mar 17 17:48:58.109511 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Mar 17 17:48:58.109689 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Mar 17 17:48:58.109836 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Mar 17 17:48:58.109995 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xfebfffff window]
Mar 17 17:48:58.112338 kernel: pci_bus 0000:00: root bus resource [mem 0x100000000-0x17fffffff window]
Mar 17 17:48:58.112516 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Mar 17 17:48:58.112777 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Mar 17 17:48:58.112982 kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100
Mar 17 17:48:58.113243 kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180
Mar 17 17:48:58.113416 kernel: pci 0000:00:01.1: reg 0x20: [io 0xc1e0-0xc1ef]
Mar 17 17:48:58.113600 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x10: [io 0x01f0-0x01f7]
Mar 17 17:48:58.113770 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x14: [io 0x03f6]
Mar 17 17:48:58.113951 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x18: [io 0x0170-0x0177]
Mar 17 17:48:58.116241 kernel: pci 0000:00:01.1: legacy IDE quirk: reg 0x1c: [io 0x0376]
Mar 17 17:48:58.116482 kernel: pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300
Mar 17 17:48:58.116688 kernel: pci 0000:00:01.2: reg 0x20: [io 0xc180-0xc19f]
Mar 17 17:48:58.116904 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000
Mar 17 17:48:58.117187 kernel: pci 0000:00:01.3: quirk: [io 0x0600-0x063f] claimed by PIIX4 ACPI
Mar 17 17:48:58.117370 kernel: pci 0000:00:01.3: quirk: [io 0x0700-0x070f] claimed by PIIX4 SMB
Mar 17 17:48:58.117492 kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000
Mar 17 17:48:58.117591 kernel: pci 0000:00:02.0: reg 0x10: [mem 0xfe000000-0xfe7fffff pref]
Mar 17 17:48:58.117693 kernel: pci 0000:00:02.0: reg 0x18: [mem 0xfe800000-0xfe803fff 64bit pref]
Mar 17 17:48:58.117790 kernel: pci 0000:00:02.0: reg 0x20: [mem 0xfebf0000-0xfebf0fff]
Mar 17 17:48:58.117888 kernel: pci 0000:00:02.0: reg 0x30: [mem 0xfebe0000-0xfebeffff pref]
Mar 17 17:48:58.118004 kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Mar 17 17:48:58.122953 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000
Mar 17 17:48:58.123117 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc1a0-0xc1bf]
Mar 17 17:48:58.123242 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xfebf1000-0xfebf1fff]
Mar 17 17:48:58.123371 kernel: pci 0000:00:03.0: reg 0x20: [mem 0xfe804000-0xfe807fff 64bit pref]
Mar 17 17:48:58.123546 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Mar 17 17:48:58.123735 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc1c0-0xc1df]
Mar 17 17:48:58.123907 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xfebf2000-0xfebf2fff]
Mar 17 17:48:58.124109 kernel: pci 0000:00:04.0: reg 0x20: [mem 0xfe808000-0xfe80bfff 64bit pref]
Mar 17 17:48:58.124340 kernel: pci 0000:00:05.0: [1af4:1004] type 00 class 0x010000
Mar 17 17:48:58.124514 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc100-0xc13f]
Mar 17 17:48:58.124695 kernel: pci 0000:00:05.0: reg 0x14: [mem 0xfebf3000-0xfebf3fff]
Mar 17 17:48:58.124845 kernel: pci 0000:00:05.0: reg 0x20: [mem 0xfe80c000-0xfe80ffff 64bit pref]
Mar 17 17:48:58.125088 kernel: pci 0000:00:06.0: [1af4:1001] type 00 class 0x010000
Mar 17 17:48:58.125286 kernel: pci 0000:00:06.0: reg 0x10: [io 0xc000-0xc07f]
Mar 17 17:48:58.125463 kernel: pci 0000:00:06.0: reg 0x14: [mem 0xfebf4000-0xfebf4fff]
Mar 17 17:48:58.125639 kernel: pci 0000:00:06.0: reg 0x20: [mem 0xfe810000-0xfe813fff 64bit pref]
Mar 17 17:48:58.125861 kernel: pci 0000:00:07.0: [1af4:1001] type 00 class 0x010000
Mar 17 17:48:58.126107 kernel: pci 0000:00:07.0: reg 0x10: [io 0xc080-0xc0ff]
Mar 17 17:48:58.126314 kernel: pci 0000:00:07.0: reg 0x14: [mem 0xfebf5000-0xfebf5fff]
Mar 17 17:48:58.126561 kernel: pci 0000:00:07.0: reg 0x20: [mem 0xfe814000-0xfe817fff 64bit pref]
Mar 17 17:48:58.126756 kernel: pci 0000:00:08.0: [1af4:1002] type 00 class 0x00ff00
Mar 17 17:48:58.126958 kernel: pci 0000:00:08.0: reg 0x10: [io 0xc140-0xc17f]
Mar 17 17:48:58.128378 kernel: pci 0000:00:08.0: reg 0x20: [mem 0xfe818000-0xfe81bfff 64bit pref]
Mar 17 17:48:58.128426 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Mar 17 17:48:58.128443 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Mar 17 17:48:58.128459 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Mar 17 17:48:58.128474 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Mar 17 17:48:58.128511 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Mar 17 17:48:58.128525 kernel: iommu: Default domain type: Translated
Mar 17 17:48:58.128540 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Mar 17 17:48:58.128555 kernel: PCI: Using ACPI for IRQ routing
Mar 17 17:48:58.128568 kernel: PCI: pci_cache_line_size set to 64 bytes
Mar 17 17:48:58.128583 kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Mar 17 17:48:58.128600 kernel: e820: reserve RAM buffer [mem 0x7ffdb000-0x7fffffff]
Mar 17 17:48:58.128785 kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Mar 17 17:48:58.128955 kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Mar 17 17:48:58.129162 kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Mar 17 17:48:58.129188 kernel: vgaarb: loaded
Mar 17 17:48:58.129203 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Mar 17 17:48:58.129218 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Mar 17 17:48:58.129234 kernel: clocksource: Switched to clocksource kvm-clock
Mar 17 17:48:58.129278 kernel: VFS: Disk quotas dquot_6.6.0
Mar 17 17:48:58.129296 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Mar 17 17:48:58.129310 kernel: pnp: PnP ACPI init
Mar 17 17:48:58.129324 kernel: pnp: PnP ACPI: found 4 devices
Mar 17 17:48:58.129349 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Mar 17 17:48:58.129362 kernel: NET: Registered PF_INET protocol family
Mar 17 17:48:58.129372 kernel: IP idents hash table entries: 32768 (order: 6, 262144 bytes, linear)
Mar 17 17:48:58.129382 kernel: tcp_listen_portaddr_hash hash table entries: 1024 (order: 2, 16384 bytes, linear)
Mar 17 17:48:58.129392 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Mar 17 17:48:58.129402 kernel: TCP established hash table entries: 16384 (order: 5, 131072 bytes, linear)
Mar 17 17:48:58.129412 kernel: TCP bind hash table entries: 16384 (order: 7, 524288 bytes, linear)
Mar 17 17:48:58.129422 kernel: TCP: Hash tables configured (established 16384 bind 16384)
Mar 17 17:48:58.129432 kernel: UDP hash table entries: 1024 (order: 3, 32768 bytes, linear)
Mar 17 17:48:58.129445 kernel: UDP-Lite hash table entries: 1024 (order: 3, 32768 bytes, linear)
Mar 17 17:48:58.129455 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Mar 17 17:48:58.129464 kernel: NET: Registered PF_XDP protocol family
Mar 17 17:48:58.129652 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Mar 17 17:48:58.129798 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Mar 17 17:48:58.129931 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Mar 17 17:48:58.132341 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xfebfffff window]
Mar 17 17:48:58.132553 kernel: pci_bus 0000:00: resource 8 [mem 0x100000000-0x17fffffff window]
Mar 17 17:48:58.132761 kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Mar 17 17:48:58.132928 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Mar 17 17:48:58.132951 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Mar 17 17:48:58.133179 kernel: pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x7b0 took 45301 usecs
Mar 17 17:48:58.133203 kernel: PCI: CLS 0 bytes, default 64
Mar 17 17:48:58.133218 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Mar 17 17:48:58.133234 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x23f39fcb9af, max_idle_ns: 440795211412 ns
Mar 17 17:48:58.133248 kernel: Initialise system trusted keyrings
Mar 17 17:48:58.133277 kernel: workingset: timestamp_bits=39 max_order=19 bucket_order=0
Mar 17 17:48:58.133291 kernel: Key type asymmetric registered
Mar 17 17:48:58.133306 kernel: Asymmetric key parser 'x509' registered
Mar 17 17:48:58.133320 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251)
Mar 17 17:48:58.133334 kernel: io scheduler mq-deadline registered
Mar 17 17:48:58.133348 kernel: io scheduler kyber registered
Mar 17 17:48:58.133362 kernel: io scheduler bfq registered
Mar 17 17:48:58.133377 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Mar 17 17:48:58.133393 kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10
Mar 17 17:48:58.133415 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Mar 17 17:48:58.133430 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Mar 17 17:48:58.133444 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Mar 17 17:48:58.133459 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Mar 17 17:48:58.133474 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Mar 17 17:48:58.133492 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Mar 17 17:48:58.133508 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Mar 17 17:48:58.133803 kernel: rtc_cmos 00:03: RTC can wake from S4
Mar 17 17:48:58.133950 kernel: rtc_cmos 00:03: registered as rtc0
Mar 17 17:48:58.136278 kernel: rtc_cmos 00:03: setting system clock to 2025-03-17T17:48:57 UTC (1742233737)
Mar 17 17:48:58.136471 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram
Mar 17 17:48:58.136493 kernel: intel_pstate: CPU model not supported
Mar 17 17:48:58.136509 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Mar 17 17:48:58.136525 kernel: NET: Registered PF_INET6 protocol family
Mar 17 17:48:58.136541 kernel: Segment Routing with IPv6
Mar 17 17:48:58.136555 kernel: In-situ OAM (IOAM) with IPv6
Mar 17 17:48:58.136574 kernel: NET: Registered PF_PACKET protocol family
Mar 17 17:48:58.136610 kernel: Key type dns_resolver registered
Mar 17 17:48:58.136625 kernel: IPI shorthand broadcast: enabled
Mar 17 17:48:58.136639 kernel: sched_clock: Marking stable (1211010227, 108136027)->(1442886008, -123739754)
Mar 17 17:48:58.136653 kernel: registered taskstats version 1
Mar 17 17:48:58.136667 kernel: Loading compiled-in X.509 certificates
Mar 17 17:48:58.136686 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.83-flatcar: 608fb88224bc0ea76afefc598557abb0413f36c0'
Mar 17 17:48:58.136699 kernel: Key type .fscrypt registered
Mar 17 17:48:58.136715 kernel: Key type fscrypt-provisioning registered
Mar 17 17:48:58.136731 kernel: ima: No TPM chip found, activating TPM-bypass!
Mar 17 17:48:58.136752 kernel: ima: Allocated hash algorithm: sha1
Mar 17 17:48:58.136768 kernel: ima: No architecture policies found
Mar 17 17:48:58.136781 kernel: clk: Disabling unused clocks
Mar 17 17:48:58.136793 kernel: Freeing unused kernel image (initmem) memory: 42992K
Mar 17 17:48:58.136806 kernel: Write protecting the kernel read-only data: 36864k
Mar 17 17:48:58.136864 kernel: Freeing unused kernel image (rodata/data gap) memory: 1832K
Mar 17 17:48:58.136884 kernel: Run /init as init process
Mar 17 17:48:58.136899 kernel: with arguments:
Mar 17 17:48:58.136916 kernel: /init
Mar 17 17:48:58.136935 kernel: with environment:
Mar 17 17:48:58.136949 kernel: HOME=/
Mar 17 17:48:58.136964 kernel: TERM=linux
Mar 17 17:48:58.136980 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Mar 17 17:48:58.137098 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Mar 17 17:48:58.137126 systemd[1]: Detected virtualization kvm.
Mar 17 17:48:58.137143 systemd[1]: Detected architecture x86-64.
Mar 17 17:48:58.137166 systemd[1]: Running in initrd.
Mar 17 17:48:58.137179 systemd[1]: No hostname configured, using default hostname.
Mar 17 17:48:58.137194 systemd[1]: Hostname set to .
Mar 17 17:48:58.137210 systemd[1]: Initializing machine ID from VM UUID.
Mar 17 17:48:58.137225 systemd[1]: Queued start job for default target initrd.target.
Mar 17 17:48:58.137239 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Mar 17 17:48:58.137253 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Mar 17 17:48:58.137274 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Mar 17 17:48:58.137298 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Mar 17 17:48:58.137313 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Mar 17 17:48:58.137329 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Mar 17 17:48:58.137349 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Mar 17 17:48:58.137365 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Mar 17 17:48:58.137380 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Mar 17 17:48:58.137396 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Mar 17 17:48:58.137417 systemd[1]: Reached target paths.target - Path Units.
Mar 17 17:48:58.137433 systemd[1]: Reached target slices.target - Slice Units.
Mar 17 17:48:58.137448 systemd[1]: Reached target swap.target - Swaps.
Mar 17 17:48:58.137470 systemd[1]: Reached target timers.target - Timer Units.
Mar 17 17:48:58.137485 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Mar 17 17:48:58.137499 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Mar 17 17:48:58.137521 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Mar 17 17:48:58.137538 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Mar 17 17:48:58.137555 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Mar 17 17:48:58.137572 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Mar 17 17:48:58.137589 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Mar 17 17:48:58.137606 systemd[1]: Reached target sockets.target - Socket Units. Mar 17 17:48:58.137623 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Mar 17 17:48:58.137640 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Mar 17 17:48:58.137668 systemd[1]: Finished network-cleanup.service - Network Cleanup. Mar 17 17:48:58.137685 systemd[1]: Starting systemd-fsck-usr.service... Mar 17 17:48:58.137703 systemd[1]: Starting systemd-journald.service - Journal Service... Mar 17 17:48:58.137721 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Mar 17 17:48:58.137740 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Mar 17 17:48:58.137762 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Mar 17 17:48:58.137849 systemd-journald[182]: Collecting audit messages is disabled. Mar 17 17:48:58.137897 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Mar 17 17:48:58.137914 systemd[1]: Finished systemd-fsck-usr.service. Mar 17 17:48:58.137932 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Mar 17 17:48:58.137958 systemd-journald[182]: Journal started Mar 17 17:48:58.137995 systemd-journald[182]: Runtime Journal (/run/log/journal/a69c2b688539476da286bb5fd4d98a49) is 4.9M, max 39.3M, 34.4M free. Mar 17 17:48:58.138712 systemd-modules-load[183]: Inserted module 'overlay' Mar 17 17:48:58.169321 systemd[1]: Started systemd-journald.service - Journal Service. 
Mar 17 17:48:58.170736 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Mar 17 17:48:58.185395 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Mar 17 17:48:58.171574 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Mar 17 17:48:58.191305 kernel: Bridge firewalling registered Mar 17 17:48:58.184633 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Mar 17 17:48:58.187140 systemd-modules-load[183]: Inserted module 'br_netfilter' Mar 17 17:48:58.197613 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Mar 17 17:48:58.201278 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Mar 17 17:48:58.205223 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Mar 17 17:48:58.220503 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Mar 17 17:48:58.225739 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Mar 17 17:48:58.249131 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Mar 17 17:48:58.259462 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Mar 17 17:48:58.261529 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Mar 17 17:48:58.264163 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Mar 17 17:48:58.277483 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... 
Mar 17 17:48:58.284058 dracut-cmdline[215]: dracut-dracut-053 Mar 17 17:48:58.288068 dracut-cmdline[215]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=digitalocean verity.usrhash=d4b838cd9a6f58e8c4a6b615c32b0b28ee0df1660e34033a8fbd0429c6de5fd0 Mar 17 17:48:58.324696 systemd-resolved[220]: Positive Trust Anchors: Mar 17 17:48:58.325652 systemd-resolved[220]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Mar 17 17:48:58.326423 systemd-resolved[220]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Mar 17 17:48:58.332640 systemd-resolved[220]: Defaulting to hostname 'linux'. Mar 17 17:48:58.334481 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Mar 17 17:48:58.335157 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Mar 17 17:48:58.416142 kernel: SCSI subsystem initialized Mar 17 17:48:58.429093 kernel: Loading iSCSI transport class v2.0-870. 
Mar 17 17:48:58.445063 kernel: iscsi: registered transport (tcp) Mar 17 17:48:58.473068 kernel: iscsi: registered transport (qla4xxx) Mar 17 17:48:58.473198 kernel: QLogic iSCSI HBA Driver Mar 17 17:48:58.541785 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Mar 17 17:48:58.548484 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Mar 17 17:48:58.593130 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Mar 17 17:48:58.593270 kernel: device-mapper: uevent: version 1.0.3 Mar 17 17:48:58.594067 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com Mar 17 17:48:58.655121 kernel: raid6: avx2x4 gen() 14090 MB/s Mar 17 17:48:58.672696 kernel: raid6: avx2x2 gen() 12197 MB/s Mar 17 17:48:58.690871 kernel: raid6: avx2x1 gen() 7401 MB/s Mar 17 17:48:58.691087 kernel: raid6: using algorithm avx2x4 gen() 14090 MB/s Mar 17 17:48:58.709095 kernel: raid6: .... xor() 6728 MB/s, rmw enabled Mar 17 17:48:58.709236 kernel: raid6: using avx2x2 recovery algorithm Mar 17 17:48:58.743086 kernel: xor: automatically using best checksumming function avx Mar 17 17:48:58.972055 kernel: Btrfs loaded, zoned=no, fsverity=no Mar 17 17:48:58.990770 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Mar 17 17:48:58.997315 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Mar 17 17:48:59.034106 systemd-udevd[401]: Using default interface naming scheme 'v255'. Mar 17 17:48:59.042822 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Mar 17 17:48:59.052883 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Mar 17 17:48:59.081854 dracut-pre-trigger[405]: rd.md=0: removing MD RAID activation Mar 17 17:48:59.134584 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. 
Mar 17 17:48:59.146434 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Mar 17 17:48:59.230377 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Mar 17 17:48:59.239691 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Mar 17 17:48:59.281876 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Mar 17 17:48:59.289150 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Mar 17 17:48:59.290453 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Mar 17 17:48:59.292135 systemd[1]: Reached target remote-fs.target - Remote File Systems. Mar 17 17:48:59.303639 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Mar 17 17:48:59.337778 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Mar 17 17:48:59.390060 kernel: virtio_blk virtio4: 1/0/0 default/read/poll queues Mar 17 17:48:59.530360 kernel: scsi host0: Virtio SCSI HBA Mar 17 17:48:59.530693 kernel: cryptd: max_cpu_qlen set to 1000 Mar 17 17:48:59.530721 kernel: virtio_blk virtio4: [vda] 125829120 512-byte logical blocks (64.4 GB/60.0 GiB) Mar 17 17:48:59.530924 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Mar 17 17:48:59.530948 kernel: AVX2 version of gcm_enc/dec engaged. Mar 17 17:48:59.530968 kernel: GPT:9289727 != 125829119 Mar 17 17:48:59.530989 kernel: GPT:Alternate GPT header not at the end of the disk. Mar 17 17:48:59.533760 kernel: GPT:9289727 != 125829119 Mar 17 17:48:59.533835 kernel: AES CTR mode by8 optimization enabled Mar 17 17:48:59.533860 kernel: GPT: Use GNU Parted to correct GPT errors. 
Mar 17 17:48:59.533883 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Mar 17 17:48:59.533904 kernel: ACPI: bus type USB registered Mar 17 17:48:59.533926 kernel: usbcore: registered new interface driver usbfs Mar 17 17:48:59.533951 kernel: usbcore: registered new interface driver hub Mar 17 17:48:59.533973 kernel: usbcore: registered new device driver usb Mar 17 17:48:59.533995 kernel: virtio_blk virtio5: 1/0/0 default/read/poll queues Mar 17 17:48:59.576257 kernel: virtio_blk virtio5: [vdb] 932 512-byte logical blocks (477 kB/466 KiB) Mar 17 17:48:59.577322 kernel: libata version 3.00 loaded. Mar 17 17:48:59.510777 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Mar 17 17:48:59.511096 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Mar 17 17:48:59.513123 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Mar 17 17:48:59.514109 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Mar 17 17:48:59.514462 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Mar 17 17:48:59.515257 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Mar 17 17:48:59.527679 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
Mar 17 17:48:59.599162 kernel: ata_piix 0000:00:01.1: version 2.13 Mar 17 17:48:59.608305 kernel: scsi host1: ata_piix Mar 17 17:48:59.608978 kernel: scsi host2: ata_piix Mar 17 17:48:59.609325 kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc1e0 irq 14 Mar 17 17:48:59.609353 kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc1e8 irq 15 Mar 17 17:48:59.631042 kernel: uhci_hcd 0000:00:01.2: UHCI Host Controller Mar 17 17:48:59.633267 kernel: uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1 Mar 17 17:48:59.633575 kernel: uhci_hcd 0000:00:01.2: detected 2 ports Mar 17 17:48:59.633814 kernel: uhci_hcd 0000:00:01.2: irq 11, io port 0x0000c180 Mar 17 17:48:59.634070 kernel: hub 1-0:1.0: USB hub found Mar 17 17:48:59.634295 kernel: hub 1-0:1.0: 2 ports detected Mar 17 17:48:59.673438 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Mar 17 17:48:59.686778 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/vda6 scanned by (udev-worker) (446) Mar 17 17:48:59.684903 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Mar 17 17:48:59.693047 kernel: BTRFS: device fsid 2b8ebefd-e897-48f6-96d5-0893fbb7c64a devid 1 transid 40 /dev/vda3 scanned by (udev-worker) (444) Mar 17 17:48:59.702701 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Mar 17 17:48:59.721265 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Mar 17 17:48:59.727288 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Mar 17 17:48:59.728073 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132. Mar 17 17:48:59.735107 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... 
Mar 17 17:48:59.739819 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Mar 17 17:48:59.750802 disk-uuid[538]: Primary Header is updated. Mar 17 17:48:59.750802 disk-uuid[538]: Secondary Entries is updated. Mar 17 17:48:59.750802 disk-uuid[538]: Secondary Header is updated. Mar 17 17:48:59.763132 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Mar 17 17:48:59.793735 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Mar 17 17:48:59.797103 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Mar 17 17:49:00.784891 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Mar 17 17:49:00.785038 disk-uuid[539]: The operation has completed successfully. Mar 17 17:49:00.861725 systemd[1]: disk-uuid.service: Deactivated successfully. Mar 17 17:49:00.861975 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Mar 17 17:49:00.889507 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Mar 17 17:49:00.896109 sh[558]: Success Mar 17 17:49:00.920167 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Mar 17 17:49:01.008208 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Mar 17 17:49:01.029546 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Mar 17 17:49:01.037900 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
Mar 17 17:49:01.059206 kernel: BTRFS info (device dm-0): first mount of filesystem 2b8ebefd-e897-48f6-96d5-0893fbb7c64a Mar 17 17:49:01.059348 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Mar 17 17:49:01.059374 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Mar 17 17:49:01.060820 kernel: BTRFS info (device dm-0): disabling log replay at mount time Mar 17 17:49:01.062088 kernel: BTRFS info (device dm-0): using free space tree Mar 17 17:49:01.074552 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Mar 17 17:49:01.076236 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Mar 17 17:49:01.081421 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Mar 17 17:49:01.093535 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Mar 17 17:49:01.115139 kernel: BTRFS info (device vda6): first mount of filesystem 7b241d32-136b-4fe3-b105-cecff2b2cf64 Mar 17 17:49:01.115240 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Mar 17 17:49:01.115281 kernel: BTRFS info (device vda6): using free space tree Mar 17 17:49:01.119056 kernel: BTRFS info (device vda6): auto enabling async discard Mar 17 17:49:01.136706 systemd[1]: mnt-oem.mount: Deactivated successfully. Mar 17 17:49:01.138102 kernel: BTRFS info (device vda6): last unmount of filesystem 7b241d32-136b-4fe3-b105-cecff2b2cf64 Mar 17 17:49:01.145723 systemd[1]: Finished ignition-setup.service - Ignition (setup). Mar 17 17:49:01.156442 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Mar 17 17:49:01.279751 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Mar 17 17:49:01.285747 systemd[1]: Starting systemd-networkd.service - Network Configuration... 
Mar 17 17:49:01.328332 ignition[653]: Ignition 2.20.0 Mar 17 17:49:01.329384 ignition[653]: Stage: fetch-offline Mar 17 17:49:01.329952 ignition[653]: no configs at "/usr/lib/ignition/base.d" Mar 17 17:49:01.329969 ignition[653]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Mar 17 17:49:01.330833 ignition[653]: parsed url from cmdline: "" Mar 17 17:49:01.330840 ignition[653]: no config URL provided Mar 17 17:49:01.330851 ignition[653]: reading system config file "/usr/lib/ignition/user.ign" Mar 17 17:49:01.330870 ignition[653]: no config at "/usr/lib/ignition/user.ign" Mar 17 17:49:01.330880 ignition[653]: failed to fetch config: resource requires networking Mar 17 17:49:01.331265 ignition[653]: Ignition finished successfully Mar 17 17:49:01.336615 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Mar 17 17:49:01.341745 systemd-networkd[744]: lo: Link UP Mar 17 17:49:01.341765 systemd-networkd[744]: lo: Gained carrier Mar 17 17:49:01.345521 systemd-networkd[744]: Enumeration completed Mar 17 17:49:01.346142 systemd-networkd[744]: eth0: found matching network '/usr/lib/systemd/network/yy-digitalocean.network', based on potentially unpredictable interface name. Mar 17 17:49:01.346147 systemd-networkd[744]: eth0: Configuring with /usr/lib/systemd/network/yy-digitalocean.network. Mar 17 17:49:01.348329 systemd[1]: Started systemd-networkd.service - Network Configuration. Mar 17 17:49:01.348365 systemd-networkd[744]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 17 17:49:01.348371 systemd-networkd[744]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. Mar 17 17:49:01.349426 systemd[1]: Reached target network.target - Network. 
Mar 17 17:49:01.349953 systemd-networkd[744]: eth0: Link UP Mar 17 17:49:01.349961 systemd-networkd[744]: eth0: Gained carrier Mar 17 17:49:01.349976 systemd-networkd[744]: eth0: found matching network '/usr/lib/systemd/network/yy-digitalocean.network', based on potentially unpredictable interface name. Mar 17 17:49:01.354793 systemd-networkd[744]: eth1: Link UP Mar 17 17:49:01.354799 systemd-networkd[744]: eth1: Gained carrier Mar 17 17:49:01.354820 systemd-networkd[744]: eth1: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Mar 17 17:49:01.357541 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Mar 17 17:49:01.367242 systemd-networkd[744]: eth1: DHCPv4 address 10.124.0.16/20 acquired from 169.254.169.253 Mar 17 17:49:01.373177 systemd-networkd[744]: eth0: DHCPv4 address 209.38.135.89/19, gateway 209.38.128.1 acquired from 169.254.169.253 Mar 17 17:49:01.382508 ignition[752]: Ignition 2.20.0 Mar 17 17:49:01.382522 ignition[752]: Stage: fetch Mar 17 17:49:01.382818 ignition[752]: no configs at "/usr/lib/ignition/base.d" Mar 17 17:49:01.382833 ignition[752]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Mar 17 17:49:01.382972 ignition[752]: parsed url from cmdline: "" Mar 17 17:49:01.382978 ignition[752]: no config URL provided Mar 17 17:49:01.382986 ignition[752]: reading system config file "/usr/lib/ignition/user.ign" Mar 17 17:49:01.382997 ignition[752]: no config at "/usr/lib/ignition/user.ign" Mar 17 17:49:01.383054 ignition[752]: GET http://169.254.169.254/metadata/v1/user-data: attempt #1 Mar 17 17:49:01.400099 ignition[752]: GET result: OK Mar 17 17:49:01.400273 ignition[752]: parsing config with SHA512: 9068e3a384621f7b53dde1217e087ae594a19d873b60b3303633f8321cd1b157b2ae10d08e2bc5adf6f16b444e06b4db15709d1276fcb3daec71c5fc64f20360 Mar 17 17:49:01.406792 unknown[752]: fetched base config from "system" Mar 17 17:49:01.406808 unknown[752]: fetched base config from "system" Mar 17 17:49:01.407289 ignition[752]: fetch: fetch complete Mar 17 17:49:01.406818 unknown[752]: fetched user config from "digitalocean" Mar 17 17:49:01.407299 ignition[752]: fetch: fetch passed Mar 17 17:49:01.410841 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Mar 17 17:49:01.407401 ignition[752]: Ignition finished successfully Mar 17 17:49:01.417377 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Mar 17 17:49:01.455326 ignition[760]: Ignition 2.20.0 Mar 17 17:49:01.455341 ignition[760]: Stage: kargs Mar 17 17:49:01.455801 ignition[760]: no configs at "/usr/lib/ignition/base.d" Mar 17 17:49:01.455821 ignition[760]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Mar 17 17:49:01.459434 ignition[760]: kargs: kargs passed Mar 17 17:49:01.460047 ignition[760]: Ignition finished successfully Mar 17 17:49:01.462319 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Mar 17 17:49:01.468478 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Mar 17 17:49:01.502629 ignition[766]: Ignition 2.20.0 Mar 17 17:49:01.502642 ignition[766]: Stage: disks Mar 17 17:49:01.502893 ignition[766]: no configs at "/usr/lib/ignition/base.d" Mar 17 17:49:01.505379 systemd[1]: Finished ignition-disks.service - Ignition (disks). Mar 17 17:49:01.502909 ignition[766]: no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Mar 17 17:49:01.503849 ignition[766]: disks: disks passed Mar 17 17:49:01.506560 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Mar 17 17:49:01.503908 ignition[766]: Ignition finished successfully Mar 17 17:49:01.510981 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Mar 17 17:49:01.511801 systemd[1]: Reached target local-fs.target - Local File Systems. Mar 17 17:49:01.512531 systemd[1]: Reached target sysinit.target - System Initialization.
Mar 17 17:49:01.513255 systemd[1]: Reached target basic.target - Basic System. Mar 17 17:49:01.530411 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Mar 17 17:49:01.548350 systemd-fsck[776]: ROOT: clean, 14/553520 files, 52654/553472 blocks Mar 17 17:49:01.552116 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Mar 17 17:49:01.558203 systemd[1]: Mounting sysroot.mount - /sysroot... Mar 17 17:49:01.676118 kernel: EXT4-fs (vda9): mounted filesystem 345fc709-8965-4219-b368-16e508c3d632 r/w with ordered data mode. Quota mode: none. Mar 17 17:49:01.677160 systemd[1]: Mounted sysroot.mount - /sysroot. Mar 17 17:49:01.678515 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Mar 17 17:49:01.695246 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Mar 17 17:49:01.698167 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Mar 17 17:49:01.701315 systemd[1]: Starting flatcar-afterburn-network.service - Flatcar Afterburn network service... Mar 17 17:49:01.710046 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/vda6 scanned by mount (784) Mar 17 17:49:01.712474 kernel: BTRFS info (device vda6): first mount of filesystem 7b241d32-136b-4fe3-b105-cecff2b2cf64 Mar 17 17:49:01.712566 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Mar 17 17:49:01.712592 kernel: BTRFS info (device vda6): using free space tree Mar 17 17:49:01.715516 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Mar 17 17:49:01.717427 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Mar 17 17:49:01.721411 kernel: BTRFS info (device vda6): auto enabling async discard Mar 17 17:49:01.718148 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. 
Mar 17 17:49:01.733090 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Mar 17 17:49:01.734273 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Mar 17 17:49:01.743934 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Mar 17 17:49:01.817051 coreos-metadata[787]: Mar 17 17:49:01.816 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Mar 17 17:49:01.821800 initrd-setup-root[814]: cut: /sysroot/etc/passwd: No such file or directory Mar 17 17:49:01.828147 coreos-metadata[787]: Mar 17 17:49:01.828 INFO Fetch successful Mar 17 17:49:01.835859 coreos-metadata[787]: Mar 17 17:49:01.835 INFO wrote hostname ci-4152.2.2-e-40efa8f9ae to /sysroot/etc/hostname Mar 17 17:49:01.838917 initrd-setup-root[821]: cut: /sysroot/etc/group: No such file or directory Mar 17 17:49:01.837328 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Mar 17 17:49:01.844411 coreos-metadata[786]: Mar 17 17:49:01.844 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1 Mar 17 17:49:01.847498 initrd-setup-root[829]: cut: /sysroot/etc/shadow: No such file or directory Mar 17 17:49:01.854341 initrd-setup-root[836]: cut: /sysroot/etc/gshadow: No such file or directory Mar 17 17:49:01.857257 coreos-metadata[786]: Mar 17 17:49:01.855 INFO Fetch successful Mar 17 17:49:01.865316 systemd[1]: flatcar-afterburn-network.service: Deactivated successfully. Mar 17 17:49:01.866332 systemd[1]: Finished flatcar-afterburn-network.service - Flatcar Afterburn network service. Mar 17 17:49:01.991694 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Mar 17 17:49:01.998224 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Mar 17 17:49:02.004426 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... 
Mar 17 17:49:02.019043 kernel: BTRFS info (device vda6): last unmount of filesystem 7b241d32-136b-4fe3-b105-cecff2b2cf64 Mar 17 17:49:02.044079 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Mar 17 17:49:02.057214 systemd[1]: sysroot-oem.mount: Deactivated successfully. Mar 17 17:49:02.062083 ignition[906]: INFO : Ignition 2.20.0 Mar 17 17:49:02.062083 ignition[906]: INFO : Stage: mount Mar 17 17:49:02.062083 ignition[906]: INFO : no configs at "/usr/lib/ignition/base.d" Mar 17 17:49:02.062083 ignition[906]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Mar 17 17:49:02.065064 ignition[906]: INFO : mount: mount passed Mar 17 17:49:02.065064 ignition[906]: INFO : Ignition finished successfully Mar 17 17:49:02.066247 systemd[1]: Finished ignition-mount.service - Ignition (mount). Mar 17 17:49:02.073244 systemd[1]: Starting ignition-files.service - Ignition (files)... Mar 17 17:49:02.093522 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Mar 17 17:49:02.107080 kernel: BTRFS: device label OEM devid 1 transid 17 /dev/vda6 scanned by mount (917) Mar 17 17:49:02.110422 kernel: BTRFS info (device vda6): first mount of filesystem 7b241d32-136b-4fe3-b105-cecff2b2cf64 Mar 17 17:49:02.110555 kernel: BTRFS info (device vda6): using crc32c (crc32c-intel) checksum algorithm Mar 17 17:49:02.110582 kernel: BTRFS info (device vda6): using free space tree Mar 17 17:49:02.117079 kernel: BTRFS info (device vda6): auto enabling async discard Mar 17 17:49:02.120777 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Mar 17 17:49:02.166267 ignition[934]: INFO : Ignition 2.20.0 Mar 17 17:49:02.166267 ignition[934]: INFO : Stage: files Mar 17 17:49:02.167644 ignition[934]: INFO : no configs at "/usr/lib/ignition/base.d" Mar 17 17:49:02.167644 ignition[934]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Mar 17 17:49:02.169116 ignition[934]: DEBUG : files: compiled without relabeling support, skipping Mar 17 17:49:02.169798 ignition[934]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Mar 17 17:49:02.169798 ignition[934]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Mar 17 17:49:02.175292 ignition[934]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Mar 17 17:49:02.176460 ignition[934]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Mar 17 17:49:02.177522 ignition[934]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Mar 17 17:49:02.176579 unknown[934]: wrote ssh authorized keys file for user: core Mar 17 17:49:02.179250 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1" Mar 17 17:49:02.180090 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1" Mar 17 17:49:02.180090 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Mar 17 17:49:02.180090 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Mar 17 17:49:02.182504 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/etc/flatcar/update.conf" Mar 17 17:49:02.182504 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/etc/flatcar/update.conf" Mar 17 17:49:02.182504 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Mar 17 17:49:02.182504 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Mar 17 17:49:02.182504 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Mar 17 17:49:02.182504 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(7): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw: attempt #1 Mar 17 17:49:02.545146 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(7): GET result: OK Mar 17 17:49:02.606351 systemd-networkd[744]: eth0: Gained IPv6LL Mar 17 17:49:02.825446 ignition[934]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Mar 17 17:49:02.825446 ignition[934]: INFO : files: op(8): [started] processing unit "containerd.service" Mar 17 17:49:02.827500 ignition[934]: INFO : files: op(8): op(9): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Mar 17 17:49:02.827500 ignition[934]: INFO : files: op(8): op(9): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Mar 17 17:49:02.827500 ignition[934]: INFO : files: op(8): [finished] processing unit "containerd.service" Mar 17 17:49:02.827500 ignition[934]: INFO : files: createResultFile: createFiles: op(a): [started] writing file "/sysroot/etc/.ignition-result.json" Mar 17 17:49:02.827500 ignition[934]: INFO : files: createResultFile: createFiles: op(a): [finished] writing file "/sysroot/etc/.ignition-result.json" Mar 17 17:49:02.827500 ignition[934]: INFO : files: files passed Mar 17 17:49:02.827500 ignition[934]: INFO : Ignition finished successfully Mar 17 17:49:02.829527 systemd[1]: Finished ignition-files.service - Ignition (files). Mar 17 17:49:02.840466 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Mar 17 17:49:02.845047 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Mar 17 17:49:02.846325 systemd[1]: ignition-quench.service: Deactivated successfully. Mar 17 17:49:02.846472 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Mar 17 17:49:02.873294 initrd-setup-root-after-ignition[963]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Mar 17 17:49:02.873294 initrd-setup-root-after-ignition[963]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Mar 17 17:49:02.876729 initrd-setup-root-after-ignition[967]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Mar 17 17:49:02.879839 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Mar 17 17:49:02.881237 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Mar 17 17:49:02.885367 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Mar 17 17:49:02.932336 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Mar 17 17:49:02.932486 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Mar 17 17:49:02.933627 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Mar 17 17:49:02.934150 systemd[1]: Reached target initrd.target - Initrd Default Target. Mar 17 17:49:02.934908 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Mar 17 17:49:02.944306 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Mar 17 17:49:02.964740 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Mar 17 17:49:02.972361 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Mar 17 17:49:02.997052 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Mar 17 17:49:02.997683 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Mar 17 17:49:02.999465 systemd[1]: Stopped target timers.target - Timer Units. Mar 17 17:49:03.000836 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Mar 17 17:49:03.001116 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Mar 17 17:49:03.002347 systemd[1]: Stopped target initrd.target - Initrd Default Target. Mar 17 17:49:03.003080 systemd[1]: Stopped target basic.target - Basic System. Mar 17 17:49:03.003802 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Mar 17 17:49:03.004418 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Mar 17 17:49:03.005281 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Mar 17 17:49:03.006295 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Mar 17 17:49:03.007209 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Mar 17 17:49:03.008336 systemd[1]: Stopped target sysinit.target - System Initialization. Mar 17 17:49:03.009161 systemd[1]: Stopped target local-fs.target - Local File Systems. Mar 17 17:49:03.010076 systemd[1]: Stopped target swap.target - Swaps. Mar 17 17:49:03.010952 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Mar 17 17:49:03.011242 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Mar 17 17:49:03.012828 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. 
Mar 17 17:49:03.013603 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Mar 17 17:49:03.014573 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Mar 17 17:49:03.014733 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Mar 17 17:49:03.015711 systemd[1]: dracut-initqueue.service: Deactivated successfully. Mar 17 17:49:03.016036 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Mar 17 17:49:03.017536 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Mar 17 17:49:03.017737 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Mar 17 17:49:03.018796 systemd[1]: ignition-files.service: Deactivated successfully. Mar 17 17:49:03.018924 systemd[1]: Stopped ignition-files.service - Ignition (files). Mar 17 17:49:03.019650 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Mar 17 17:49:03.019814 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Mar 17 17:49:03.028374 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Mar 17 17:49:03.033333 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Mar 17 17:49:03.034161 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Mar 17 17:49:03.034355 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Mar 17 17:49:03.034901 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Mar 17 17:49:03.036307 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Mar 17 17:49:03.043497 systemd[1]: initrd-cleanup.service: Deactivated successfully. Mar 17 17:49:03.045402 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. 
Mar 17 17:49:03.058861 ignition[987]: INFO : Ignition 2.20.0 Mar 17 17:49:03.058861 ignition[987]: INFO : Stage: umount Mar 17 17:49:03.060181 ignition[987]: INFO : no configs at "/usr/lib/ignition/base.d" Mar 17 17:49:03.060181 ignition[987]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/digitalocean" Mar 17 17:49:03.061797 ignition[987]: INFO : umount: umount passed Mar 17 17:49:03.061797 ignition[987]: INFO : Ignition finished successfully Mar 17 17:49:03.064796 systemd[1]: ignition-mount.service: Deactivated successfully. Mar 17 17:49:03.064981 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Mar 17 17:49:03.065684 systemd[1]: ignition-disks.service: Deactivated successfully. Mar 17 17:49:03.065736 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Mar 17 17:49:03.066767 systemd[1]: ignition-kargs.service: Deactivated successfully. Mar 17 17:49:03.066821 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Mar 17 17:49:03.067328 systemd[1]: ignition-fetch.service: Deactivated successfully. Mar 17 17:49:03.067382 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Mar 17 17:49:03.067863 systemd[1]: Stopped target network.target - Network. Mar 17 17:49:03.068232 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Mar 17 17:49:03.068297 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Mar 17 17:49:03.069144 systemd[1]: Stopped target paths.target - Path Units. Mar 17 17:49:03.072227 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Mar 17 17:49:03.074467 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Mar 17 17:49:03.075234 systemd[1]: Stopped target slices.target - Slice Units. Mar 17 17:49:03.075852 systemd[1]: Stopped target sockets.target - Socket Units. Mar 17 17:49:03.079340 systemd[1]: iscsid.socket: Deactivated successfully. 
Mar 17 17:49:03.079403 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Mar 17 17:49:03.080202 systemd[1]: iscsiuio.socket: Deactivated successfully. Mar 17 17:49:03.080251 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Mar 17 17:49:03.080609 systemd[1]: ignition-setup.service: Deactivated successfully. Mar 17 17:49:03.080665 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Mar 17 17:49:03.102777 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Mar 17 17:49:03.103153 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Mar 17 17:49:03.103951 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Mar 17 17:49:03.104437 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Mar 17 17:49:03.106343 systemd[1]: sysroot-boot.mount: Deactivated successfully. Mar 17 17:49:03.107339 systemd[1]: sysroot-boot.service: Deactivated successfully. Mar 17 17:49:03.107470 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Mar 17 17:49:03.108774 systemd[1]: initrd-setup-root.service: Deactivated successfully. Mar 17 17:49:03.108899 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Mar 17 17:49:03.111109 systemd-networkd[744]: eth0: DHCPv6 lease lost Mar 17 17:49:03.114381 systemd[1]: systemd-resolved.service: Deactivated successfully. Mar 17 17:49:03.114556 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Mar 17 17:49:03.117140 systemd-networkd[744]: eth1: DHCPv6 lease lost Mar 17 17:49:03.118547 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Mar 17 17:49:03.118625 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Mar 17 17:49:03.120316 systemd[1]: systemd-networkd.service: Deactivated successfully. Mar 17 17:49:03.120526 systemd[1]: Stopped systemd-networkd.service - Network Configuration. 
Mar 17 17:49:03.122256 systemd[1]: systemd-networkd.socket: Deactivated successfully. Mar 17 17:49:03.122347 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Mar 17 17:49:03.128234 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Mar 17 17:49:03.128653 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Mar 17 17:49:03.128736 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Mar 17 17:49:03.129182 systemd[1]: systemd-sysctl.service: Deactivated successfully. Mar 17 17:49:03.129228 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Mar 17 17:49:03.130122 systemd[1]: systemd-modules-load.service: Deactivated successfully. Mar 17 17:49:03.130210 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Mar 17 17:49:03.132841 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Mar 17 17:49:03.145553 systemd[1]: systemd-udevd.service: Deactivated successfully. Mar 17 17:49:03.146514 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Mar 17 17:49:03.149762 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Mar 17 17:49:03.150552 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Mar 17 17:49:03.151728 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Mar 17 17:49:03.151783 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Mar 17 17:49:03.152437 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Mar 17 17:49:03.152508 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Mar 17 17:49:03.153080 systemd[1]: dracut-cmdline.service: Deactivated successfully. Mar 17 17:49:03.153129 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Mar 17 17:49:03.154184 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. 
Mar 17 17:49:03.154244 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Mar 17 17:49:03.159636 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Mar 17 17:49:03.160150 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Mar 17 17:49:03.160235 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Mar 17 17:49:03.160811 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Mar 17 17:49:03.160863 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Mar 17 17:49:03.161385 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Mar 17 17:49:03.161442 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Mar 17 17:49:03.162173 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Mar 17 17:49:03.162221 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Mar 17 17:49:03.163602 systemd[1]: network-cleanup.service: Deactivated successfully. Mar 17 17:49:03.163803 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Mar 17 17:49:03.181246 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Mar 17 17:49:03.181410 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Mar 17 17:49:03.183436 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Mar 17 17:49:03.188372 systemd[1]: Starting initrd-switch-root.service - Switch Root... Mar 17 17:49:03.201359 systemd[1]: Switching root. Mar 17 17:49:03.234545 systemd-journald[182]: Journal stopped Mar 17 17:49:04.497518 systemd-journald[182]: Received SIGTERM from PID 1 (systemd). 
Mar 17 17:49:04.497623 kernel: SELinux: policy capability network_peer_controls=1 Mar 17 17:49:04.497645 kernel: SELinux: policy capability open_perms=1 Mar 17 17:49:04.497671 kernel: SELinux: policy capability extended_socket_class=1 Mar 17 17:49:04.497688 kernel: SELinux: policy capability always_check_network=0 Mar 17 17:49:04.497705 kernel: SELinux: policy capability cgroup_seclabel=1 Mar 17 17:49:04.497723 kernel: SELinux: policy capability nnp_nosuid_transition=1 Mar 17 17:49:04.497739 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Mar 17 17:49:04.497755 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Mar 17 17:49:04.497770 kernel: audit: type=1403 audit(1742233743.459:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Mar 17 17:49:04.497788 systemd[1]: Successfully loaded SELinux policy in 47.407ms. Mar 17 17:49:04.497835 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 17.293ms. Mar 17 17:49:04.497856 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Mar 17 17:49:04.497873 systemd[1]: Detected virtualization kvm. Mar 17 17:49:04.497891 systemd[1]: Detected architecture x86-64. Mar 17 17:49:04.497916 systemd[1]: Detected first boot. Mar 17 17:49:04.497933 systemd[1]: Hostname set to . Mar 17 17:49:04.497951 systemd[1]: Initializing machine ID from VM UUID. Mar 17 17:49:04.497970 zram_generator::config[1050]: No configuration found. Mar 17 17:49:04.497992 systemd[1]: Populated /etc with preset unit settings. Mar 17 17:49:04.498050 systemd[1]: Queued start job for default target multi-user.target. Mar 17 17:49:04.498068 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. 
Mar 17 17:49:04.498086 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Mar 17 17:49:04.498103 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Mar 17 17:49:04.498120 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Mar 17 17:49:04.498139 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Mar 17 17:49:04.498186 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Mar 17 17:49:04.498205 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Mar 17 17:49:04.498231 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Mar 17 17:49:04.498251 systemd[1]: Created slice user.slice - User and Session Slice. Mar 17 17:49:04.498269 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Mar 17 17:49:04.498289 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Mar 17 17:49:04.498308 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Mar 17 17:49:04.498327 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Mar 17 17:49:04.498355 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Mar 17 17:49:04.498373 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Mar 17 17:49:04.498392 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Mar 17 17:49:04.498414 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Mar 17 17:49:04.498434 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Mar 17 17:49:04.498452 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. 
Mar 17 17:49:04.498472 systemd[1]: Reached target remote-fs.target - Remote File Systems. Mar 17 17:49:04.498491 systemd[1]: Reached target slices.target - Slice Units. Mar 17 17:49:04.498509 systemd[1]: Reached target swap.target - Swaps. Mar 17 17:49:04.498534 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Mar 17 17:49:04.498552 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Mar 17 17:49:04.498570 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Mar 17 17:49:04.498590 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Mar 17 17:49:04.498626 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Mar 17 17:49:04.498643 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Mar 17 17:49:04.498662 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Mar 17 17:49:04.498689 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Mar 17 17:49:04.498707 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Mar 17 17:49:04.498729 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Mar 17 17:49:04.498750 systemd[1]: Mounting media.mount - External Media Directory... Mar 17 17:49:04.498771 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 17 17:49:04.498790 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Mar 17 17:49:04.498808 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Mar 17 17:49:04.498827 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Mar 17 17:49:04.498846 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Mar 17 17:49:04.498865 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. 
Mar 17 17:49:04.498885 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Mar 17 17:49:04.498909 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Mar 17 17:49:04.498929 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Mar 17 17:49:04.498950 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Mar 17 17:49:04.498969 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Mar 17 17:49:04.498986 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Mar 17 17:49:04.499003 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Mar 17 17:49:04.499064 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Mar 17 17:49:04.499083 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling. Mar 17 17:49:04.499108 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.) Mar 17 17:49:04.499141 systemd[1]: Starting systemd-journald.service - Journal Service... Mar 17 17:49:04.499158 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Mar 17 17:49:04.499176 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Mar 17 17:49:04.499194 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Mar 17 17:49:04.499212 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Mar 17 17:49:04.499232 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 17 17:49:04.499252 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. 
Mar 17 17:49:04.499277 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Mar 17 17:49:04.499295 systemd[1]: Mounted media.mount - External Media Directory. Mar 17 17:49:04.499313 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Mar 17 17:49:04.499331 kernel: loop: module loaded Mar 17 17:49:04.499353 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Mar 17 17:49:04.499372 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Mar 17 17:49:04.499390 kernel: fuse: init (API version 7.39) Mar 17 17:49:04.499409 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Mar 17 17:49:04.499479 systemd-journald[1144]: Collecting audit messages is disabled. Mar 17 17:49:04.499545 systemd[1]: modprobe@configfs.service: Deactivated successfully. Mar 17 17:49:04.499568 systemd-journald[1144]: Journal started Mar 17 17:49:04.499605 systemd-journald[1144]: Runtime Journal (/run/log/journal/a69c2b688539476da286bb5fd4d98a49) is 4.9M, max 39.3M, 34.4M free. Mar 17 17:49:04.505040 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Mar 17 17:49:04.505139 systemd[1]: Started systemd-journald.service - Journal Service. Mar 17 17:49:04.508660 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 17 17:49:04.508898 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Mar 17 17:49:04.509601 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 17 17:49:04.509795 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Mar 17 17:49:04.510537 systemd[1]: modprobe@fuse.service: Deactivated successfully. Mar 17 17:49:04.510759 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Mar 17 17:49:04.511501 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 17 17:49:04.511683 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. 
Mar 17 17:49:04.512429 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Mar 17 17:49:04.513314 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Mar 17 17:49:04.527446 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Mar 17 17:49:04.532975 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Mar 17 17:49:04.541920 systemd[1]: Reached target network-pre.target - Preparation for Network. Mar 17 17:49:04.543022 kernel: ACPI: bus type drm_connector registered Mar 17 17:49:04.551372 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Mar 17 17:49:04.560177 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Mar 17 17:49:04.560804 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Mar 17 17:49:04.577416 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Mar 17 17:49:04.580709 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Mar 17 17:49:04.581147 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Mar 17 17:49:04.588830 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Mar 17 17:49:04.589435 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Mar 17 17:49:04.608236 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Mar 17 17:49:04.615410 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... 
Mar 17 17:49:04.631175 systemd-journald[1144]: Time spent on flushing to /var/log/journal/a69c2b688539476da286bb5fd4d98a49 is 39.368ms for 955 entries. Mar 17 17:49:04.631175 systemd-journald[1144]: System Journal (/var/log/journal/a69c2b688539476da286bb5fd4d98a49) is 8.0M, max 195.6M, 187.6M free. Mar 17 17:49:04.697529 systemd-journald[1144]: Received client request to flush runtime journal. Mar 17 17:49:04.630730 systemd[1]: modprobe@drm.service: Deactivated successfully. Mar 17 17:49:04.632967 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Mar 17 17:49:04.644412 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Mar 17 17:49:04.645077 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Mar 17 17:49:04.661696 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Mar 17 17:49:04.662384 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Mar 17 17:49:04.705220 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Mar 17 17:49:04.721757 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Mar 17 17:49:04.729951 systemd-tmpfiles[1187]: ACLs are not supported, ignoring. Mar 17 17:49:04.733104 systemd-tmpfiles[1187]: ACLs are not supported, ignoring. Mar 17 17:49:04.741312 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Mar 17 17:49:04.755304 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Mar 17 17:49:04.756635 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Mar 17 17:49:04.778209 systemd[1]: Starting systemd-sysusers.service - Create System Users... Mar 17 17:49:04.796312 udevadm[1204]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. 
Mar 17 17:49:04.831458 systemd[1]: Finished systemd-sysusers.service - Create System Users. Mar 17 17:49:04.842428 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Mar 17 17:49:04.873943 systemd-tmpfiles[1211]: ACLs are not supported, ignoring. Mar 17 17:49:04.873979 systemd-tmpfiles[1211]: ACLs are not supported, ignoring. Mar 17 17:49:04.885906 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Mar 17 17:49:05.628717 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Mar 17 17:49:05.640349 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Mar 17 17:49:05.673952 systemd-udevd[1217]: Using default interface naming scheme 'v255'. Mar 17 17:49:05.700203 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Mar 17 17:49:05.711234 systemd[1]: Starting systemd-networkd.service - Network Configuration... Mar 17 17:49:05.744264 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Mar 17 17:49:05.777378 systemd[1]: Found device dev-ttyS0.device - /dev/ttyS0. Mar 17 17:49:05.837457 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 17 17:49:05.837693 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Mar 17 17:49:05.846298 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Mar 17 17:49:05.860323 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Mar 17 17:49:05.875546 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Mar 17 17:49:05.878159 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). 
Mar 17 17:49:05.878227 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Mar 17 17:49:05.878305 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Mar 17 17:49:05.878869 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Mar 17 17:49:05.884276 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Mar 17 17:49:05.899440 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Mar 17 17:49:05.899701 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Mar 17 17:49:05.904806 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Mar 17 17:49:05.906844 systemd[1]: Started systemd-userdbd.service - User Database Manager. Mar 17 17:49:05.911400 systemd[1]: modprobe@loop.service: Deactivated successfully. Mar 17 17:49:05.914317 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Mar 17 17:49:05.920327 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. 
Mar 17 17:49:05.948051 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (1225) Mar 17 17:49:05.958044 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Mar 17 17:49:05.972060 kernel: ACPI: button: Power Button [PWRF] Mar 17 17:49:05.978033 kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0 Mar 17 17:49:06.051232 systemd-networkd[1222]: lo: Link UP Mar 17 17:49:06.051247 systemd-networkd[1222]: lo: Gained carrier Mar 17 17:49:06.055912 systemd-networkd[1222]: Enumeration completed Mar 17 17:49:06.056634 systemd-networkd[1222]: eth0: Configuring with /run/systemd/network/10-06:1e:cc:a9:83:6a.network. Mar 17 17:49:06.057828 systemd-networkd[1222]: eth1: Configuring with /run/systemd/network/10-12:c2:d5:cc:9e:5a.network. Mar 17 17:49:06.059056 systemd-networkd[1222]: eth0: Link UP Mar 17 17:49:06.059154 systemd-networkd[1222]: eth0: Gained carrier Mar 17 17:49:06.063138 systemd[1]: Started systemd-networkd.service - Network Configuration. Mar 17 17:49:06.064421 systemd-networkd[1222]: eth1: Link UP Mar 17 17:49:06.065118 systemd-networkd[1222]: eth1: Gained carrier Mar 17 17:49:06.075385 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... 
Mar 17 17:49:06.128696 kernel: [drm] pci: virtio-vga detected at 0000:00:02.0 Mar 17 17:49:06.128817 kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console Mar 17 17:49:06.137634 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Mar 17 17:49:06.158185 kernel: Console: switching to colour dummy device 80x25 Mar 17 17:49:06.166094 kernel: [drm] features: -virgl +edid -resource_blob -host_visible Mar 17 17:49:06.166249 kernel: [drm] features: -context_init Mar 17 17:49:06.183041 kernel: [drm] number of scanouts: 1 Mar 17 17:49:06.183193 kernel: [drm] number of cap sets: 0 Mar 17 17:49:06.182520 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Mar 17 17:49:06.199042 kernel: [drm] Initialized virtio_gpu 0.1.0 0 for 0000:00:02.0 on minor 0 Mar 17 17:49:06.219584 kernel: mousedev: PS/2 mouse device common for all mice Mar 17 17:49:06.229044 kernel: fbcon: virtio_gpudrmfb (fb0) is primary device Mar 17 17:49:06.229162 kernel: Console: switching to colour frame buffer device 128x48 Mar 17 17:49:06.294555 kernel: virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device Mar 17 17:49:06.273567 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Mar 17 17:49:06.273969 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Mar 17 17:49:06.321951 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Mar 17 17:49:06.336615 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Mar 17 17:49:06.356609 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Mar 17 17:49:06.357079 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Mar 17 17:49:06.384320 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
Mar 17 17:49:06.403083 kernel: EDAC MC: Ver: 3.0.0 Mar 17 17:49:06.432312 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Mar 17 17:49:06.440681 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Mar 17 17:49:06.459166 lvm[1279]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Mar 17 17:49:06.494199 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Mar 17 17:49:06.495410 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Mar 17 17:49:06.509573 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Mar 17 17:49:06.514940 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Mar 17 17:49:06.519319 lvm[1285]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Mar 17 17:49:06.554304 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Mar 17 17:49:06.556528 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Mar 17 17:49:06.564265 systemd[1]: Mounting media-configdrive.mount - /media/configdrive... Mar 17 17:49:06.564456 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Mar 17 17:49:06.564504 systemd[1]: Reached target machines.target - Containers. Mar 17 17:49:06.567331 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Mar 17 17:49:06.591703 kernel: ISO 9660 Extensions: RRIP_1991A Mar 17 17:49:06.593173 systemd[1]: Mounted media-configdrive.mount - /media/configdrive. Mar 17 17:49:06.595944 systemd[1]: Reached target local-fs.target - Local File Systems. 
Mar 17 17:49:06.598166 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Mar 17 17:49:06.607396 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Mar 17 17:49:06.615420 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Mar 17 17:49:06.621248 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 17 17:49:06.630124 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Mar 17 17:49:06.644401 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Mar 17 17:49:06.655063 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Mar 17 17:49:06.657551 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Mar 17 17:49:06.681063 kernel: loop0: detected capacity change from 0 to 8
Mar 17 17:49:06.673966 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Mar 17 17:49:06.678233 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Mar 17 17:49:06.694437 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Mar 17 17:49:06.725469 kernel: loop1: detected capacity change from 0 to 138184
Mar 17 17:49:06.787638 kernel: loop2: detected capacity change from 0 to 210664
Mar 17 17:49:06.830716 kernel: loop3: detected capacity change from 0 to 140992
Mar 17 17:49:06.892427 kernel: loop4: detected capacity change from 0 to 8
Mar 17 17:49:06.897281 kernel: loop5: detected capacity change from 0 to 138184
Mar 17 17:49:06.920450 kernel: loop6: detected capacity change from 0 to 210664
Mar 17 17:49:06.941175 kernel: loop7: detected capacity change from 0 to 140992
Mar 17 17:49:06.961646 (sd-merge)[1312]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-digitalocean'.
Mar 17 17:49:06.964065 (sd-merge)[1312]: Merged extensions into '/usr'.
Mar 17 17:49:06.970675 systemd[1]: Reloading requested from client PID 1300 ('systemd-sysext') (unit systemd-sysext.service)...
Mar 17 17:49:06.970706 systemd[1]: Reloading...
Mar 17 17:49:07.086359 systemd-networkd[1222]: eth1: Gained IPv6LL
Mar 17 17:49:07.130186 zram_generator::config[1341]: No configuration found.
Mar 17 17:49:07.299701 ldconfig[1297]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Mar 17 17:49:07.399681 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 17 17:49:07.513716 systemd[1]: Reloading finished in 542 ms.
Mar 17 17:49:07.538584 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Mar 17 17:49:07.541403 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Mar 17 17:49:07.542658 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Mar 17 17:49:07.556497 systemd[1]: Starting ensure-sysext.service...
Mar 17 17:49:07.561363 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Mar 17 17:49:07.574762 systemd[1]: Reloading requested from client PID 1392 ('systemctl') (unit ensure-sysext.service)...
Mar 17 17:49:07.574785 systemd[1]: Reloading...
Mar 17 17:49:07.605519 systemd-tmpfiles[1393]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Mar 17 17:49:07.606747 systemd-tmpfiles[1393]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Mar 17 17:49:07.608509 systemd-tmpfiles[1393]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Mar 17 17:49:07.609177 systemd-tmpfiles[1393]: ACLs are not supported, ignoring.
Mar 17 17:49:07.609417 systemd-tmpfiles[1393]: ACLs are not supported, ignoring.
Mar 17 17:49:07.614235 systemd-tmpfiles[1393]: Detected autofs mount point /boot during canonicalization of boot.
Mar 17 17:49:07.614463 systemd-tmpfiles[1393]: Skipping /boot
Mar 17 17:49:07.629647 systemd-tmpfiles[1393]: Detected autofs mount point /boot during canonicalization of boot.
Mar 17 17:49:07.630267 systemd-tmpfiles[1393]: Skipping /boot
Mar 17 17:49:07.698044 zram_generator::config[1422]: No configuration found.
Mar 17 17:49:07.726342 systemd-networkd[1222]: eth0: Gained IPv6LL
Mar 17 17:49:07.876323 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Mar 17 17:49:07.943681 systemd[1]: Reloading finished in 368 ms.
Mar 17 17:49:07.966941 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Mar 17 17:49:07.990301 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Mar 17 17:49:08.016580 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Mar 17 17:49:08.029350 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Mar 17 17:49:08.045453 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Mar 17 17:49:08.063320 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Mar 17 17:49:08.081818 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 17 17:49:08.082621 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 17 17:49:08.091979 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 17 17:49:08.108191 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 17 17:49:08.121518 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 17 17:49:08.123743 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 17 17:49:08.127078 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 17 17:49:08.131005 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 17 17:49:08.135476 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 17 17:49:08.150342 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 17 17:49:08.150621 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 17 17:49:08.162232 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Mar 17 17:49:08.176324 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Mar 17 17:49:08.193461 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Mar 17 17:49:08.200246 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Mar 17 17:49:08.208989 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Mar 17 17:49:08.222501 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 17 17:49:08.222982 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 17 17:49:08.243155 augenrules[1512]: No rules
Mar 17 17:49:08.245516 systemd[1]: audit-rules.service: Deactivated successfully.
Mar 17 17:49:08.245945 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Mar 17 17:49:08.256887 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 17 17:49:08.260633 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 17 17:49:08.267589 systemd-resolved[1481]: Positive Trust Anchors:
Mar 17 17:49:08.268113 systemd-resolved[1481]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Mar 17 17:49:08.268216 systemd-resolved[1481]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Mar 17 17:49:08.275581 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 17 17:49:08.276114 systemd-resolved[1481]: Using system hostname 'ci-4152.2.2-e-40efa8f9ae'.
Mar 17 17:49:08.290645 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 17 17:49:08.300531 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 17 17:49:08.303913 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 17 17:49:08.304167 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Mar 17 17:49:08.304263 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 17 17:49:08.308467 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Mar 17 17:49:08.310822 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Mar 17 17:49:08.314589 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 17 17:49:08.316245 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 17 17:49:08.320449 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 17 17:49:08.320897 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 17 17:49:08.323664 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 17 17:49:08.324405 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 17 17:49:08.334839 systemd[1]: Reached target network.target - Network.
Mar 17 17:49:08.337319 systemd[1]: Reached target network-online.target - Network is Online.
Mar 17 17:49:08.338971 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Mar 17 17:49:08.340182 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Mar 17 17:49:08.340439 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Mar 17 17:49:08.345544 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 17 17:49:08.353682 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Mar 17 17:49:08.357197 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Mar 17 17:49:08.365750 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Mar 17 17:49:08.382282 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Mar 17 17:49:08.397499 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Mar 17 17:49:08.417581 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Mar 17 17:49:08.422797 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Mar 17 17:49:08.423185 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Mar 17 17:49:08.423338 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Mar 17 17:49:08.429711 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Mar 17 17:49:08.429969 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Mar 17 17:49:08.434982 augenrules[1534]: /sbin/augenrules: No change
Mar 17 17:49:08.437648 systemd[1]: modprobe@drm.service: Deactivated successfully.
Mar 17 17:49:08.437898 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Mar 17 17:49:08.445793 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Mar 17 17:49:08.446198 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Mar 17 17:49:08.450038 systemd[1]: modprobe@loop.service: Deactivated successfully.
Mar 17 17:49:08.452553 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Mar 17 17:49:08.465638 systemd[1]: Finished ensure-sysext.service.
Mar 17 17:49:08.475218 augenrules[1565]: No rules
Mar 17 17:49:08.478115 systemd[1]: audit-rules.service: Deactivated successfully.
Mar 17 17:49:08.478601 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Mar 17 17:49:08.485046 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Mar 17 17:49:08.485200 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Mar 17 17:49:08.493445 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Mar 17 17:49:08.572797 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Mar 17 17:49:08.574212 systemd[1]: Reached target sysinit.target - System Initialization.
Mar 17 17:49:08.576419 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Mar 17 17:49:08.577482 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Mar 17 17:49:08.579536 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Mar 17 17:49:08.580486 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Mar 17 17:49:08.580608 systemd[1]: Reached target paths.target - Path Units.
Mar 17 17:49:08.581299 systemd[1]: Reached target time-set.target - System Time Set.
Mar 17 17:49:08.582377 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Mar 17 17:49:08.583684 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Mar 17 17:49:08.584570 systemd[1]: Reached target timers.target - Timer Units.
Mar 17 17:49:08.588105 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Mar 17 17:49:08.592864 systemd[1]: Starting docker.socket - Docker Socket for the API...
Mar 17 17:49:09.327834 systemd-resolved[1481]: Clock change detected. Flushing caches.
Mar 17 17:49:09.327926 systemd-timesyncd[1574]: Contacted time server 104.234.61.117:123 (0.flatcar.pool.ntp.org).
Mar 17 17:49:09.327997 systemd-timesyncd[1574]: Initial clock synchronization to Mon 2025-03-17 17:49:09.327731 UTC.
Mar 17 17:49:09.330811 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Mar 17 17:49:09.333781 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Mar 17 17:49:09.335765 systemd[1]: Reached target sockets.target - Socket Units.
Mar 17 17:49:09.336590 systemd[1]: Reached target basic.target - Basic System.
Mar 17 17:49:09.338797 systemd[1]: System is tainted: cgroupsv1
Mar 17 17:49:09.338927 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Mar 17 17:49:09.338966 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Mar 17 17:49:09.346945 systemd[1]: Starting containerd.service - containerd container runtime...
Mar 17 17:49:09.366097 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Mar 17 17:49:09.375144 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Mar 17 17:49:09.383947 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Mar 17 17:49:09.401367 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Mar 17 17:49:09.403926 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Mar 17 17:49:09.420805 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Mar 17 17:49:09.422832 coreos-metadata[1580]: Mar 17 17:49:09.420 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1
Mar 17 17:49:09.428021 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Mar 17 17:49:09.445780 coreos-metadata[1580]: Mar 17 17:49:09.445 INFO Fetch successful
Mar 17 17:49:09.450767 jq[1584]: false
Mar 17 17:49:09.451937 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Mar 17 17:49:09.464860 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Mar 17 17:49:09.483826 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Mar 17 17:49:09.502187 dbus-daemon[1581]: [system] SELinux support is enabled
Mar 17 17:49:09.512933 systemd[1]: Starting systemd-logind.service - User Login Management...
Mar 17 17:49:09.514380 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Mar 17 17:49:09.521320 extend-filesystems[1585]: Found loop4
Mar 17 17:49:09.526986 extend-filesystems[1585]: Found loop5
Mar 17 17:49:09.526986 extend-filesystems[1585]: Found loop6
Mar 17 17:49:09.526986 extend-filesystems[1585]: Found loop7
Mar 17 17:49:09.526986 extend-filesystems[1585]: Found vda
Mar 17 17:49:09.526986 extend-filesystems[1585]: Found vda1
Mar 17 17:49:09.526986 extend-filesystems[1585]: Found vda2
Mar 17 17:49:09.526986 extend-filesystems[1585]: Found vda3
Mar 17 17:49:09.526986 extend-filesystems[1585]: Found usr
Mar 17 17:49:09.526986 extend-filesystems[1585]: Found vda4
Mar 17 17:49:09.526986 extend-filesystems[1585]: Found vda6
Mar 17 17:49:09.526986 extend-filesystems[1585]: Found vda7
Mar 17 17:49:09.526986 extend-filesystems[1585]: Found vda9
Mar 17 17:49:09.526986 extend-filesystems[1585]: Checking size of /dev/vda9
Mar 17 17:49:09.532156 systemd[1]: Starting update-engine.service - Update Engine...
Mar 17 17:49:09.559981 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Mar 17 17:49:09.583637 update_engine[1604]: I20250317 17:49:09.583171  1604 main.cc:92] Flatcar Update Engine starting
Mar 17 17:49:09.584825 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Mar 17 17:49:09.602962 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Mar 17 17:49:09.619137 update_engine[1604]: I20250317 17:49:09.606158  1604 update_check_scheduler.cc:74] Next update check in 9m48s
Mar 17 17:49:09.603345 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Mar 17 17:49:09.610382 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Mar 17 17:49:09.610853 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Mar 17 17:49:09.628981 extend-filesystems[1585]: Resized partition /dev/vda9
Mar 17 17:49:09.638340 systemd[1]: motdgen.service: Deactivated successfully.
Mar 17 17:49:09.638708 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Mar 17 17:49:09.658310 extend-filesystems[1624]: resize2fs 1.47.1 (20-May-2024)
Mar 17 17:49:09.661569 jq[1611]: true
Mar 17 17:49:09.683372 kernel: EXT4-fs (vda9): resizing filesystem from 553472 to 15121403 blocks
Mar 17 17:49:09.687354 (ntainerd)[1626]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Mar 17 17:49:09.697582 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Mar 17 17:49:09.709849 systemd[1]: Started update-engine.service - Update Engine.
Mar 17 17:49:09.733283 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd.
Mar 17 17:49:09.739157 jq[1629]: true
Mar 17 17:49:09.738925 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Mar 17 17:49:09.738998 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Mar 17 17:49:09.744406 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Mar 17 17:49:09.744603 systemd[1]: user-configdrive.service - Load cloud-config from /media/configdrive was skipped because of an unmet condition check (ConditionKernelCommandLine=!flatcar.oem.id=digitalocean).
Mar 17 17:49:09.744643 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Mar 17 17:49:09.767197 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Mar 17 17:49:09.784663 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Mar 17 17:49:09.805265 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent.
Mar 17 17:49:09.819348 kernel: EXT4-fs (vda9): resized filesystem to 15121403
Mar 17 17:49:09.829358 extend-filesystems[1624]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required
Mar 17 17:49:09.829358 extend-filesystems[1624]: old_desc_blocks = 1, new_desc_blocks = 8
Mar 17 17:49:09.829358 extend-filesystems[1624]: The filesystem on /dev/vda9 is now 15121403 (4k) blocks long.
Mar 17 17:49:09.833633 extend-filesystems[1585]: Resized filesystem in /dev/vda9
Mar 17 17:49:09.833633 extend-filesystems[1585]: Found vdb
Mar 17 17:49:09.886575 systemd[1]: extend-filesystems.service: Deactivated successfully.
Mar 17 17:49:09.887117 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Mar 17 17:49:09.900321 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met.
Mar 17 17:49:10.004035 bash[1668]: Updated "/home/core/.ssh/authorized_keys"
Mar 17 17:49:10.010057 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Mar 17 17:49:10.038740 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (1649)
Mar 17 17:49:10.026079 systemd[1]: Starting sshkeys.service...
Mar 17 17:49:10.135424 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
Mar 17 17:49:10.150117 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
Mar 17 17:49:10.172345 systemd-logind[1598]: New seat seat0.
Mar 17 17:49:10.181719 systemd-logind[1598]: Watching system buttons on /dev/input/event1 (Power Button)
Mar 17 17:49:10.181853 systemd-logind[1598]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Mar 17 17:49:10.182390 systemd[1]: Started systemd-logind.service - User Login Management.
Mar 17 17:49:10.258894 locksmithd[1641]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Mar 17 17:49:10.272905 coreos-metadata[1679]: Mar 17 17:49:10.270 INFO Fetching http://169.254.169.254/metadata/v1.json: Attempt #1
Mar 17 17:49:10.287181 coreos-metadata[1679]: Mar 17 17:49:10.286 INFO Fetch successful
Mar 17 17:49:10.304439 unknown[1679]: wrote ssh authorized keys file for user: core
Mar 17 17:49:10.376613 update-ssh-keys[1690]: Updated "/home/core/.ssh/authorized_keys"
Mar 17 17:49:10.377701 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys).
Mar 17 17:49:10.390663 systemd[1]: Finished sshkeys.service.
Mar 17 17:49:10.441890 sshd_keygen[1610]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Mar 17 17:49:10.449319 containerd[1626]: time="2025-03-17T17:49:10.449156233Z" level=info msg="starting containerd" revision=9b2ad7760328148397346d10c7b2004271249db4 version=v1.7.23
Mar 17 17:49:10.505296 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys.
Mar 17 17:49:10.509612 containerd[1626]: time="2025-03-17T17:49:10.509434632Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Mar 17 17:49:10.514017 containerd[1626]: time="2025-03-17T17:49:10.513948391Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.83-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Mar 17 17:49:10.514737 containerd[1626]: time="2025-03-17T17:49:10.514181469Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Mar 17 17:49:10.514737 containerd[1626]: time="2025-03-17T17:49:10.514217660Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Mar 17 17:49:10.514737 containerd[1626]: time="2025-03-17T17:49:10.514450939Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
Mar 17 17:49:10.514737 containerd[1626]: time="2025-03-17T17:49:10.514475124Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
Mar 17 17:49:10.514737 containerd[1626]: time="2025-03-17T17:49:10.514556957Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
Mar 17 17:49:10.514737 containerd[1626]: time="2025-03-17T17:49:10.514576152Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Mar 17 17:49:10.515360 containerd[1626]: time="2025-03-17T17:49:10.515322711Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Mar 17 17:49:10.515718 containerd[1626]: time="2025-03-17T17:49:10.515461607Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Mar 17 17:49:10.515718 containerd[1626]: time="2025-03-17T17:49:10.515502300Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
Mar 17 17:49:10.515718 containerd[1626]: time="2025-03-17T17:49:10.515517976Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Mar 17 17:49:10.515718 containerd[1626]: time="2025-03-17T17:49:10.515656442Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Mar 17 17:49:10.517706 containerd[1626]: time="2025-03-17T17:49:10.516966519Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Mar 17 17:49:10.518119 containerd[1626]: time="2025-03-17T17:49:10.518086560Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Mar 17 17:49:10.518119 containerd[1626]: time="2025-03-17T17:49:10.518117487Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Mar 17 17:49:10.518260 containerd[1626]: time="2025-03-17T17:49:10.518243361Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Mar 17 17:49:10.518326 containerd[1626]: time="2025-03-17T17:49:10.518309328Z" level=info msg="metadata content store policy set" policy=shared
Mar 17 17:49:10.521451 systemd[1]: Starting issuegen.service - Generate /run/issue...
Mar 17 17:49:10.527615 containerd[1626]: time="2025-03-17T17:49:10.527522750Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Mar 17 17:49:10.527965 containerd[1626]: time="2025-03-17T17:49:10.527763643Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Mar 17 17:49:10.527965 containerd[1626]: time="2025-03-17T17:49:10.527942185Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
Mar 17 17:49:10.528246 containerd[1626]: time="2025-03-17T17:49:10.528102948Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
Mar 17 17:49:10.528246 containerd[1626]: time="2025-03-17T17:49:10.528137237Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Mar 17 17:49:10.528662 containerd[1626]: time="2025-03-17T17:49:10.528539224Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Mar 17 17:49:10.529712 containerd[1626]: time="2025-03-17T17:49:10.529314409Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Mar 17 17:49:10.529712 containerd[1626]: time="2025-03-17T17:49:10.529499039Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
Mar 17 17:49:10.529712 containerd[1626]: time="2025-03-17T17:49:10.529519096Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
Mar 17 17:49:10.529712 containerd[1626]: time="2025-03-17T17:49:10.529534908Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
Mar 17 17:49:10.529712 containerd[1626]: time="2025-03-17T17:49:10.529550350Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Mar 17 17:49:10.529712 containerd[1626]: time="2025-03-17T17:49:10.529564239Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Mar 17 17:49:10.529712 containerd[1626]: time="2025-03-17T17:49:10.529578021Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Mar 17 17:49:10.529712 containerd[1626]: time="2025-03-17T17:49:10.529593872Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Mar 17 17:49:10.529712 containerd[1626]: time="2025-03-17T17:49:10.529608843Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Mar 17 17:49:10.529712 containerd[1626]: time="2025-03-17T17:49:10.529622815Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Mar 17 17:49:10.529712 containerd[1626]: time="2025-03-17T17:49:10.529636436Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Mar 17 17:49:10.530038 containerd[1626]: time="2025-03-17T17:49:10.530018593Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Mar 17 17:49:10.530148 containerd[1626]: time="2025-03-17T17:49:10.530131687Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Mar 17 17:49:10.530248 containerd[1626]: time="2025-03-17T17:49:10.530230619Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Mar 17 17:49:10.530349 containerd[1626]: time="2025-03-17T17:49:10.530335625Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Mar 17 17:49:10.530432 containerd[1626]: time="2025-03-17T17:49:10.530420537Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Mar 17 17:49:10.530562 containerd[1626]: time="2025-03-17T17:49:10.530501295Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Mar 17 17:49:10.530562 containerd[1626]: time="2025-03-17T17:49:10.530519711Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Mar 17 17:49:10.530562 containerd[1626]: time="2025-03-17T17:49:10.530535795Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Mar 17 17:49:10.530668 containerd[1626]: time="2025-03-17T17:49:10.530654360Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Mar 17 17:49:10.531962 containerd[1626]: time="2025-03-17T17:49:10.531648416Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
Mar 17 17:49:10.532162 containerd[1626]: time="2025-03-17T17:49:10.532139732Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
Mar 17 17:49:10.534451 containerd[1626]: time="2025-03-17T17:49:10.532195807Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Mar 17 17:49:10.534451 containerd[1626]: time="2025-03-17T17:49:10.532213409Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
Mar 17 17:49:10.534451 containerd[1626]: time="2025-03-17T17:49:10.532227477Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Mar 17 17:49:10.534451 containerd[1626]: time="2025-03-17T17:49:10.532242831Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
Mar 17 17:49:10.534107 systemd[1]: Started sshd@0-209.38.135.89:22-139.178.89.65:37672.service - OpenSSH per-connection server daemon (139.178.89.65:37672).
Mar 17 17:49:10.540477 containerd[1626]: time="2025-03-17T17:49:10.539550995Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
Mar 17 17:49:10.540477 containerd[1626]: time="2025-03-17T17:49:10.539668291Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Mar 17 17:49:10.540477 containerd[1626]: time="2025-03-17T17:49:10.540241783Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Mar 17 17:49:10.540477 containerd[1626]: time="2025-03-17T17:49:10.540368484Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Mar 17 17:49:10.540477 containerd[1626]: time="2025-03-17T17:49:10.540416207Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
Mar 17 17:49:10.540477 containerd[1626]: time="2025-03-17T17:49:10.540435808Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Mar 17 17:49:10.541074 containerd[1626]: time="2025-03-17T17:49:10.540989321Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
Mar 17 17:49:10.541339 containerd[1626]: time="2025-03-17T17:49:10.541048781Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Mar 17 17:49:10.541339 containerd[1626]: time="2025-03-17T17:49:10.541303027Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
Mar 17 17:49:10.542817 containerd[1626]: time="2025-03-17T17:49:10.541569606Z" level=info msg="NRI interface is disabled by configuration."
Mar 17 17:49:10.542817 containerd[1626]: time="2025-03-17T17:49:10.541605611Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..."
type=io.containerd.grpc.v1 Mar 17 17:49:10.543745 containerd[1626]: time="2025-03-17T17:49:10.543619881Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 
DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Mar 17 17:49:10.544971 containerd[1626]: time="2025-03-17T17:49:10.544783090Z" level=info msg="Connect containerd service" Mar 17 17:49:10.545364 containerd[1626]: time="2025-03-17T17:49:10.545322012Z" level=info msg="using legacy CRI server" Mar 17 17:49:10.547305 containerd[1626]: time="2025-03-17T17:49:10.546745535Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Mar 17 17:49:10.547305 containerd[1626]: time="2025-03-17T17:49:10.547082816Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Mar 17 17:49:10.551376 containerd[1626]: time="2025-03-17T17:49:10.551298299Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Mar 17 17:49:10.551780 containerd[1626]: time="2025-03-17T17:49:10.551516364Z" level=info msg="Start subscribing containerd event" Mar 17 17:49:10.551780 containerd[1626]: time="2025-03-17T17:49:10.551598781Z" level=info msg="Start recovering state" Mar 17 17:49:10.551780 containerd[1626]: time="2025-03-17T17:49:10.551719345Z" level=info msg="Start event monitor" Mar 17 17:49:10.551780 containerd[1626]: time="2025-03-17T17:49:10.551739458Z" 
level=info msg="Start snapshots syncer" Mar 17 17:49:10.551780 containerd[1626]: time="2025-03-17T17:49:10.551756149Z" level=info msg="Start cni network conf syncer for default" Mar 17 17:49:10.551780 containerd[1626]: time="2025-03-17T17:49:10.551769641Z" level=info msg="Start streaming server" Mar 17 17:49:10.553251 containerd[1626]: time="2025-03-17T17:49:10.553193221Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Mar 17 17:49:10.553534 containerd[1626]: time="2025-03-17T17:49:10.553297063Z" level=info msg=serving... address=/run/containerd/containerd.sock Mar 17 17:49:10.553534 containerd[1626]: time="2025-03-17T17:49:10.553404829Z" level=info msg="containerd successfully booted in 0.105683s" Mar 17 17:49:10.554071 systemd[1]: Started containerd.service - containerd container runtime. Mar 17 17:49:10.574412 systemd[1]: issuegen.service: Deactivated successfully. Mar 17 17:49:10.574864 systemd[1]: Finished issuegen.service - Generate /run/issue. Mar 17 17:49:10.599990 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Mar 17 17:49:10.619292 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Mar 17 17:49:10.635375 systemd[1]: Started getty@tty1.service - Getty on tty1. Mar 17 17:49:10.652518 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Mar 17 17:49:10.654378 systemd[1]: Reached target getty.target - Login Prompts. Mar 17 17:49:10.692845 sshd[1710]: Accepted publickey for core from 139.178.89.65 port 37672 ssh2: RSA SHA256:gH0+Q3yFFjATMce1+bwL+7cSY2TIJcZ2QSBtpu9d9Wk Mar 17 17:49:10.694504 sshd-session[1710]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:49:10.709456 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Mar 17 17:49:10.722480 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Mar 17 17:49:10.732786 systemd-logind[1598]: New session 1 of user core. 
Mar 17 17:49:10.758640 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Mar 17 17:49:10.779931 systemd[1]: Starting user@500.service - User Manager for UID 500... Mar 17 17:49:10.793265 (systemd)[1727]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Mar 17 17:49:10.936805 systemd[1727]: Queued start job for default target default.target. Mar 17 17:49:10.937607 systemd[1727]: Created slice app.slice - User Application Slice. Mar 17 17:49:10.937643 systemd[1727]: Reached target paths.target - Paths. Mar 17 17:49:10.937664 systemd[1727]: Reached target timers.target - Timers. Mar 17 17:49:10.947972 systemd[1727]: Starting dbus.socket - D-Bus User Message Bus Socket... Mar 17 17:49:10.965995 systemd[1727]: Listening on dbus.socket - D-Bus User Message Bus Socket. Mar 17 17:49:10.966337 systemd[1727]: Reached target sockets.target - Sockets. Mar 17 17:49:10.966359 systemd[1727]: Reached target basic.target - Basic System. Mar 17 17:49:10.966436 systemd[1727]: Reached target default.target - Main User Target. Mar 17 17:49:10.966489 systemd[1727]: Startup finished in 159ms. Mar 17 17:49:10.969003 systemd[1]: Started user@500.service - User Manager for UID 500. Mar 17 17:49:10.977405 systemd[1]: Started session-1.scope - Session 1 of User core. Mar 17 17:49:11.056252 systemd[1]: Started sshd@1-209.38.135.89:22-139.178.89.65:37960.service - OpenSSH per-connection server daemon (139.178.89.65:37960). Mar 17 17:49:11.165563 sshd[1739]: Accepted publickey for core from 139.178.89.65 port 37960 ssh2: RSA SHA256:gH0+Q3yFFjATMce1+bwL+7cSY2TIJcZ2QSBtpu9d9Wk Mar 17 17:49:11.166723 sshd-session[1739]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:49:11.174621 systemd-logind[1598]: New session 2 of user core. Mar 17 17:49:11.187908 systemd[1]: Started session-2.scope - Session 2 of User core. 
Mar 17 17:49:11.266115 sshd[1742]: Connection closed by 139.178.89.65 port 37960 Mar 17 17:49:11.268395 sshd-session[1739]: pam_unix(sshd:session): session closed for user core Mar 17 17:49:11.281140 systemd[1]: Started sshd@2-209.38.135.89:22-139.178.89.65:37968.service - OpenSSH per-connection server daemon (139.178.89.65:37968). Mar 17 17:49:11.284456 systemd[1]: sshd@1-209.38.135.89:22-139.178.89.65:37960.service: Deactivated successfully. Mar 17 17:49:11.294220 systemd-logind[1598]: Session 2 logged out. Waiting for processes to exit. Mar 17 17:49:11.296510 systemd[1]: session-2.scope: Deactivated successfully. Mar 17 17:49:11.299561 systemd-logind[1598]: Removed session 2. Mar 17 17:49:11.362570 sshd[1744]: Accepted publickey for core from 139.178.89.65 port 37968 ssh2: RSA SHA256:gH0+Q3yFFjATMce1+bwL+7cSY2TIJcZ2QSBtpu9d9Wk Mar 17 17:49:11.364284 sshd-session[1744]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:49:11.372914 systemd-logind[1598]: New session 3 of user core. Mar 17 17:49:11.379259 systemd[1]: Started session-3.scope - Session 3 of User core. Mar 17 17:49:11.470807 sshd[1750]: Connection closed by 139.178.89.65 port 37968 Mar 17 17:49:11.470608 sshd-session[1744]: pam_unix(sshd:session): session closed for user core Mar 17 17:49:11.479417 systemd[1]: sshd@2-209.38.135.89:22-139.178.89.65:37968.service: Deactivated successfully. Mar 17 17:49:11.491097 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 17 17:49:11.503262 systemd[1]: session-3.scope: Deactivated successfully. Mar 17 17:49:11.503850 (kubelet)[1762]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 17 17:49:11.506916 systemd-logind[1598]: Session 3 logged out. Waiting for processes to exit. Mar 17 17:49:11.509100 systemd[1]: Reached target multi-user.target - Multi-User System. 
Mar 17 17:49:11.515203 systemd[1]: Startup finished in 7.081s (kernel) + 7.372s (userspace) = 14.454s. Mar 17 17:49:11.520548 systemd-logind[1598]: Removed session 3. Mar 17 17:49:12.382858 kubelet[1762]: E0317 17:49:12.382796 1762 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 17 17:49:12.387495 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 17 17:49:12.389511 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 17 17:49:21.483289 systemd[1]: Started sshd@3-209.38.135.89:22-139.178.89.65:33862.service - OpenSSH per-connection server daemon (139.178.89.65:33862). Mar 17 17:49:21.537163 sshd[1776]: Accepted publickey for core from 139.178.89.65 port 33862 ssh2: RSA SHA256:gH0+Q3yFFjATMce1+bwL+7cSY2TIJcZ2QSBtpu9d9Wk Mar 17 17:49:21.539374 sshd-session[1776]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:49:21.548503 systemd-logind[1598]: New session 4 of user core. Mar 17 17:49:21.553530 systemd[1]: Started session-4.scope - Session 4 of User core. Mar 17 17:49:21.620585 sshd[1779]: Connection closed by 139.178.89.65 port 33862 Mar 17 17:49:21.621039 sshd-session[1776]: pam_unix(sshd:session): session closed for user core Mar 17 17:49:21.630255 systemd[1]: Started sshd@4-209.38.135.89:22-139.178.89.65:33864.service - OpenSSH per-connection server daemon (139.178.89.65:33864). Mar 17 17:49:21.632647 systemd[1]: sshd@3-209.38.135.89:22-139.178.89.65:33862.service: Deactivated successfully. Mar 17 17:49:21.641191 systemd[1]: session-4.scope: Deactivated successfully. Mar 17 17:49:21.642535 systemd-logind[1598]: Session 4 logged out. Waiting for processes to exit. 
Mar 17 17:49:21.647057 systemd-logind[1598]: Removed session 4. Mar 17 17:49:21.689608 sshd[1781]: Accepted publickey for core from 139.178.89.65 port 33864 ssh2: RSA SHA256:gH0+Q3yFFjATMce1+bwL+7cSY2TIJcZ2QSBtpu9d9Wk Mar 17 17:49:21.691538 sshd-session[1781]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:49:21.698226 systemd-logind[1598]: New session 5 of user core. Mar 17 17:49:21.705435 systemd[1]: Started session-5.scope - Session 5 of User core. Mar 17 17:49:21.768362 sshd[1787]: Connection closed by 139.178.89.65 port 33864 Mar 17 17:49:21.769225 sshd-session[1781]: pam_unix(sshd:session): session closed for user core Mar 17 17:49:21.783512 systemd[1]: Started sshd@5-209.38.135.89:22-139.178.89.65:33870.service - OpenSSH per-connection server daemon (139.178.89.65:33870). Mar 17 17:49:21.784493 systemd[1]: sshd@4-209.38.135.89:22-139.178.89.65:33864.service: Deactivated successfully. Mar 17 17:49:21.793196 systemd[1]: session-5.scope: Deactivated successfully. Mar 17 17:49:21.796838 systemd-logind[1598]: Session 5 logged out. Waiting for processes to exit. Mar 17 17:49:21.799041 systemd-logind[1598]: Removed session 5. Mar 17 17:49:21.854264 sshd[1789]: Accepted publickey for core from 139.178.89.65 port 33870 ssh2: RSA SHA256:gH0+Q3yFFjATMce1+bwL+7cSY2TIJcZ2QSBtpu9d9Wk Mar 17 17:49:21.857152 sshd-session[1789]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:49:21.865067 systemd-logind[1598]: New session 6 of user core. Mar 17 17:49:21.873444 systemd[1]: Started session-6.scope - Session 6 of User core. Mar 17 17:49:21.943264 sshd[1795]: Connection closed by 139.178.89.65 port 33870 Mar 17 17:49:21.943765 sshd-session[1789]: pam_unix(sshd:session): session closed for user core Mar 17 17:49:21.956228 systemd[1]: Started sshd@6-209.38.135.89:22-139.178.89.65:33878.service - OpenSSH per-connection server daemon (139.178.89.65:33878). 
Mar 17 17:49:21.957376 systemd[1]: sshd@5-209.38.135.89:22-139.178.89.65:33870.service: Deactivated successfully. Mar 17 17:49:21.961881 systemd[1]: session-6.scope: Deactivated successfully. Mar 17 17:49:21.964218 systemd-logind[1598]: Session 6 logged out. Waiting for processes to exit. Mar 17 17:49:21.967716 systemd-logind[1598]: Removed session 6. Mar 17 17:49:22.013335 sshd[1798]: Accepted publickey for core from 139.178.89.65 port 33878 ssh2: RSA SHA256:gH0+Q3yFFjATMce1+bwL+7cSY2TIJcZ2QSBtpu9d9Wk Mar 17 17:49:22.015521 sshd-session[1798]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:49:22.022462 systemd-logind[1598]: New session 7 of user core. Mar 17 17:49:22.033472 systemd[1]: Started session-7.scope - Session 7 of User core. Mar 17 17:49:22.110524 sudo[1804]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Mar 17 17:49:22.111464 sudo[1804]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 17 17:49:22.129946 sudo[1804]: pam_unix(sudo:session): session closed for user root Mar 17 17:49:22.133927 sshd[1803]: Connection closed by 139.178.89.65 port 33878 Mar 17 17:49:22.136030 sshd-session[1798]: pam_unix(sshd:session): session closed for user core Mar 17 17:49:22.149280 systemd[1]: Started sshd@7-209.38.135.89:22-139.178.89.65:33888.service - OpenSSH per-connection server daemon (139.178.89.65:33888). Mar 17 17:49:22.150080 systemd[1]: sshd@6-209.38.135.89:22-139.178.89.65:33878.service: Deactivated successfully. Mar 17 17:49:22.157829 systemd[1]: session-7.scope: Deactivated successfully. Mar 17 17:49:22.159395 systemd-logind[1598]: Session 7 logged out. Waiting for processes to exit. Mar 17 17:49:22.161155 systemd-logind[1598]: Removed session 7. 
Mar 17 17:49:22.205404 sshd[1806]: Accepted publickey for core from 139.178.89.65 port 33888 ssh2: RSA SHA256:gH0+Q3yFFjATMce1+bwL+7cSY2TIJcZ2QSBtpu9d9Wk Mar 17 17:49:22.207398 sshd-session[1806]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:49:22.213768 systemd-logind[1598]: New session 8 of user core. Mar 17 17:49:22.221499 systemd[1]: Started session-8.scope - Session 8 of User core. Mar 17 17:49:22.287577 sudo[1814]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Mar 17 17:49:22.288456 sudo[1814]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 17 17:49:22.295935 sudo[1814]: pam_unix(sudo:session): session closed for user root Mar 17 17:49:22.306282 sudo[1813]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Mar 17 17:49:22.306861 sudo[1813]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 17 17:49:22.335313 systemd[1]: Starting audit-rules.service - Load Audit Rules... Mar 17 17:49:22.378145 augenrules[1836]: No rules Mar 17 17:49:22.380351 systemd[1]: audit-rules.service: Deactivated successfully. Mar 17 17:49:22.382176 systemd[1]: Finished audit-rules.service - Load Audit Rules. Mar 17 17:49:22.385320 sudo[1813]: pam_unix(sudo:session): session closed for user root Mar 17 17:49:22.389429 sshd[1812]: Connection closed by 139.178.89.65 port 33888 Mar 17 17:49:22.390387 sshd-session[1806]: pam_unix(sshd:session): session closed for user core Mar 17 17:49:22.394333 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Mar 17 17:49:22.396008 systemd[1]: sshd@7-209.38.135.89:22-139.178.89.65:33888.service: Deactivated successfully. Mar 17 17:49:22.397954 systemd-logind[1598]: Session 8 logged out. Waiting for processes to exit. Mar 17 17:49:22.403462 systemd[1]: session-8.scope: Deactivated successfully. 
Mar 17 17:49:22.412036 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 17 17:49:22.416091 systemd[1]: Started sshd@8-209.38.135.89:22-139.178.89.65:33902.service - OpenSSH per-connection server daemon (139.178.89.65:33902). Mar 17 17:49:22.416959 systemd-logind[1598]: Removed session 8. Mar 17 17:49:22.496743 sshd[1846]: Accepted publickey for core from 139.178.89.65 port 33902 ssh2: RSA SHA256:gH0+Q3yFFjATMce1+bwL+7cSY2TIJcZ2QSBtpu9d9Wk Mar 17 17:49:22.498241 sshd-session[1846]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Mar 17 17:49:22.506752 systemd-logind[1598]: New session 9 of user core. Mar 17 17:49:22.515285 systemd[1]: Started session-9.scope - Session 9 of User core. Mar 17 17:49:22.590774 sudo[1855]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Mar 17 17:49:22.591874 sudo[1855]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Mar 17 17:49:22.634096 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 17 17:49:22.645447 (kubelet)[1872]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Mar 17 17:49:22.723531 kubelet[1872]: E0317 17:49:22.723331 1872 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Mar 17 17:49:22.728061 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Mar 17 17:49:22.728268 systemd[1]: kubelet.service: Failed with result 'exit-code'. Mar 17 17:49:23.569273 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. 
Mar 17 17:49:23.590166 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 17 17:49:23.627757 systemd[1]: Reloading requested from client PID 1908 ('systemctl') (unit session-9.scope)... Mar 17 17:49:23.627988 systemd[1]: Reloading... Mar 17 17:49:23.811741 zram_generator::config[1947]: No configuration found. Mar 17 17:49:23.998659 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Mar 17 17:49:24.100872 systemd[1]: Reloading finished in 472 ms. Mar 17 17:49:24.178028 systemd[1]: kubelet.service: Deactivated successfully. Mar 17 17:49:24.179019 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Mar 17 17:49:24.191630 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Mar 17 17:49:24.379988 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Mar 17 17:49:24.394531 (kubelet)[2013]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Mar 17 17:49:24.467075 kubelet[2013]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Mar 17 17:49:24.467075 kubelet[2013]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Mar 17 17:49:24.467075 kubelet[2013]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Mar 17 17:49:24.468717 kubelet[2013]: I0317 17:49:24.468565 2013 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Mar 17 17:49:24.835517 kubelet[2013]: I0317 17:49:24.834847 2013 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Mar 17 17:49:24.835517 kubelet[2013]: I0317 17:49:24.834897 2013 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Mar 17 17:49:24.835517 kubelet[2013]: I0317 17:49:24.835255 2013 server.go:927] "Client rotation is on, will bootstrap in background" Mar 17 17:49:24.857297 kubelet[2013]: I0317 17:49:24.856956 2013 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Mar 17 17:49:24.873218 kubelet[2013]: I0317 17:49:24.873122 2013 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Mar 17 17:49:24.873924 kubelet[2013]: I0317 17:49:24.873668 2013 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Mar 17 17:49:24.874709 kubelet[2013]: I0317 17:49:24.873756 2013 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" 
nodeConfig={"NodeName":"209.38.135.89","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Mar 17 17:49:24.874709 kubelet[2013]: I0317 17:49:24.874061 2013 topology_manager.go:138] "Creating topology manager with none policy" Mar 17 17:49:24.874709 kubelet[2013]: I0317 17:49:24.874073 2013 container_manager_linux.go:301] "Creating device plugin manager" Mar 17 17:49:24.874709 kubelet[2013]: I0317 17:49:24.874239 2013 state_mem.go:36] "Initialized new in-memory state store" Mar 17 17:49:24.875766 kubelet[2013]: I0317 17:49:24.875434 2013 kubelet.go:400] "Attempting to sync node 
with API server" Mar 17 17:49:24.875766 kubelet[2013]: I0317 17:49:24.875466 2013 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Mar 17 17:49:24.875766 kubelet[2013]: I0317 17:49:24.875498 2013 kubelet.go:312] "Adding apiserver pod source" Mar 17 17:49:24.875766 kubelet[2013]: I0317 17:49:24.875520 2013 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Mar 17 17:49:24.879661 kubelet[2013]: E0317 17:49:24.879613 2013 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:49:24.879888 kubelet[2013]: E0317 17:49:24.879873 2013 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:49:24.881319 kubelet[2013]: I0317 17:49:24.881177 2013 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v1.7.23" apiVersion="v1" Mar 17 17:49:24.883100 kubelet[2013]: I0317 17:49:24.883026 2013 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Mar 17 17:49:24.883189 kubelet[2013]: W0317 17:49:24.883178 2013 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Mar 17 17:49:24.884742 kubelet[2013]: I0317 17:49:24.884715 2013 server.go:1264] "Started kubelet" Mar 17 17:49:24.888604 kubelet[2013]: I0317 17:49:24.887912 2013 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Mar 17 17:49:24.890825 kubelet[2013]: I0317 17:49:24.890757 2013 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Mar 17 17:49:24.892452 kubelet[2013]: I0317 17:49:24.892422 2013 server.go:455] "Adding debug handlers to kubelet server" Mar 17 17:49:24.893990 kubelet[2013]: I0317 17:49:24.893919 2013 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Mar 17 17:49:24.894516 kubelet[2013]: I0317 17:49:24.894490 2013 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Mar 17 17:49:24.899063 kubelet[2013]: I0317 17:49:24.899019 2013 volume_manager.go:291] "Starting Kubelet Volume Manager" Mar 17 17:49:24.900767 kubelet[2013]: I0317 17:49:24.900735 2013 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Mar 17 17:49:24.904026 kubelet[2013]: I0317 17:49:24.901363 2013 reconciler.go:26] "Reconciler: start to sync state" Mar 17 17:49:24.904886 kubelet[2013]: I0317 17:49:24.904842 2013 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Mar 17 17:49:24.909807 kubelet[2013]: E0317 17:49:24.909725 2013 kubelet.go:1467] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Mar 17 17:49:24.910600 kubelet[2013]: I0317 17:49:24.910450 2013 factory.go:221] Registration of the containerd container factory successfully Mar 17 17:49:24.910600 kubelet[2013]: I0317 17:49:24.910478 2013 factory.go:221] Registration of the systemd container factory successfully Mar 17 17:49:24.950795 kubelet[2013]: W0317 17:49:24.949321 2013 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Mar 17 17:49:24.950795 kubelet[2013]: E0317 17:49:24.949381 2013 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Mar 17 17:49:24.950795 kubelet[2013]: W0317 17:49:24.949455 2013 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes "209.38.135.89" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Mar 17 17:49:24.950795 kubelet[2013]: E0317 17:49:24.949471 2013 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes "209.38.135.89" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Mar 17 17:49:24.950795 kubelet[2013]: E0317 17:49:24.949522 2013 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"209.38.135.89\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="200ms" Mar 17 17:49:24.951326 kubelet[2013]: E0317 17:49:24.949714 2013 event.go:359] "Server rejected event (will not retry!)" err="events 
is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{209.38.135.89.182da860dfa4203e default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:209.38.135.89,UID:209.38.135.89,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:209.38.135.89,},FirstTimestamp:2025-03-17 17:49:24.884652094 +0000 UTC m=+0.481419586,LastTimestamp:2025-03-17 17:49:24.884652094 +0000 UTC m=+0.481419586,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:209.38.135.89,}" Mar 17 17:49:24.951326 kubelet[2013]: W0317 17:49:24.950164 2013 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Mar 17 17:49:24.951326 kubelet[2013]: E0317 17:49:24.950209 2013 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Mar 17 17:49:24.952094 kubelet[2013]: I0317 17:49:24.952061 2013 cpu_manager.go:214] "Starting CPU manager" policy="none" Mar 17 17:49:24.952094 kubelet[2013]: I0317 17:49:24.952088 2013 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Mar 17 17:49:24.952202 kubelet[2013]: I0317 17:49:24.952117 2013 state_mem.go:36] "Initialized new in-memory state store" Mar 17 17:49:24.954667 kubelet[2013]: E0317 17:49:24.954521 2013 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in 
the namespace \"default\"" event="&Event{ObjectMeta:{209.38.135.89.182da860e121c37d default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:209.38.135.89,UID:209.38.135.89,APIVersion:,ResourceVersion:,FieldPath:,},Reason:InvalidDiskCapacity,Message:invalid capacity 0 on image filesystem,Source:EventSource{Component:kubelet,Host:209.38.135.89,},FirstTimestamp:2025-03-17 17:49:24.909663101 +0000 UTC m=+0.506430600,LastTimestamp:2025-03-17 17:49:24.909663101 +0000 UTC m=+0.506430600,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:209.38.135.89,}" Mar 17 17:49:24.958201 kubelet[2013]: I0317 17:49:24.958010 2013 policy_none.go:49] "None policy: Start" Mar 17 17:49:24.961237 kubelet[2013]: I0317 17:49:24.960604 2013 memory_manager.go:170] "Starting memorymanager" policy="None" Mar 17 17:49:24.961237 kubelet[2013]: I0317 17:49:24.960666 2013 state_mem.go:35] "Initializing new in-memory state store" Mar 17 17:49:24.971913 kubelet[2013]: I0317 17:49:24.969233 2013 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Mar 17 17:49:24.971913 kubelet[2013]: I0317 17:49:24.969623 2013 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Mar 17 17:49:24.971913 kubelet[2013]: I0317 17:49:24.969868 2013 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Mar 17 17:49:24.983335 kubelet[2013]: E0317 17:49:24.981533 2013 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{209.38.135.89.182da860e396a339 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:209.38.135.89,UID:209.38.135.89,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node 209.38.135.89 status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:209.38.135.89,},FirstTimestamp:2025-03-17 17:49:24.950876985 +0000 UTC m=+0.547644469,LastTimestamp:2025-03-17 17:49:24.950876985 +0000 UTC m=+0.547644469,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:209.38.135.89,}" Mar 17 17:49:24.984054 kubelet[2013]: E0317 17:49:24.984021 2013 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"209.38.135.89\" not found" Mar 17 17:49:25.001949 kubelet[2013]: I0317 17:49:25.001874 2013 kubelet_node_status.go:73] "Attempting to register node" node="209.38.135.89" Mar 17 17:49:25.003026 kubelet[2013]: I0317 17:49:25.002859 2013 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Mar 17 17:49:25.005201 kubelet[2013]: I0317 17:49:25.005150 2013 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Mar 17 17:49:25.005201 kubelet[2013]: I0317 17:49:25.005189 2013 status_manager.go:217] "Starting to sync pod status with apiserver" Mar 17 17:49:25.005201 kubelet[2013]: I0317 17:49:25.005218 2013 kubelet.go:2337] "Starting kubelet main sync loop" Mar 17 17:49:25.005476 kubelet[2013]: E0317 17:49:25.005278 2013 kubelet.go:2361] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful" Mar 17 17:49:25.038876 kubelet[2013]: E0317 17:49:25.038810 2013 kubelet_node_status.go:96] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="209.38.135.89" Mar 17 17:49:25.042867 kubelet[2013]: W0317 17:49:25.042741 2013 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Mar 17 17:49:25.042867 kubelet[2013]: E0317 17:49:25.042808 2013 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Mar 17 17:49:25.059586 kubelet[2013]: E0317 17:49:25.059145 2013 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{209.38.135.89.182da860e39714de default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:209.38.135.89,UID:209.38.135.89,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node 209.38.135.89 status is now: 
NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:209.38.135.89,},FirstTimestamp:2025-03-17 17:49:24.950906078 +0000 UTC m=+0.547673562,LastTimestamp:2025-03-17 17:49:24.950906078 +0000 UTC m=+0.547673562,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:209.38.135.89,}" Mar 17 17:49:25.179212 kubelet[2013]: E0317 17:49:25.178988 2013 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"209.38.135.89\" not found" node="209.38.135.89" Mar 17 17:49:25.240619 kubelet[2013]: I0317 17:49:25.240046 2013 kubelet_node_status.go:73] "Attempting to register node" node="209.38.135.89" Mar 17 17:49:25.252556 kubelet[2013]: I0317 17:49:25.252449 2013 kubelet_node_status.go:76] "Successfully registered node" node="209.38.135.89" Mar 17 17:49:25.354431 kubelet[2013]: E0317 17:49:25.354384 2013 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"209.38.135.89\" not found" Mar 17 17:49:25.456036 kubelet[2013]: E0317 17:49:25.455787 2013 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"209.38.135.89\" not found" Mar 17 17:49:25.475836 sudo[1855]: pam_unix(sudo:session): session closed for user root Mar 17 17:49:25.479477 sshd[1852]: Connection closed by 139.178.89.65 port 33902 Mar 17 17:49:25.480511 sshd-session[1846]: pam_unix(sshd:session): session closed for user core Mar 17 17:49:25.487497 systemd[1]: sshd@8-209.38.135.89:22-139.178.89.65:33902.service: Deactivated successfully. Mar 17 17:49:25.492960 systemd[1]: session-9.scope: Deactivated successfully. Mar 17 17:49:25.493298 systemd-logind[1598]: Session 9 logged out. Waiting for processes to exit. Mar 17 17:49:25.496379 systemd-logind[1598]: Removed session 9. 
Mar 17 17:49:25.556762 kubelet[2013]: E0317 17:49:25.556625 2013 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"209.38.135.89\" not found" Mar 17 17:49:25.656991 kubelet[2013]: E0317 17:49:25.656889 2013 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"209.38.135.89\" not found" Mar 17 17:49:25.758263 kubelet[2013]: E0317 17:49:25.758084 2013 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"209.38.135.89\" not found" Mar 17 17:49:25.839371 kubelet[2013]: I0317 17:49:25.839267 2013 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Mar 17 17:49:25.858784 kubelet[2013]: E0317 17:49:25.858670 2013 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"209.38.135.89\" not found" Mar 17 17:49:25.881130 kubelet[2013]: E0317 17:49:25.881060 2013 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:49:25.959599 kubelet[2013]: E0317 17:49:25.959482 2013 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"209.38.135.89\" not found" Mar 17 17:49:26.059873 kubelet[2013]: E0317 17:49:26.059709 2013 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"209.38.135.89\" not found" Mar 17 17:49:26.160401 kubelet[2013]: E0317 17:49:26.160297 2013 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"209.38.135.89\" not found" Mar 17 17:49:26.261522 kubelet[2013]: E0317 17:49:26.261420 2013 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"209.38.135.89\" not found" Mar 17 17:49:26.361889 kubelet[2013]: E0317 17:49:26.361665 2013 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"209.38.135.89\" not found" Mar 17 17:49:26.462712 kubelet[2013]: E0317 
17:49:26.462630 2013 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"209.38.135.89\" not found" Mar 17 17:49:26.565116 kubelet[2013]: I0317 17:49:26.564871 2013 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24" Mar 17 17:49:26.566340 containerd[1626]: time="2025-03-17T17:49:26.566194647Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Mar 17 17:49:26.567562 kubelet[2013]: I0317 17:49:26.566921 2013 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24" Mar 17 17:49:26.881372 kubelet[2013]: E0317 17:49:26.881293 2013 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:49:26.881621 kubelet[2013]: I0317 17:49:26.881412 2013 apiserver.go:52] "Watching apiserver" Mar 17 17:49:26.904956 kubelet[2013]: I0317 17:49:26.904876 2013 topology_manager.go:215] "Topology Admit Handler" podUID="32ef33a8-8cbf-4882-bf99-d638f6b1ed85" podNamespace="calico-system" podName="calico-node-6mh4d" Mar 17 17:49:26.905178 kubelet[2013]: I0317 17:49:26.905049 2013 topology_manager.go:215] "Topology Admit Handler" podUID="a99e263c-7608-426a-abb5-cac9dbd7d1b7" podNamespace="calico-system" podName="csi-node-driver-wnlj5" Mar 17 17:49:26.905178 kubelet[2013]: I0317 17:49:26.905141 2013 topology_manager.go:215] "Topology Admit Handler" podUID="3844c25e-5442-4c0d-82dd-25b295fca7c8" podNamespace="kube-system" podName="kube-proxy-nj9x2" Mar 17 17:49:26.906661 kubelet[2013]: E0317 17:49:26.906381 2013 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-wnlj5" podUID="a99e263c-7608-426a-abb5-cac9dbd7d1b7" Mar 17 17:49:27.005135 kubelet[2013]: 
I0317 17:49:27.005051 2013 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Mar 17 17:49:27.015158 kubelet[2013]: I0317 17:49:27.015072 2013 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/32ef33a8-8cbf-4882-bf99-d638f6b1ed85-cni-log-dir\") pod \"calico-node-6mh4d\" (UID: \"32ef33a8-8cbf-4882-bf99-d638f6b1ed85\") " pod="calico-system/calico-node-6mh4d" Mar 17 17:49:27.015158 kubelet[2013]: I0317 17:49:27.015129 2013 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/a99e263c-7608-426a-abb5-cac9dbd7d1b7-kubelet-dir\") pod \"csi-node-driver-wnlj5\" (UID: \"a99e263c-7608-426a-abb5-cac9dbd7d1b7\") " pod="calico-system/csi-node-driver-wnlj5" Mar 17 17:49:27.015158 kubelet[2013]: I0317 17:49:27.015149 2013 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3844c25e-5442-4c0d-82dd-25b295fca7c8-lib-modules\") pod \"kube-proxy-nj9x2\" (UID: \"3844c25e-5442-4c0d-82dd-25b295fca7c8\") " pod="kube-system/kube-proxy-nj9x2" Mar 17 17:49:27.015158 kubelet[2013]: I0317 17:49:27.015165 2013 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/32ef33a8-8cbf-4882-bf99-d638f6b1ed85-lib-modules\") pod \"calico-node-6mh4d\" (UID: \"32ef33a8-8cbf-4882-bf99-d638f6b1ed85\") " pod="calico-system/calico-node-6mh4d" Mar 17 17:49:27.015158 kubelet[2013]: I0317 17:49:27.015184 2013 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/32ef33a8-8cbf-4882-bf99-d638f6b1ed85-xtables-lock\") pod \"calico-node-6mh4d\" (UID: \"32ef33a8-8cbf-4882-bf99-d638f6b1ed85\") " 
pod="calico-system/calico-node-6mh4d" Mar 17 17:49:27.015578 kubelet[2013]: I0317 17:49:27.015206 2013 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/32ef33a8-8cbf-4882-bf99-d638f6b1ed85-var-run-calico\") pod \"calico-node-6mh4d\" (UID: \"32ef33a8-8cbf-4882-bf99-d638f6b1ed85\") " pod="calico-system/calico-node-6mh4d" Mar 17 17:49:27.015578 kubelet[2013]: I0317 17:49:27.015222 2013 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/32ef33a8-8cbf-4882-bf99-d638f6b1ed85-cni-net-dir\") pod \"calico-node-6mh4d\" (UID: \"32ef33a8-8cbf-4882-bf99-d638f6b1ed85\") " pod="calico-system/calico-node-6mh4d" Mar 17 17:49:27.015578 kubelet[2013]: I0317 17:49:27.015237 2013 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/a99e263c-7608-426a-abb5-cac9dbd7d1b7-socket-dir\") pod \"csi-node-driver-wnlj5\" (UID: \"a99e263c-7608-426a-abb5-cac9dbd7d1b7\") " pod="calico-system/csi-node-driver-wnlj5" Mar 17 17:49:27.015578 kubelet[2013]: I0317 17:49:27.015253 2013 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8mvcr\" (UniqueName: \"kubernetes.io/projected/3844c25e-5442-4c0d-82dd-25b295fca7c8-kube-api-access-8mvcr\") pod \"kube-proxy-nj9x2\" (UID: \"3844c25e-5442-4c0d-82dd-25b295fca7c8\") " pod="kube-system/kube-proxy-nj9x2" Mar 17 17:49:27.015578 kubelet[2013]: I0317 17:49:27.015271 2013 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/32ef33a8-8cbf-4882-bf99-d638f6b1ed85-policysync\") pod \"calico-node-6mh4d\" (UID: \"32ef33a8-8cbf-4882-bf99-d638f6b1ed85\") " pod="calico-system/calico-node-6mh4d" Mar 17 17:49:27.015780 
kubelet[2013]: I0317 17:49:27.015288 2013 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/32ef33a8-8cbf-4882-bf99-d638f6b1ed85-node-certs\") pod \"calico-node-6mh4d\" (UID: \"32ef33a8-8cbf-4882-bf99-d638f6b1ed85\") " pod="calico-system/calico-node-6mh4d" Mar 17 17:49:27.015780 kubelet[2013]: I0317 17:49:27.015305 2013 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/32ef33a8-8cbf-4882-bf99-d638f6b1ed85-cni-bin-dir\") pod \"calico-node-6mh4d\" (UID: \"32ef33a8-8cbf-4882-bf99-d638f6b1ed85\") " pod="calico-system/calico-node-6mh4d" Mar 17 17:49:27.015780 kubelet[2013]: I0317 17:49:27.015319 2013 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z2kw7\" (UniqueName: \"kubernetes.io/projected/32ef33a8-8cbf-4882-bf99-d638f6b1ed85-kube-api-access-z2kw7\") pod \"calico-node-6mh4d\" (UID: \"32ef33a8-8cbf-4882-bf99-d638f6b1ed85\") " pod="calico-system/calico-node-6mh4d" Mar 17 17:49:27.015780 kubelet[2013]: I0317 17:49:27.015335 2013 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/a99e263c-7608-426a-abb5-cac9dbd7d1b7-registration-dir\") pod \"csi-node-driver-wnlj5\" (UID: \"a99e263c-7608-426a-abb5-cac9dbd7d1b7\") " pod="calico-system/csi-node-driver-wnlj5" Mar 17 17:49:27.015780 kubelet[2013]: I0317 17:49:27.015351 2013 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3844c25e-5442-4c0d-82dd-25b295fca7c8-xtables-lock\") pod \"kube-proxy-nj9x2\" (UID: \"3844c25e-5442-4c0d-82dd-25b295fca7c8\") " pod="kube-system/kube-proxy-nj9x2" Mar 17 17:49:27.015904 kubelet[2013]: I0317 17:49:27.015368 2013 
reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/32ef33a8-8cbf-4882-bf99-d638f6b1ed85-tigera-ca-bundle\") pod \"calico-node-6mh4d\" (UID: \"32ef33a8-8cbf-4882-bf99-d638f6b1ed85\") " pod="calico-system/calico-node-6mh4d" Mar 17 17:49:27.015904 kubelet[2013]: I0317 17:49:27.015384 2013 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/32ef33a8-8cbf-4882-bf99-d638f6b1ed85-var-lib-calico\") pod \"calico-node-6mh4d\" (UID: \"32ef33a8-8cbf-4882-bf99-d638f6b1ed85\") " pod="calico-system/calico-node-6mh4d" Mar 17 17:49:27.015904 kubelet[2013]: I0317 17:49:27.015402 2013 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/32ef33a8-8cbf-4882-bf99-d638f6b1ed85-flexvol-driver-host\") pod \"calico-node-6mh4d\" (UID: \"32ef33a8-8cbf-4882-bf99-d638f6b1ed85\") " pod="calico-system/calico-node-6mh4d" Mar 17 17:49:27.015904 kubelet[2013]: I0317 17:49:27.015419 2013 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/a99e263c-7608-426a-abb5-cac9dbd7d1b7-varrun\") pod \"csi-node-driver-wnlj5\" (UID: \"a99e263c-7608-426a-abb5-cac9dbd7d1b7\") " pod="calico-system/csi-node-driver-wnlj5" Mar 17 17:49:27.015904 kubelet[2013]: I0317 17:49:27.015440 2013 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9hphf\" (UniqueName: \"kubernetes.io/projected/a99e263c-7608-426a-abb5-cac9dbd7d1b7-kube-api-access-9hphf\") pod \"csi-node-driver-wnlj5\" (UID: \"a99e263c-7608-426a-abb5-cac9dbd7d1b7\") " pod="calico-system/csi-node-driver-wnlj5" Mar 17 17:49:27.016019 kubelet[2013]: I0317 17:49:27.015461 2013 reconciler_common.go:247] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/3844c25e-5442-4c0d-82dd-25b295fca7c8-kube-proxy\") pod \"kube-proxy-nj9x2\" (UID: \"3844c25e-5442-4c0d-82dd-25b295fca7c8\") " pod="kube-system/kube-proxy-nj9x2" Mar 17 17:49:27.127201 kubelet[2013]: E0317 17:49:27.127036 2013 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:49:27.127201 kubelet[2013]: W0317 17:49:27.127082 2013 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:49:27.127201 kubelet[2013]: E0317 17:49:27.127124 2013 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:49:27.148212 kubelet[2013]: E0317 17:49:27.148076 2013 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:49:27.148476 kubelet[2013]: W0317 17:49:27.148113 2013 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:49:27.148476 kubelet[2013]: E0317 17:49:27.148400 2013 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 17 17:49:27.149561 kubelet[2013]: E0317 17:49:27.149535 2013 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:49:27.149561 kubelet[2013]: W0317 17:49:27.149558 2013 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:49:27.149775 kubelet[2013]: E0317 17:49:27.149583 2013 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Mar 17 17:49:27.158082 kubelet[2013]: E0317 17:49:27.158016 2013 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Mar 17 17:49:27.158082 kubelet[2013]: W0317 17:49:27.158041 2013 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Mar 17 17:49:27.158509 kubelet[2013]: E0317 17:49:27.158316 2013 plugins.go:730] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Mar 17 17:49:27.210876 kubelet[2013]: E0317 17:49:27.210708 2013 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Mar 17 17:49:27.210876 kubelet[2013]: E0317 17:49:27.210742 2013 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Mar 17 17:49:27.212060 containerd[1626]: time="2025-03-17T17:49:27.211999729Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-nj9x2,Uid:3844c25e-5442-4c0d-82dd-25b295fca7c8,Namespace:kube-system,Attempt:0,}" Mar 17 17:49:27.212580 containerd[1626]: time="2025-03-17T17:49:27.212290821Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-6mh4d,Uid:32ef33a8-8cbf-4882-bf99-d638f6b1ed85,Namespace:calico-system,Attempt:0,}" Mar 17 17:49:27.745264 containerd[1626]: time="2025-03-17T17:49:27.745138117Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 17 17:49:27.747051 containerd[1626]: time="2025-03-17T17:49:27.746980987Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312056" Mar 17 17:49:27.747762 containerd[1626]: time="2025-03-17T17:49:27.747546863Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 17 17:49:27.749000 containerd[1626]: time="2025-03-17T17:49:27.748721468Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} 
labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 17 17:49:27.749000 containerd[1626]: time="2025-03-17T17:49:27.748939536Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Mar 17 17:49:27.754057 containerd[1626]: time="2025-03-17T17:49:27.753981946Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Mar 17 17:49:27.755715 containerd[1626]: time="2025-03-17T17:49:27.755395520Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 543.252814ms" Mar 17 17:49:27.756978 containerd[1626]: time="2025-03-17T17:49:27.756898530Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 544.465483ms" Mar 17 17:49:27.881939 kubelet[2013]: E0317 17:49:27.881795 2013 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:49:27.950317 containerd[1626]: time="2025-03-17T17:49:27.945814568Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 17:49:27.950317 containerd[1626]: time="2025-03-17T17:49:27.949271728Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 17:49:27.950317 containerd[1626]: time="2025-03-17T17:49:27.949300621Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:49:27.950317 containerd[1626]: time="2025-03-17T17:49:27.949456062Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:49:27.950317 containerd[1626]: time="2025-03-17T17:49:27.949117028Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 17:49:27.950317 containerd[1626]: time="2025-03-17T17:49:27.949221414Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 17:49:27.950317 containerd[1626]: time="2025-03-17T17:49:27.949259555Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:49:27.950317 containerd[1626]: time="2025-03-17T17:49:27.949403834Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:49:28.133165 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1642859807.mount: Deactivated successfully. 
Mar 17 17:49:28.144168 containerd[1626]: time="2025-03-17T17:49:28.144026614Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-6mh4d,Uid:32ef33a8-8cbf-4882-bf99-d638f6b1ed85,Namespace:calico-system,Attempt:0,} returns sandbox id \"77a29713031006806b365cdd1379bd831c516241aa7f17e6a0d86822c80942c2\"" Mar 17 17:49:28.146726 kubelet[2013]: E0317 17:49:28.146613 2013 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Mar 17 17:49:28.150980 containerd[1626]: time="2025-03-17T17:49:28.150814094Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-nj9x2,Uid:3844c25e-5442-4c0d-82dd-25b295fca7c8,Namespace:kube-system,Attempt:0,} returns sandbox id \"14df9f0582d473e92b173ba8790a1fb10dd9d3078aea0fff243b5e420709b91d\"" Mar 17 17:49:28.154073 containerd[1626]: time="2025-03-17T17:49:28.153546671Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.2\"" Mar 17 17:49:28.154240 kubelet[2013]: E0317 17:49:28.153977 2013 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Mar 17 17:49:28.882546 kubelet[2013]: E0317 17:49:28.882479 2013 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:49:29.007853 kubelet[2013]: E0317 17:49:29.007311 2013 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-wnlj5" podUID="a99e263c-7608-426a-abb5-cac9dbd7d1b7" Mar 17 17:49:29.572547 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3074017291.mount: Deactivated successfully. 
Mar 17 17:49:29.706133 containerd[1626]: time="2025-03-17T17:49:29.705975609Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:49:29.707405 containerd[1626]: time="2025-03-17T17:49:29.707274771Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.2: active requests=0, bytes read=6857253" Mar 17 17:49:29.708534 containerd[1626]: time="2025-03-17T17:49:29.708069586Z" level=info msg="ImageCreate event name:\"sha256:441bf8ace5b7fa3742b7fafaf6cd60fea340dd307169a18c75a1d78cba3a8365\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:49:29.711596 containerd[1626]: time="2025-03-17T17:49:29.711502622Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:51d9341a4a37e278a906f40ecc73f5076e768612c21621f1b1d4f2b2f0735a1d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:49:29.712640 containerd[1626]: time="2025-03-17T17:49:29.712582677Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.2\" with image id \"sha256:441bf8ace5b7fa3742b7fafaf6cd60fea340dd307169a18c75a1d78cba3a8365\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.2\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:51d9341a4a37e278a906f40ecc73f5076e768612c21621f1b1d4f2b2f0735a1d\", size \"6857075\" in 1.558958729s" Mar 17 17:49:29.713148 containerd[1626]: time="2025-03-17T17:49:29.712910484Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.29.2\" returns image reference \"sha256:441bf8ace5b7fa3742b7fafaf6cd60fea340dd307169a18c75a1d78cba3a8365\"" Mar 17 17:49:29.716241 containerd[1626]: time="2025-03-17T17:49:29.716182699Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.11\"" Mar 17 17:49:29.719334 containerd[1626]: time="2025-03-17T17:49:29.718886647Z" level=info msg="CreateContainer within sandbox 
\"77a29713031006806b365cdd1379bd831c516241aa7f17e6a0d86822c80942c2\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Mar 17 17:49:29.738367 containerd[1626]: time="2025-03-17T17:49:29.738297592Z" level=info msg="CreateContainer within sandbox \"77a29713031006806b365cdd1379bd831c516241aa7f17e6a0d86822c80942c2\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"b5dd0b7fbfd5d7ccc33d23eca17a6654bae8e1c0585dd53c576ad0e59c67aae8\"" Mar 17 17:49:29.740722 containerd[1626]: time="2025-03-17T17:49:29.739999802Z" level=info msg="StartContainer for \"b5dd0b7fbfd5d7ccc33d23eca17a6654bae8e1c0585dd53c576ad0e59c67aae8\"" Mar 17 17:49:29.846307 containerd[1626]: time="2025-03-17T17:49:29.846120334Z" level=info msg="StartContainer for \"b5dd0b7fbfd5d7ccc33d23eca17a6654bae8e1c0585dd53c576ad0e59c67aae8\" returns successfully" Mar 17 17:49:29.883485 kubelet[2013]: E0317 17:49:29.883379 2013 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:49:29.913653 containerd[1626]: time="2025-03-17T17:49:29.913575809Z" level=info msg="shim disconnected" id=b5dd0b7fbfd5d7ccc33d23eca17a6654bae8e1c0585dd53c576ad0e59c67aae8 namespace=k8s.io Mar 17 17:49:29.913985 containerd[1626]: time="2025-03-17T17:49:29.913707709Z" level=warning msg="cleaning up after shim disconnected" id=b5dd0b7fbfd5d7ccc33d23eca17a6654bae8e1c0585dd53c576ad0e59c67aae8 namespace=k8s.io Mar 17 17:49:29.913985 containerd[1626]: time="2025-03-17T17:49:29.913723413Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 17 17:49:30.058112 kubelet[2013]: E0317 17:49:30.058053 2013 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Mar 17 17:49:30.520804 systemd[1]: run-containerd-runc-k8s.io-b5dd0b7fbfd5d7ccc33d23eca17a6654bae8e1c0585dd53c576ad0e59c67aae8-runc.59x1v8.mount: 
Deactivated successfully. Mar 17 17:49:30.521099 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b5dd0b7fbfd5d7ccc33d23eca17a6654bae8e1c0585dd53c576ad0e59c67aae8-rootfs.mount: Deactivated successfully. Mar 17 17:49:30.884634 kubelet[2013]: E0317 17:49:30.884476 2013 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:49:31.006726 kubelet[2013]: E0317 17:49:31.005654 2013 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-wnlj5" podUID="a99e263c-7608-426a-abb5-cac9dbd7d1b7" Mar 17 17:49:31.080530 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1921644106.mount: Deactivated successfully. Mar 17 17:49:31.726772 containerd[1626]: time="2025-03-17T17:49:31.725708709Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:49:31.728098 containerd[1626]: time="2025-03-17T17:49:31.728007831Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.11: active requests=0, bytes read=29185372" Mar 17 17:49:31.728948 containerd[1626]: time="2025-03-17T17:49:31.728861710Z" level=info msg="ImageCreate event name:\"sha256:01045f200a8856c3f5ccfa7be03d72274f1f16fc7a047659e709d603d5c019dc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:49:31.731955 containerd[1626]: time="2025-03-17T17:49:31.731869095Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:ea4da798040a18ed3f302e8d5f67307c7275a2a53bcf3d51bcec223acda84a55\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:49:31.733065 containerd[1626]: time="2025-03-17T17:49:31.732843521Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.11\" 
with image id \"sha256:01045f200a8856c3f5ccfa7be03d72274f1f16fc7a047659e709d603d5c019dc\", repo tag \"registry.k8s.io/kube-proxy:v1.30.11\", repo digest \"registry.k8s.io/kube-proxy@sha256:ea4da798040a18ed3f302e8d5f67307c7275a2a53bcf3d51bcec223acda84a55\", size \"29184391\" in 2.016610447s" Mar 17 17:49:31.733065 containerd[1626]: time="2025-03-17T17:49:31.732894294Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.11\" returns image reference \"sha256:01045f200a8856c3f5ccfa7be03d72274f1f16fc7a047659e709d603d5c019dc\"" Mar 17 17:49:31.737520 containerd[1626]: time="2025-03-17T17:49:31.737348010Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.2\"" Mar 17 17:49:31.740289 containerd[1626]: time="2025-03-17T17:49:31.739433702Z" level=info msg="CreateContainer within sandbox \"14df9f0582d473e92b173ba8790a1fb10dd9d3078aea0fff243b5e420709b91d\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Mar 17 17:49:31.763781 containerd[1626]: time="2025-03-17T17:49:31.763717005Z" level=info msg="CreateContainer within sandbox \"14df9f0582d473e92b173ba8790a1fb10dd9d3078aea0fff243b5e420709b91d\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"fb9b87039062e94e974b2cc2bd550629e60284de5b3c24ab6abbc94934b38bbc\"" Mar 17 17:49:31.765478 containerd[1626]: time="2025-03-17T17:49:31.764874191Z" level=info msg="StartContainer for \"fb9b87039062e94e974b2cc2bd550629e60284de5b3c24ab6abbc94934b38bbc\"" Mar 17 17:49:31.865122 containerd[1626]: time="2025-03-17T17:49:31.864982579Z" level=info msg="StartContainer for \"fb9b87039062e94e974b2cc2bd550629e60284de5b3c24ab6abbc94934b38bbc\" returns successfully" Mar 17 17:49:31.887384 kubelet[2013]: E0317 17:49:31.887181 2013 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:49:32.064127 kubelet[2013]: E0317 17:49:32.063484 2013 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some 
nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Mar 17 17:49:32.111503 kubelet[2013]: I0317 17:49:32.111104 2013 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-nj9x2" podStartSLOduration=3.531555283 podStartE2EDuration="7.111052958s" podCreationTimestamp="2025-03-17 17:49:25 +0000 UTC" firstStartedPulling="2025-03-17 17:49:28.155608921 +0000 UTC m=+3.752376390" lastFinishedPulling="2025-03-17 17:49:31.735106594 +0000 UTC m=+7.331874065" observedRunningTime="2025-03-17 17:49:32.103641533 +0000 UTC m=+7.700409025" watchObservedRunningTime="2025-03-17 17:49:32.111052958 +0000 UTC m=+7.707820446" Mar 17 17:49:32.887840 kubelet[2013]: E0317 17:49:32.887778 2013 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:49:33.006426 kubelet[2013]: E0317 17:49:33.006372 2013 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-wnlj5" podUID="a99e263c-7608-426a-abb5-cac9dbd7d1b7" Mar 17 17:49:33.066954 kubelet[2013]: E0317 17:49:33.066909 2013 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Mar 17 17:49:33.888910 kubelet[2013]: E0317 17:49:33.888803 2013 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:49:34.889201 kubelet[2013]: E0317 17:49:34.889127 2013 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:49:35.008628 kubelet[2013]: E0317 17:49:35.007652 2013 pod_workers.go:1298] "Error syncing pod, skipping" 
err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-wnlj5" podUID="a99e263c-7608-426a-abb5-cac9dbd7d1b7" Mar 17 17:49:35.434130 containerd[1626]: time="2025-03-17T17:49:35.434075049Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.29.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:49:35.435221 containerd[1626]: time="2025-03-17T17:49:35.435021313Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.29.2: active requests=0, bytes read=97781477" Mar 17 17:49:35.435793 containerd[1626]: time="2025-03-17T17:49:35.435751601Z" level=info msg="ImageCreate event name:\"sha256:cda13293c895a8a3b06c1e190b70fb6fe61036db2e59764036fc6e65ec374693\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:49:35.438566 containerd[1626]: time="2025-03-17T17:49:35.438144167Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:890e1db6ae363695cfc23ffae4d612cc85cdd99d759bd539af6683969d0c3c25\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:49:35.439040 containerd[1626]: time="2025-03-17T17:49:35.439006899Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.29.2\" with image id \"sha256:cda13293c895a8a3b06c1e190b70fb6fe61036db2e59764036fc6e65ec374693\", repo tag \"ghcr.io/flatcar/calico/cni:v3.29.2\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:890e1db6ae363695cfc23ffae4d612cc85cdd99d759bd539af6683969d0c3c25\", size \"99274581\" in 3.70160481s" Mar 17 17:49:35.439040 containerd[1626]: time="2025-03-17T17:49:35.439037575Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.29.2\" returns image reference \"sha256:cda13293c895a8a3b06c1e190b70fb6fe61036db2e59764036fc6e65ec374693\"" Mar 17 17:49:35.442036 containerd[1626]: time="2025-03-17T17:49:35.442002609Z" level=info 
msg="CreateContainer within sandbox \"77a29713031006806b365cdd1379bd831c516241aa7f17e6a0d86822c80942c2\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Mar 17 17:49:35.460497 containerd[1626]: time="2025-03-17T17:49:35.460332468Z" level=info msg="CreateContainer within sandbox \"77a29713031006806b365cdd1379bd831c516241aa7f17e6a0d86822c80942c2\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"aa94eb64df35918363e61c370b1f2ddeab91952228a024277f7e755a20e23641\"" Mar 17 17:49:35.461319 containerd[1626]: time="2025-03-17T17:49:35.461208872Z" level=info msg="StartContainer for \"aa94eb64df35918363e61c370b1f2ddeab91952228a024277f7e755a20e23641\"" Mar 17 17:49:35.505531 systemd[1]: run-containerd-runc-k8s.io-aa94eb64df35918363e61c370b1f2ddeab91952228a024277f7e755a20e23641-runc.PlR13z.mount: Deactivated successfully. Mar 17 17:49:35.547740 containerd[1626]: time="2025-03-17T17:49:35.547634239Z" level=info msg="StartContainer for \"aa94eb64df35918363e61c370b1f2ddeab91952228a024277f7e755a20e23641\" returns successfully" Mar 17 17:49:35.890533 kubelet[2013]: E0317 17:49:35.889986 2013 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:49:36.077645 kubelet[2013]: E0317 17:49:36.077582 2013 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Mar 17 17:49:36.226717 containerd[1626]: time="2025-03-17T17:49:36.226042229Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Mar 17 17:49:36.279257 containerd[1626]: time="2025-03-17T17:49:36.279180370Z" level=info msg="shim disconnected" 
id=aa94eb64df35918363e61c370b1f2ddeab91952228a024277f7e755a20e23641 namespace=k8s.io Mar 17 17:49:36.279575 containerd[1626]: time="2025-03-17T17:49:36.279317196Z" level=warning msg="cleaning up after shim disconnected" id=aa94eb64df35918363e61c370b1f2ddeab91952228a024277f7e755a20e23641 namespace=k8s.io Mar 17 17:49:36.279575 containerd[1626]: time="2025-03-17T17:49:36.279328465Z" level=info msg="cleaning up dead shim" namespace=k8s.io Mar 17 17:49:36.303694 kubelet[2013]: I0317 17:49:36.303496 2013 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Mar 17 17:49:36.451770 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-aa94eb64df35918363e61c370b1f2ddeab91952228a024277f7e755a20e23641-rootfs.mount: Deactivated successfully. Mar 17 17:49:36.890846 kubelet[2013]: E0317 17:49:36.890783 2013 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:49:37.011170 containerd[1626]: time="2025-03-17T17:49:37.010742115Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-wnlj5,Uid:a99e263c-7608-426a-abb5-cac9dbd7d1b7,Namespace:calico-system,Attempt:0,}" Mar 17 17:49:37.081958 kubelet[2013]: E0317 17:49:37.081922 2013 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Mar 17 17:49:37.085491 containerd[1626]: time="2025-03-17T17:49:37.085439898Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.2\"" Mar 17 17:49:37.087371 systemd-resolved[1481]: Using degraded feature set UDP instead of UDP+EDNS0 for DNS server 67.207.67.2. 
Mar 17 17:49:37.094996 containerd[1626]: time="2025-03-17T17:49:37.094923695Z" level=error msg="Failed to destroy network for sandbox \"5dd245f18b5f351e60acb793ac153ef24d3235d2f4be72a43681dff697183d84\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:49:37.097810 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-5dd245f18b5f351e60acb793ac153ef24d3235d2f4be72a43681dff697183d84-shm.mount: Deactivated successfully. Mar 17 17:49:37.099419 kubelet[2013]: E0317 17:49:37.098337 2013 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5dd245f18b5f351e60acb793ac153ef24d3235d2f4be72a43681dff697183d84\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:49:37.099419 kubelet[2013]: E0317 17:49:37.098418 2013 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5dd245f18b5f351e60acb793ac153ef24d3235d2f4be72a43681dff697183d84\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-wnlj5" Mar 17 17:49:37.099419 kubelet[2013]: E0317 17:49:37.098441 2013 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5dd245f18b5f351e60acb793ac153ef24d3235d2f4be72a43681dff697183d84\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-system/csi-node-driver-wnlj5" Mar 17 17:49:37.099628 containerd[1626]: time="2025-03-17T17:49:37.097975078Z" level=error msg="encountered an error cleaning up failed sandbox \"5dd245f18b5f351e60acb793ac153ef24d3235d2f4be72a43681dff697183d84\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:49:37.099628 containerd[1626]: time="2025-03-17T17:49:37.098065364Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-wnlj5,Uid:a99e263c-7608-426a-abb5-cac9dbd7d1b7,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"5dd245f18b5f351e60acb793ac153ef24d3235d2f4be72a43681dff697183d84\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:49:37.100182 kubelet[2013]: E0317 17:49:37.098497 2013 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-wnlj5_calico-system(a99e263c-7608-426a-abb5-cac9dbd7d1b7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-wnlj5_calico-system(a99e263c-7608-426a-abb5-cac9dbd7d1b7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5dd245f18b5f351e60acb793ac153ef24d3235d2f4be72a43681dff697183d84\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-wnlj5" podUID="a99e263c-7608-426a-abb5-cac9dbd7d1b7" Mar 17 17:49:37.891449 kubelet[2013]: E0317 17:49:37.891379 2013 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" 
path="/etc/kubernetes/manifests" Mar 17 17:49:38.085012 kubelet[2013]: I0317 17:49:38.084948 2013 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5dd245f18b5f351e60acb793ac153ef24d3235d2f4be72a43681dff697183d84" Mar 17 17:49:38.088707 containerd[1626]: time="2025-03-17T17:49:38.086067377Z" level=info msg="StopPodSandbox for \"5dd245f18b5f351e60acb793ac153ef24d3235d2f4be72a43681dff697183d84\"" Mar 17 17:49:38.088707 containerd[1626]: time="2025-03-17T17:49:38.086377933Z" level=info msg="Ensure that sandbox 5dd245f18b5f351e60acb793ac153ef24d3235d2f4be72a43681dff697183d84 in task-service has been cleanup successfully" Mar 17 17:49:38.088759 systemd[1]: run-netns-cni\x2df8e4f063\x2d9537\x2daa8f\x2d09f8\x2db4ddff983d1c.mount: Deactivated successfully. Mar 17 17:49:38.090215 containerd[1626]: time="2025-03-17T17:49:38.089633074Z" level=info msg="TearDown network for sandbox \"5dd245f18b5f351e60acb793ac153ef24d3235d2f4be72a43681dff697183d84\" successfully" Mar 17 17:49:38.090215 containerd[1626]: time="2025-03-17T17:49:38.090181484Z" level=info msg="StopPodSandbox for \"5dd245f18b5f351e60acb793ac153ef24d3235d2f4be72a43681dff697183d84\" returns successfully" Mar 17 17:49:38.091288 containerd[1626]: time="2025-03-17T17:49:38.090902970Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-wnlj5,Uid:a99e263c-7608-426a-abb5-cac9dbd7d1b7,Namespace:calico-system,Attempt:1,}" Mar 17 17:49:38.178784 containerd[1626]: time="2025-03-17T17:49:38.178612141Z" level=error msg="Failed to destroy network for sandbox \"f8c63b85f24831b6f0677980a6030ec68aa200b8e64fc798d2aeab13aaa84905\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:49:38.179493 containerd[1626]: time="2025-03-17T17:49:38.179388414Z" level=error msg="encountered an error cleaning up failed sandbox 
\"f8c63b85f24831b6f0677980a6030ec68aa200b8e64fc798d2aeab13aaa84905\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:49:38.179831 containerd[1626]: time="2025-03-17T17:49:38.179656817Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-wnlj5,Uid:a99e263c-7608-426a-abb5-cac9dbd7d1b7,Namespace:calico-system,Attempt:1,} failed, error" error="failed to setup network for sandbox \"f8c63b85f24831b6f0677980a6030ec68aa200b8e64fc798d2aeab13aaa84905\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:49:38.180476 kubelet[2013]: E0317 17:49:38.180104 2013 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f8c63b85f24831b6f0677980a6030ec68aa200b8e64fc798d2aeab13aaa84905\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:49:38.180476 kubelet[2013]: E0317 17:49:38.180169 2013 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f8c63b85f24831b6f0677980a6030ec68aa200b8e64fc798d2aeab13aaa84905\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-wnlj5" Mar 17 17:49:38.180476 kubelet[2013]: E0317 17:49:38.180198 2013 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"f8c63b85f24831b6f0677980a6030ec68aa200b8e64fc798d2aeab13aaa84905\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-wnlj5" Mar 17 17:49:38.180855 kubelet[2013]: E0317 17:49:38.180242 2013 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-wnlj5_calico-system(a99e263c-7608-426a-abb5-cac9dbd7d1b7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-wnlj5_calico-system(a99e263c-7608-426a-abb5-cac9dbd7d1b7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f8c63b85f24831b6f0677980a6030ec68aa200b8e64fc798d2aeab13aaa84905\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-wnlj5" podUID="a99e263c-7608-426a-abb5-cac9dbd7d1b7" Mar 17 17:49:38.182448 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-f8c63b85f24831b6f0677980a6030ec68aa200b8e64fc798d2aeab13aaa84905-shm.mount: Deactivated successfully. 
Mar 17 17:49:38.892334 kubelet[2013]: E0317 17:49:38.892230 2013 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:49:39.081706 kubelet[2013]: I0317 17:49:39.079280 2013 topology_manager.go:215] "Topology Admit Handler" podUID="70c97941-fbff-42a7-bee6-390922be5bb6" podNamespace="default" podName="nginx-deployment-85f456d6dd-brdvj" Mar 17 17:49:39.089150 kubelet[2013]: I0317 17:49:39.089109 2013 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f8c63b85f24831b6f0677980a6030ec68aa200b8e64fc798d2aeab13aaa84905" Mar 17 17:49:39.091732 containerd[1626]: time="2025-03-17T17:49:39.089740949Z" level=info msg="StopPodSandbox for \"f8c63b85f24831b6f0677980a6030ec68aa200b8e64fc798d2aeab13aaa84905\"" Mar 17 17:49:39.091732 containerd[1626]: time="2025-03-17T17:49:39.089953085Z" level=info msg="Ensure that sandbox f8c63b85f24831b6f0677980a6030ec68aa200b8e64fc798d2aeab13aaa84905 in task-service has been cleanup successfully" Mar 17 17:49:39.093366 containerd[1626]: time="2025-03-17T17:49:39.092302642Z" level=info msg="TearDown network for sandbox \"f8c63b85f24831b6f0677980a6030ec68aa200b8e64fc798d2aeab13aaa84905\" successfully" Mar 17 17:49:39.093366 containerd[1626]: time="2025-03-17T17:49:39.092334097Z" level=info msg="StopPodSandbox for \"f8c63b85f24831b6f0677980a6030ec68aa200b8e64fc798d2aeab13aaa84905\" returns successfully" Mar 17 17:49:39.094036 systemd[1]: run-netns-cni\x2d5dbbca12\x2d1a4b\x2df848\x2dfa13\x2dbe9bbe9eb3d2.mount: Deactivated successfully. 
Mar 17 17:49:39.095709 containerd[1626]: time="2025-03-17T17:49:39.095141840Z" level=info msg="StopPodSandbox for \"5dd245f18b5f351e60acb793ac153ef24d3235d2f4be72a43681dff697183d84\"" Mar 17 17:49:39.095709 containerd[1626]: time="2025-03-17T17:49:39.095257999Z" level=info msg="TearDown network for sandbox \"5dd245f18b5f351e60acb793ac153ef24d3235d2f4be72a43681dff697183d84\" successfully" Mar 17 17:49:39.095709 containerd[1626]: time="2025-03-17T17:49:39.095269850Z" level=info msg="StopPodSandbox for \"5dd245f18b5f351e60acb793ac153ef24d3235d2f4be72a43681dff697183d84\" returns successfully" Mar 17 17:49:39.096772 containerd[1626]: time="2025-03-17T17:49:39.096566946Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-wnlj5,Uid:a99e263c-7608-426a-abb5-cac9dbd7d1b7,Namespace:calico-system,Attempt:2,}" Mar 17 17:49:39.138112 kubelet[2013]: I0317 17:49:39.137915 2013 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mwqln\" (UniqueName: \"kubernetes.io/projected/70c97941-fbff-42a7-bee6-390922be5bb6-kube-api-access-mwqln\") pod \"nginx-deployment-85f456d6dd-brdvj\" (UID: \"70c97941-fbff-42a7-bee6-390922be5bb6\") " pod="default/nginx-deployment-85f456d6dd-brdvj" Mar 17 17:49:39.225414 containerd[1626]: time="2025-03-17T17:49:39.225243034Z" level=error msg="Failed to destroy network for sandbox \"a2d9cd4d49c2a2a4d1b3a0ea1b8cedd69ebb23f29174a4221881e009a2eb9a3d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:49:39.230969 containerd[1626]: time="2025-03-17T17:49:39.230083544Z" level=error msg="encountered an error cleaning up failed sandbox \"a2d9cd4d49c2a2a4d1b3a0ea1b8cedd69ebb23f29174a4221881e009a2eb9a3d\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file 
or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:49:39.230969 containerd[1626]: time="2025-03-17T17:49:39.230178522Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-wnlj5,Uid:a99e263c-7608-426a-abb5-cac9dbd7d1b7,Namespace:calico-system,Attempt:2,} failed, error" error="failed to setup network for sandbox \"a2d9cd4d49c2a2a4d1b3a0ea1b8cedd69ebb23f29174a4221881e009a2eb9a3d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:49:39.230565 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-a2d9cd4d49c2a2a4d1b3a0ea1b8cedd69ebb23f29174a4221881e009a2eb9a3d-shm.mount: Deactivated successfully. Mar 17 17:49:39.231592 kubelet[2013]: E0317 17:49:39.230436 2013 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a2d9cd4d49c2a2a4d1b3a0ea1b8cedd69ebb23f29174a4221881e009a2eb9a3d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:49:39.231592 kubelet[2013]: E0317 17:49:39.230505 2013 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a2d9cd4d49c2a2a4d1b3a0ea1b8cedd69ebb23f29174a4221881e009a2eb9a3d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-wnlj5" Mar 17 17:49:39.231592 kubelet[2013]: E0317 17:49:39.230529 2013 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"a2d9cd4d49c2a2a4d1b3a0ea1b8cedd69ebb23f29174a4221881e009a2eb9a3d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-wnlj5" Mar 17 17:49:39.231776 kubelet[2013]: E0317 17:49:39.230575 2013 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-wnlj5_calico-system(a99e263c-7608-426a-abb5-cac9dbd7d1b7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-wnlj5_calico-system(a99e263c-7608-426a-abb5-cac9dbd7d1b7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a2d9cd4d49c2a2a4d1b3a0ea1b8cedd69ebb23f29174a4221881e009a2eb9a3d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-wnlj5" podUID="a99e263c-7608-426a-abb5-cac9dbd7d1b7" Mar 17 17:49:39.389554 containerd[1626]: time="2025-03-17T17:49:39.388732175Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-brdvj,Uid:70c97941-fbff-42a7-bee6-390922be5bb6,Namespace:default,Attempt:0,}" Mar 17 17:49:39.507709 containerd[1626]: time="2025-03-17T17:49:39.506609146Z" level=error msg="Failed to destroy network for sandbox \"9c01448a5fbd1a22d5d85df914a2e7839b6b1affd93d9c8cf96d76139488f6cd\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:49:39.507709 containerd[1626]: time="2025-03-17T17:49:39.507023053Z" level=error msg="encountered an error cleaning up failed sandbox \"9c01448a5fbd1a22d5d85df914a2e7839b6b1affd93d9c8cf96d76139488f6cd\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin 
type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:49:39.507709 containerd[1626]: time="2025-03-17T17:49:39.507087911Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-brdvj,Uid:70c97941-fbff-42a7-bee6-390922be5bb6,Namespace:default,Attempt:0,} failed, error" error="failed to setup network for sandbox \"9c01448a5fbd1a22d5d85df914a2e7839b6b1affd93d9c8cf96d76139488f6cd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:49:39.508104 kubelet[2013]: E0317 17:49:39.507317 2013 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9c01448a5fbd1a22d5d85df914a2e7839b6b1affd93d9c8cf96d76139488f6cd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:49:39.508104 kubelet[2013]: E0317 17:49:39.507380 2013 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9c01448a5fbd1a22d5d85df914a2e7839b6b1affd93d9c8cf96d76139488f6cd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-85f456d6dd-brdvj" Mar 17 17:49:39.508104 kubelet[2013]: E0317 17:49:39.507411 2013 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9c01448a5fbd1a22d5d85df914a2e7839b6b1affd93d9c8cf96d76139488f6cd\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-85f456d6dd-brdvj" Mar 17 17:49:39.508270 kubelet[2013]: E0317 17:49:39.507453 2013 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-85f456d6dd-brdvj_default(70c97941-fbff-42a7-bee6-390922be5bb6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-85f456d6dd-brdvj_default(70c97941-fbff-42a7-bee6-390922be5bb6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9c01448a5fbd1a22d5d85df914a2e7839b6b1affd93d9c8cf96d76139488f6cd\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-85f456d6dd-brdvj" podUID="70c97941-fbff-42a7-bee6-390922be5bb6" Mar 17 17:49:39.893866 kubelet[2013]: E0317 17:49:39.893114 2013 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:49:40.113087 kubelet[2013]: I0317 17:49:40.112780 2013 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a2d9cd4d49c2a2a4d1b3a0ea1b8cedd69ebb23f29174a4221881e009a2eb9a3d" Mar 17 17:49:40.114443 containerd[1626]: time="2025-03-17T17:49:40.113961526Z" level=info msg="StopPodSandbox for \"a2d9cd4d49c2a2a4d1b3a0ea1b8cedd69ebb23f29174a4221881e009a2eb9a3d\"" Mar 17 17:49:40.114443 containerd[1626]: time="2025-03-17T17:49:40.114252312Z" level=info msg="Ensure that sandbox a2d9cd4d49c2a2a4d1b3a0ea1b8cedd69ebb23f29174a4221881e009a2eb9a3d in task-service has been cleanup successfully" Mar 17 17:49:40.119858 containerd[1626]: time="2025-03-17T17:49:40.117769483Z" level=info msg="TearDown network for sandbox \"a2d9cd4d49c2a2a4d1b3a0ea1b8cedd69ebb23f29174a4221881e009a2eb9a3d\" 
successfully" Mar 17 17:49:40.119858 containerd[1626]: time="2025-03-17T17:49:40.117816933Z" level=info msg="StopPodSandbox for \"a2d9cd4d49c2a2a4d1b3a0ea1b8cedd69ebb23f29174a4221881e009a2eb9a3d\" returns successfully" Mar 17 17:49:40.119502 systemd[1]: run-netns-cni\x2d1fce7e33\x2dbd86\x2d9bb2\x2dc1c0\x2d8fd8d0ead11d.mount: Deactivated successfully. Mar 17 17:49:40.120364 containerd[1626]: time="2025-03-17T17:49:40.120276395Z" level=info msg="StopPodSandbox for \"f8c63b85f24831b6f0677980a6030ec68aa200b8e64fc798d2aeab13aaa84905\"" Mar 17 17:49:40.120445 containerd[1626]: time="2025-03-17T17:49:40.120409581Z" level=info msg="TearDown network for sandbox \"f8c63b85f24831b6f0677980a6030ec68aa200b8e64fc798d2aeab13aaa84905\" successfully" Mar 17 17:49:40.120445 containerd[1626]: time="2025-03-17T17:49:40.120426012Z" level=info msg="StopPodSandbox for \"f8c63b85f24831b6f0677980a6030ec68aa200b8e64fc798d2aeab13aaa84905\" returns successfully" Mar 17 17:49:40.123464 containerd[1626]: time="2025-03-17T17:49:40.123413159Z" level=info msg="StopPodSandbox for \"5dd245f18b5f351e60acb793ac153ef24d3235d2f4be72a43681dff697183d84\"" Mar 17 17:49:40.123591 containerd[1626]: time="2025-03-17T17:49:40.123567268Z" level=info msg="TearDown network for sandbox \"5dd245f18b5f351e60acb793ac153ef24d3235d2f4be72a43681dff697183d84\" successfully" Mar 17 17:49:40.123591 containerd[1626]: time="2025-03-17T17:49:40.123584199Z" level=info msg="StopPodSandbox for \"5dd245f18b5f351e60acb793ac153ef24d3235d2f4be72a43681dff697183d84\" returns successfully" Mar 17 17:49:40.124325 kubelet[2013]: I0317 17:49:40.124290 2013 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9c01448a5fbd1a22d5d85df914a2e7839b6b1affd93d9c8cf96d76139488f6cd" Mar 17 17:49:40.125762 containerd[1626]: time="2025-03-17T17:49:40.125350644Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-wnlj5,Uid:a99e263c-7608-426a-abb5-cac9dbd7d1b7,Namespace:calico-system,Attempt:3,}" 
Mar 17 17:49:40.126535 containerd[1626]: time="2025-03-17T17:49:40.126496263Z" level=info msg="StopPodSandbox for \"9c01448a5fbd1a22d5d85df914a2e7839b6b1affd93d9c8cf96d76139488f6cd\"" Mar 17 17:49:40.127562 containerd[1626]: time="2025-03-17T17:49:40.127517869Z" level=info msg="Ensure that sandbox 9c01448a5fbd1a22d5d85df914a2e7839b6b1affd93d9c8cf96d76139488f6cd in task-service has been cleanup successfully" Mar 17 17:49:40.129853 containerd[1626]: time="2025-03-17T17:49:40.129802910Z" level=info msg="TearDown network for sandbox \"9c01448a5fbd1a22d5d85df914a2e7839b6b1affd93d9c8cf96d76139488f6cd\" successfully" Mar 17 17:49:40.130037 containerd[1626]: time="2025-03-17T17:49:40.130008305Z" level=info msg="StopPodSandbox for \"9c01448a5fbd1a22d5d85df914a2e7839b6b1affd93d9c8cf96d76139488f6cd\" returns successfully" Mar 17 17:49:40.133058 systemd[1]: run-netns-cni\x2d719efdae\x2ddbb2\x2df93f\x2d7b70\x2db22915131196.mount: Deactivated successfully. Mar 17 17:49:40.135517 containerd[1626]: time="2025-03-17T17:49:40.135470824Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-brdvj,Uid:70c97941-fbff-42a7-bee6-390922be5bb6,Namespace:default,Attempt:1,}" Mar 17 17:49:40.135999 systemd-resolved[1481]: Using degraded feature set UDP instead of UDP+EDNS0 for DNS server 67.207.67.3. 
Mar 17 17:49:40.304734 containerd[1626]: time="2025-03-17T17:49:40.303584326Z" level=error msg="Failed to destroy network for sandbox \"94e7fb3221b6dafbbd608123c924dd4d6427295dc1e07e8797415046a4047104\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:49:40.305184 containerd[1626]: time="2025-03-17T17:49:40.305120622Z" level=error msg="encountered an error cleaning up failed sandbox \"94e7fb3221b6dafbbd608123c924dd4d6427295dc1e07e8797415046a4047104\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:49:40.305383 containerd[1626]: time="2025-03-17T17:49:40.305343836Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-wnlj5,Uid:a99e263c-7608-426a-abb5-cac9dbd7d1b7,Namespace:calico-system,Attempt:3,} failed, error" error="failed to setup network for sandbox \"94e7fb3221b6dafbbd608123c924dd4d6427295dc1e07e8797415046a4047104\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:49:40.305947 kubelet[2013]: E0317 17:49:40.305889 2013 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"94e7fb3221b6dafbbd608123c924dd4d6427295dc1e07e8797415046a4047104\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:49:40.306077 kubelet[2013]: E0317 17:49:40.305953 2013 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = 
failed to setup network for sandbox \"94e7fb3221b6dafbbd608123c924dd4d6427295dc1e07e8797415046a4047104\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-wnlj5" Mar 17 17:49:40.306077 kubelet[2013]: E0317 17:49:40.305994 2013 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"94e7fb3221b6dafbbd608123c924dd4d6427295dc1e07e8797415046a4047104\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-wnlj5" Mar 17 17:49:40.306077 kubelet[2013]: E0317 17:49:40.306039 2013 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-wnlj5_calico-system(a99e263c-7608-426a-abb5-cac9dbd7d1b7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-wnlj5_calico-system(a99e263c-7608-426a-abb5-cac9dbd7d1b7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"94e7fb3221b6dafbbd608123c924dd4d6427295dc1e07e8797415046a4047104\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-wnlj5" podUID="a99e263c-7608-426a-abb5-cac9dbd7d1b7" Mar 17 17:49:40.348905 containerd[1626]: time="2025-03-17T17:49:40.348841199Z" level=error msg="Failed to destroy network for sandbox \"bb851b0303b62547e8f3152dae51034b42c99858145bc0886ee21df01814ef2d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" Mar 17 17:49:40.349521 containerd[1626]: time="2025-03-17T17:49:40.349467880Z" level=error msg="encountered an error cleaning up failed sandbox \"bb851b0303b62547e8f3152dae51034b42c99858145bc0886ee21df01814ef2d\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:49:40.349777 containerd[1626]: time="2025-03-17T17:49:40.349740284Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-brdvj,Uid:70c97941-fbff-42a7-bee6-390922be5bb6,Namespace:default,Attempt:1,} failed, error" error="failed to setup network for sandbox \"bb851b0303b62547e8f3152dae51034b42c99858145bc0886ee21df01814ef2d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:49:40.350391 kubelet[2013]: E0317 17:49:40.350335 2013 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bb851b0303b62547e8f3152dae51034b42c99858145bc0886ee21df01814ef2d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:49:40.350492 kubelet[2013]: E0317 17:49:40.350413 2013 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bb851b0303b62547e8f3152dae51034b42c99858145bc0886ee21df01814ef2d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-85f456d6dd-brdvj" Mar 17 17:49:40.350492 kubelet[2013]: 
E0317 17:49:40.350435 2013 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bb851b0303b62547e8f3152dae51034b42c99858145bc0886ee21df01814ef2d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-85f456d6dd-brdvj" Mar 17 17:49:40.351197 kubelet[2013]: E0317 17:49:40.351002 2013 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-85f456d6dd-brdvj_default(70c97941-fbff-42a7-bee6-390922be5bb6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-85f456d6dd-brdvj_default(70c97941-fbff-42a7-bee6-390922be5bb6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"bb851b0303b62547e8f3152dae51034b42c99858145bc0886ee21df01814ef2d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-85f456d6dd-brdvj" podUID="70c97941-fbff-42a7-bee6-390922be5bb6" Mar 17 17:49:40.894162 kubelet[2013]: E0317 17:49:40.893916 2013 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:49:41.095945 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-94e7fb3221b6dafbbd608123c924dd4d6427295dc1e07e8797415046a4047104-shm.mount: Deactivated successfully. 
Mar 17 17:49:41.129747 kubelet[2013]: I0317 17:49:41.129460 2013 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="94e7fb3221b6dafbbd608123c924dd4d6427295dc1e07e8797415046a4047104" Mar 17 17:49:41.131056 containerd[1626]: time="2025-03-17T17:49:41.130976721Z" level=info msg="StopPodSandbox for \"94e7fb3221b6dafbbd608123c924dd4d6427295dc1e07e8797415046a4047104\"" Mar 17 17:49:41.131895 containerd[1626]: time="2025-03-17T17:49:41.131455241Z" level=info msg="Ensure that sandbox 94e7fb3221b6dafbbd608123c924dd4d6427295dc1e07e8797415046a4047104 in task-service has been cleanup successfully" Mar 17 17:49:41.134546 containerd[1626]: time="2025-03-17T17:49:41.132054041Z" level=info msg="TearDown network for sandbox \"94e7fb3221b6dafbbd608123c924dd4d6427295dc1e07e8797415046a4047104\" successfully" Mar 17 17:49:41.134546 containerd[1626]: time="2025-03-17T17:49:41.133806748Z" level=info msg="StopPodSandbox for \"94e7fb3221b6dafbbd608123c924dd4d6427295dc1e07e8797415046a4047104\" returns successfully" Mar 17 17:49:41.134546 containerd[1626]: time="2025-03-17T17:49:41.134323845Z" level=info msg="StopPodSandbox for \"a2d9cd4d49c2a2a4d1b3a0ea1b8cedd69ebb23f29174a4221881e009a2eb9a3d\"" Mar 17 17:49:41.136115 containerd[1626]: time="2025-03-17T17:49:41.135787559Z" level=info msg="TearDown network for sandbox \"a2d9cd4d49c2a2a4d1b3a0ea1b8cedd69ebb23f29174a4221881e009a2eb9a3d\" successfully" Mar 17 17:49:41.136115 containerd[1626]: time="2025-03-17T17:49:41.135838353Z" level=info msg="StopPodSandbox for \"a2d9cd4d49c2a2a4d1b3a0ea1b8cedd69ebb23f29174a4221881e009a2eb9a3d\" returns successfully" Mar 17 17:49:41.137648 systemd[1]: run-netns-cni\x2dea880ab3\x2dcf2c\x2d64f2\x2dac89\x2d724c5e4aa337.mount: Deactivated successfully. 
Mar 17 17:49:41.138839 containerd[1626]: time="2025-03-17T17:49:41.138067051Z" level=info msg="StopPodSandbox for \"f8c63b85f24831b6f0677980a6030ec68aa200b8e64fc798d2aeab13aaa84905\"" Mar 17 17:49:41.138839 containerd[1626]: time="2025-03-17T17:49:41.138189816Z" level=info msg="TearDown network for sandbox \"f8c63b85f24831b6f0677980a6030ec68aa200b8e64fc798d2aeab13aaa84905\" successfully" Mar 17 17:49:41.138839 containerd[1626]: time="2025-03-17T17:49:41.138201139Z" level=info msg="StopPodSandbox for \"f8c63b85f24831b6f0677980a6030ec68aa200b8e64fc798d2aeab13aaa84905\" returns successfully" Mar 17 17:49:41.139506 containerd[1626]: time="2025-03-17T17:49:41.139427827Z" level=info msg="StopPodSandbox for \"5dd245f18b5f351e60acb793ac153ef24d3235d2f4be72a43681dff697183d84\"" Mar 17 17:49:41.140471 kubelet[2013]: I0317 17:49:41.140307 2013 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bb851b0303b62547e8f3152dae51034b42c99858145bc0886ee21df01814ef2d" Mar 17 17:49:41.140763 containerd[1626]: time="2025-03-17T17:49:41.140292574Z" level=info msg="TearDown network for sandbox \"5dd245f18b5f351e60acb793ac153ef24d3235d2f4be72a43681dff697183d84\" successfully" Mar 17 17:49:41.140763 containerd[1626]: time="2025-03-17T17:49:41.140341846Z" level=info msg="StopPodSandbox for \"5dd245f18b5f351e60acb793ac153ef24d3235d2f4be72a43681dff697183d84\" returns successfully" Mar 17 17:49:41.141957 containerd[1626]: time="2025-03-17T17:49:41.141922901Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-wnlj5,Uid:a99e263c-7608-426a-abb5-cac9dbd7d1b7,Namespace:calico-system,Attempt:4,}" Mar 17 17:49:41.143054 containerd[1626]: time="2025-03-17T17:49:41.142482313Z" level=info msg="StopPodSandbox for \"bb851b0303b62547e8f3152dae51034b42c99858145bc0886ee21df01814ef2d\"" Mar 17 17:49:41.143054 containerd[1626]: time="2025-03-17T17:49:41.142829809Z" level=info msg="Ensure that sandbox 
bb851b0303b62547e8f3152dae51034b42c99858145bc0886ee21df01814ef2d in task-service has been cleanup successfully" Mar 17 17:49:41.143290 containerd[1626]: time="2025-03-17T17:49:41.143257807Z" level=info msg="TearDown network for sandbox \"bb851b0303b62547e8f3152dae51034b42c99858145bc0886ee21df01814ef2d\" successfully" Mar 17 17:49:41.143419 containerd[1626]: time="2025-03-17T17:49:41.143400917Z" level=info msg="StopPodSandbox for \"bb851b0303b62547e8f3152dae51034b42c99858145bc0886ee21df01814ef2d\" returns successfully" Mar 17 17:49:41.146660 systemd[1]: run-netns-cni\x2d17daa698\x2dadcf\x2d550f\x2d3eac\x2d3d4d987ce0e3.mount: Deactivated successfully. Mar 17 17:49:41.150097 containerd[1626]: time="2025-03-17T17:49:41.149936268Z" level=info msg="StopPodSandbox for \"9c01448a5fbd1a22d5d85df914a2e7839b6b1affd93d9c8cf96d76139488f6cd\"" Mar 17 17:49:41.150700 containerd[1626]: time="2025-03-17T17:49:41.150550525Z" level=info msg="TearDown network for sandbox \"9c01448a5fbd1a22d5d85df914a2e7839b6b1affd93d9c8cf96d76139488f6cd\" successfully" Mar 17 17:49:41.150700 containerd[1626]: time="2025-03-17T17:49:41.150586377Z" level=info msg="StopPodSandbox for \"9c01448a5fbd1a22d5d85df914a2e7839b6b1affd93d9c8cf96d76139488f6cd\" returns successfully" Mar 17 17:49:41.153078 containerd[1626]: time="2025-03-17T17:49:41.152985913Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-brdvj,Uid:70c97941-fbff-42a7-bee6-390922be5bb6,Namespace:default,Attempt:2,}" Mar 17 17:49:41.362315 containerd[1626]: time="2025-03-17T17:49:41.362136573Z" level=error msg="Failed to destroy network for sandbox \"21d9838daf93b227721d0ea4f7ac0ab6df5fa12fc54ddb1601eba3b0a7ffd08e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:49:41.363269 containerd[1626]: time="2025-03-17T17:49:41.362995658Z" level=error msg="encountered an 
error cleaning up failed sandbox \"21d9838daf93b227721d0ea4f7ac0ab6df5fa12fc54ddb1601eba3b0a7ffd08e\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:49:41.363269 containerd[1626]: time="2025-03-17T17:49:41.363099483Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-wnlj5,Uid:a99e263c-7608-426a-abb5-cac9dbd7d1b7,Namespace:calico-system,Attempt:4,} failed, error" error="failed to setup network for sandbox \"21d9838daf93b227721d0ea4f7ac0ab6df5fa12fc54ddb1601eba3b0a7ffd08e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:49:41.363479 kubelet[2013]: E0317 17:49:41.363418 2013 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"21d9838daf93b227721d0ea4f7ac0ab6df5fa12fc54ddb1601eba3b0a7ffd08e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:49:41.363569 kubelet[2013]: E0317 17:49:41.363488 2013 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"21d9838daf93b227721d0ea4f7ac0ab6df5fa12fc54ddb1601eba3b0a7ffd08e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-wnlj5" Mar 17 17:49:41.363569 kubelet[2013]: E0317 17:49:41.363512 2013 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup 
network for sandbox \"21d9838daf93b227721d0ea4f7ac0ab6df5fa12fc54ddb1601eba3b0a7ffd08e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-wnlj5" Mar 17 17:49:41.363569 kubelet[2013]: E0317 17:49:41.363553 2013 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-wnlj5_calico-system(a99e263c-7608-426a-abb5-cac9dbd7d1b7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-wnlj5_calico-system(a99e263c-7608-426a-abb5-cac9dbd7d1b7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"21d9838daf93b227721d0ea4f7ac0ab6df5fa12fc54ddb1601eba3b0a7ffd08e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-wnlj5" podUID="a99e263c-7608-426a-abb5-cac9dbd7d1b7" Mar 17 17:49:41.393226 containerd[1626]: time="2025-03-17T17:49:41.393152026Z" level=error msg="Failed to destroy network for sandbox \"3320ddeea553847468a17fd5eced217ae8606c21980cc2896458ebba8db8c9a1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:49:41.393960 containerd[1626]: time="2025-03-17T17:49:41.393914446Z" level=error msg="encountered an error cleaning up failed sandbox \"3320ddeea553847468a17fd5eced217ae8606c21980cc2896458ebba8db8c9a1\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:49:41.394042 containerd[1626]: 
time="2025-03-17T17:49:41.394013981Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-brdvj,Uid:70c97941-fbff-42a7-bee6-390922be5bb6,Namespace:default,Attempt:2,} failed, error" error="failed to setup network for sandbox \"3320ddeea553847468a17fd5eced217ae8606c21980cc2896458ebba8db8c9a1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:49:41.394360 kubelet[2013]: E0317 17:49:41.394314 2013 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3320ddeea553847468a17fd5eced217ae8606c21980cc2896458ebba8db8c9a1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:49:41.394526 kubelet[2013]: E0317 17:49:41.394416 2013 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3320ddeea553847468a17fd5eced217ae8606c21980cc2896458ebba8db8c9a1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-85f456d6dd-brdvj" Mar 17 17:49:41.394526 kubelet[2013]: E0317 17:49:41.394447 2013 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3320ddeea553847468a17fd5eced217ae8606c21980cc2896458ebba8db8c9a1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-85f456d6dd-brdvj" Mar 17 17:49:41.394588 kubelet[2013]: E0317 
17:49:41.394509 2013 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-85f456d6dd-brdvj_default(70c97941-fbff-42a7-bee6-390922be5bb6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-85f456d6dd-brdvj_default(70c97941-fbff-42a7-bee6-390922be5bb6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3320ddeea553847468a17fd5eced217ae8606c21980cc2896458ebba8db8c9a1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-85f456d6dd-brdvj" podUID="70c97941-fbff-42a7-bee6-390922be5bb6" Mar 17 17:49:41.897504 kubelet[2013]: E0317 17:49:41.897277 2013 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:49:42.094447 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-3320ddeea553847468a17fd5eced217ae8606c21980cc2896458ebba8db8c9a1-shm.mount: Deactivated successfully. Mar 17 17:49:42.096015 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-21d9838daf93b227721d0ea4f7ac0ab6df5fa12fc54ddb1601eba3b0a7ffd08e-shm.mount: Deactivated successfully. 
Mar 17 17:49:42.152007 kubelet[2013]: I0317 17:49:42.151873 2013 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="21d9838daf93b227721d0ea4f7ac0ab6df5fa12fc54ddb1601eba3b0a7ffd08e" Mar 17 17:49:42.153074 containerd[1626]: time="2025-03-17T17:49:42.153033599Z" level=info msg="StopPodSandbox for \"21d9838daf93b227721d0ea4f7ac0ab6df5fa12fc54ddb1601eba3b0a7ffd08e\"" Mar 17 17:49:42.154949 containerd[1626]: time="2025-03-17T17:49:42.154662685Z" level=info msg="Ensure that sandbox 21d9838daf93b227721d0ea4f7ac0ab6df5fa12fc54ddb1601eba3b0a7ffd08e in task-service has been cleanup successfully" Mar 17 17:49:42.158142 containerd[1626]: time="2025-03-17T17:49:42.155383655Z" level=info msg="TearDown network for sandbox \"21d9838daf93b227721d0ea4f7ac0ab6df5fa12fc54ddb1601eba3b0a7ffd08e\" successfully" Mar 17 17:49:42.158142 containerd[1626]: time="2025-03-17T17:49:42.155419804Z" level=info msg="StopPodSandbox for \"21d9838daf93b227721d0ea4f7ac0ab6df5fa12fc54ddb1601eba3b0a7ffd08e\" returns successfully" Mar 17 17:49:42.158602 containerd[1626]: time="2025-03-17T17:49:42.158437655Z" level=info msg="StopPodSandbox for \"94e7fb3221b6dafbbd608123c924dd4d6427295dc1e07e8797415046a4047104\"" Mar 17 17:49:42.158602 containerd[1626]: time="2025-03-17T17:49:42.158547650Z" level=info msg="TearDown network for sandbox \"94e7fb3221b6dafbbd608123c924dd4d6427295dc1e07e8797415046a4047104\" successfully" Mar 17 17:49:42.158602 containerd[1626]: time="2025-03-17T17:49:42.158563319Z" level=info msg="StopPodSandbox for \"94e7fb3221b6dafbbd608123c924dd4d6427295dc1e07e8797415046a4047104\" returns successfully" Mar 17 17:49:42.160417 systemd[1]: run-netns-cni\x2d7abcceb2\x2decb6\x2d2ea7\x2dda55\x2dab770963369f.mount: Deactivated successfully. 
Mar 17 17:49:42.163954 containerd[1626]: time="2025-03-17T17:49:42.162951975Z" level=info msg="StopPodSandbox for \"a2d9cd4d49c2a2a4d1b3a0ea1b8cedd69ebb23f29174a4221881e009a2eb9a3d\"" Mar 17 17:49:42.163954 containerd[1626]: time="2025-03-17T17:49:42.163105024Z" level=info msg="TearDown network for sandbox \"a2d9cd4d49c2a2a4d1b3a0ea1b8cedd69ebb23f29174a4221881e009a2eb9a3d\" successfully" Mar 17 17:49:42.163954 containerd[1626]: time="2025-03-17T17:49:42.163125809Z" level=info msg="StopPodSandbox for \"a2d9cd4d49c2a2a4d1b3a0ea1b8cedd69ebb23f29174a4221881e009a2eb9a3d\" returns successfully" Mar 17 17:49:42.166573 containerd[1626]: time="2025-03-17T17:49:42.166488541Z" level=info msg="StopPodSandbox for \"f8c63b85f24831b6f0677980a6030ec68aa200b8e64fc798d2aeab13aaa84905\"" Mar 17 17:49:42.166765 containerd[1626]: time="2025-03-17T17:49:42.166620195Z" level=info msg="TearDown network for sandbox \"f8c63b85f24831b6f0677980a6030ec68aa200b8e64fc798d2aeab13aaa84905\" successfully" Mar 17 17:49:42.166765 containerd[1626]: time="2025-03-17T17:49:42.166633593Z" level=info msg="StopPodSandbox for \"f8c63b85f24831b6f0677980a6030ec68aa200b8e64fc798d2aeab13aaa84905\" returns successfully" Mar 17 17:49:42.167245 kubelet[2013]: I0317 17:49:42.167040 2013 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3320ddeea553847468a17fd5eced217ae8606c21980cc2896458ebba8db8c9a1" Mar 17 17:49:42.168054 containerd[1626]: time="2025-03-17T17:49:42.168008977Z" level=info msg="StopPodSandbox for \"3320ddeea553847468a17fd5eced217ae8606c21980cc2896458ebba8db8c9a1\"" Mar 17 17:49:42.168402 containerd[1626]: time="2025-03-17T17:49:42.168020110Z" level=info msg="StopPodSandbox for \"5dd245f18b5f351e60acb793ac153ef24d3235d2f4be72a43681dff697183d84\"" Mar 17 17:49:42.168402 containerd[1626]: time="2025-03-17T17:49:42.168268928Z" level=info msg="TearDown network for sandbox \"5dd245f18b5f351e60acb793ac153ef24d3235d2f4be72a43681dff697183d84\" successfully" Mar 17 
17:49:42.168402 containerd[1626]: time="2025-03-17T17:49:42.168283495Z" level=info msg="StopPodSandbox for \"5dd245f18b5f351e60acb793ac153ef24d3235d2f4be72a43681dff697183d84\" returns successfully" Mar 17 17:49:42.168402 containerd[1626]: time="2025-03-17T17:49:42.168327080Z" level=info msg="Ensure that sandbox 3320ddeea553847468a17fd5eced217ae8606c21980cc2896458ebba8db8c9a1 in task-service has been cleanup successfully" Mar 17 17:49:42.172336 systemd[1]: run-netns-cni\x2d06abe312\x2db7f9\x2dae86\x2de195\x2dc4889cdfde32.mount: Deactivated successfully. Mar 17 17:49:42.173021 containerd[1626]: time="2025-03-17T17:49:42.172974103Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-wnlj5,Uid:a99e263c-7608-426a-abb5-cac9dbd7d1b7,Namespace:calico-system,Attempt:5,}" Mar 17 17:49:42.176877 containerd[1626]: time="2025-03-17T17:49:42.176761415Z" level=info msg="TearDown network for sandbox \"3320ddeea553847468a17fd5eced217ae8606c21980cc2896458ebba8db8c9a1\" successfully" Mar 17 17:49:42.177056 containerd[1626]: time="2025-03-17T17:49:42.176806551Z" level=info msg="StopPodSandbox for \"3320ddeea553847468a17fd5eced217ae8606c21980cc2896458ebba8db8c9a1\" returns successfully" Mar 17 17:49:42.177944 containerd[1626]: time="2025-03-17T17:49:42.177640147Z" level=info msg="StopPodSandbox for \"bb851b0303b62547e8f3152dae51034b42c99858145bc0886ee21df01814ef2d\"" Mar 17 17:49:42.177944 containerd[1626]: time="2025-03-17T17:49:42.177822208Z" level=info msg="TearDown network for sandbox \"bb851b0303b62547e8f3152dae51034b42c99858145bc0886ee21df01814ef2d\" successfully" Mar 17 17:49:42.177944 containerd[1626]: time="2025-03-17T17:49:42.177842015Z" level=info msg="StopPodSandbox for \"bb851b0303b62547e8f3152dae51034b42c99858145bc0886ee21df01814ef2d\" returns successfully" Mar 17 17:49:42.178452 containerd[1626]: time="2025-03-17T17:49:42.178412761Z" level=info msg="StopPodSandbox for \"9c01448a5fbd1a22d5d85df914a2e7839b6b1affd93d9c8cf96d76139488f6cd\"" Mar 17 
17:49:42.178587 containerd[1626]: time="2025-03-17T17:49:42.178566732Z" level=info msg="TearDown network for sandbox \"9c01448a5fbd1a22d5d85df914a2e7839b6b1affd93d9c8cf96d76139488f6cd\" successfully" Mar 17 17:49:42.178618 containerd[1626]: time="2025-03-17T17:49:42.178586146Z" level=info msg="StopPodSandbox for \"9c01448a5fbd1a22d5d85df914a2e7839b6b1affd93d9c8cf96d76139488f6cd\" returns successfully" Mar 17 17:49:42.179266 containerd[1626]: time="2025-03-17T17:49:42.179198911Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-brdvj,Uid:70c97941-fbff-42a7-bee6-390922be5bb6,Namespace:default,Attempt:3,}" Mar 17 17:49:42.373358 containerd[1626]: time="2025-03-17T17:49:42.372596581Z" level=error msg="Failed to destroy network for sandbox \"017c3c8c364aa163f5cc75f207e60a30d89c1ab42204a9232fff12ac481a1eef\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:49:42.373358 containerd[1626]: time="2025-03-17T17:49:42.373019299Z" level=error msg="encountered an error cleaning up failed sandbox \"017c3c8c364aa163f5cc75f207e60a30d89c1ab42204a9232fff12ac481a1eef\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:49:42.373358 containerd[1626]: time="2025-03-17T17:49:42.373091460Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-wnlj5,Uid:a99e263c-7608-426a-abb5-cac9dbd7d1b7,Namespace:calico-system,Attempt:5,} failed, error" error="failed to setup network for sandbox \"017c3c8c364aa163f5cc75f207e60a30d89c1ab42204a9232fff12ac481a1eef\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has 
mounted /var/lib/calico/" Mar 17 17:49:42.373633 kubelet[2013]: E0317 17:49:42.373365 2013 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"017c3c8c364aa163f5cc75f207e60a30d89c1ab42204a9232fff12ac481a1eef\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:49:42.373633 kubelet[2013]: E0317 17:49:42.373439 2013 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"017c3c8c364aa163f5cc75f207e60a30d89c1ab42204a9232fff12ac481a1eef\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-wnlj5" Mar 17 17:49:42.373633 kubelet[2013]: E0317 17:49:42.373464 2013 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"017c3c8c364aa163f5cc75f207e60a30d89c1ab42204a9232fff12ac481a1eef\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-wnlj5" Mar 17 17:49:42.373776 kubelet[2013]: E0317 17:49:42.373515 2013 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-wnlj5_calico-system(a99e263c-7608-426a-abb5-cac9dbd7d1b7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-wnlj5_calico-system(a99e263c-7608-426a-abb5-cac9dbd7d1b7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"017c3c8c364aa163f5cc75f207e60a30d89c1ab42204a9232fff12ac481a1eef\\\": plugin 
type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-wnlj5" podUID="a99e263c-7608-426a-abb5-cac9dbd7d1b7" Mar 17 17:49:42.379036 containerd[1626]: time="2025-03-17T17:49:42.378967582Z" level=error msg="Failed to destroy network for sandbox \"6107e955a001d5f275142d3b4c295994b23a7a40d83f107d272a0f95dc24ef71\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:49:42.380456 containerd[1626]: time="2025-03-17T17:49:42.380160864Z" level=error msg="encountered an error cleaning up failed sandbox \"6107e955a001d5f275142d3b4c295994b23a7a40d83f107d272a0f95dc24ef71\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:49:42.380456 containerd[1626]: time="2025-03-17T17:49:42.380277433Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-brdvj,Uid:70c97941-fbff-42a7-bee6-390922be5bb6,Namespace:default,Attempt:3,} failed, error" error="failed to setup network for sandbox \"6107e955a001d5f275142d3b4c295994b23a7a40d83f107d272a0f95dc24ef71\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:49:42.381808 kubelet[2013]: E0317 17:49:42.381061 2013 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6107e955a001d5f275142d3b4c295994b23a7a40d83f107d272a0f95dc24ef71\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such 
file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:49:42.381808 kubelet[2013]: E0317 17:49:42.381149 2013 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6107e955a001d5f275142d3b4c295994b23a7a40d83f107d272a0f95dc24ef71\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-85f456d6dd-brdvj" Mar 17 17:49:42.381808 kubelet[2013]: E0317 17:49:42.381182 2013 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6107e955a001d5f275142d3b4c295994b23a7a40d83f107d272a0f95dc24ef71\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-85f456d6dd-brdvj" Mar 17 17:49:42.382191 kubelet[2013]: E0317 17:49:42.381318 2013 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-85f456d6dd-brdvj_default(70c97941-fbff-42a7-bee6-390922be5bb6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-85f456d6dd-brdvj_default(70c97941-fbff-42a7-bee6-390922be5bb6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6107e955a001d5f275142d3b4c295994b23a7a40d83f107d272a0f95dc24ef71\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-85f456d6dd-brdvj" podUID="70c97941-fbff-42a7-bee6-390922be5bb6" Mar 17 17:49:42.898468 kubelet[2013]: E0317 17:49:42.898344 2013 file_linux.go:61] "Unable to 
read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:49:43.095236 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-6107e955a001d5f275142d3b4c295994b23a7a40d83f107d272a0f95dc24ef71-shm.mount: Deactivated successfully. Mar 17 17:49:43.095392 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-017c3c8c364aa163f5cc75f207e60a30d89c1ab42204a9232fff12ac481a1eef-shm.mount: Deactivated successfully. Mar 17 17:49:43.171297 kubelet[2013]: I0317 17:49:43.171164 2013 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="017c3c8c364aa163f5cc75f207e60a30d89c1ab42204a9232fff12ac481a1eef" Mar 17 17:49:43.171927 containerd[1626]: time="2025-03-17T17:49:43.171872019Z" level=info msg="StopPodSandbox for \"017c3c8c364aa163f5cc75f207e60a30d89c1ab42204a9232fff12ac481a1eef\"" Mar 17 17:49:43.175792 containerd[1626]: time="2025-03-17T17:49:43.173007811Z" level=info msg="Ensure that sandbox 017c3c8c364aa163f5cc75f207e60a30d89c1ab42204a9232fff12ac481a1eef in task-service has been cleanup successfully" Mar 17 17:49:43.175792 containerd[1626]: time="2025-03-17T17:49:43.173777875Z" level=info msg="TearDown network for sandbox \"017c3c8c364aa163f5cc75f207e60a30d89c1ab42204a9232fff12ac481a1eef\" successfully" Mar 17 17:49:43.175792 containerd[1626]: time="2025-03-17T17:49:43.173819182Z" level=info msg="StopPodSandbox for \"017c3c8c364aa163f5cc75f207e60a30d89c1ab42204a9232fff12ac481a1eef\" returns successfully" Mar 17 17:49:43.175792 containerd[1626]: time="2025-03-17T17:49:43.174936971Z" level=info msg="StopPodSandbox for \"21d9838daf93b227721d0ea4f7ac0ab6df5fa12fc54ddb1601eba3b0a7ffd08e\"" Mar 17 17:49:43.175792 containerd[1626]: time="2025-03-17T17:49:43.175062395Z" level=info msg="TearDown network for sandbox \"21d9838daf93b227721d0ea4f7ac0ab6df5fa12fc54ddb1601eba3b0a7ffd08e\" successfully" Mar 17 17:49:43.175792 containerd[1626]: time="2025-03-17T17:49:43.175136196Z" level=info 
msg="StopPodSandbox for \"21d9838daf93b227721d0ea4f7ac0ab6df5fa12fc54ddb1601eba3b0a7ffd08e\" returns successfully" Mar 17 17:49:43.179040 containerd[1626]: time="2025-03-17T17:49:43.178073836Z" level=info msg="StopPodSandbox for \"94e7fb3221b6dafbbd608123c924dd4d6427295dc1e07e8797415046a4047104\"" Mar 17 17:49:43.179040 containerd[1626]: time="2025-03-17T17:49:43.178198183Z" level=info msg="TearDown network for sandbox \"94e7fb3221b6dafbbd608123c924dd4d6427295dc1e07e8797415046a4047104\" successfully" Mar 17 17:49:43.179040 containerd[1626]: time="2025-03-17T17:49:43.178212707Z" level=info msg="StopPodSandbox for \"94e7fb3221b6dafbbd608123c924dd4d6427295dc1e07e8797415046a4047104\" returns successfully" Mar 17 17:49:43.178623 systemd[1]: run-netns-cni\x2d1a909ef0\x2d26d9\x2dd984\x2ddcce\x2db7095424564a.mount: Deactivated successfully. Mar 17 17:49:43.181387 containerd[1626]: time="2025-03-17T17:49:43.180408560Z" level=info msg="StopPodSandbox for \"a2d9cd4d49c2a2a4d1b3a0ea1b8cedd69ebb23f29174a4221881e009a2eb9a3d\"" Mar 17 17:49:43.181387 containerd[1626]: time="2025-03-17T17:49:43.180535769Z" level=info msg="TearDown network for sandbox \"a2d9cd4d49c2a2a4d1b3a0ea1b8cedd69ebb23f29174a4221881e009a2eb9a3d\" successfully" Mar 17 17:49:43.181387 containerd[1626]: time="2025-03-17T17:49:43.180552197Z" level=info msg="StopPodSandbox for \"a2d9cd4d49c2a2a4d1b3a0ea1b8cedd69ebb23f29174a4221881e009a2eb9a3d\" returns successfully" Mar 17 17:49:43.181609 containerd[1626]: time="2025-03-17T17:49:43.181516119Z" level=info msg="StopPodSandbox for \"f8c63b85f24831b6f0677980a6030ec68aa200b8e64fc798d2aeab13aaa84905\"" Mar 17 17:49:43.181646 containerd[1626]: time="2025-03-17T17:49:43.181620002Z" level=info msg="TearDown network for sandbox \"f8c63b85f24831b6f0677980a6030ec68aa200b8e64fc798d2aeab13aaa84905\" successfully" Mar 17 17:49:43.181646 containerd[1626]: time="2025-03-17T17:49:43.181632969Z" level=info msg="StopPodSandbox for 
\"f8c63b85f24831b6f0677980a6030ec68aa200b8e64fc798d2aeab13aaa84905\" returns successfully" Mar 17 17:49:43.182538 containerd[1626]: time="2025-03-17T17:49:43.182445464Z" level=info msg="StopPodSandbox for \"5dd245f18b5f351e60acb793ac153ef24d3235d2f4be72a43681dff697183d84\"" Mar 17 17:49:43.182654 containerd[1626]: time="2025-03-17T17:49:43.182599430Z" level=info msg="TearDown network for sandbox \"5dd245f18b5f351e60acb793ac153ef24d3235d2f4be72a43681dff697183d84\" successfully" Mar 17 17:49:43.182654 containerd[1626]: time="2025-03-17T17:49:43.182609453Z" level=info msg="StopPodSandbox for \"5dd245f18b5f351e60acb793ac153ef24d3235d2f4be72a43681dff697183d84\" returns successfully" Mar 17 17:49:43.183943 containerd[1626]: time="2025-03-17T17:49:43.183292358Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-wnlj5,Uid:a99e263c-7608-426a-abb5-cac9dbd7d1b7,Namespace:calico-system,Attempt:6,}" Mar 17 17:49:43.184053 kubelet[2013]: I0317 17:49:43.183472 2013 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6107e955a001d5f275142d3b4c295994b23a7a40d83f107d272a0f95dc24ef71" Mar 17 17:49:43.184968 containerd[1626]: time="2025-03-17T17:49:43.184920198Z" level=info msg="StopPodSandbox for \"6107e955a001d5f275142d3b4c295994b23a7a40d83f107d272a0f95dc24ef71\"" Mar 17 17:49:43.185465 containerd[1626]: time="2025-03-17T17:49:43.185444091Z" level=info msg="Ensure that sandbox 6107e955a001d5f275142d3b4c295994b23a7a40d83f107d272a0f95dc24ef71 in task-service has been cleanup successfully" Mar 17 17:49:43.185881 containerd[1626]: time="2025-03-17T17:49:43.185859744Z" level=info msg="TearDown network for sandbox \"6107e955a001d5f275142d3b4c295994b23a7a40d83f107d272a0f95dc24ef71\" successfully" Mar 17 17:49:43.185975 containerd[1626]: time="2025-03-17T17:49:43.185962327Z" level=info msg="StopPodSandbox for \"6107e955a001d5f275142d3b4c295994b23a7a40d83f107d272a0f95dc24ef71\" returns successfully" Mar 17 17:49:43.190235 
containerd[1626]: time="2025-03-17T17:49:43.190183826Z" level=info msg="StopPodSandbox for \"3320ddeea553847468a17fd5eced217ae8606c21980cc2896458ebba8db8c9a1\"" Mar 17 17:49:43.190499 containerd[1626]: time="2025-03-17T17:49:43.190479304Z" level=info msg="TearDown network for sandbox \"3320ddeea553847468a17fd5eced217ae8606c21980cc2896458ebba8db8c9a1\" successfully" Mar 17 17:49:43.191022 containerd[1626]: time="2025-03-17T17:49:43.190982268Z" level=info msg="StopPodSandbox for \"3320ddeea553847468a17fd5eced217ae8606c21980cc2896458ebba8db8c9a1\" returns successfully" Mar 17 17:49:43.191110 systemd[1]: run-netns-cni\x2d0db27a35\x2df871\x2dae2d\x2d8958\x2d424a6b083ee5.mount: Deactivated successfully. Mar 17 17:49:43.193036 containerd[1626]: time="2025-03-17T17:49:43.192999466Z" level=info msg="StopPodSandbox for \"bb851b0303b62547e8f3152dae51034b42c99858145bc0886ee21df01814ef2d\"" Mar 17 17:49:43.193606 containerd[1626]: time="2025-03-17T17:49:43.193273462Z" level=info msg="TearDown network for sandbox \"bb851b0303b62547e8f3152dae51034b42c99858145bc0886ee21df01814ef2d\" successfully" Mar 17 17:49:43.193606 containerd[1626]: time="2025-03-17T17:49:43.193303646Z" level=info msg="StopPodSandbox for \"bb851b0303b62547e8f3152dae51034b42c99858145bc0886ee21df01814ef2d\" returns successfully" Mar 17 17:49:43.194527 containerd[1626]: time="2025-03-17T17:49:43.194351619Z" level=info msg="StopPodSandbox for \"9c01448a5fbd1a22d5d85df914a2e7839b6b1affd93d9c8cf96d76139488f6cd\"" Mar 17 17:49:43.195724 containerd[1626]: time="2025-03-17T17:49:43.194870524Z" level=info msg="TearDown network for sandbox \"9c01448a5fbd1a22d5d85df914a2e7839b6b1affd93d9c8cf96d76139488f6cd\" successfully" Mar 17 17:49:43.195724 containerd[1626]: time="2025-03-17T17:49:43.194892923Z" level=info msg="StopPodSandbox for \"9c01448a5fbd1a22d5d85df914a2e7839b6b1affd93d9c8cf96d76139488f6cd\" returns successfully" Mar 17 17:49:43.196707 containerd[1626]: time="2025-03-17T17:49:43.196656711Z" level=info 
msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-brdvj,Uid:70c97941-fbff-42a7-bee6-390922be5bb6,Namespace:default,Attempt:4,}" Mar 17 17:49:43.359091 containerd[1626]: time="2025-03-17T17:49:43.359022523Z" level=error msg="Failed to destroy network for sandbox \"02648affe711c228442cdb86ac6266b7f69997d68e8759e56b4903003e65e8f2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:49:43.361029 containerd[1626]: time="2025-03-17T17:49:43.360944392Z" level=error msg="encountered an error cleaning up failed sandbox \"02648affe711c228442cdb86ac6266b7f69997d68e8759e56b4903003e65e8f2\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:49:43.361201 containerd[1626]: time="2025-03-17T17:49:43.361076488Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-brdvj,Uid:70c97941-fbff-42a7-bee6-390922be5bb6,Namespace:default,Attempt:4,} failed, error" error="failed to setup network for sandbox \"02648affe711c228442cdb86ac6266b7f69997d68e8759e56b4903003e65e8f2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:49:43.362460 containerd[1626]: time="2025-03-17T17:49:43.362198049Z" level=error msg="Failed to destroy network for sandbox \"b93ff61176bdc9c27316d7b56f51e45a0c340d2db17c0991c99ff1e4280bf960\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:49:43.362550 kubelet[2013]: E0317 17:49:43.361413 2013 
remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"02648affe711c228442cdb86ac6266b7f69997d68e8759e56b4903003e65e8f2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:49:43.362550 kubelet[2013]: E0317 17:49:43.361510 2013 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"02648affe711c228442cdb86ac6266b7f69997d68e8759e56b4903003e65e8f2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-85f456d6dd-brdvj" Mar 17 17:49:43.362550 kubelet[2013]: E0317 17:49:43.361543 2013 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"02648affe711c228442cdb86ac6266b7f69997d68e8759e56b4903003e65e8f2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="default/nginx-deployment-85f456d6dd-brdvj" Mar 17 17:49:43.363283 kubelet[2013]: E0317 17:49:43.361604 2013 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"nginx-deployment-85f456d6dd-brdvj_default(70c97941-fbff-42a7-bee6-390922be5bb6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"nginx-deployment-85f456d6dd-brdvj_default(70c97941-fbff-42a7-bee6-390922be5bb6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"02648affe711c228442cdb86ac6266b7f69997d68e8759e56b4903003e65e8f2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such 
file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="default/nginx-deployment-85f456d6dd-brdvj" podUID="70c97941-fbff-42a7-bee6-390922be5bb6" Mar 17 17:49:43.363443 containerd[1626]: time="2025-03-17T17:49:43.363404592Z" level=error msg="encountered an error cleaning up failed sandbox \"b93ff61176bdc9c27316d7b56f51e45a0c340d2db17c0991c99ff1e4280bf960\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:49:43.363522 containerd[1626]: time="2025-03-17T17:49:43.363491582Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-wnlj5,Uid:a99e263c-7608-426a-abb5-cac9dbd7d1b7,Namespace:calico-system,Attempt:6,} failed, error" error="failed to setup network for sandbox \"b93ff61176bdc9c27316d7b56f51e45a0c340d2db17c0991c99ff1e4280bf960\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:49:43.363987 kubelet[2013]: E0317 17:49:43.363903 2013 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b93ff61176bdc9c27316d7b56f51e45a0c340d2db17c0991c99ff1e4280bf960\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Mar 17 17:49:43.364513 kubelet[2013]: E0317 17:49:43.364083 2013 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b93ff61176bdc9c27316d7b56f51e45a0c340d2db17c0991c99ff1e4280bf960\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-wnlj5" Mar 17 17:49:43.364513 kubelet[2013]: E0317 17:49:43.364308 2013 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b93ff61176bdc9c27316d7b56f51e45a0c340d2db17c0991c99ff1e4280bf960\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-wnlj5" Mar 17 17:49:43.364513 kubelet[2013]: E0317 17:49:43.364378 2013 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-wnlj5_calico-system(a99e263c-7608-426a-abb5-cac9dbd7d1b7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-wnlj5_calico-system(a99e263c-7608-426a-abb5-cac9dbd7d1b7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b93ff61176bdc9c27316d7b56f51e45a0c340d2db17c0991c99ff1e4280bf960\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-wnlj5" podUID="a99e263c-7608-426a-abb5-cac9dbd7d1b7" Mar 17 17:49:43.453277 containerd[1626]: time="2025-03-17T17:49:43.453133445Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.29.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:49:43.455075 containerd[1626]: time="2025-03-17T17:49:43.454996749Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.29.2: active requests=0, bytes read=142241445" Mar 17 17:49:43.455956 containerd[1626]: time="2025-03-17T17:49:43.455882029Z" level=info msg="ImageCreate event 
name:\"sha256:048bf7af1f8c697d151dbecc478a18e89d89ed8627da98e17a56c11b3d45d351\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:49:43.458282 containerd[1626]: time="2025-03-17T17:49:43.458191057Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:d9a21be37fe591ee5ab5a2e3dc26408ea165a44a55705102ffaa002de9908b32\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:49:43.459384 containerd[1626]: time="2025-03-17T17:49:43.459169479Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.29.2\" with image id \"sha256:048bf7af1f8c697d151dbecc478a18e89d89ed8627da98e17a56c11b3d45d351\", repo tag \"ghcr.io/flatcar/calico/node:v3.29.2\", repo digest \"ghcr.io/flatcar/calico/node@sha256:d9a21be37fe591ee5ab5a2e3dc26408ea165a44a55705102ffaa002de9908b32\", size \"142241307\" in 6.373674862s" Mar 17 17:49:43.459384 containerd[1626]: time="2025-03-17T17:49:43.459224438Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.29.2\" returns image reference \"sha256:048bf7af1f8c697d151dbecc478a18e89d89ed8627da98e17a56c11b3d45d351\"" Mar 17 17:49:43.489642 containerd[1626]: time="2025-03-17T17:49:43.489487859Z" level=info msg="CreateContainer within sandbox \"77a29713031006806b365cdd1379bd831c516241aa7f17e6a0d86822c80942c2\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Mar 17 17:49:43.504080 containerd[1626]: time="2025-03-17T17:49:43.504028054Z" level=info msg="CreateContainer within sandbox \"77a29713031006806b365cdd1379bd831c516241aa7f17e6a0d86822c80942c2\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"0153fe03c411ec4f3ab4f1cc7c63538c0487b5788b8da79e06f1e30c89d52ad3\"" Mar 17 17:49:43.506738 containerd[1626]: time="2025-03-17T17:49:43.505243997Z" level=info msg="StartContainer for \"0153fe03c411ec4f3ab4f1cc7c63538c0487b5788b8da79e06f1e30c89d52ad3\"" Mar 17 17:49:43.651456 containerd[1626]: time="2025-03-17T17:49:43.651362771Z" level=info 
msg="StartContainer for \"0153fe03c411ec4f3ab4f1cc7c63538c0487b5788b8da79e06f1e30c89d52ad3\" returns successfully" Mar 17 17:49:43.761425 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Mar 17 17:49:43.761643 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Mar 17 17:49:43.899212 kubelet[2013]: E0317 17:49:43.899136 2013 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:49:44.096518 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-02648affe711c228442cdb86ac6266b7f69997d68e8759e56b4903003e65e8f2-shm.mount: Deactivated successfully. Mar 17 17:49:44.096755 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-b93ff61176bdc9c27316d7b56f51e45a0c340d2db17c0991c99ff1e4280bf960-shm.mount: Deactivated successfully. Mar 17 17:49:44.096864 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount681390843.mount: Deactivated successfully. 
Mar 17 17:49:44.190319 kubelet[2013]: E0317 17:49:44.190129 2013 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Mar 17 17:49:44.195504 kubelet[2013]: I0317 17:49:44.195474 2013 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b93ff61176bdc9c27316d7b56f51e45a0c340d2db17c0991c99ff1e4280bf960" Mar 17 17:49:44.198750 containerd[1626]: time="2025-03-17T17:49:44.196068104Z" level=info msg="StopPodSandbox for \"b93ff61176bdc9c27316d7b56f51e45a0c340d2db17c0991c99ff1e4280bf960\"" Mar 17 17:49:44.198750 containerd[1626]: time="2025-03-17T17:49:44.196263501Z" level=info msg="Ensure that sandbox b93ff61176bdc9c27316d7b56f51e45a0c340d2db17c0991c99ff1e4280bf960 in task-service has been cleanup successfully" Mar 17 17:49:44.198897 systemd[1]: run-netns-cni\x2d4aa386a0\x2d81e6\x2d0ae9\x2de8e0\x2d1508e4bee309.mount: Deactivated successfully. 
Mar 17 17:49:44.202209 containerd[1626]: time="2025-03-17T17:49:44.200499060Z" level=info msg="TearDown network for sandbox \"b93ff61176bdc9c27316d7b56f51e45a0c340d2db17c0991c99ff1e4280bf960\" successfully" Mar 17 17:49:44.202209 containerd[1626]: time="2025-03-17T17:49:44.200545848Z" level=info msg="StopPodSandbox for \"b93ff61176bdc9c27316d7b56f51e45a0c340d2db17c0991c99ff1e4280bf960\" returns successfully" Mar 17 17:49:44.202209 containerd[1626]: time="2025-03-17T17:49:44.201559174Z" level=info msg="StopPodSandbox for \"017c3c8c364aa163f5cc75f207e60a30d89c1ab42204a9232fff12ac481a1eef\"" Mar 17 17:49:44.202610 containerd[1626]: time="2025-03-17T17:49:44.202524322Z" level=info msg="TearDown network for sandbox \"017c3c8c364aa163f5cc75f207e60a30d89c1ab42204a9232fff12ac481a1eef\" successfully" Mar 17 17:49:44.202610 containerd[1626]: time="2025-03-17T17:49:44.202548624Z" level=info msg="StopPodSandbox for \"017c3c8c364aa163f5cc75f207e60a30d89c1ab42204a9232fff12ac481a1eef\" returns successfully" Mar 17 17:49:44.203135 containerd[1626]: time="2025-03-17T17:49:44.203100078Z" level=info msg="StopPodSandbox for \"21d9838daf93b227721d0ea4f7ac0ab6df5fa12fc54ddb1601eba3b0a7ffd08e\"" Mar 17 17:49:44.203324 containerd[1626]: time="2025-03-17T17:49:44.203247166Z" level=info msg="TearDown network for sandbox \"21d9838daf93b227721d0ea4f7ac0ab6df5fa12fc54ddb1601eba3b0a7ffd08e\" successfully" Mar 17 17:49:44.203407 containerd[1626]: time="2025-03-17T17:49:44.203329015Z" level=info msg="StopPodSandbox for \"21d9838daf93b227721d0ea4f7ac0ab6df5fa12fc54ddb1601eba3b0a7ffd08e\" returns successfully" Mar 17 17:49:44.204075 containerd[1626]: time="2025-03-17T17:49:44.204041726Z" level=info msg="StopPodSandbox for \"94e7fb3221b6dafbbd608123c924dd4d6427295dc1e07e8797415046a4047104\"" Mar 17 17:49:44.204243 kubelet[2013]: I0317 17:49:44.204106 2013 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="02648affe711c228442cdb86ac6266b7f69997d68e8759e56b4903003e65e8f2" Mar 17 17:49:44.205113 containerd[1626]: time="2025-03-17T17:49:44.204593996Z" level=info msg="TearDown network for sandbox \"94e7fb3221b6dafbbd608123c924dd4d6427295dc1e07e8797415046a4047104\" successfully" Mar 17 17:49:44.205113 containerd[1626]: time="2025-03-17T17:49:44.205046698Z" level=info msg="StopPodSandbox for \"94e7fb3221b6dafbbd608123c924dd4d6427295dc1e07e8797415046a4047104\" returns successfully" Mar 17 17:49:44.205301 containerd[1626]: time="2025-03-17T17:49:44.204950005Z" level=info msg="StopPodSandbox for \"02648affe711c228442cdb86ac6266b7f69997d68e8759e56b4903003e65e8f2\"" Mar 17 17:49:44.206548 containerd[1626]: time="2025-03-17T17:49:44.205767863Z" level=info msg="StopPodSandbox for \"a2d9cd4d49c2a2a4d1b3a0ea1b8cedd69ebb23f29174a4221881e009a2eb9a3d\"" Mar 17 17:49:44.206548 containerd[1626]: time="2025-03-17T17:49:44.206428237Z" level=info msg="TearDown network for sandbox \"a2d9cd4d49c2a2a4d1b3a0ea1b8cedd69ebb23f29174a4221881e009a2eb9a3d\" successfully" Mar 17 17:49:44.206548 containerd[1626]: time="2025-03-17T17:49:44.206445987Z" level=info msg="StopPodSandbox for \"a2d9cd4d49c2a2a4d1b3a0ea1b8cedd69ebb23f29174a4221881e009a2eb9a3d\" returns successfully" Mar 17 17:49:44.206548 containerd[1626]: time="2025-03-17T17:49:44.206504002Z" level=info msg="Ensure that sandbox 02648affe711c228442cdb86ac6266b7f69997d68e8759e56b4903003e65e8f2 in task-service has been cleanup successfully" Mar 17 17:49:44.207899 containerd[1626]: time="2025-03-17T17:49:44.207176993Z" level=info msg="StopPodSandbox for \"f8c63b85f24831b6f0677980a6030ec68aa200b8e64fc798d2aeab13aaa84905\"" Mar 17 17:49:44.207899 containerd[1626]: time="2025-03-17T17:49:44.207297075Z" level=info msg="TearDown network for sandbox \"f8c63b85f24831b6f0677980a6030ec68aa200b8e64fc798d2aeab13aaa84905\" successfully" Mar 17 17:49:44.207899 containerd[1626]: time="2025-03-17T17:49:44.207312894Z" level=info msg="StopPodSandbox 
for \"f8c63b85f24831b6f0677980a6030ec68aa200b8e64fc798d2aeab13aaa84905\" returns successfully" Mar 17 17:49:44.207899 containerd[1626]: time="2025-03-17T17:49:44.207645904Z" level=info msg="StopPodSandbox for \"5dd245f18b5f351e60acb793ac153ef24d3235d2f4be72a43681dff697183d84\"" Mar 17 17:49:44.207899 containerd[1626]: time="2025-03-17T17:49:44.207778605Z" level=info msg="TearDown network for sandbox \"5dd245f18b5f351e60acb793ac153ef24d3235d2f4be72a43681dff697183d84\" successfully" Mar 17 17:49:44.207899 containerd[1626]: time="2025-03-17T17:49:44.207795019Z" level=info msg="StopPodSandbox for \"5dd245f18b5f351e60acb793ac153ef24d3235d2f4be72a43681dff697183d84\" returns successfully" Mar 17 17:49:44.209135 containerd[1626]: time="2025-03-17T17:49:44.208859466Z" level=info msg="TearDown network for sandbox \"02648affe711c228442cdb86ac6266b7f69997d68e8759e56b4903003e65e8f2\" successfully" Mar 17 17:49:44.209135 containerd[1626]: time="2025-03-17T17:49:44.208896931Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-wnlj5,Uid:a99e263c-7608-426a-abb5-cac9dbd7d1b7,Namespace:calico-system,Attempt:7,}" Mar 17 17:49:44.209135 containerd[1626]: time="2025-03-17T17:49:44.208910965Z" level=info msg="StopPodSandbox for \"02648affe711c228442cdb86ac6266b7f69997d68e8759e56b4903003e65e8f2\" returns successfully" Mar 17 17:49:44.209722 containerd[1626]: time="2025-03-17T17:49:44.209419888Z" level=info msg="StopPodSandbox for \"6107e955a001d5f275142d3b4c295994b23a7a40d83f107d272a0f95dc24ef71\"" Mar 17 17:49:44.209722 containerd[1626]: time="2025-03-17T17:49:44.209536955Z" level=info msg="TearDown network for sandbox \"6107e955a001d5f275142d3b4c295994b23a7a40d83f107d272a0f95dc24ef71\" successfully" Mar 17 17:49:44.209722 containerd[1626]: time="2025-03-17T17:49:44.209558148Z" level=info msg="StopPodSandbox for \"6107e955a001d5f275142d3b4c295994b23a7a40d83f107d272a0f95dc24ef71\" returns successfully" Mar 17 17:49:44.214703 containerd[1626]: 
time="2025-03-17T17:49:44.211870909Z" level=info msg="StopPodSandbox for \"3320ddeea553847468a17fd5eced217ae8606c21980cc2896458ebba8db8c9a1\"" Mar 17 17:49:44.214703 containerd[1626]: time="2025-03-17T17:49:44.212025640Z" level=info msg="TearDown network for sandbox \"3320ddeea553847468a17fd5eced217ae8606c21980cc2896458ebba8db8c9a1\" successfully" Mar 17 17:49:44.214703 containerd[1626]: time="2025-03-17T17:49:44.212047976Z" level=info msg="StopPodSandbox for \"3320ddeea553847468a17fd5eced217ae8606c21980cc2896458ebba8db8c9a1\" returns successfully" Mar 17 17:49:44.212268 systemd[1]: run-netns-cni\x2dd9904766\x2d9928\x2dc873\x2dfe4e\x2d6c48ee4e9937.mount: Deactivated successfully. Mar 17 17:49:44.215385 containerd[1626]: time="2025-03-17T17:49:44.215343898Z" level=info msg="StopPodSandbox for \"bb851b0303b62547e8f3152dae51034b42c99858145bc0886ee21df01814ef2d\"" Mar 17 17:49:44.215733 containerd[1626]: time="2025-03-17T17:49:44.215648787Z" level=info msg="TearDown network for sandbox \"bb851b0303b62547e8f3152dae51034b42c99858145bc0886ee21df01814ef2d\" successfully" Mar 17 17:49:44.215853 containerd[1626]: time="2025-03-17T17:49:44.215834232Z" level=info msg="StopPodSandbox for \"bb851b0303b62547e8f3152dae51034b42c99858145bc0886ee21df01814ef2d\" returns successfully" Mar 17 17:49:44.217400 containerd[1626]: time="2025-03-17T17:49:44.217342778Z" level=info msg="StopPodSandbox for \"9c01448a5fbd1a22d5d85df914a2e7839b6b1affd93d9c8cf96d76139488f6cd\"" Mar 17 17:49:44.217580 containerd[1626]: time="2025-03-17T17:49:44.217500819Z" level=info msg="TearDown network for sandbox \"9c01448a5fbd1a22d5d85df914a2e7839b6b1affd93d9c8cf96d76139488f6cd\" successfully" Mar 17 17:49:44.217580 containerd[1626]: time="2025-03-17T17:49:44.217531513Z" level=info msg="StopPodSandbox for \"9c01448a5fbd1a22d5d85df914a2e7839b6b1affd93d9c8cf96d76139488f6cd\" returns successfully" Mar 17 17:49:44.218778 containerd[1626]: time="2025-03-17T17:49:44.218347011Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-brdvj,Uid:70c97941-fbff-42a7-bee6-390922be5bb6,Namespace:default,Attempt:5,}" Mar 17 17:49:44.876588 kubelet[2013]: E0317 17:49:44.876499 2013 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:49:44.900403 kubelet[2013]: E0317 17:49:44.900338 2013 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:49:45.081839 systemd-networkd[1222]: cali32096398dcc: Link UP Mar 17 17:49:45.083761 systemd-networkd[1222]: cali32096398dcc: Gained carrier Mar 17 17:49:45.148548 kubelet[2013]: I0317 17:49:45.148211 2013 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-6mh4d" podStartSLOduration=4.838895127 podStartE2EDuration="20.148155864s" podCreationTimestamp="2025-03-17 17:49:25 +0000 UTC" firstStartedPulling="2025-03-17 17:49:28.151495846 +0000 UTC m=+3.748263321" lastFinishedPulling="2025-03-17 17:49:43.460756589 +0000 UTC m=+19.057524058" observedRunningTime="2025-03-17 17:49:44.347787049 +0000 UTC m=+19.944554541" watchObservedRunningTime="2025-03-17 17:49:45.148155864 +0000 UTC m=+20.744923357" Mar 17 17:49:45.150615 containerd[1626]: 2025-03-17 17:49:44.308 [INFO][2879] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Mar 17 17:49:45.150615 containerd[1626]: 2025-03-17 17:49:44.393 [INFO][2879] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {209.38.135.89-k8s-nginx--deployment--85f456d6dd--brdvj-eth0 nginx-deployment-85f456d6dd- default 70c97941-fbff-42a7-bee6-390922be5bb6 1083 0 2025-03-17 17:49:39 +0000 UTC map[app:nginx pod-template-hash:85f456d6dd projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s 209.38.135.89 nginx-deployment-85f456d6dd-brdvj eth0 default [] [] 
[kns.default ksa.default.default] cali32096398dcc [] []}} ContainerID="50dc16c66c92a89fce80a3c1467a1440585a285a538afec8f4b8b73504d2e48b" Namespace="default" Pod="nginx-deployment-85f456d6dd-brdvj" WorkloadEndpoint="209.38.135.89-k8s-nginx--deployment--85f456d6dd--brdvj-" Mar 17 17:49:45.150615 containerd[1626]: 2025-03-17 17:49:44.394 [INFO][2879] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="50dc16c66c92a89fce80a3c1467a1440585a285a538afec8f4b8b73504d2e48b" Namespace="default" Pod="nginx-deployment-85f456d6dd-brdvj" WorkloadEndpoint="209.38.135.89-k8s-nginx--deployment--85f456d6dd--brdvj-eth0" Mar 17 17:49:45.150615 containerd[1626]: 2025-03-17 17:49:44.546 [INFO][2903] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="50dc16c66c92a89fce80a3c1467a1440585a285a538afec8f4b8b73504d2e48b" HandleID="k8s-pod-network.50dc16c66c92a89fce80a3c1467a1440585a285a538afec8f4b8b73504d2e48b" Workload="209.38.135.89-k8s-nginx--deployment--85f456d6dd--brdvj-eth0" Mar 17 17:49:45.150615 containerd[1626]: 2025-03-17 17:49:44.599 [INFO][2903] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="50dc16c66c92a89fce80a3c1467a1440585a285a538afec8f4b8b73504d2e48b" HandleID="k8s-pod-network.50dc16c66c92a89fce80a3c1467a1440585a285a538afec8f4b8b73504d2e48b" Workload="209.38.135.89-k8s-nginx--deployment--85f456d6dd--brdvj-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0004e2e60), Attrs:map[string]string{"namespace":"default", "node":"209.38.135.89", "pod":"nginx-deployment-85f456d6dd-brdvj", "timestamp":"2025-03-17 17:49:44.54659192 +0000 UTC"}, Hostname:"209.38.135.89", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Mar 17 17:49:45.150615 containerd[1626]: 2025-03-17 17:49:44.599 [INFO][2903] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Mar 17 17:49:45.150615 containerd[1626]: 2025-03-17 17:49:44.599 [INFO][2903] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Mar 17 17:49:45.150615 containerd[1626]: 2025-03-17 17:49:44.599 [INFO][2903] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '209.38.135.89' Mar 17 17:49:45.150615 containerd[1626]: 2025-03-17 17:49:44.623 [INFO][2903] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.50dc16c66c92a89fce80a3c1467a1440585a285a538afec8f4b8b73504d2e48b" host="209.38.135.89" Mar 17 17:49:45.150615 containerd[1626]: 2025-03-17 17:49:44.642 [INFO][2903] ipam/ipam.go 372: Looking up existing affinities for host host="209.38.135.89" Mar 17 17:49:45.150615 containerd[1626]: 2025-03-17 17:49:44.728 [INFO][2903] ipam/ipam.go 521: Ran out of existing affine blocks for host host="209.38.135.89" Mar 17 17:49:45.150615 containerd[1626]: 2025-03-17 17:49:44.739 [INFO][2903] ipam/ipam.go 538: Tried all affine blocks. Looking for an affine block with space, or a new unclaimed block host="209.38.135.89" Mar 17 17:49:45.150615 containerd[1626]: 2025-03-17 17:49:44.754 [INFO][2903] ipam/ipam_block_reader_writer.go 154: Found free block: 192.168.48.0/26 Mar 17 17:49:45.150615 containerd[1626]: 2025-03-17 17:49:44.754 [INFO][2903] ipam/ipam.go 550: Found unclaimed block host="209.38.135.89" subnet=192.168.48.0/26 Mar 17 17:49:45.150615 containerd[1626]: 2025-03-17 17:49:44.754 [INFO][2903] ipam/ipam_block_reader_writer.go 171: Trying to create affinity in pending state host="209.38.135.89" subnet=192.168.48.0/26 Mar 17 17:49:45.150615 containerd[1626]: 2025-03-17 17:49:44.808 [INFO][2903] ipam/ipam_block_reader_writer.go 201: Successfully created pending affinity for block host="209.38.135.89" subnet=192.168.48.0/26 Mar 17 17:49:45.150615 containerd[1626]: 2025-03-17 17:49:44.809 [INFO][2903] ipam/ipam.go 155: Attempting to load block cidr=192.168.48.0/26 host="209.38.135.89" Mar 17 17:49:45.150615 containerd[1626]: 2025-03-17 
17:49:44.839 [INFO][2903] ipam/ipam.go 160: The referenced block doesn't exist, trying to create it cidr=192.168.48.0/26 host="209.38.135.89" Mar 17 17:49:45.150615 containerd[1626]: 2025-03-17 17:49:44.856 [INFO][2903] ipam/ipam.go 167: Wrote affinity as pending cidr=192.168.48.0/26 host="209.38.135.89" Mar 17 17:49:45.150615 containerd[1626]: 2025-03-17 17:49:44.862 [INFO][2903] ipam/ipam.go 176: Attempting to claim the block cidr=192.168.48.0/26 host="209.38.135.89" Mar 17 17:49:45.150615 containerd[1626]: 2025-03-17 17:49:44.862 [INFO][2903] ipam/ipam_block_reader_writer.go 223: Attempting to create a new block host="209.38.135.89" subnet=192.168.48.0/26 Mar 17 17:49:45.150615 containerd[1626]: 2025-03-17 17:49:44.891 [INFO][2903] ipam/ipam_block_reader_writer.go 264: Successfully created block Mar 17 17:49:45.150615 containerd[1626]: 2025-03-17 17:49:44.891 [INFO][2903] ipam/ipam_block_reader_writer.go 275: Confirming affinity host="209.38.135.89" subnet=192.168.48.0/26 Mar 17 17:49:45.150615 containerd[1626]: 2025-03-17 17:49:44.925 [INFO][2903] ipam/ipam_block_reader_writer.go 290: Successfully confirmed affinity host="209.38.135.89" subnet=192.168.48.0/26 Mar 17 17:49:45.150615 containerd[1626]: 2025-03-17 17:49:44.925 [INFO][2903] ipam/ipam.go 585: Block '192.168.48.0/26' has 64 free ips which is more than 1 ips required. 
host="209.38.135.89" subnet=192.168.48.0/26 Mar 17 17:49:45.150615 containerd[1626]: 2025-03-17 17:49:44.925 [INFO][2903] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.48.0/26 handle="k8s-pod-network.50dc16c66c92a89fce80a3c1467a1440585a285a538afec8f4b8b73504d2e48b" host="209.38.135.89" Mar 17 17:49:45.150615 containerd[1626]: 2025-03-17 17:49:44.953 [INFO][2903] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.50dc16c66c92a89fce80a3c1467a1440585a285a538afec8f4b8b73504d2e48b Mar 17 17:49:45.150615 containerd[1626]: 2025-03-17 17:49:44.972 [INFO][2903] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.48.0/26 handle="k8s-pod-network.50dc16c66c92a89fce80a3c1467a1440585a285a538afec8f4b8b73504d2e48b" host="209.38.135.89" Mar 17 17:49:45.150615 containerd[1626]: 2025-03-17 17:49:45.061 [INFO][2903] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.48.0/26] block=192.168.48.0/26 handle="k8s-pod-network.50dc16c66c92a89fce80a3c1467a1440585a285a538afec8f4b8b73504d2e48b" host="209.38.135.89" Mar 17 17:49:45.152432 containerd[1626]: 2025-03-17 17:49:45.062 [INFO][2903] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.48.0/26] handle="k8s-pod-network.50dc16c66c92a89fce80a3c1467a1440585a285a538afec8f4b8b73504d2e48b" host="209.38.135.89" Mar 17 17:49:45.152432 containerd[1626]: 2025-03-17 17:49:45.062 [INFO][2903] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Mar 17 17:49:45.152432 containerd[1626]: 2025-03-17 17:49:45.062 [INFO][2903] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.48.0/26] IPv6=[] ContainerID="50dc16c66c92a89fce80a3c1467a1440585a285a538afec8f4b8b73504d2e48b" HandleID="k8s-pod-network.50dc16c66c92a89fce80a3c1467a1440585a285a538afec8f4b8b73504d2e48b" Workload="209.38.135.89-k8s-nginx--deployment--85f456d6dd--brdvj-eth0" Mar 17 17:49:45.152432 containerd[1626]: 2025-03-17 17:49:45.066 [INFO][2879] cni-plugin/k8s.go 386: Populated endpoint ContainerID="50dc16c66c92a89fce80a3c1467a1440585a285a538afec8f4b8b73504d2e48b" Namespace="default" Pod="nginx-deployment-85f456d6dd-brdvj" WorkloadEndpoint="209.38.135.89-k8s-nginx--deployment--85f456d6dd--brdvj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"209.38.135.89-k8s-nginx--deployment--85f456d6dd--brdvj-eth0", GenerateName:"nginx-deployment-85f456d6dd-", Namespace:"default", SelfLink:"", UID:"70c97941-fbff-42a7-bee6-390922be5bb6", ResourceVersion:"1083", Generation:0, CreationTimestamp:time.Date(2025, time.March, 17, 17, 49, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"85f456d6dd", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"209.38.135.89", ContainerID:"", Pod:"nginx-deployment-85f456d6dd-brdvj", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.48.0/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali32096398dcc", MAC:"", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 17 17:49:45.152432 containerd[1626]: 2025-03-17 17:49:45.067 [INFO][2879] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.48.0/32] ContainerID="50dc16c66c92a89fce80a3c1467a1440585a285a538afec8f4b8b73504d2e48b" Namespace="default" Pod="nginx-deployment-85f456d6dd-brdvj" WorkloadEndpoint="209.38.135.89-k8s-nginx--deployment--85f456d6dd--brdvj-eth0" Mar 17 17:49:45.152432 containerd[1626]: 2025-03-17 17:49:45.067 [INFO][2879] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali32096398dcc ContainerID="50dc16c66c92a89fce80a3c1467a1440585a285a538afec8f4b8b73504d2e48b" Namespace="default" Pod="nginx-deployment-85f456d6dd-brdvj" WorkloadEndpoint="209.38.135.89-k8s-nginx--deployment--85f456d6dd--brdvj-eth0" Mar 17 17:49:45.152432 containerd[1626]: 2025-03-17 17:49:45.083 [INFO][2879] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="50dc16c66c92a89fce80a3c1467a1440585a285a538afec8f4b8b73504d2e48b" Namespace="default" Pod="nginx-deployment-85f456d6dd-brdvj" WorkloadEndpoint="209.38.135.89-k8s-nginx--deployment--85f456d6dd--brdvj-eth0" Mar 17 17:49:45.152432 containerd[1626]: 2025-03-17 17:49:45.083 [INFO][2879] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="50dc16c66c92a89fce80a3c1467a1440585a285a538afec8f4b8b73504d2e48b" Namespace="default" Pod="nginx-deployment-85f456d6dd-brdvj" WorkloadEndpoint="209.38.135.89-k8s-nginx--deployment--85f456d6dd--brdvj-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"209.38.135.89-k8s-nginx--deployment--85f456d6dd--brdvj-eth0", GenerateName:"nginx-deployment-85f456d6dd-", Namespace:"default", SelfLink:"", UID:"70c97941-fbff-42a7-bee6-390922be5bb6", ResourceVersion:"1083", Generation:0, CreationTimestamp:time.Date(2025, time.March, 17, 17, 49, 39, 0, 
time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nginx", "pod-template-hash":"85f456d6dd", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"209.38.135.89", ContainerID:"50dc16c66c92a89fce80a3c1467a1440585a285a538afec8f4b8b73504d2e48b", Pod:"nginx-deployment-85f456d6dd-brdvj", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.48.0/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali32096398dcc", MAC:"12:a3:36:6d:a9:20", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 17 17:49:45.152432 containerd[1626]: 2025-03-17 17:49:45.147 [INFO][2879] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="50dc16c66c92a89fce80a3c1467a1440585a285a538afec8f4b8b73504d2e48b" Namespace="default" Pod="nginx-deployment-85f456d6dd-brdvj" WorkloadEndpoint="209.38.135.89-k8s-nginx--deployment--85f456d6dd--brdvj-eth0" Mar 17 17:49:45.182865 containerd[1626]: time="2025-03-17T17:49:45.182389347Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 17:49:45.182865 containerd[1626]: time="2025-03-17T17:49:45.182711010Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 17:49:45.182865 containerd[1626]: time="2025-03-17T17:49:45.182736940Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:49:45.189264 containerd[1626]: time="2025-03-17T17:49:45.183361450Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:49:45.206922 kubelet[2013]: I0317 17:49:45.206878 2013 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Mar 17 17:49:45.210594 kubelet[2013]: E0317 17:49:45.210554 2013 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Mar 17 17:49:45.230981 systemd[1]: run-containerd-runc-k8s.io-50dc16c66c92a89fce80a3c1467a1440585a285a538afec8f4b8b73504d2e48b-runc.fpGYYT.mount: Deactivated successfully. Mar 17 17:49:45.300593 containerd[1626]: time="2025-03-17T17:49:45.300466957Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-85f456d6dd-brdvj,Uid:70c97941-fbff-42a7-bee6-390922be5bb6,Namespace:default,Attempt:5,} returns sandbox id \"50dc16c66c92a89fce80a3c1467a1440585a285a538afec8f4b8b73504d2e48b\"" Mar 17 17:49:45.303763 containerd[1626]: time="2025-03-17T17:49:45.302960839Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Mar 17 17:49:45.309277 systemd-resolved[1481]: Using degraded feature set TCP instead of UDP for DNS server 67.207.67.3. 
Mar 17 17:49:45.589280 systemd-networkd[1222]: cali38530f5beaa: Link UP Mar 17 17:49:45.598090 systemd-networkd[1222]: cali38530f5beaa: Gained carrier Mar 17 17:49:45.628514 containerd[1626]: 2025-03-17 17:49:44.294 [INFO][2869] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Mar 17 17:49:45.628514 containerd[1626]: 2025-03-17 17:49:44.393 [INFO][2869] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {209.38.135.89-k8s-csi--node--driver--wnlj5-eth0 csi-node-driver- calico-system a99e263c-7608-426a-abb5-cac9dbd7d1b7 979 0 2025-03-17 17:49:25 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:69ddf5d45d k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s 209.38.135.89 csi-node-driver-wnlj5 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali38530f5beaa [] []}} ContainerID="f3cb43e4310f0f6636fde3da90985ff42de4da19990aa4db456cf6305fb029d6" Namespace="calico-system" Pod="csi-node-driver-wnlj5" WorkloadEndpoint="209.38.135.89-k8s-csi--node--driver--wnlj5-" Mar 17 17:49:45.628514 containerd[1626]: 2025-03-17 17:49:44.394 [INFO][2869] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="f3cb43e4310f0f6636fde3da90985ff42de4da19990aa4db456cf6305fb029d6" Namespace="calico-system" Pod="csi-node-driver-wnlj5" WorkloadEndpoint="209.38.135.89-k8s-csi--node--driver--wnlj5-eth0" Mar 17 17:49:45.628514 containerd[1626]: 2025-03-17 17:49:44.565 [INFO][2901] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f3cb43e4310f0f6636fde3da90985ff42de4da19990aa4db456cf6305fb029d6" HandleID="k8s-pod-network.f3cb43e4310f0f6636fde3da90985ff42de4da19990aa4db456cf6305fb029d6" Workload="209.38.135.89-k8s-csi--node--driver--wnlj5-eth0" Mar 17 17:49:45.628514 
containerd[1626]: 2025-03-17 17:49:44.622 [INFO][2901] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="f3cb43e4310f0f6636fde3da90985ff42de4da19990aa4db456cf6305fb029d6" HandleID="k8s-pod-network.f3cb43e4310f0f6636fde3da90985ff42de4da19990aa4db456cf6305fb029d6" Workload="209.38.135.89-k8s-csi--node--driver--wnlj5-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00031bae0), Attrs:map[string]string{"namespace":"calico-system", "node":"209.38.135.89", "pod":"csi-node-driver-wnlj5", "timestamp":"2025-03-17 17:49:44.565419093 +0000 UTC"}, Hostname:"209.38.135.89", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Mar 17 17:49:45.628514 containerd[1626]: 2025-03-17 17:49:44.622 [INFO][2901] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Mar 17 17:49:45.628514 containerd[1626]: 2025-03-17 17:49:45.063 [INFO][2901] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Mar 17 17:49:45.628514 containerd[1626]: 2025-03-17 17:49:45.063 [INFO][2901] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '209.38.135.89' Mar 17 17:49:45.628514 containerd[1626]: 2025-03-17 17:49:45.148 [INFO][2901] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.f3cb43e4310f0f6636fde3da90985ff42de4da19990aa4db456cf6305fb029d6" host="209.38.135.89" Mar 17 17:49:45.628514 containerd[1626]: 2025-03-17 17:49:45.188 [INFO][2901] ipam/ipam.go 372: Looking up existing affinities for host host="209.38.135.89" Mar 17 17:49:45.628514 containerd[1626]: 2025-03-17 17:49:45.265 [INFO][2901] ipam/ipam.go 489: Trying affinity for 192.168.48.0/26 host="209.38.135.89" Mar 17 17:49:45.628514 containerd[1626]: 2025-03-17 17:49:45.278 [INFO][2901] ipam/ipam.go 155: Attempting to load block cidr=192.168.48.0/26 host="209.38.135.89" Mar 17 17:49:45.628514 containerd[1626]: 2025-03-17 17:49:45.291 [INFO][2901] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.48.0/26 host="209.38.135.89" Mar 17 17:49:45.628514 containerd[1626]: 2025-03-17 17:49:45.291 [INFO][2901] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.48.0/26 handle="k8s-pod-network.f3cb43e4310f0f6636fde3da90985ff42de4da19990aa4db456cf6305fb029d6" host="209.38.135.89" Mar 17 17:49:45.628514 containerd[1626]: 2025-03-17 17:49:45.306 [INFO][2901] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.f3cb43e4310f0f6636fde3da90985ff42de4da19990aa4db456cf6305fb029d6 Mar 17 17:49:45.628514 containerd[1626]: 2025-03-17 17:49:45.334 [INFO][2901] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.48.0/26 handle="k8s-pod-network.f3cb43e4310f0f6636fde3da90985ff42de4da19990aa4db456cf6305fb029d6" host="209.38.135.89" Mar 17 17:49:45.629980 containerd[1626]: 2025-03-17 17:49:45.373 [ERROR][2901] ipam/customresource.go 183: Error updating resource Key=IPAMBlock(192-168-48-0-26) Name="192-168-48-0-26" 
Resource="IPAMBlocks" Value=&v3.IPAMBlock{TypeMeta:v1.TypeMeta{Kind:"IPAMBlock", APIVersion:"crd.projectcalico.org/v1"}, ObjectMeta:v1.ObjectMeta{Name:"192-168-48-0-26", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"1135", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.IPAMBlockSpec{CIDR:"192.168.48.0/26", Affinity:(*string)(0xc0003bd2c0), Allocations:[]*int{(*int)(0xc000489da0), (*int)(0xc000489f58), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil), (*int)(nil)}, Unallocated:[]int{2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53, 54, 55, 56, 57, 58, 59, 60, 61, 62, 63}, Attributes:[]v3.AllocationAttribute{v3.AllocationAttribute{AttrPrimary:(*string)(0xc0003bd2d0), AttrSecondary:map[string]string{"namespace":"default", "node":"209.38.135.89", "pod":"nginx-deployment-85f456d6dd-brdvj", 
"timestamp":"2025-03-17 17:49:44.54659192 +0000 UTC"}}, v3.AllocationAttribute{AttrPrimary:(*string)(0xc00031bae0), AttrSecondary:map[string]string{"namespace":"calico-system", "node":"209.38.135.89", "pod":"csi-node-driver-wnlj5", "timestamp":"2025-03-17 17:49:44.565419093 +0000 UTC"}}}, SequenceNumber:0x182da865866b69a7, SequenceNumberForAllocation:map[string]uint64{"0":0x182da865866b69a5, "1":0x182da865866b69a6}, Deleted:false, DeprecatedStrictAffinity:false}} error=Operation cannot be fulfilled on ipamblocks.crd.projectcalico.org "192-168-48-0-26": the object has been modified; please apply your changes to the latest version and try again Mar 17 17:49:45.629980 containerd[1626]: 2025-03-17 17:49:45.374 [INFO][2901] ipam/ipam.go 1207: Failed to update block block=192.168.48.0/26 error=update conflict: IPAMBlock(192-168-48-0-26) handle="k8s-pod-network.f3cb43e4310f0f6636fde3da90985ff42de4da19990aa4db456cf6305fb029d6" host="209.38.135.89" Mar 17 17:49:45.629980 containerd[1626]: 2025-03-17 17:49:45.502 [INFO][2901] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.48.0/26 handle="k8s-pod-network.f3cb43e4310f0f6636fde3da90985ff42de4da19990aa4db456cf6305fb029d6" host="209.38.135.89" Mar 17 17:49:45.629980 containerd[1626]: 2025-03-17 17:49:45.516 [INFO][2901] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.f3cb43e4310f0f6636fde3da90985ff42de4da19990aa4db456cf6305fb029d6 Mar 17 17:49:45.629980 containerd[1626]: 2025-03-17 17:49:45.547 [INFO][2901] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.48.0/26 handle="k8s-pod-network.f3cb43e4310f0f6636fde3da90985ff42de4da19990aa4db456cf6305fb029d6" host="209.38.135.89" Mar 17 17:49:45.629980 containerd[1626]: 2025-03-17 17:49:45.576 [INFO][2901] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.48.2/26] block=192.168.48.0/26 handle="k8s-pod-network.f3cb43e4310f0f6636fde3da90985ff42de4da19990aa4db456cf6305fb029d6" host="209.38.135.89" Mar 17 17:49:45.629980 
containerd[1626]: 2025-03-17 17:49:45.576 [INFO][2901] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.48.2/26] handle="k8s-pod-network.f3cb43e4310f0f6636fde3da90985ff42de4da19990aa4db456cf6305fb029d6" host="209.38.135.89" Mar 17 17:49:45.629980 containerd[1626]: 2025-03-17 17:49:45.576 [INFO][2901] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Mar 17 17:49:45.629980 containerd[1626]: 2025-03-17 17:49:45.576 [INFO][2901] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.48.2/26] IPv6=[] ContainerID="f3cb43e4310f0f6636fde3da90985ff42de4da19990aa4db456cf6305fb029d6" HandleID="k8s-pod-network.f3cb43e4310f0f6636fde3da90985ff42de4da19990aa4db456cf6305fb029d6" Workload="209.38.135.89-k8s-csi--node--driver--wnlj5-eth0" Mar 17 17:49:45.630317 containerd[1626]: 2025-03-17 17:49:45.578 [INFO][2869] cni-plugin/k8s.go 386: Populated endpoint ContainerID="f3cb43e4310f0f6636fde3da90985ff42de4da19990aa4db456cf6305fb029d6" Namespace="calico-system" Pod="csi-node-driver-wnlj5" WorkloadEndpoint="209.38.135.89-k8s-csi--node--driver--wnlj5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"209.38.135.89-k8s-csi--node--driver--wnlj5-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"a99e263c-7608-426a-abb5-cac9dbd7d1b7", ResourceVersion:"979", Generation:0, CreationTimestamp:time.Date(2025, time.March, 17, 17, 49, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"69ddf5d45d", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), 
Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"209.38.135.89", ContainerID:"", Pod:"csi-node-driver-wnlj5", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.48.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali38530f5beaa", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 17 17:49:45.630317 containerd[1626]: 2025-03-17 17:49:45.578 [INFO][2869] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.48.2/32] ContainerID="f3cb43e4310f0f6636fde3da90985ff42de4da19990aa4db456cf6305fb029d6" Namespace="calico-system" Pod="csi-node-driver-wnlj5" WorkloadEndpoint="209.38.135.89-k8s-csi--node--driver--wnlj5-eth0" Mar 17 17:49:45.630317 containerd[1626]: 2025-03-17 17:49:45.578 [INFO][2869] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali38530f5beaa ContainerID="f3cb43e4310f0f6636fde3da90985ff42de4da19990aa4db456cf6305fb029d6" Namespace="calico-system" Pod="csi-node-driver-wnlj5" WorkloadEndpoint="209.38.135.89-k8s-csi--node--driver--wnlj5-eth0" Mar 17 17:49:45.630317 containerd[1626]: 2025-03-17 17:49:45.591 [INFO][2869] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f3cb43e4310f0f6636fde3da90985ff42de4da19990aa4db456cf6305fb029d6" Namespace="calico-system" Pod="csi-node-driver-wnlj5" WorkloadEndpoint="209.38.135.89-k8s-csi--node--driver--wnlj5-eth0" Mar 17 17:49:45.630317 containerd[1626]: 2025-03-17 17:49:45.592 [INFO][2869] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="f3cb43e4310f0f6636fde3da90985ff42de4da19990aa4db456cf6305fb029d6" Namespace="calico-system" Pod="csi-node-driver-wnlj5" WorkloadEndpoint="209.38.135.89-k8s-csi--node--driver--wnlj5-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"209.38.135.89-k8s-csi--node--driver--wnlj5-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"a99e263c-7608-426a-abb5-cac9dbd7d1b7", ResourceVersion:"979", Generation:0, CreationTimestamp:time.Date(2025, time.March, 17, 17, 49, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"69ddf5d45d", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"209.38.135.89", ContainerID:"f3cb43e4310f0f6636fde3da90985ff42de4da19990aa4db456cf6305fb029d6", Pod:"csi-node-driver-wnlj5", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.48.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali38530f5beaa", MAC:"1a:e0:19:56:12:30", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 17 17:49:45.630317 containerd[1626]: 2025-03-17 17:49:45.626 [INFO][2869] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="f3cb43e4310f0f6636fde3da90985ff42de4da19990aa4db456cf6305fb029d6" Namespace="calico-system" Pod="csi-node-driver-wnlj5" WorkloadEndpoint="209.38.135.89-k8s-csi--node--driver--wnlj5-eth0" Mar 17 17:49:45.671454 containerd[1626]: time="2025-03-17T17:49:45.670890903Z" level=info msg="loading plugin 
\"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 17:49:45.671454 containerd[1626]: time="2025-03-17T17:49:45.670971807Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 17:49:45.671454 containerd[1626]: time="2025-03-17T17:49:45.670986282Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:49:45.671454 containerd[1626]: time="2025-03-17T17:49:45.671145230Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:49:45.741421 containerd[1626]: time="2025-03-17T17:49:45.741254021Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-wnlj5,Uid:a99e263c-7608-426a-abb5-cac9dbd7d1b7,Namespace:calico-system,Attempt:7,} returns sandbox id \"f3cb43e4310f0f6636fde3da90985ff42de4da19990aa4db456cf6305fb029d6\"" Mar 17 17:49:45.901232 kubelet[2013]: E0317 17:49:45.900477 2013 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:49:46.221784 kubelet[2013]: E0317 17:49:46.219074 2013 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Mar 17 17:49:46.857097 systemd-networkd[1222]: cali38530f5beaa: Gained IPv6LL Mar 17 17:49:46.908726 kubelet[2013]: E0317 17:49:46.901417 2013 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:49:46.921736 systemd-networkd[1222]: cali32096398dcc: Gained IPv6LL Mar 17 17:49:47.063713 kernel: bpftool[3198]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Mar 17 17:49:47.559130 systemd-networkd[1222]: vxlan.calico: Link UP Mar 17 
17:49:47.559146 systemd-networkd[1222]: vxlan.calico: Gained carrier Mar 17 17:49:47.902133 kubelet[2013]: E0317 17:49:47.902071 2013 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:49:48.839877 systemd-networkd[1222]: vxlan.calico: Gained IPv6LL Mar 17 17:49:48.863091 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1189571127.mount: Deactivated successfully. Mar 17 17:49:48.902778 kubelet[2013]: E0317 17:49:48.902701 2013 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:49:49.903688 kubelet[2013]: E0317 17:49:49.903601 2013 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:49:50.071794 systemd[1]: Started sshd@9-209.38.135.89:22-218.92.0.183:48410.service - OpenSSH per-connection server daemon (218.92.0.183:48410). Mar 17 17:49:50.301084 containerd[1626]: time="2025-03-17T17:49:50.300834127Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:49:50.303182 containerd[1626]: time="2025-03-17T17:49:50.303098087Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=73060131" Mar 17 17:49:50.303810 containerd[1626]: time="2025-03-17T17:49:50.303740310Z" level=info msg="ImageCreate event name:\"sha256:d25119ebd2aadc346788ac84ae0c5b1b018c687dcfd3167bb27e341f8b5caeee\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:49:50.307833 containerd[1626]: time="2025-03-17T17:49:50.307762575Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/nginx@sha256:b927c62cc716b99bce51774b46a63feb63f5414c6f985fb80cacd1933bbd0e06\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:49:50.310727 containerd[1626]: time="2025-03-17T17:49:50.309726941Z" level=info msg="Pulled 
image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:d25119ebd2aadc346788ac84ae0c5b1b018c687dcfd3167bb27e341f8b5caeee\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:b927c62cc716b99bce51774b46a63feb63f5414c6f985fb80cacd1933bbd0e06\", size \"73060009\" in 5.006711895s" Mar 17 17:49:50.310727 containerd[1626]: time="2025-03-17T17:49:50.309784275Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:d25119ebd2aadc346788ac84ae0c5b1b018c687dcfd3167bb27e341f8b5caeee\"" Mar 17 17:49:50.312702 containerd[1626]: time="2025-03-17T17:49:50.312619869Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.2\"" Mar 17 17:49:50.314551 containerd[1626]: time="2025-03-17T17:49:50.314428736Z" level=info msg="CreateContainer within sandbox \"50dc16c66c92a89fce80a3c1467a1440585a285a538afec8f4b8b73504d2e48b\" for container &ContainerMetadata{Name:nginx,Attempt:0,}" Mar 17 17:49:50.339577 containerd[1626]: time="2025-03-17T17:49:50.339477057Z" level=info msg="CreateContainer within sandbox \"50dc16c66c92a89fce80a3c1467a1440585a285a538afec8f4b8b73504d2e48b\" for &ContainerMetadata{Name:nginx,Attempt:0,} returns container id \"d24bae5d8894a5e5363627523614be9dc1239ba14f828f3dfae0122095c05b1e\"" Mar 17 17:49:50.341442 containerd[1626]: time="2025-03-17T17:49:50.340233515Z" level=info msg="StartContainer for \"d24bae5d8894a5e5363627523614be9dc1239ba14f828f3dfae0122095c05b1e\"" Mar 17 17:49:50.436872 containerd[1626]: time="2025-03-17T17:49:50.436829203Z" level=info msg="StartContainer for \"d24bae5d8894a5e5363627523614be9dc1239ba14f828f3dfae0122095c05b1e\" returns successfully" Mar 17 17:49:50.904006 kubelet[2013]: E0317 17:49:50.903932 2013 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:49:51.275152 kubelet[2013]: I0317 17:49:51.274946 2013 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="default/nginx-deployment-85f456d6dd-brdvj" podStartSLOduration=7.265666652 podStartE2EDuration="12.274914919s" podCreationTimestamp="2025-03-17 17:49:39 +0000 UTC" firstStartedPulling="2025-03-17 17:49:45.302542597 +0000 UTC m=+20.899310068" lastFinishedPulling="2025-03-17 17:49:50.311790861 +0000 UTC m=+25.908558335" observedRunningTime="2025-03-17 17:49:51.274885571 +0000 UTC m=+26.871653081" watchObservedRunningTime="2025-03-17 17:49:51.274914919 +0000 UTC m=+26.871682389" Mar 17 17:49:51.696931 sshd-session[3370]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.183 user=root Mar 17 17:49:51.816615 containerd[1626]: time="2025-03-17T17:49:51.816541554Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.29.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:49:51.817928 containerd[1626]: time="2025-03-17T17:49:51.817666130Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.29.2: active requests=0, bytes read=7909887" Mar 17 17:49:51.818698 containerd[1626]: time="2025-03-17T17:49:51.818635421Z" level=info msg="ImageCreate event name:\"sha256:0fae09f861e350c042fe0db9ce9f8cc5ac4df975a5c4e4a9ddc3c6fac1552a9a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:49:51.821559 containerd[1626]: time="2025-03-17T17:49:51.821513768Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:214b4eef7008808bda55ad3cc1d4a3cd8df9e0e8094dff213fa3241104eb892c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:49:51.822706 containerd[1626]: time="2025-03-17T17:49:51.822550194Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.29.2\" with image id \"sha256:0fae09f861e350c042fe0db9ce9f8cc5ac4df975a5c4e4a9ddc3c6fac1552a9a\", repo tag \"ghcr.io/flatcar/calico/csi:v3.29.2\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:214b4eef7008808bda55ad3cc1d4a3cd8df9e0e8094dff213fa3241104eb892c\", size 
\"9402991\" in 1.50988342s" Mar 17 17:49:51.822706 containerd[1626]: time="2025-03-17T17:49:51.822600545Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.29.2\" returns image reference \"sha256:0fae09f861e350c042fe0db9ce9f8cc5ac4df975a5c4e4a9ddc3c6fac1552a9a\"" Mar 17 17:49:51.825870 containerd[1626]: time="2025-03-17T17:49:51.825839226Z" level=info msg="CreateContainer within sandbox \"f3cb43e4310f0f6636fde3da90985ff42de4da19990aa4db456cf6305fb029d6\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Mar 17 17:49:51.851168 containerd[1626]: time="2025-03-17T17:49:51.851087784Z" level=info msg="CreateContainer within sandbox \"f3cb43e4310f0f6636fde3da90985ff42de4da19990aa4db456cf6305fb029d6\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"81ac939a9d147abfd8c14c0797177aa452488fc4cb427689e80ac65bfc5c1700\"" Mar 17 17:49:51.851969 containerd[1626]: time="2025-03-17T17:49:51.851899544Z" level=info msg="StartContainer for \"81ac939a9d147abfd8c14c0797177aa452488fc4cb427689e80ac65bfc5c1700\"" Mar 17 17:49:51.905040 kubelet[2013]: E0317 17:49:51.904953 2013 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:49:51.945508 containerd[1626]: time="2025-03-17T17:49:51.945383306Z" level=info msg="StartContainer for \"81ac939a9d147abfd8c14c0797177aa452488fc4cb427689e80ac65bfc5c1700\" returns successfully" Mar 17 17:49:51.949440 containerd[1626]: time="2025-03-17T17:49:51.948672127Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.2\"" Mar 17 17:49:52.905839 kubelet[2013]: E0317 17:49:52.905773 2013 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:49:53.620183 sshd[3295]: PAM: Permission denied for root from 218.92.0.183 Mar 17 17:49:53.624344 containerd[1626]: time="2025-03-17T17:49:53.624287708Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:49:53.625507 containerd[1626]: time="2025-03-17T17:49:53.625455342Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.29.2: active requests=0, bytes read=13986843" Mar 17 17:49:53.626703 containerd[1626]: time="2025-03-17T17:49:53.626615345Z" level=info msg="ImageCreate event name:\"sha256:09a5a6ea58a48ac826468e05538c78d1378e103737124f1744efea8699fc29a8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:49:53.627765 containerd[1626]: time="2025-03-17T17:49:53.627649609Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:54ef0afa50feb3f691782e8d6df9a7f27d127a3af9bbcbd0bcdadac98e8be8e3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:49:53.628523 containerd[1626]: time="2025-03-17T17:49:53.628374440Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.2\" with image id \"sha256:09a5a6ea58a48ac826468e05538c78d1378e103737124f1744efea8699fc29a8\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.2\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:54ef0afa50feb3f691782e8d6df9a7f27d127a3af9bbcbd0bcdadac98e8be8e3\", size \"15479899\" in 1.679632169s" Mar 17 17:49:53.628523 containerd[1626]: time="2025-03-17T17:49:53.628410519Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.29.2\" returns image reference \"sha256:09a5a6ea58a48ac826468e05538c78d1378e103737124f1744efea8699fc29a8\"" Mar 17 17:49:53.631996 containerd[1626]: time="2025-03-17T17:49:53.631730914Z" level=info msg="CreateContainer within sandbox \"f3cb43e4310f0f6636fde3da90985ff42de4da19990aa4db456cf6305fb029d6\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Mar 17 17:49:53.651616 containerd[1626]: time="2025-03-17T17:49:53.651555864Z" level=info 
msg="CreateContainer within sandbox \"f3cb43e4310f0f6636fde3da90985ff42de4da19990aa4db456cf6305fb029d6\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"18b1fad14e79980161175873aef6b9760089539aadc23e65aa5ba136bd797109\"" Mar 17 17:49:53.652722 containerd[1626]: time="2025-03-17T17:49:53.652472053Z" level=info msg="StartContainer for \"18b1fad14e79980161175873aef6b9760089539aadc23e65aa5ba136bd797109\"" Mar 17 17:49:53.729550 containerd[1626]: time="2025-03-17T17:49:53.729386667Z" level=info msg="StartContainer for \"18b1fad14e79980161175873aef6b9760089539aadc23e65aa5ba136bd797109\" returns successfully" Mar 17 17:49:53.907084 kubelet[2013]: E0317 17:49:53.906913 2013 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:49:53.999079 kubelet[2013]: I0317 17:49:53.999015 2013 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Mar 17 17:49:53.999079 kubelet[2013]: I0317 17:49:53.999070 2013 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Mar 17 17:49:54.257035 sshd-session[3460]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.183 user=root Mar 17 17:49:54.907122 kubelet[2013]: E0317 17:49:54.907072 2013 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:49:55.085795 update_engine[1604]: I20250317 17:49:55.084983 1604 update_attempter.cc:509] Updating boot flags... 
Mar 17 17:49:55.127872 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (3476) Mar 17 17:49:55.214817 kernel: BTRFS warning: duplicate device /dev/vda3 devid 1 generation 40 scanned by (udev-worker) (3465) Mar 17 17:49:55.908357 kubelet[2013]: E0317 17:49:55.908292 2013 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:49:56.124369 sshd[3295]: PAM: Permission denied for root from 218.92.0.183 Mar 17 17:49:56.756296 sshd-session[3483]: pam_unix(sshd:auth): authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=218.92.0.183 user=root Mar 17 17:49:56.908812 kubelet[2013]: E0317 17:49:56.908748 2013 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:49:57.909394 kubelet[2013]: E0317 17:49:57.909326 2013 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:49:58.367884 sshd[3295]: PAM: Permission denied for root from 218.92.0.183 Mar 17 17:49:58.505848 sshd[3295]: Received disconnect from 218.92.0.183 port 48410:11: [preauth] Mar 17 17:49:58.505848 sshd[3295]: Disconnected from authenticating user root 218.92.0.183 port 48410 [preauth] Mar 17 17:49:58.510822 systemd[1]: sshd@9-209.38.135.89:22-218.92.0.183:48410.service: Deactivated successfully. 
Mar 17 17:49:58.909956 kubelet[2013]: E0317 17:49:58.909898 2013 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:49:59.910477 kubelet[2013]: E0317 17:49:59.910407 2013 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:50:00.911176 kubelet[2013]: E0317 17:50:00.911100 2013 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:50:01.039535 kubelet[2013]: I0317 17:50:01.039449 2013 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-wnlj5" podStartSLOduration=28.154716811 podStartE2EDuration="36.039432099s" podCreationTimestamp="2025-03-17 17:49:25 +0000 UTC" firstStartedPulling="2025-03-17 17:49:45.745330092 +0000 UTC m=+21.342097575" lastFinishedPulling="2025-03-17 17:49:53.630045392 +0000 UTC m=+29.226812863" observedRunningTime="2025-03-17 17:49:54.32416291 +0000 UTC m=+29.920930400" watchObservedRunningTime="2025-03-17 17:50:01.039432099 +0000 UTC m=+36.636199590" Mar 17 17:50:01.039885 kubelet[2013]: I0317 17:50:01.039664 2013 topology_manager.go:215] "Topology Admit Handler" podUID="dc0ee37c-0db8-4a2b-a57d-04fd8a963334" podNamespace="default" podName="nfs-server-provisioner-0" Mar 17 17:50:01.098637 kubelet[2013]: I0317 17:50:01.098575 2013 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/dc0ee37c-0db8-4a2b-a57d-04fd8a963334-data\") pod \"nfs-server-provisioner-0\" (UID: \"dc0ee37c-0db8-4a2b-a57d-04fd8a963334\") " pod="default/nfs-server-provisioner-0" Mar 17 17:50:01.098912 kubelet[2013]: I0317 17:50:01.098664 2013 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jz72f\" (UniqueName: 
\"kubernetes.io/projected/dc0ee37c-0db8-4a2b-a57d-04fd8a963334-kube-api-access-jz72f\") pod \"nfs-server-provisioner-0\" (UID: \"dc0ee37c-0db8-4a2b-a57d-04fd8a963334\") " pod="default/nfs-server-provisioner-0" Mar 17 17:50:01.345127 containerd[1626]: time="2025-03-17T17:50:01.344817641Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:dc0ee37c-0db8-4a2b-a57d-04fd8a963334,Namespace:default,Attempt:0,}" Mar 17 17:50:01.829728 systemd-networkd[1222]: cali60e51b789ff: Link UP Mar 17 17:50:01.830618 systemd-networkd[1222]: cali60e51b789ff: Gained carrier Mar 17 17:50:01.885098 containerd[1626]: 2025-03-17 17:50:01.492 [INFO][3500] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {209.38.135.89-k8s-nfs--server--provisioner--0-eth0 nfs-server-provisioner- default dc0ee37c-0db8-4a2b-a57d-04fd8a963334 1271 0 2025-03-17 17:50:01 +0000 UTC map[app:nfs-server-provisioner apps.kubernetes.io/pod-index:0 chart:nfs-server-provisioner-1.8.0 controller-revision-hash:nfs-server-provisioner-d5cbb7f57 heritage:Helm projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:nfs-server-provisioner release:nfs-server-provisioner statefulset.kubernetes.io/pod-name:nfs-server-provisioner-0] map[] [] [] []} {k8s 209.38.135.89 nfs-server-provisioner-0 eth0 nfs-server-provisioner [] [] [kns.default ksa.default.nfs-server-provisioner] cali60e51b789ff [{nfs TCP 2049 0 } {nfs-udp UDP 2049 0 } {nlockmgr TCP 32803 0 } {nlockmgr-udp UDP 32803 0 } {mountd TCP 20048 0 } {mountd-udp UDP 20048 0 } {rquotad TCP 875 0 } {rquotad-udp UDP 875 0 } {rpcbind TCP 111 0 } {rpcbind-udp UDP 111 0 } {statd TCP 662 0 } {statd-udp UDP 662 0 }] []}} ContainerID="f002e15eecaa12e8a62b4fe8d8fd6309746fccb2517f82cb691f4f61aa990139" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="209.38.135.89-k8s-nfs--server--provisioner--0-" Mar 17 17:50:01.885098 
containerd[1626]: 2025-03-17 17:50:01.493 [INFO][3500] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="f002e15eecaa12e8a62b4fe8d8fd6309746fccb2517f82cb691f4f61aa990139" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="209.38.135.89-k8s-nfs--server--provisioner--0-eth0" Mar 17 17:50:01.885098 containerd[1626]: 2025-03-17 17:50:01.546 [INFO][3512] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f002e15eecaa12e8a62b4fe8d8fd6309746fccb2517f82cb691f4f61aa990139" HandleID="k8s-pod-network.f002e15eecaa12e8a62b4fe8d8fd6309746fccb2517f82cb691f4f61aa990139" Workload="209.38.135.89-k8s-nfs--server--provisioner--0-eth0" Mar 17 17:50:01.885098 containerd[1626]: 2025-03-17 17:50:01.582 [INFO][3512] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="f002e15eecaa12e8a62b4fe8d8fd6309746fccb2517f82cb691f4f61aa990139" HandleID="k8s-pod-network.f002e15eecaa12e8a62b4fe8d8fd6309746fccb2517f82cb691f4f61aa990139" Workload="209.38.135.89-k8s-nfs--server--provisioner--0-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000334d50), Attrs:map[string]string{"namespace":"default", "node":"209.38.135.89", "pod":"nfs-server-provisioner-0", "timestamp":"2025-03-17 17:50:01.546377902 +0000 UTC"}, Hostname:"209.38.135.89", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Mar 17 17:50:01.885098 containerd[1626]: 2025-03-17 17:50:01.583 [INFO][3512] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Mar 17 17:50:01.885098 containerd[1626]: 2025-03-17 17:50:01.583 [INFO][3512] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Mar 17 17:50:01.885098 containerd[1626]: 2025-03-17 17:50:01.583 [INFO][3512] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '209.38.135.89' Mar 17 17:50:01.885098 containerd[1626]: 2025-03-17 17:50:01.591 [INFO][3512] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.f002e15eecaa12e8a62b4fe8d8fd6309746fccb2517f82cb691f4f61aa990139" host="209.38.135.89" Mar 17 17:50:01.885098 containerd[1626]: 2025-03-17 17:50:01.622 [INFO][3512] ipam/ipam.go 372: Looking up existing affinities for host host="209.38.135.89" Mar 17 17:50:01.885098 containerd[1626]: 2025-03-17 17:50:01.655 [INFO][3512] ipam/ipam.go 489: Trying affinity for 192.168.48.0/26 host="209.38.135.89" Mar 17 17:50:01.885098 containerd[1626]: 2025-03-17 17:50:01.669 [INFO][3512] ipam/ipam.go 155: Attempting to load block cidr=192.168.48.0/26 host="209.38.135.89" Mar 17 17:50:01.885098 containerd[1626]: 2025-03-17 17:50:01.717 [INFO][3512] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.48.0/26 host="209.38.135.89" Mar 17 17:50:01.885098 containerd[1626]: 2025-03-17 17:50:01.718 [INFO][3512] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.48.0/26 handle="k8s-pod-network.f002e15eecaa12e8a62b4fe8d8fd6309746fccb2517f82cb691f4f61aa990139" host="209.38.135.89" Mar 17 17:50:01.885098 containerd[1626]: 2025-03-17 17:50:01.734 [INFO][3512] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.f002e15eecaa12e8a62b4fe8d8fd6309746fccb2517f82cb691f4f61aa990139 Mar 17 17:50:01.885098 containerd[1626]: 2025-03-17 17:50:01.769 [INFO][3512] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.48.0/26 handle="k8s-pod-network.f002e15eecaa12e8a62b4fe8d8fd6309746fccb2517f82cb691f4f61aa990139" host="209.38.135.89" Mar 17 17:50:01.885098 containerd[1626]: 2025-03-17 17:50:01.818 [INFO][3512] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.48.3/26] block=192.168.48.0/26 
handle="k8s-pod-network.f002e15eecaa12e8a62b4fe8d8fd6309746fccb2517f82cb691f4f61aa990139" host="209.38.135.89" Mar 17 17:50:01.885098 containerd[1626]: 2025-03-17 17:50:01.818 [INFO][3512] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.48.3/26] handle="k8s-pod-network.f002e15eecaa12e8a62b4fe8d8fd6309746fccb2517f82cb691f4f61aa990139" host="209.38.135.89" Mar 17 17:50:01.885098 containerd[1626]: 2025-03-17 17:50:01.818 [INFO][3512] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Mar 17 17:50:01.885098 containerd[1626]: 2025-03-17 17:50:01.818 [INFO][3512] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.48.3/26] IPv6=[] ContainerID="f002e15eecaa12e8a62b4fe8d8fd6309746fccb2517f82cb691f4f61aa990139" HandleID="k8s-pod-network.f002e15eecaa12e8a62b4fe8d8fd6309746fccb2517f82cb691f4f61aa990139" Workload="209.38.135.89-k8s-nfs--server--provisioner--0-eth0" Mar 17 17:50:01.886280 containerd[1626]: 2025-03-17 17:50:01.821 [INFO][3500] cni-plugin/k8s.go 386: Populated endpoint ContainerID="f002e15eecaa12e8a62b4fe8d8fd6309746fccb2517f82cb691f4f61aa990139" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="209.38.135.89-k8s-nfs--server--provisioner--0-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"209.38.135.89-k8s-nfs--server--provisioner--0-eth0", GenerateName:"nfs-server-provisioner-", Namespace:"default", SelfLink:"", UID:"dc0ee37c-0db8-4a2b-a57d-04fd8a963334", ResourceVersion:"1271", Generation:0, CreationTimestamp:time.Date(2025, time.March, 17, 17, 50, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nfs-server-provisioner", "apps.kubernetes.io/pod-index":"0", "chart":"nfs-server-provisioner-1.8.0", "controller-revision-hash":"nfs-server-provisioner-d5cbb7f57", "heritage":"Helm", "projectcalico.org/namespace":"default", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"nfs-server-provisioner", "release":"nfs-server-provisioner", "statefulset.kubernetes.io/pod-name":"nfs-server-provisioner-0"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"209.38.135.89", ContainerID:"", Pod:"nfs-server-provisioner-0", Endpoint:"eth0", ServiceAccountName:"nfs-server-provisioner", IPNetworks:[]string{"192.168.48.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.nfs-server-provisioner"}, InterfaceName:"cali60e51b789ff", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"nfs", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nfs-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind", 
Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x296, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x296, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 17 17:50:01.886280 containerd[1626]: 2025-03-17 17:50:01.821 [INFO][3500] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.48.3/32] ContainerID="f002e15eecaa12e8a62b4fe8d8fd6309746fccb2517f82cb691f4f61aa990139" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="209.38.135.89-k8s-nfs--server--provisioner--0-eth0" Mar 17 17:50:01.886280 containerd[1626]: 2025-03-17 17:50:01.821 [INFO][3500] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali60e51b789ff ContainerID="f002e15eecaa12e8a62b4fe8d8fd6309746fccb2517f82cb691f4f61aa990139" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="209.38.135.89-k8s-nfs--server--provisioner--0-eth0" Mar 17 17:50:01.886280 containerd[1626]: 2025-03-17 17:50:01.831 [INFO][3500] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f002e15eecaa12e8a62b4fe8d8fd6309746fccb2517f82cb691f4f61aa990139" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="209.38.135.89-k8s-nfs--server--provisioner--0-eth0" Mar 17 17:50:01.886578 containerd[1626]: 2025-03-17 17:50:01.833 [INFO][3500] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="f002e15eecaa12e8a62b4fe8d8fd6309746fccb2517f82cb691f4f61aa990139" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="209.38.135.89-k8s-nfs--server--provisioner--0-eth0" 
endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"209.38.135.89-k8s-nfs--server--provisioner--0-eth0", GenerateName:"nfs-server-provisioner-", Namespace:"default", SelfLink:"", UID:"dc0ee37c-0db8-4a2b-a57d-04fd8a963334", ResourceVersion:"1271", Generation:0, CreationTimestamp:time.Date(2025, time.March, 17, 17, 50, 1, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app":"nfs-server-provisioner", "apps.kubernetes.io/pod-index":"0", "chart":"nfs-server-provisioner-1.8.0", "controller-revision-hash":"nfs-server-provisioner-d5cbb7f57", "heritage":"Helm", "projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"nfs-server-provisioner", "release":"nfs-server-provisioner", "statefulset.kubernetes.io/pod-name":"nfs-server-provisioner-0"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"209.38.135.89", ContainerID:"f002e15eecaa12e8a62b4fe8d8fd6309746fccb2517f82cb691f4f61aa990139", Pod:"nfs-server-provisioner-0", Endpoint:"eth0", ServiceAccountName:"nfs-server-provisioner", IPNetworks:[]string{"192.168.48.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.nfs-server-provisioner"}, InterfaceName:"cali60e51b789ff", MAC:"3e:aa:3c:ee:2e:a2", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"nfs", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nfs-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x801, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, 
StrVal:"TCP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"nlockmgr-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x8023, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"mountd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x4e50, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rquotad-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x36b, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"rpcbind-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x6f, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x296, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"statd-udp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x296, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil)}} Mar 17 17:50:01.886578 containerd[1626]: 2025-03-17 17:50:01.881 [INFO][3500] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="f002e15eecaa12e8a62b4fe8d8fd6309746fccb2517f82cb691f4f61aa990139" Namespace="default" Pod="nfs-server-provisioner-0" WorkloadEndpoint="209.38.135.89-k8s-nfs--server--provisioner--0-eth0" Mar 17 17:50:01.911825 kubelet[2013]: E0317 17:50:01.911739 2013 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:50:01.939831 containerd[1626]: time="2025-03-17T17:50:01.939390711Z" 
level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Mar 17 17:50:01.939831 containerd[1626]: time="2025-03-17T17:50:01.939475009Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Mar 17 17:50:01.939831 containerd[1626]: time="2025-03-17T17:50:01.939491410Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:50:01.939831 containerd[1626]: time="2025-03-17T17:50:01.939637037Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Mar 17 17:50:02.050569 containerd[1626]: time="2025-03-17T17:50:02.050228623Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:dc0ee37c-0db8-4a2b-a57d-04fd8a963334,Namespace:default,Attempt:0,} returns sandbox id \"f002e15eecaa12e8a62b4fe8d8fd6309746fccb2517f82cb691f4f61aa990139\"" Mar 17 17:50:02.055173 containerd[1626]: time="2025-03-17T17:50:02.055111920Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\"" Mar 17 17:50:02.912712 kubelet[2013]: E0317 17:50:02.912602 2013 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:50:03.561457 systemd-networkd[1222]: cali60e51b789ff: Gained IPv6LL Mar 17 17:50:03.913761 kubelet[2013]: E0317 17:50:03.913646 2013 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:50:04.813556 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3833434710.mount: Deactivated successfully. 
Mar 17 17:50:04.876420 kubelet[2013]: E0317 17:50:04.876335 2013 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:50:04.914441 kubelet[2013]: E0317 17:50:04.914202 2013 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:50:05.915403 kubelet[2013]: E0317 17:50:05.915260 2013 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:50:06.919024 kubelet[2013]: E0317 17:50:06.916877 2013 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:50:07.591275 containerd[1626]: time="2025-03-17T17:50:07.591202725Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:50:07.614554 containerd[1626]: time="2025-03-17T17:50:07.614473569Z" level=info msg="stop pulling image registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8: active requests=0, bytes read=91039406" Mar 17 17:50:07.616425 containerd[1626]: time="2025-03-17T17:50:07.616321869Z" level=info msg="ImageCreate event name:\"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:50:07.634665 containerd[1626]: time="2025-03-17T17:50:07.634431980Z" level=info msg="ImageCreate event name:\"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Mar 17 17:50:07.636574 containerd[1626]: time="2025-03-17T17:50:07.636348040Z" level=info msg="Pulled image \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" with image id \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\", repo tag 
\"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\", repo digest \"registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d\", size \"91036984\" in 5.580878064s" Mar 17 17:50:07.636574 containerd[1626]: time="2025-03-17T17:50:07.636412703Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" returns image reference \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\"" Mar 17 17:50:07.641786 containerd[1626]: time="2025-03-17T17:50:07.641393251Z" level=info msg="CreateContainer within sandbox \"f002e15eecaa12e8a62b4fe8d8fd6309746fccb2517f82cb691f4f61aa990139\" for container &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,}" Mar 17 17:50:07.684972 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4067903923.mount: Deactivated successfully. Mar 17 17:50:07.689919 containerd[1626]: time="2025-03-17T17:50:07.689857491Z" level=info msg="CreateContainer within sandbox \"f002e15eecaa12e8a62b4fe8d8fd6309746fccb2517f82cb691f4f61aa990139\" for &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,} returns container id \"87bfb85d6b831cef4b7eef2c2f141e6d9ce8beb384ab3606d2b31219ab357e54\"" Mar 17 17:50:07.691718 containerd[1626]: time="2025-03-17T17:50:07.690713842Z" level=info msg="StartContainer for \"87bfb85d6b831cef4b7eef2c2f141e6d9ce8beb384ab3606d2b31219ab357e54\"" Mar 17 17:50:07.821051 containerd[1626]: time="2025-03-17T17:50:07.820958595Z" level=info msg="StartContainer for \"87bfb85d6b831cef4b7eef2c2f141e6d9ce8beb384ab3606d2b31219ab357e54\" returns successfully" Mar 17 17:50:07.917328 kubelet[2013]: E0317 17:50:07.917052 2013 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:50:08.919036 kubelet[2013]: E0317 17:50:08.918948 2013 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 
17:50:09.920184 kubelet[2013]: E0317 17:50:09.920123 2013 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:50:10.920829 kubelet[2013]: E0317 17:50:10.920735 2013 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:50:11.921008 kubelet[2013]: E0317 17:50:11.920926 2013 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:50:12.921997 kubelet[2013]: E0317 17:50:12.921932 2013 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:50:13.922556 kubelet[2013]: E0317 17:50:13.922490 2013 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:50:14.922960 kubelet[2013]: E0317 17:50:14.922903 2013 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:50:15.647337 kubelet[2013]: E0317 17:50:15.647228 2013 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 67.207.67.2 67.207.67.3 67.207.67.2" Mar 17 17:50:15.760426 kubelet[2013]: I0317 17:50:15.760055 2013 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nfs-server-provisioner-0" podStartSLOduration=9.175008921 podStartE2EDuration="14.760030782s" podCreationTimestamp="2025-03-17 17:50:01 +0000 UTC" firstStartedPulling="2025-03-17 17:50:02.053133996 +0000 UTC m=+37.649901471" lastFinishedPulling="2025-03-17 17:50:07.638155844 +0000 UTC m=+43.234923332" observedRunningTime="2025-03-17 17:50:08.35710846 +0000 UTC m=+43.953875954" watchObservedRunningTime="2025-03-17 17:50:15.760030782 +0000 UTC m=+51.356798280" Mar 17 17:50:15.923791 kubelet[2013]: E0317 17:50:15.923239 
2013 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:50:16.923865 kubelet[2013]: E0317 17:50:16.923772 2013 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:50:17.924826 kubelet[2013]: E0317 17:50:17.924752 2013 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:50:18.925942 kubelet[2013]: E0317 17:50:18.925876 2013 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:50:19.926509 kubelet[2013]: E0317 17:50:19.926441 2013 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:50:20.927058 kubelet[2013]: E0317 17:50:20.926993 2013 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:50:21.927910 kubelet[2013]: E0317 17:50:21.927843 2013 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:50:22.928759 kubelet[2013]: E0317 17:50:22.928671 2013 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:50:23.929708 kubelet[2013]: E0317 17:50:23.929605 2013 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:50:24.876695 kubelet[2013]: E0317 17:50:24.876618 2013 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:50:24.907119 containerd[1626]: time="2025-03-17T17:50:24.906813703Z" level=info msg="StopPodSandbox for \"9c01448a5fbd1a22d5d85df914a2e7839b6b1affd93d9c8cf96d76139488f6cd\"" Mar 17 17:50:24.907796 containerd[1626]: 
time="2025-03-17T17:50:24.907092185Z" level=info msg="TearDown network for sandbox \"9c01448a5fbd1a22d5d85df914a2e7839b6b1affd93d9c8cf96d76139488f6cd\" successfully" Mar 17 17:50:24.907796 containerd[1626]: time="2025-03-17T17:50:24.907776406Z" level=info msg="StopPodSandbox for \"9c01448a5fbd1a22d5d85df914a2e7839b6b1affd93d9c8cf96d76139488f6cd\" returns successfully" Mar 17 17:50:24.914260 containerd[1626]: time="2025-03-17T17:50:24.912765777Z" level=info msg="RemovePodSandbox for \"9c01448a5fbd1a22d5d85df914a2e7839b6b1affd93d9c8cf96d76139488f6cd\"" Mar 17 17:50:24.921935 containerd[1626]: time="2025-03-17T17:50:24.921298798Z" level=info msg="Forcibly stopping sandbox \"9c01448a5fbd1a22d5d85df914a2e7839b6b1affd93d9c8cf96d76139488f6cd\"" Mar 17 17:50:24.929948 kubelet[2013]: E0317 17:50:24.929811 2013 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:50:24.935347 containerd[1626]: time="2025-03-17T17:50:24.921509041Z" level=info msg="TearDown network for sandbox \"9c01448a5fbd1a22d5d85df914a2e7839b6b1affd93d9c8cf96d76139488f6cd\" successfully" Mar 17 17:50:24.957574 containerd[1626]: time="2025-03-17T17:50:24.956972755Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"9c01448a5fbd1a22d5d85df914a2e7839b6b1affd93d9c8cf96d76139488f6cd\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 17 17:50:24.957574 containerd[1626]: time="2025-03-17T17:50:24.957184757Z" level=info msg="RemovePodSandbox \"9c01448a5fbd1a22d5d85df914a2e7839b6b1affd93d9c8cf96d76139488f6cd\" returns successfully" Mar 17 17:50:24.958346 containerd[1626]: time="2025-03-17T17:50:24.958297781Z" level=info msg="StopPodSandbox for \"bb851b0303b62547e8f3152dae51034b42c99858145bc0886ee21df01814ef2d\"" Mar 17 17:50:24.958756 containerd[1626]: time="2025-03-17T17:50:24.958463012Z" level=info msg="TearDown network for sandbox \"bb851b0303b62547e8f3152dae51034b42c99858145bc0886ee21df01814ef2d\" successfully" Mar 17 17:50:24.958756 containerd[1626]: time="2025-03-17T17:50:24.958483575Z" level=info msg="StopPodSandbox for \"bb851b0303b62547e8f3152dae51034b42c99858145bc0886ee21df01814ef2d\" returns successfully" Mar 17 17:50:24.962448 containerd[1626]: time="2025-03-17T17:50:24.960939001Z" level=info msg="RemovePodSandbox for \"bb851b0303b62547e8f3152dae51034b42c99858145bc0886ee21df01814ef2d\"" Mar 17 17:50:24.962448 containerd[1626]: time="2025-03-17T17:50:24.960998281Z" level=info msg="Forcibly stopping sandbox \"bb851b0303b62547e8f3152dae51034b42c99858145bc0886ee21df01814ef2d\"" Mar 17 17:50:24.962448 containerd[1626]: time="2025-03-17T17:50:24.961156420Z" level=info msg="TearDown network for sandbox \"bb851b0303b62547e8f3152dae51034b42c99858145bc0886ee21df01814ef2d\" successfully" Mar 17 17:50:24.971161 containerd[1626]: time="2025-03-17T17:50:24.971084324Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"bb851b0303b62547e8f3152dae51034b42c99858145bc0886ee21df01814ef2d\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 17 17:50:24.971336 containerd[1626]: time="2025-03-17T17:50:24.971181344Z" level=info msg="RemovePodSandbox \"bb851b0303b62547e8f3152dae51034b42c99858145bc0886ee21df01814ef2d\" returns successfully" Mar 17 17:50:24.973043 containerd[1626]: time="2025-03-17T17:50:24.972992531Z" level=info msg="StopPodSandbox for \"3320ddeea553847468a17fd5eced217ae8606c21980cc2896458ebba8db8c9a1\"" Mar 17 17:50:24.973260 containerd[1626]: time="2025-03-17T17:50:24.973222167Z" level=info msg="TearDown network for sandbox \"3320ddeea553847468a17fd5eced217ae8606c21980cc2896458ebba8db8c9a1\" successfully" Mar 17 17:50:24.973260 containerd[1626]: time="2025-03-17T17:50:24.973245897Z" level=info msg="StopPodSandbox for \"3320ddeea553847468a17fd5eced217ae8606c21980cc2896458ebba8db8c9a1\" returns successfully" Mar 17 17:50:24.974509 containerd[1626]: time="2025-03-17T17:50:24.973735218Z" level=info msg="RemovePodSandbox for \"3320ddeea553847468a17fd5eced217ae8606c21980cc2896458ebba8db8c9a1\"" Mar 17 17:50:24.974509 containerd[1626]: time="2025-03-17T17:50:24.973768192Z" level=info msg="Forcibly stopping sandbox \"3320ddeea553847468a17fd5eced217ae8606c21980cc2896458ebba8db8c9a1\"" Mar 17 17:50:24.974509 containerd[1626]: time="2025-03-17T17:50:24.973847662Z" level=info msg="TearDown network for sandbox \"3320ddeea553847468a17fd5eced217ae8606c21980cc2896458ebba8db8c9a1\" successfully" Mar 17 17:50:24.976412 containerd[1626]: time="2025-03-17T17:50:24.976369115Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"3320ddeea553847468a17fd5eced217ae8606c21980cc2896458ebba8db8c9a1\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 17 17:50:24.976599 containerd[1626]: time="2025-03-17T17:50:24.976583609Z" level=info msg="RemovePodSandbox \"3320ddeea553847468a17fd5eced217ae8606c21980cc2896458ebba8db8c9a1\" returns successfully" Mar 17 17:50:24.977400 containerd[1626]: time="2025-03-17T17:50:24.977361005Z" level=info msg="StopPodSandbox for \"6107e955a001d5f275142d3b4c295994b23a7a40d83f107d272a0f95dc24ef71\"" Mar 17 17:50:24.977538 containerd[1626]: time="2025-03-17T17:50:24.977502523Z" level=info msg="TearDown network for sandbox \"6107e955a001d5f275142d3b4c295994b23a7a40d83f107d272a0f95dc24ef71\" successfully" Mar 17 17:50:24.977538 containerd[1626]: time="2025-03-17T17:50:24.977520625Z" level=info msg="StopPodSandbox for \"6107e955a001d5f275142d3b4c295994b23a7a40d83f107d272a0f95dc24ef71\" returns successfully" Mar 17 17:50:24.978113 containerd[1626]: time="2025-03-17T17:50:24.978079574Z" level=info msg="RemovePodSandbox for \"6107e955a001d5f275142d3b4c295994b23a7a40d83f107d272a0f95dc24ef71\"" Mar 17 17:50:24.978206 containerd[1626]: time="2025-03-17T17:50:24.978122327Z" level=info msg="Forcibly stopping sandbox \"6107e955a001d5f275142d3b4c295994b23a7a40d83f107d272a0f95dc24ef71\"" Mar 17 17:50:24.978274 containerd[1626]: time="2025-03-17T17:50:24.978222690Z" level=info msg="TearDown network for sandbox \"6107e955a001d5f275142d3b4c295994b23a7a40d83f107d272a0f95dc24ef71\" successfully" Mar 17 17:50:24.981042 containerd[1626]: time="2025-03-17T17:50:24.980990007Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"6107e955a001d5f275142d3b4c295994b23a7a40d83f107d272a0f95dc24ef71\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 17 17:50:24.981227 containerd[1626]: time="2025-03-17T17:50:24.981075107Z" level=info msg="RemovePodSandbox \"6107e955a001d5f275142d3b4c295994b23a7a40d83f107d272a0f95dc24ef71\" returns successfully" Mar 17 17:50:24.981803 containerd[1626]: time="2025-03-17T17:50:24.981776447Z" level=info msg="StopPodSandbox for \"02648affe711c228442cdb86ac6266b7f69997d68e8759e56b4903003e65e8f2\"" Mar 17 17:50:24.982177 containerd[1626]: time="2025-03-17T17:50:24.982055965Z" level=info msg="TearDown network for sandbox \"02648affe711c228442cdb86ac6266b7f69997d68e8759e56b4903003e65e8f2\" successfully" Mar 17 17:50:24.982177 containerd[1626]: time="2025-03-17T17:50:24.982071263Z" level=info msg="StopPodSandbox for \"02648affe711c228442cdb86ac6266b7f69997d68e8759e56b4903003e65e8f2\" returns successfully" Mar 17 17:50:24.982537 containerd[1626]: time="2025-03-17T17:50:24.982506425Z" level=info msg="RemovePodSandbox for \"02648affe711c228442cdb86ac6266b7f69997d68e8759e56b4903003e65e8f2\"" Mar 17 17:50:24.982574 containerd[1626]: time="2025-03-17T17:50:24.982548424Z" level=info msg="Forcibly stopping sandbox \"02648affe711c228442cdb86ac6266b7f69997d68e8759e56b4903003e65e8f2\"" Mar 17 17:50:24.982724 containerd[1626]: time="2025-03-17T17:50:24.982648569Z" level=info msg="TearDown network for sandbox \"02648affe711c228442cdb86ac6266b7f69997d68e8759e56b4903003e65e8f2\" successfully" Mar 17 17:50:24.986552 containerd[1626]: time="2025-03-17T17:50:24.986504332Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"02648affe711c228442cdb86ac6266b7f69997d68e8759e56b4903003e65e8f2\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 17 17:50:24.986867 containerd[1626]: time="2025-03-17T17:50:24.986580889Z" level=info msg="RemovePodSandbox \"02648affe711c228442cdb86ac6266b7f69997d68e8759e56b4903003e65e8f2\" returns successfully" Mar 17 17:50:24.987344 containerd[1626]: time="2025-03-17T17:50:24.987170147Z" level=info msg="StopPodSandbox for \"5dd245f18b5f351e60acb793ac153ef24d3235d2f4be72a43681dff697183d84\"" Mar 17 17:50:24.987344 containerd[1626]: time="2025-03-17T17:50:24.987272582Z" level=info msg="TearDown network for sandbox \"5dd245f18b5f351e60acb793ac153ef24d3235d2f4be72a43681dff697183d84\" successfully" Mar 17 17:50:24.987344 containerd[1626]: time="2025-03-17T17:50:24.987288753Z" level=info msg="StopPodSandbox for \"5dd245f18b5f351e60acb793ac153ef24d3235d2f4be72a43681dff697183d84\" returns successfully" Mar 17 17:50:24.987888 containerd[1626]: time="2025-03-17T17:50:24.987789703Z" level=info msg="RemovePodSandbox for \"5dd245f18b5f351e60acb793ac153ef24d3235d2f4be72a43681dff697183d84\"" Mar 17 17:50:24.987888 containerd[1626]: time="2025-03-17T17:50:24.987821337Z" level=info msg="Forcibly stopping sandbox \"5dd245f18b5f351e60acb793ac153ef24d3235d2f4be72a43681dff697183d84\"" Mar 17 17:50:24.987990 containerd[1626]: time="2025-03-17T17:50:24.987900357Z" level=info msg="TearDown network for sandbox \"5dd245f18b5f351e60acb793ac153ef24d3235d2f4be72a43681dff697183d84\" successfully" Mar 17 17:50:24.990143 containerd[1626]: time="2025-03-17T17:50:24.989956802Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"5dd245f18b5f351e60acb793ac153ef24d3235d2f4be72a43681dff697183d84\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 17 17:50:24.990143 containerd[1626]: time="2025-03-17T17:50:24.990019298Z" level=info msg="RemovePodSandbox \"5dd245f18b5f351e60acb793ac153ef24d3235d2f4be72a43681dff697183d84\" returns successfully" Mar 17 17:50:24.991157 containerd[1626]: time="2025-03-17T17:50:24.991010585Z" level=info msg="StopPodSandbox for \"f8c63b85f24831b6f0677980a6030ec68aa200b8e64fc798d2aeab13aaa84905\"" Mar 17 17:50:24.991338 containerd[1626]: time="2025-03-17T17:50:24.991160047Z" level=info msg="TearDown network for sandbox \"f8c63b85f24831b6f0677980a6030ec68aa200b8e64fc798d2aeab13aaa84905\" successfully" Mar 17 17:50:24.991338 containerd[1626]: time="2025-03-17T17:50:24.991177002Z" level=info msg="StopPodSandbox for \"f8c63b85f24831b6f0677980a6030ec68aa200b8e64fc798d2aeab13aaa84905\" returns successfully" Mar 17 17:50:24.991874 containerd[1626]: time="2025-03-17T17:50:24.991628652Z" level=info msg="RemovePodSandbox for \"f8c63b85f24831b6f0677980a6030ec68aa200b8e64fc798d2aeab13aaa84905\"" Mar 17 17:50:24.991874 containerd[1626]: time="2025-03-17T17:50:24.991657569Z" level=info msg="Forcibly stopping sandbox \"f8c63b85f24831b6f0677980a6030ec68aa200b8e64fc798d2aeab13aaa84905\"" Mar 17 17:50:24.991874 containerd[1626]: time="2025-03-17T17:50:24.991746039Z" level=info msg="TearDown network for sandbox \"f8c63b85f24831b6f0677980a6030ec68aa200b8e64fc798d2aeab13aaa84905\" successfully" Mar 17 17:50:24.994646 containerd[1626]: time="2025-03-17T17:50:24.994594732Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"f8c63b85f24831b6f0677980a6030ec68aa200b8e64fc798d2aeab13aaa84905\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 17 17:50:24.994834 containerd[1626]: time="2025-03-17T17:50:24.994672521Z" level=info msg="RemovePodSandbox \"f8c63b85f24831b6f0677980a6030ec68aa200b8e64fc798d2aeab13aaa84905\" returns successfully" Mar 17 17:50:24.995575 containerd[1626]: time="2025-03-17T17:50:24.995183079Z" level=info msg="StopPodSandbox for \"a2d9cd4d49c2a2a4d1b3a0ea1b8cedd69ebb23f29174a4221881e009a2eb9a3d\"" Mar 17 17:50:24.995575 containerd[1626]: time="2025-03-17T17:50:24.995310249Z" level=info msg="TearDown network for sandbox \"a2d9cd4d49c2a2a4d1b3a0ea1b8cedd69ebb23f29174a4221881e009a2eb9a3d\" successfully" Mar 17 17:50:24.995575 containerd[1626]: time="2025-03-17T17:50:24.995321214Z" level=info msg="StopPodSandbox for \"a2d9cd4d49c2a2a4d1b3a0ea1b8cedd69ebb23f29174a4221881e009a2eb9a3d\" returns successfully" Mar 17 17:50:24.995754 containerd[1626]: time="2025-03-17T17:50:24.995702242Z" level=info msg="RemovePodSandbox for \"a2d9cd4d49c2a2a4d1b3a0ea1b8cedd69ebb23f29174a4221881e009a2eb9a3d\"" Mar 17 17:50:24.995754 containerd[1626]: time="2025-03-17T17:50:24.995736255Z" level=info msg="Forcibly stopping sandbox \"a2d9cd4d49c2a2a4d1b3a0ea1b8cedd69ebb23f29174a4221881e009a2eb9a3d\"" Mar 17 17:50:24.995874 containerd[1626]: time="2025-03-17T17:50:24.995820344Z" level=info msg="TearDown network for sandbox \"a2d9cd4d49c2a2a4d1b3a0ea1b8cedd69ebb23f29174a4221881e009a2eb9a3d\" successfully" Mar 17 17:50:24.998121 containerd[1626]: time="2025-03-17T17:50:24.998077194Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"a2d9cd4d49c2a2a4d1b3a0ea1b8cedd69ebb23f29174a4221881e009a2eb9a3d\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 17 17:50:24.998228 containerd[1626]: time="2025-03-17T17:50:24.998153127Z" level=info msg="RemovePodSandbox \"a2d9cd4d49c2a2a4d1b3a0ea1b8cedd69ebb23f29174a4221881e009a2eb9a3d\" returns successfully" Mar 17 17:50:24.998790 containerd[1626]: time="2025-03-17T17:50:24.998753201Z" level=info msg="StopPodSandbox for \"94e7fb3221b6dafbbd608123c924dd4d6427295dc1e07e8797415046a4047104\"" Mar 17 17:50:24.998903 containerd[1626]: time="2025-03-17T17:50:24.998883530Z" level=info msg="TearDown network for sandbox \"94e7fb3221b6dafbbd608123c924dd4d6427295dc1e07e8797415046a4047104\" successfully" Mar 17 17:50:24.998980 containerd[1626]: time="2025-03-17T17:50:24.998905214Z" level=info msg="StopPodSandbox for \"94e7fb3221b6dafbbd608123c924dd4d6427295dc1e07e8797415046a4047104\" returns successfully" Mar 17 17:50:24.999320 containerd[1626]: time="2025-03-17T17:50:24.999291501Z" level=info msg="RemovePodSandbox for \"94e7fb3221b6dafbbd608123c924dd4d6427295dc1e07e8797415046a4047104\"" Mar 17 17:50:24.999391 containerd[1626]: time="2025-03-17T17:50:24.999327771Z" level=info msg="Forcibly stopping sandbox \"94e7fb3221b6dafbbd608123c924dd4d6427295dc1e07e8797415046a4047104\"" Mar 17 17:50:24.999602 containerd[1626]: time="2025-03-17T17:50:24.999419134Z" level=info msg="TearDown network for sandbox \"94e7fb3221b6dafbbd608123c924dd4d6427295dc1e07e8797415046a4047104\" successfully" Mar 17 17:50:25.001989 containerd[1626]: time="2025-03-17T17:50:25.001927405Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"94e7fb3221b6dafbbd608123c924dd4d6427295dc1e07e8797415046a4047104\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 17 17:50:25.002204 containerd[1626]: time="2025-03-17T17:50:25.002004885Z" level=info msg="RemovePodSandbox \"94e7fb3221b6dafbbd608123c924dd4d6427295dc1e07e8797415046a4047104\" returns successfully" Mar 17 17:50:25.003155 containerd[1626]: time="2025-03-17T17:50:25.002798251Z" level=info msg="StopPodSandbox for \"21d9838daf93b227721d0ea4f7ac0ab6df5fa12fc54ddb1601eba3b0a7ffd08e\"" Mar 17 17:50:25.003155 containerd[1626]: time="2025-03-17T17:50:25.002940680Z" level=info msg="TearDown network for sandbox \"21d9838daf93b227721d0ea4f7ac0ab6df5fa12fc54ddb1601eba3b0a7ffd08e\" successfully" Mar 17 17:50:25.003155 containerd[1626]: time="2025-03-17T17:50:25.002956483Z" level=info msg="StopPodSandbox for \"21d9838daf93b227721d0ea4f7ac0ab6df5fa12fc54ddb1601eba3b0a7ffd08e\" returns successfully" Mar 17 17:50:25.004006 containerd[1626]: time="2025-03-17T17:50:25.003752621Z" level=info msg="RemovePodSandbox for \"21d9838daf93b227721d0ea4f7ac0ab6df5fa12fc54ddb1601eba3b0a7ffd08e\"" Mar 17 17:50:25.004006 containerd[1626]: time="2025-03-17T17:50:25.003819648Z" level=info msg="Forcibly stopping sandbox \"21d9838daf93b227721d0ea4f7ac0ab6df5fa12fc54ddb1601eba3b0a7ffd08e\"" Mar 17 17:50:25.004383 containerd[1626]: time="2025-03-17T17:50:25.003976553Z" level=info msg="TearDown network for sandbox \"21d9838daf93b227721d0ea4f7ac0ab6df5fa12fc54ddb1601eba3b0a7ffd08e\" successfully" Mar 17 17:50:25.008450 containerd[1626]: time="2025-03-17T17:50:25.008184441Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"21d9838daf93b227721d0ea4f7ac0ab6df5fa12fc54ddb1601eba3b0a7ffd08e\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 17 17:50:25.008450 containerd[1626]: time="2025-03-17T17:50:25.008259644Z" level=info msg="RemovePodSandbox \"21d9838daf93b227721d0ea4f7ac0ab6df5fa12fc54ddb1601eba3b0a7ffd08e\" returns successfully" Mar 17 17:50:25.008832 containerd[1626]: time="2025-03-17T17:50:25.008752058Z" level=info msg="StopPodSandbox for \"017c3c8c364aa163f5cc75f207e60a30d89c1ab42204a9232fff12ac481a1eef\"" Mar 17 17:50:25.009076 containerd[1626]: time="2025-03-17T17:50:25.009039468Z" level=info msg="TearDown network for sandbox \"017c3c8c364aa163f5cc75f207e60a30d89c1ab42204a9232fff12ac481a1eef\" successfully" Mar 17 17:50:25.009076 containerd[1626]: time="2025-03-17T17:50:25.009066182Z" level=info msg="StopPodSandbox for \"017c3c8c364aa163f5cc75f207e60a30d89c1ab42204a9232fff12ac481a1eef\" returns successfully" Mar 17 17:50:25.009567 containerd[1626]: time="2025-03-17T17:50:25.009535754Z" level=info msg="RemovePodSandbox for \"017c3c8c364aa163f5cc75f207e60a30d89c1ab42204a9232fff12ac481a1eef\"" Mar 17 17:50:25.009623 containerd[1626]: time="2025-03-17T17:50:25.009573879Z" level=info msg="Forcibly stopping sandbox \"017c3c8c364aa163f5cc75f207e60a30d89c1ab42204a9232fff12ac481a1eef\"" Mar 17 17:50:25.009775 containerd[1626]: time="2025-03-17T17:50:25.009671711Z" level=info msg="TearDown network for sandbox \"017c3c8c364aa163f5cc75f207e60a30d89c1ab42204a9232fff12ac481a1eef\" successfully" Mar 17 17:50:25.014317 containerd[1626]: time="2025-03-17T17:50:25.014108491Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"017c3c8c364aa163f5cc75f207e60a30d89c1ab42204a9232fff12ac481a1eef\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 17 17:50:25.014317 containerd[1626]: time="2025-03-17T17:50:25.014244183Z" level=info msg="RemovePodSandbox \"017c3c8c364aa163f5cc75f207e60a30d89c1ab42204a9232fff12ac481a1eef\" returns successfully" Mar 17 17:50:25.016882 containerd[1626]: time="2025-03-17T17:50:25.016839336Z" level=info msg="StopPodSandbox for \"b93ff61176bdc9c27316d7b56f51e45a0c340d2db17c0991c99ff1e4280bf960\"" Mar 17 17:50:25.017325 containerd[1626]: time="2025-03-17T17:50:25.017288787Z" level=info msg="TearDown network for sandbox \"b93ff61176bdc9c27316d7b56f51e45a0c340d2db17c0991c99ff1e4280bf960\" successfully" Mar 17 17:50:25.017469 containerd[1626]: time="2025-03-17T17:50:25.017449508Z" level=info msg="StopPodSandbox for \"b93ff61176bdc9c27316d7b56f51e45a0c340d2db17c0991c99ff1e4280bf960\" returns successfully" Mar 17 17:50:25.018207 containerd[1626]: time="2025-03-17T17:50:25.018168210Z" level=info msg="RemovePodSandbox for \"b93ff61176bdc9c27316d7b56f51e45a0c340d2db17c0991c99ff1e4280bf960\"" Mar 17 17:50:25.018403 containerd[1626]: time="2025-03-17T17:50:25.018375344Z" level=info msg="Forcibly stopping sandbox \"b93ff61176bdc9c27316d7b56f51e45a0c340d2db17c0991c99ff1e4280bf960\"" Mar 17 17:50:25.018665 containerd[1626]: time="2025-03-17T17:50:25.018594648Z" level=info msg="TearDown network for sandbox \"b93ff61176bdc9c27316d7b56f51e45a0c340d2db17c0991c99ff1e4280bf960\" successfully" Mar 17 17:50:25.022292 containerd[1626]: time="2025-03-17T17:50:25.022176927Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b93ff61176bdc9c27316d7b56f51e45a0c340d2db17c0991c99ff1e4280bf960\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Mar 17 17:50:25.022622 containerd[1626]: time="2025-03-17T17:50:25.022581225Z" level=info msg="RemovePodSandbox \"b93ff61176bdc9c27316d7b56f51e45a0c340d2db17c0991c99ff1e4280bf960\" returns successfully" Mar 17 17:50:25.930405 kubelet[2013]: E0317 17:50:25.930326 2013 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:50:26.930866 kubelet[2013]: E0317 17:50:26.930810 2013 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:50:27.931281 kubelet[2013]: E0317 17:50:27.931195 2013 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:50:28.932158 kubelet[2013]: E0317 17:50:28.932069 2013 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:50:29.933103 kubelet[2013]: E0317 17:50:29.932998 2013 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:50:30.934013 kubelet[2013]: E0317 17:50:30.933917 2013 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:50:31.934841 kubelet[2013]: E0317 17:50:31.934753 2013 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:50:32.935518 kubelet[2013]: E0317 17:50:32.935428 2013 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:50:33.936718 kubelet[2013]: E0317 17:50:33.936632 2013 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:50:34.937287 kubelet[2013]: E0317 17:50:34.937211 2013 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" 
path="/etc/kubernetes/manifests" Mar 17 17:50:35.938294 kubelet[2013]: E0317 17:50:35.938227 2013 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:50:36.902066 kubelet[2013]: I0317 17:50:36.901931 2013 topology_manager.go:215] "Topology Admit Handler" podUID="829729e4-6292-4fe0-aefb-db344efb41c4" podNamespace="default" podName="test-pod-1" Mar 17 17:50:36.939393 kubelet[2013]: E0317 17:50:36.939338 2013 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Mar 17 17:50:36.973804 kubelet[2013]: I0317 17:50:36.973716 2013 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-5a006fe5-1506-479c-84d6-086a528ad2cd\" (UniqueName: \"kubernetes.io/nfs/829729e4-6292-4fe0-aefb-db344efb41c4-pvc-5a006fe5-1506-479c-84d6-086a528ad2cd\") pod \"test-pod-1\" (UID: \"829729e4-6292-4fe0-aefb-db344efb41c4\") " pod="default/test-pod-1" Mar 17 17:50:36.973804 kubelet[2013]: I0317 17:50:36.973789 2013 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mr8qm\" (UniqueName: \"kubernetes.io/projected/829729e4-6292-4fe0-aefb-db344efb41c4-kube-api-access-mr8qm\") pod \"test-pod-1\" (UID: \"829729e4-6292-4fe0-aefb-db344efb41c4\") " pod="default/test-pod-1" Mar 17 17:50:37.121023 kernel: FS-Cache: Loaded Mar 17 17:50:37.202034 kernel: RPC: Registered named UNIX socket transport module. Mar 17 17:50:37.202138 kernel: RPC: Registered udp transport module. Mar 17 17:50:37.202186 kernel: RPC: Registered tcp transport module. Mar 17 17:50:37.202790 kernel: RPC: Registered tcp-with-tls transport module. Mar 17 17:50:37.203791 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module. 
Mar 17 17:50:37.583097 kernel: NFS: Registering the id_resolver key type
Mar 17 17:50:37.583263 kernel: Key type id_resolver registered
Mar 17 17:50:37.585204 kernel: Key type id_legacy registered
Mar 17 17:50:37.633971 nfsidmap[3741]: nss_getpwnam: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain '2.2-e-40efa8f9ae'
Mar 17 17:50:37.641388 nfsidmap[3742]: nss_name_to_gid: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain '2.2-e-40efa8f9ae'
Mar 17 17:50:37.806730 containerd[1626]: time="2025-03-17T17:50:37.806283764Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:829729e4-6292-4fe0-aefb-db344efb41c4,Namespace:default,Attempt:0,}"
Mar 17 17:50:37.939654 kubelet[2013]: E0317 17:50:37.939478 2013 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 17 17:50:38.093476 systemd-networkd[1222]: cali5ec59c6bf6e: Link UP
Mar 17 17:50:38.096581 systemd-networkd[1222]: cali5ec59c6bf6e: Gained carrier
Mar 17 17:50:38.116262 containerd[1626]: 2025-03-17 17:50:37.874 [INFO][3745] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {209.38.135.89-k8s-test--pod--1-eth0 default 829729e4-6292-4fe0-aefb-db344efb41c4 1431 0 2025-03-17 17:50:02 +0000 UTC map[projectcalico.org/namespace:default projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:default] map[] [] [] []} {k8s 209.38.135.89 test-pod-1 eth0 default [] [] [kns.default ksa.default.default] cali5ec59c6bf6e [] []}} ContainerID="4fe1eb8bf23c1f6d232e58de081324d1505e4a0e69a950be68003f8bd91f66f1" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="209.38.135.89-k8s-test--pod--1-"
Mar 17 17:50:38.116262 containerd[1626]: 2025-03-17 17:50:37.874 [INFO][3745] cni-plugin/k8s.go 77: Extracted identifiers for CmdAddK8s ContainerID="4fe1eb8bf23c1f6d232e58de081324d1505e4a0e69a950be68003f8bd91f66f1" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="209.38.135.89-k8s-test--pod--1-eth0"
Mar 17 17:50:38.116262 containerd[1626]: 2025-03-17 17:50:37.936 [INFO][3756] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4fe1eb8bf23c1f6d232e58de081324d1505e4a0e69a950be68003f8bd91f66f1" HandleID="k8s-pod-network.4fe1eb8bf23c1f6d232e58de081324d1505e4a0e69a950be68003f8bd91f66f1" Workload="209.38.135.89-k8s-test--pod--1-eth0"
Mar 17 17:50:38.116262 containerd[1626]: 2025-03-17 17:50:37.963 [INFO][3756] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="4fe1eb8bf23c1f6d232e58de081324d1505e4a0e69a950be68003f8bd91f66f1" HandleID="k8s-pod-network.4fe1eb8bf23c1f6d232e58de081324d1505e4a0e69a950be68003f8bd91f66f1" Workload="209.38.135.89-k8s-test--pod--1-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002788d0), Attrs:map[string]string{"namespace":"default", "node":"209.38.135.89", "pod":"test-pod-1", "timestamp":"2025-03-17 17:50:37.936772227 +0000 UTC"}, Hostname:"209.38.135.89", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Mar 17 17:50:38.116262 containerd[1626]: 2025-03-17 17:50:37.963 [INFO][3756] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock.
Mar 17 17:50:38.116262 containerd[1626]: 2025-03-17 17:50:37.963 [INFO][3756] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock.
Mar 17 17:50:38.116262 containerd[1626]: 2025-03-17 17:50:37.963 [INFO][3756] ipam/ipam.go 107: Auto-assign 1 ipv4, 0 ipv6 addrs for host '209.38.135.89'
Mar 17 17:50:38.116262 containerd[1626]: 2025-03-17 17:50:37.973 [INFO][3756] ipam/ipam.go 660: Looking up existing affinities for host handle="k8s-pod-network.4fe1eb8bf23c1f6d232e58de081324d1505e4a0e69a950be68003f8bd91f66f1" host="209.38.135.89"
Mar 17 17:50:38.116262 containerd[1626]: 2025-03-17 17:50:37.984 [INFO][3756] ipam/ipam.go 372: Looking up existing affinities for host host="209.38.135.89"
Mar 17 17:50:38.116262 containerd[1626]: 2025-03-17 17:50:38.010 [INFO][3756] ipam/ipam.go 489: Trying affinity for 192.168.48.0/26 host="209.38.135.89"
Mar 17 17:50:38.116262 containerd[1626]: 2025-03-17 17:50:38.028 [INFO][3756] ipam/ipam.go 155: Attempting to load block cidr=192.168.48.0/26 host="209.38.135.89"
Mar 17 17:50:38.116262 containerd[1626]: 2025-03-17 17:50:38.035 [INFO][3756] ipam/ipam.go 232: Affinity is confirmed and block has been loaded cidr=192.168.48.0/26 host="209.38.135.89"
Mar 17 17:50:38.116262 containerd[1626]: 2025-03-17 17:50:38.035 [INFO][3756] ipam/ipam.go 1180: Attempting to assign 1 addresses from block block=192.168.48.0/26 handle="k8s-pod-network.4fe1eb8bf23c1f6d232e58de081324d1505e4a0e69a950be68003f8bd91f66f1" host="209.38.135.89"
Mar 17 17:50:38.116262 containerd[1626]: 2025-03-17 17:50:38.039 [INFO][3756] ipam/ipam.go 1685: Creating new handle: k8s-pod-network.4fe1eb8bf23c1f6d232e58de081324d1505e4a0e69a950be68003f8bd91f66f1
Mar 17 17:50:38.116262 containerd[1626]: 2025-03-17 17:50:38.055 [INFO][3756] ipam/ipam.go 1203: Writing block in order to claim IPs block=192.168.48.0/26 handle="k8s-pod-network.4fe1eb8bf23c1f6d232e58de081324d1505e4a0e69a950be68003f8bd91f66f1" host="209.38.135.89"
Mar 17 17:50:38.116262 containerd[1626]: 2025-03-17 17:50:38.085 [INFO][3756] ipam/ipam.go 1216: Successfully claimed IPs: [192.168.48.4/26] block=192.168.48.0/26 handle="k8s-pod-network.4fe1eb8bf23c1f6d232e58de081324d1505e4a0e69a950be68003f8bd91f66f1" host="209.38.135.89"
Mar 17 17:50:38.116262 containerd[1626]: 2025-03-17 17:50:38.086 [INFO][3756] ipam/ipam.go 847: Auto-assigned 1 out of 1 IPv4s: [192.168.48.4/26] handle="k8s-pod-network.4fe1eb8bf23c1f6d232e58de081324d1505e4a0e69a950be68003f8bd91f66f1" host="209.38.135.89"
Mar 17 17:50:38.116262 containerd[1626]: 2025-03-17 17:50:38.086 [INFO][3756] ipam/ipam_plugin.go 374: Released host-wide IPAM lock.
Mar 17 17:50:38.116262 containerd[1626]: 2025-03-17 17:50:38.086 [INFO][3756] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.48.4/26] IPv6=[] ContainerID="4fe1eb8bf23c1f6d232e58de081324d1505e4a0e69a950be68003f8bd91f66f1" HandleID="k8s-pod-network.4fe1eb8bf23c1f6d232e58de081324d1505e4a0e69a950be68003f8bd91f66f1" Workload="209.38.135.89-k8s-test--pod--1-eth0"
Mar 17 17:50:38.116262 containerd[1626]: 2025-03-17 17:50:38.089 [INFO][3745] cni-plugin/k8s.go 386: Populated endpoint ContainerID="4fe1eb8bf23c1f6d232e58de081324d1505e4a0e69a950be68003f8bd91f66f1" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="209.38.135.89-k8s-test--pod--1-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"209.38.135.89-k8s-test--pod--1-eth0", GenerateName:"", Namespace:"default", SelfLink:"", UID:"829729e4-6292-4fe0-aefb-db344efb41c4", ResourceVersion:"1431", Generation:0, CreationTimestamp:time.Date(2025, time.March, 17, 17, 50, 2, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"209.38.135.89", ContainerID:"", Pod:"test-pod-1", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.48.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali5ec59c6bf6e", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Mar 17 17:50:38.120259 containerd[1626]: 2025-03-17 17:50:38.089 [INFO][3745] cni-plugin/k8s.go 387: Calico CNI using IPs: [192.168.48.4/32] ContainerID="4fe1eb8bf23c1f6d232e58de081324d1505e4a0e69a950be68003f8bd91f66f1" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="209.38.135.89-k8s-test--pod--1-eth0"
Mar 17 17:50:38.120259 containerd[1626]: 2025-03-17 17:50:38.089 [INFO][3745] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5ec59c6bf6e ContainerID="4fe1eb8bf23c1f6d232e58de081324d1505e4a0e69a950be68003f8bd91f66f1" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="209.38.135.89-k8s-test--pod--1-eth0"
Mar 17 17:50:38.120259 containerd[1626]: 2025-03-17 17:50:38.096 [INFO][3745] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4fe1eb8bf23c1f6d232e58de081324d1505e4a0e69a950be68003f8bd91f66f1" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="209.38.135.89-k8s-test--pod--1-eth0"
Mar 17 17:50:38.120259 containerd[1626]: 2025-03-17 17:50:38.097 [INFO][3745] cni-plugin/k8s.go 414: Added Mac, interface name, and active container ID to endpoint ContainerID="4fe1eb8bf23c1f6d232e58de081324d1505e4a0e69a950be68003f8bd91f66f1" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="209.38.135.89-k8s-test--pod--1-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"209.38.135.89-k8s-test--pod--1-eth0", GenerateName:"", Namespace:"default", SelfLink:"", UID:"829729e4-6292-4fe0-aefb-db344efb41c4", ResourceVersion:"1431", Generation:0, CreationTimestamp:time.Date(2025, time.March, 17, 17, 50, 2, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"projectcalico.org/namespace":"default", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"default"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"209.38.135.89", ContainerID:"4fe1eb8bf23c1f6d232e58de081324d1505e4a0e69a950be68003f8bd91f66f1", Pod:"test-pod-1", Endpoint:"eth0", ServiceAccountName:"default", IPNetworks:[]string{"192.168.48.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.default", "ksa.default.default"}, InterfaceName:"cali5ec59c6bf6e", MAC:"82:29:5c:9f:0b:9e", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil)}}
Mar 17 17:50:38.120259 containerd[1626]: 2025-03-17 17:50:38.112 [INFO][3745] cni-plugin/k8s.go 500: Wrote updated endpoint to datastore ContainerID="4fe1eb8bf23c1f6d232e58de081324d1505e4a0e69a950be68003f8bd91f66f1" Namespace="default" Pod="test-pod-1" WorkloadEndpoint="209.38.135.89-k8s-test--pod--1-eth0"
Mar 17 17:50:38.164031 containerd[1626]: time="2025-03-17T17:50:38.163654409Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Mar 17 17:50:38.164031 containerd[1626]: time="2025-03-17T17:50:38.163790943Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Mar 17 17:50:38.164031 containerd[1626]: time="2025-03-17T17:50:38.163815806Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 17 17:50:38.165120 containerd[1626]: time="2025-03-17T17:50:38.164752330Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Mar 17 17:50:38.201004 systemd[1]: run-containerd-runc-k8s.io-4fe1eb8bf23c1f6d232e58de081324d1505e4a0e69a950be68003f8bd91f66f1-runc.t1urPa.mount: Deactivated successfully.
Mar 17 17:50:38.271960 containerd[1626]: time="2025-03-17T17:50:38.271897622Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:829729e4-6292-4fe0-aefb-db344efb41c4,Namespace:default,Attempt:0,} returns sandbox id \"4fe1eb8bf23c1f6d232e58de081324d1505e4a0e69a950be68003f8bd91f66f1\""
Mar 17 17:50:38.276307 containerd[1626]: time="2025-03-17T17:50:38.275919903Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\""
Mar 17 17:50:38.720707 containerd[1626]: time="2025-03-17T17:50:38.719663247Z" level=info msg="stop pulling image ghcr.io/flatcar/nginx:latest: active requests=0, bytes read=61"
Mar 17 17:50:38.722748 containerd[1626]: time="2025-03-17T17:50:38.722652236Z" level=info msg="Pulled image \"ghcr.io/flatcar/nginx:latest\" with image id \"sha256:d25119ebd2aadc346788ac84ae0c5b1b018c687dcfd3167bb27e341f8b5caeee\", repo tag \"ghcr.io/flatcar/nginx:latest\", repo digest \"ghcr.io/flatcar/nginx@sha256:b927c62cc716b99bce51774b46a63feb63f5414c6f985fb80cacd1933bbd0e06\", size \"73060009\" in 446.685856ms"
Mar 17 17:50:38.722952 containerd[1626]: time="2025-03-17T17:50:38.722933145Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:d25119ebd2aadc346788ac84ae0c5b1b018c687dcfd3167bb27e341f8b5caeee\""
Mar 17 17:50:38.727750 containerd[1626]: time="2025-03-17T17:50:38.727053639Z" level=info msg="ImageUpdate event name:\"ghcr.io/flatcar/nginx:latest\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Mar 17 17:50:38.728145 containerd[1626]: time="2025-03-17T17:50:38.728110026Z" level=info msg="CreateContainer within sandbox \"4fe1eb8bf23c1f6d232e58de081324d1505e4a0e69a950be68003f8bd91f66f1\" for container &ContainerMetadata{Name:test,Attempt:0,}"
Mar 17 17:50:38.746332 containerd[1626]: time="2025-03-17T17:50:38.746265120Z" level=info msg="CreateContainer within sandbox \"4fe1eb8bf23c1f6d232e58de081324d1505e4a0e69a950be68003f8bd91f66f1\" for &ContainerMetadata{Name:test,Attempt:0,} returns container id \"e569b96e0332928af06c64e1fd17d05459a21d1a49471da5da616ab9b8fc06bb\""
Mar 17 17:50:38.747396 containerd[1626]: time="2025-03-17T17:50:38.747304572Z" level=info msg="StartContainer for \"e569b96e0332928af06c64e1fd17d05459a21d1a49471da5da616ab9b8fc06bb\""
Mar 17 17:50:38.833767 containerd[1626]: time="2025-03-17T17:50:38.833537121Z" level=info msg="StartContainer for \"e569b96e0332928af06c64e1fd17d05459a21d1a49471da5da616ab9b8fc06bb\" returns successfully"
Mar 17 17:50:38.940423 kubelet[2013]: E0317 17:50:38.940343 2013 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 17 17:50:39.208074 systemd-networkd[1222]: cali5ec59c6bf6e: Gained IPv6LL
Mar 17 17:50:39.424387 kubelet[2013]: I0317 17:50:39.424092 2013 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/test-pod-1" podStartSLOduration=36.974506189 podStartE2EDuration="37.424070396s" podCreationTimestamp="2025-03-17 17:50:02 +0000 UTC" firstStartedPulling="2025-03-17 17:50:38.275315685 +0000 UTC m=+73.872083160" lastFinishedPulling="2025-03-17 17:50:38.724879883 +0000 UTC m=+74.321647367" observedRunningTime="2025-03-17 17:50:39.423538751 +0000 UTC m=+75.020306243" watchObservedRunningTime="2025-03-17 17:50:39.424070396 +0000 UTC m=+75.020837888"
Mar 17 17:50:39.940575 kubelet[2013]: E0317 17:50:39.940504 2013 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 17 17:50:40.941589 kubelet[2013]: E0317 17:50:40.941511 2013 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 17 17:50:41.942466 kubelet[2013]: E0317 17:50:41.942390 2013 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 17 17:50:42.943182 kubelet[2013]: E0317 17:50:42.943100 2013 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 17 17:50:43.944152 kubelet[2013]: E0317 17:50:43.944091 2013 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 17 17:50:44.876297 kubelet[2013]: E0317 17:50:44.876232 2013 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Mar 17 17:50:44.945310 kubelet[2013]: E0317 17:50:44.945241 2013 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"